Compare commits

...

347 Commits

Author SHA1 Message Date
Brent Westbrook
c2bc15bc15 Bump 0.12.11 (#20136) 2025-08-28 09:45:01 -04:00
David Peter
e586f6dcc4 [ty] Benchmarks for problematic implicit instance attributes cases (#20133)
## Summary

Add regression benchmarks for the problematic cases in
https://github.com/astral-sh/ty/issues/758. I'd like to merge this
before https://github.com/astral-sh/ruff/pull/20128 to measure the
impact (local tests show that this will "solve" both cases).
2025-08-28 15:25:25 +02:00
Takayuki Maeda
76a6b7e3e2 [pyflakes] Fix allowed-unused-imports matching for top-level modules (F401) (#20115)

## Summary


Fixes #19664

Fix allowed unused imports matching for top-level modules.

I've simply replaced `from_dotted_name` with `user_defined`. Since the
`QualifiedName` for imports is created in
crates/ruff_python_semantic/src/imports.rs, I think it's acceptable to
use `user_defined` here. Please let me know if there is a better way.


0c5089ed9e/crates/ruff_python_semantic/src/imports.rs (L62)
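For illustration, a minimal sketch of the case this targets, assuming a configuration like `lint.pyflakes.allowed-unused-imports = ["numpy"]` (hypothetical example, not taken from the test):

```py
# Assuming `lint.pyflakes.allowed-unused-imports = ["numpy"]` is configured:
# with this fix, the allow-list also matches a bare top-level import of the
# listed module, not only dotted submodule imports.
import numpy  # unused, but allow-listed: no F401 expected
```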

## Test Plan


I've added a snapshot test
`f401_allowed_unused_imports_top_level_module`.
2025-08-28 13:02:50 +00:00
Brent Westbrook
1ce65714c0 Move GitLab output rendering to ruff_db (#20117)
## Summary

This PR is a first step toward adding a GitLab output format to ty. It
converts the `GitlabEmitter` from `ruff_linter` to a `GitlabRenderer` in
`ruff_db` and updates its implementation to handle non-Ruff files and
diagnostics without primary spans. I tried to break up the changes here
so that they're easy to review commit-by-commit, or at least in groups
of commits:
- [preparatory changes in-place in `ruff_linter` and a `ruff_db`
skeleton](0761b73a61)
- [moving the code over with no implementation changes mixed
in](0761b73a61..8f909ea0bb)
- [tidying up the code now in
`ruff_db`](9f047c4f9f..e5e217fcd6)

This wasn't strictly necessary, but I also added some `Serialize`
structs instead of calling `json!` to make it a little clearer that we
weren't modifying the schema (e4c4bee35d).

I plan to follow this up with a separate PR exposing this output format
in the ty CLI, which should be quite straightforward.
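For reference, a rough sketch of what one record in the GitLab code-quality format looks like (field names follow GitLab's public documentation, not the `ruff_db` implementation):

```py
import hashlib
import json

# One diagnostic in GitLab's code-quality report format (a JSON array of
# objects). The fingerprint only needs to be stable per finding; hashing a
# few identifying fields is one common choice.
record = {
    "description": "`math` imported but unused",
    "check_name": "unused-import",
    "fingerprint": hashlib.sha256(b"src/app.py:2:unused-import").hexdigest(),
    "severity": "major",
    "location": {"path": "src/app.py", "lines": {"begin": 2, "end": 2}},
}
print(json.dumps([record], indent=2))
```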

## Test Plan

Existing tests, especially the two that show up in the diff as renamed
nearly without changes
2025-08-28 08:56:33 -04:00
Shaygan Hooshyari
d9aaacd01f [ty] Evaluate reachability of non-definitely-bound to Ambiguous (#19579)
## Summary

closes https://github.com/astral-sh/ty/issues/692

If the expression (or any of its child expressions) is not definitely bound,
the reachability constraint evaluates to Ambiguous.

This fixes the infinite cycles panic in the following code:

```py
from typing import Literal

class Toggle:
    def __init__(self: "Toggle"):
        if not self.x:
            self.x: Literal[True] = True
```

Credit for this solution goes to David.

## Test Plan

- Added a test case that previously triggered the "too many cycle iterations" panic.
- Existing tests.

---------

Co-authored-by: David Peter <mail@david-peter.de>
2025-08-28 14:34:49 +02:00
Jelle Zijlstra
18eaa659c1 [ty] Introduce a representation for the top/bottom materialization of an invariant generic (#20076)
Part of #994. This adds a new field to the Specialization struct to
record when we're dealing with the top or bottom materialization of an
invariant generic. It also implements subtyping and assignability for
these objects.

The next planned steps are to implement other operations on top/bottom
materializations; attribute access is probably an important one.
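As a rough illustration of why a dedicated representation is needed (a hypothetical example, not taken from the PR):

```py
from typing import Any

def f(x: list[Any]) -> None: ...

# `list` is invariant, yet `list[int]` is assignable to `list[Any]` because
# `Any` can materialize to `int`. The "top" materialization of `list[Any]`
# (a supertype of every possible materialization) and the "bottom" one
# (a subtype of all of them) cannot be written as `list[T]` for any single
# `T`, which is why the specialization needs to record this explicitly.
f([1, 2, 3])
```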

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-08-27 17:53:57 -07:00
Amethyst Reese
af259faed5 [flake8-async] Implement blocking-http-call-httpx (ASYNC212) (#20091)
## Summary

Adds a new rule to find and report blocking `httpx.Client` calls in
asynchronous functions.

See issue #8451
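A hedged sketch of the kind of code the rule is meant to flag (the exact set of covered methods may differ):

```py
import httpx

async def fetch_status(url: str) -> int:
    client = httpx.Client()
    response = client.get(url)  # blocking HTTP call inside an async function
    return response.status_code

async def fetch_status_async(url: str) -> int:
    async with httpx.AsyncClient() as client:  # async client instead
        response = await client.get(url)
        return response.status_code
```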

## Test Plan

New snapshots for `ASYNC212.py` with `cargo insta test`.
2025-08-27 15:19:01 -07:00
Leandro Braga
d75ef3823c [ty] print diagnostics with fully qualified name to disambiguate some cases (#19850)
There are some situations that we have a confusing diagnostics due to
identical class names.

## Class with same name from different modules

```python
import pandas
import polars

df: pandas.DataFrame = polars.DataFrame()
```

This yields the following error:

**Actual:**
error: [invalid-assignment] "Object of type `DataFrame` is not
assignable to `DataFrame`"
**Expected**:
error: [invalid-assignment] "Object of type `polars.DataFrame` is not
assignable to `pandas.DataFrame`"

## Nested classes

```python
from enum import Enum

class A:
    class B(Enum):
        ACTIVE = "active"
        INACTIVE = "inactive"

class C:
    class B(Enum):
        ACTIVE = "active"
        INACTIVE = "inactive"
```

**Actual**:
error: [invalid-assignment] "Object of type `Literal[B.ACTIVE]` is not
assignable to `B`"
**Expected**:
error: [invalid-assignment] "Object of type
`Literal[my_module.C.B.ACTIVE]` is not assignable to `my_module.A.B`"

## Solution

In this PR we added a heuristic to detect when to use a fully
qualified name:
- there is an invalid assignment, and
- the two types are different classes, and
- they have the same name

The fully qualified name always includes:
- module name
- nested classes name
- actual class name

There was no `QualifiedDisplay`, so I had to implement it from scratch.
I'm very new to the codebase and might have done things inefficiently,
so I'd appreciate feedback.

Should we pre-compute the fully qualified name or do it on demand? 

## Not implemented

### Function-local classes

Should we approach this in a different PR?

**Example**:
```python 
# t.py
from __future__ import annotations


def function() -> A:
    class A:
        pass

    return A()


class A:
    pass


a: A = function()
```

#### mypy

```console
t.py:8: error: Incompatible return value type (got "t.A@5", expected "t.A")  [return-value]
```

From my testing, the 5 in `A@5` comes from the line number.

#### ty

```console
error[invalid-return-type]: Return type does not match returned value
 --> t.py:4:19
  |
4 | def function() -> A:
  |                   - Expected `A` because of return type
5 |     class A:
6 |         pass
7 |
8 |     return A()
  |            ^^^ expected `A`, found `A`
  |
info: rule `invalid-return-type` is enabled by default
```

Fixes https://github.com/astral-sh/ty/issues/848

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Carl Meyer <carl@astral.sh>
2025-08-27 20:46:07 +00:00
Dan Parizher
89ca493fd9 [ruff] Preserve relative whitespace in multi-line expressions (RUF033) (#19647)
## Summary

Fixes #19581

I decided to add an `indent_first_line` function to
[`textwrap.rs`](https://github.com/astral-sh/ruff/blob/main/crates/ruff_python_trivia/src/textwrap.rs),
as that module solely focuses on text manipulation utilities. It follows
the same design as `indent()`, and there may be situations in the future
where it can be reused.
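A small Python sketch of the intended behavior (the actual helper is written in Rust in `textwrap.rs`; this is only an illustration):

```py
def indent_first_line(text: str, prefix: str) -> str:
    """Prefix only the first line, leaving the relative indentation of the
    remaining lines untouched (illustration of the Rust helper's intent)."""
    first, sep, rest = text.partition("\n")
    return prefix + first + sep + rest

snippet = "field(\n    default_factory=list,\n)"
print(indent_first_line(snippet, "    "))
```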

---------

Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-08-27 19:15:44 +00:00
David Peter
4b80f5fa4f [ty] Optimize TDD atom ordering (#20098)
## Summary

While looking at some logging output that I added to
`ReachabilityConstraintBuilder::add_and_constraint` in order to debug
https://github.com/astral-sh/ty/issues/1091, I noticed that it seemed to
suggest that the TDD was built in an imbalanced way for code like the
following, where we have a sequence of non-nested `if` conditions:

```py
def f(t1, t2, t3, t4, …):
    x = 0
    if t1:
        x = 1
    if t2:
        x = 2
    if t3:
        x = 3
    if t4:
        x = 4
    …
```

To understand this a bit better, I added some code to the
`ReachabilityConstraintBuilder` to render the resulting TDD. On `main`,
we get a tree that looks like the following, where you can see a pattern
of N sub-trees that grow linearly with N (number of `if` statements).
This results in an overall tree structure that has N² nodes (see graph
below):

<img alt="normal order"
src="https://github.com/user-attachments/assets/aab40ce9-e82a-4fcd-823a-811f05f15f66"
/>

If we zoom in to one of these subgraphs, we can see what the problem is.
When we add new constraints that represent combinations like `t1 AND ~t2
AND ~t3 AND t4 AND …`, they start with the evaluation of "early"
conditions (`t1`, `t2`, …). This means that we have to create new
subgraphs for each new `if` condition because there is little sharing
with the previous structure. We evaluate the Boolean condition in a
right-associative way: `t1 AND (~t2 AND (~t3 AND t4)))`:

<img width="500" align="center"
src="https://github.com/user-attachments/assets/31ea7182-9e00-4975-83df-d980464f545d"
/>

If we change the ordering of TDD atoms, we can change that to a
left-associative evaluation: `(((t1 AND ~t2) AND ~t3) AND t4) …`. This
means that we can re-use previous subgraphs `(t1 AND ~t2)`, which
results in a much more compact graph structure overall (note how "late"
conditions are now at the top, and "early" conditions are further down
in the graph):

<img alt="reverse order"
src="https://github.com/user-attachments/assets/96a6b7c1-3d35-4192-a917-0b2d24c6b144"
/>

If we count the number of TDD nodes for a growing number of `if`
statements, we can see that this change results in slower growth. It's
worth noting that the growth is still superlinear, though:

<img width="800" height="600" alt="plot"
src="https://github.com/user-attachments/assets/22e8394f-e74e-4a9e-9687-0d41f94f2303"
/>

On the actual code from the referenced ticket (the `t_main.py` file
reduced to its main function, with the main function limited to 2000
lines instead of 11000 to allow the version on `main` to run to
completion), the effect is much more dramatic. Instead of 26 million TDD
nodes (`main`), we now only create 250 thousand (this branch), which is
slightly less than 1%.

The change in this PR allows us to build the semantic index and
type-check the problematic `t_main.py` file in
https://github.com/astral-sh/ty/issues/1091 in 9 seconds. This is still
not great, but an obvious improvement compared to running out of memory
after *minutes* of execution.

An open question remains whether this change is beneficial for all kinds
of code patterns, or just this linear sequence of `if` statements. It
does not seem unreasonable to think that referring to "earlier"
conditions is generally a good idea, but I learned from Doug that it's
generally not possible to find a TDD-construction heuristic that is
non-pathological for all kinds of inputs. Fortunately, it seems like
this change here results in performance improvements across *all of our
benchmarks*, which should increase the confidence in this change:

| Benchmark           | Improvement |
|---------------------|-------------------------|
| hydra-zen           | +13%                    |
| DateType            | +5%                     |
| sympy (walltime)    | +4%                     |
| attrs               | +4%                     |
| pydantic (walltime) | +2%                     |
| pandas (walltime)   | +2%                     |
| altair (walltime)   | +2%                     |
| static-frame        | +2%                     |
| anyio               | +1%                     |
| freqtrade           | +1%                     |
| colour-science      | +1%                     |
| tanjun              | +1%                     |

closes https://github.com/astral-sh/ty/issues/1091

---------

Co-authored-by: Douglas Creager <dcreager@dcreager.net>
2025-08-27 20:42:09 +02:00
Wei Lee
5663426d73 [airflow] Extend AIR311 and AIR312 rules (#20082)

## Summary


Extend the following rules.

### AIR311
* `airflow.sensors.base.BaseSensorOperator` →
`airflow.sdk.bases.sensor.BaseSensorOperator`
* `airflow.sensors.base.PokeReturnValue` →
`airflow.sdk.bases.sensor.PokeReturnValue`
* `airflow.sensors.base.poke_mode_only` →
`airflow.sdk.bases.sensor.poke_mode_only`
* `airflow.decorators.base.DecoratedOperator` →
`airflow.sdk.bases.decorator.DecoratedOperator`
* `airflow.models.param.Param` → `airflow.sdk.definitions.param.Param`
* `airflow.decorators.base.DecoratedMappedOperator` →
`airflow.sdk.bases.decorator.DecoratedMappedOperator`
* `airflow.decorators.base.DecoratedOperator` →
`airflow.sdk.bases.decorator.DecoratedOperator`
* `airflow.decorators.base.TaskDecorator` →
`airflow.sdk.bases.decorator.TaskDecorator`
* `airflow.decorators.base.get_unique_task_id` →
`airflow.sdk.bases.decorator.get_unique_task_id`
* `airflow.decorators.base.task_decorator_factory` →
`airflow.sdk.bases.decorator.task_decorator_factory`


### AIR312
* `airflow.sensors.bash.BashSensor` →
`airflow.providers.standard.sensor.bash.BashSensor`
* `airflow.sensors.python.PythonSensor` →
`airflow.providers.standard.sensors.python.PythonSensor`
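For illustration, a before/after sketch for one of the AIR311 entries above:

```py
# Before: deprecated import location, flagged by AIR311
from airflow.sensors.base import PokeReturnValue, poke_mode_only

# After: the airflow.sdk location suggested by the rule
from airflow.sdk.bases.sensor import PokeReturnValue, poke_mode_only
```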



## Test Plan


Update the test fixture accordingly in the second commit and reorganize it
in the third.
2025-08-27 14:11:22 -04:00
David Peter
0b3548755c [ty] Preserve qualifiers when accessing attributes on unions/intersections (#20114)
## Summary

Properly preserve type qualifiers when accessing attributes on unions
and intersections. This is a prerequisite for
https://github.com/astral-sh/ruff/pull/19579.

Also fix a completely wrong implementation of
`map_with_boundness_and_qualifiers`. It now closely follows
`map_with_boundness` (just above).
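A rough example of why qualifiers need to survive attribute access on a union (hypothetical, not taken from the PR):

```py
from typing import ClassVar

class A:
    x: ClassVar[int] = 0

class B:
    x: ClassVar[int] = 1

def f(obj: A | B) -> None:
    # The attribute type is `int` for both union members, but the `ClassVar`
    # qualifier must be preserved through the union so that writes through
    # an instance can still be rejected.
    obj.x = 2  # error: cannot assign to a ClassVar via an instance
```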

## Test Plan

I thought about it, but didn't find any easy way to test this. This only
affected `Type::member`. Things like validation of attribute writes
(where type qualifiers like `ClassVar` and `Final` are important) were
already handling things correctly.
2025-08-27 20:01:45 +02:00
Alex Waygood
ce1dc21e7e [ty] Fix the inferred interface of specialized generic protocols (#19866) 2025-08-27 18:16:15 +01:00
Alex Waygood
7d0c8e045c [ty] Infer slightly more precise types for comprehensions (#20111) 2025-08-27 13:21:47 +01:00
Alex Waygood
d71518b369 [ty] Add more tests for protocols (#20095)
Co-authored-by: Shunsuke Shibayama <sbym1346@gmail.com>
2025-08-27 12:56:14 +01:00
Carl Meyer
9ab276b345 [ty] don't eagerly unpack aliases in user-authored unions (#20055)
## Summary

Add a subtly different test case for recursive PEP 695 type aliases,
which does require that we relax our union simplification, so we don't
eagerly unpack aliases from user-provided union annotations.

## Test Plan

Added mdtest.
2025-08-26 16:29:45 -07:00
chiri
a60fb3f2c8 [flake8-use-pathlib] Update links to the table showing the correspondence between os and pathlib (#20103)

## Summary

Part of https://github.com/astral-sh/ruff/pull/20100 |
https://github.com/astral-sh/ruff/pull/20100#issuecomment-3225349156
2025-08-26 17:33:33 -04:00
chiri
f558bf721c [flake8-use-pathlib] Make PTH100 fix unsafe because it can change behavior (#20100)

## Summary
Fixes https://github.com/astral-sh/ruff/issues/20088
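For context, a small sketch of how the replacement can change behavior (assuming the usual PTH100 mapping from `os.path.abspath` to `Path.resolve`):

```py
import os.path
from pathlib import Path

# os.path.abspath normalizes the path purely lexically; it never touches
# symlinks, so "link/.." simply collapses.
print(os.path.abspath("link/../file.txt"))

# Path.resolve, the replacement suggested by PTH100, also resolves symlinks,
# so if "link" points somewhere else the two results can differ.
print(Path("link/../file.txt").resolve())
```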

## Test Plan

`cargo nextest run flake8_use_pathlib`

---------

Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-08-26 18:59:12 +00:00
chiri
ea1c080881 [flake8-use-pathlib] Delete unused Rule::OsSymlink enabled check (#20099)

## Summary
Part of #20009 (I forgot to delete it in this PR)

## Test Plan

2025-08-26 12:05:52 -04:00
Renkai Ge
73720c73be [ty] Add search paths info to unresolved import diagnostics (#20040)
Fixes https://github.com/astral-sh/ty/issues/457

---------

Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2025-08-26 11:01:16 -04:00
Hamir Mahal
136abace92 [flake8-logging-format] Add auto-fix for f-string logging calls (G004) (#19303)
Closes #19302


## Summary
This adds an auto-fix for the `Logging statement uses f-string` rule (G004),
so users don't have to resolve it manually.
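A hedged before/after sketch (assuming the fix rewrites the f-string into lazy `%`-style arguments; the exact shape of the generated fix may differ):

```py
import logging

name = "world"

# Flagged by G004: the f-string is formatted eagerly, even when the log
# level is disabled.
logging.info(f"Hello {name}")

# One possible fixed form: formatting is deferred to the logging framework.
logging.info("Hello %s", name)
```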

## Test Plan
I ran the auto-fixes on a Python file locally and it worked as expected.

---------

Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
2025-08-26 10:51:24 -04:00
Brent Westbrook
bc7274d148 Add a ScopeKind for the __class__ cell (#20048)
Summary
--

This PR aims to resolve (or help to resolve) #18442 and #19357 by
encoding the CPython semantics around the `__class__` cell in our
semantic model. Namely,

> `__class__` is an implicit closure reference created by the compiler
if any methods in a class body refer to either `__class__` or super.

from the Python
[docs](https://docs.python.org/3/reference/datamodel.html#creating-the-class-object).

As noted in the variant docs by @AlexWaygood, we don't fully model this
behavior, opting always to create the `__class__` cell binding in a new
`ScopeKind::DunderClassCell` around each method definition, without
checking if any method in the class body actually refers to `__class__`
or `super`.

As such, this PR fixes #18442 but not #19357.
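For reference, a minimal example of the implicit `__class__` closure described above:

```py
class C:
    def zero_arg_super(self) -> None:
        # Zero-argument super() works because the compiler creates an
        # implicit `__class__` closure cell for any method that references
        # `super` or `__class__`.
        super().__init__()

    def enclosing_class(self) -> type:
        # `__class__` resolves to the enclosing class through that same cell.
        return __class__

assert C().enclosing_class() is C
```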

Test Plan
--

Existing tests, plus the tests from #19783, which now pass without any
rule-specific code.

Note that we opted not to alter the behavior of F841 here because
flagging `__class__` in these cases still seems helpful. See the
discussion in
https://github.com/astral-sh/ruff/pull/20048#discussion_r2296252395 and
in the test comments for more information.

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Mikko Leppänen <mleppan23@gmail.com>
2025-08-26 09:49:08 -04:00
Avasam
911d5cc973 Fix incorrect D413 links in docstrings convention FAQ (#20089)
## Summary

D413 in this section was incorrectly linking to D410.

I haven't checked if this issue happens anywhere else in the docs.

## Test Plan

Look at docs
2025-08-26 10:24:58 +05:30
Matthew Mckee
8d6dc7d3a3 [ty] Refactor inlay hints structure to use separate parts (#20052)
## Summary

Our internal inlay hints structure (`ty_ide::InlayHint`) now more
closely resembles `lsp_types::InlayHint`.

This mainly allows us to convert to `lsp_types::InlayHint` with less
hassle, but it also allows us to manage the different parts of the inlay
hint better, which in the future will allow us to implement features
like goto on the type part of the type inlay hint.

It also really isn't important to store a specific `Type` instance in
the `InlayHintContent`, so we remove it and use `InlayHintLabel`
instead, which just shows the representation of the type (along with
other information).

We see a similar structure used in rust-analyzer too.
2025-08-26 10:21:31 +05:30
Dylan
ef4897f9f3 [ty] Add support for PEP 750 t-strings (#20085)
This PR attempts to add support for inferring `string.templatelib.Template` for t-string literals.
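A minimal sketch of what this enables (Python 3.14 syntax per PEP 750; the revealed type shown is the expectation this PR targets):

```py
from string.templatelib import Template

name = "world"
greeting = t"Hello {name}"

reveal_type(greeting)  # revealed: Template
```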
2025-08-25 18:49:49 +00:00
Alex Waygood
ecf3c4ca11 [ty] Add support for PEP 800 (#20084) 2025-08-25 19:39:05 +01:00
Carl Meyer
33c5f6f4f8 [ty] don't mark entire type-alias scopes as Deferred (#20086)
## Summary

This has been here for a while (since our initial PEP 695 type alias
support) but isn't really correct. The right-hand-side of a PEP 695 type
alias is a distinct scope, and we don't mark it as an "eager" nested
scope, so it automatically gets "deferred" resolution of names from
outer scopes (just like a nested function). Thus it's
redundant/unnecessary for us to use `DeferredExpressionState::Deferred`
for resolving that RHS expression -- that's for deferring resolution of
individual names within a scope. Using it here causes us to wrongly
ignore applicable outer-scope narrowing.

## Test Plan

Added an mdtest that failed before this PR (the second snippet; the first
snippet always passed).
2025-08-25 11:32:18 -07:00
github-actions[bot]
ba47010150 [ty] Sync vendored typeshed stubs (#20083)
Co-authored-by: typeshedbot <>
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2025-08-25 17:01:51 +00:00
Wei Lee
db423ee978 [airflow] replace wrong path airflow.io.stroage as airflow.io.store (AIR311) (#20081)
## Summary

`airflow.io.storage` is not the correct path; it should be
`airflow.io.store` instead.
2025-08-25 10:15:34 -05:00
Alex Waygood
a04823cfad [ty] Completely ignore typeshed's stub for Any (#20079) 2025-08-25 15:27:55 +01:00
Brent Westbrook
d0bcf56bd9 Improve diff rendering for notebooks (#20036)
## Summary

As noted in a code TODO, our `Diff` rendering code previously didn't
have any
special handling for notebooks. This was particularly obvious when the
diffs
were rendered right next to the corresponding diagnostic because the
diagnostic
used cell-based line numbers, while the diff was still using line
numbers from
the concatenated source. This PR updates the diff rendering to handle
notebooks
too.

The main improvements shown in the example below are:

- Line numbers are now remapped to be relative to their cell
- Context lines from other cells are suppressed

```
error[unused-import][*]: `math` imported but unused                             
 --> notebook.ipynb:cell 2:2:8                                                  
  |                                                                             
1 | # cell 2                                                                    
2 | import math                                                                 
  |        ^^^^                                                                 
3 |                                                                             
4 | print('hello world')                                                        
  |                                                                             
help: Remove unused import: `math`                                              
                                                                                
ℹ Safe fix                                                                      
1 1 | # cell 2                                                                  
2   |-import math                                                               
3 2 |                                                                           
4 3 | print('hello world')                                                      
```

I tried a few different approaches here before finally just splitting
the notebook into separate text ranges by cell and diffing each one
separately. It seems to work and passes all of our tests, but I don't
know if it's actually enforced anywhere that a single edit doesn't span
cells. Such an edit would silently be dropped right now since it would
fail the `contains_range` check. I also feel like I may have overlooked
an existing way to partition a file into cells like this.

## Test Plan

Existing notebook tests, plus a new one in `ruff_db`
2025-08-25 09:20:42 -04:00
Eric Jolibois
f9bbee33f6 [ty] validate constructor call of TypedDict (#19810)
## Summary
Implement validation for `TypedDict` constructor calls and dictionary
literal assignments, including support for `total=False` and proper
field management.
Also add support for `Required` and `NotRequired` type qualifiers in
`TypedDict` classes, along with proper inheritance behavior and the
`total=` parameter.
Both constructor calls and dict literal syntax are supported.

part of https://github.com/astral-sh/ty/issues/154

### Basic Required Field Validation
```py
class Person(TypedDict):
    name: str
    age: int | None

# Error: Missing required field 'name' in TypedDict `Person` constructor
incomplete = Person(age=25)

# Error: Invalid argument to key "name" with declared type `str` on TypedDict `Person`
wrong_type = Person(name=123, age=25)

# Error: Invalid key access on TypedDict `Person`: Unknown key "extra"
extra_field = Person(name="Bob", age=25, extra=True)
```
<img width="773" height="191" alt="Screenshot 2025-08-07 at 17 59 22"
src="https://github.com/user-attachments/assets/79076d98-e85f-4495-93d6-a731aa72a5c9"
/>

### Support for `total=False`
```py
class OptionalPerson(TypedDict, total=False):
    name: str
    age: int | None

# All valid - all fields are optional with total=False
charlie = OptionalPerson()
david = OptionalPerson(name="David")
emily = OptionalPerson(age=30)
frank = OptionalPerson(name="Frank", age=25)

# But type validation and extra fields still apply
invalid_type = OptionalPerson(name=123)  # Error: Invalid argument type
invalid_extra = OptionalPerson(extra=True)  # Error: Invalid key access
```

### Dictionary Literal Validation
```py
# Type checking works for both constructors and dict literals
person: Person = {"name": "Alice", "age": 30}

reveal_type(person["name"])  # revealed: str
reveal_type(person["age"])   # revealed: int | None

# Error: Invalid key access on TypedDict `Person`: Unknown key "non_existing"
reveal_type(person["non_existing"])  # revealed: Unknown
```

### `Required`, `NotRequired`, `total`
```python
from typing import TypedDict
from typing_extensions import Required, NotRequired

class PartialUser(TypedDict, total=False):
    name: Required[str]      # Required despite total=False
    age: int                 # Optional due to total=False
    email: NotRequired[str]  # Explicitly optional (redundant)

class User(TypedDict):
    name: Required[str]      # Explicitly required (redundant)
    age: int                 # Required due to total=True
    bio: NotRequired[str]    # Optional despite total=True

# Valid constructions
partial = PartialUser(name="Alice")  # name required, age optional
full = User(name="Bob", age=25)      # name and age required, bio optional

# Inheritance maintains original field requirements
class Employee(PartialUser):
    department: str                  # Required (new field)
    # name: still Required (inherited)
    # age: still optional (inherited)

emp = Employee(name="Charlie", department="Engineering")  # OK
Employee(department="Engineering")  # error: missing required key "name"
e: Employee = {"age": 1}  # error: missing required keys
```

<img width="898" height="683" alt="Screenshot 2025-08-11 at 22 02 57"
src="https://github.com/user-attachments/assets/4c1b18cd-cb2e-493a-a948-51589d121738"
/>

## Implementation
The implementation reuses existing validation logic done in
https://github.com/astral-sh/ruff/pull/19782

### ℹ️ Why I did NOT synthesize an `__init__` for `TypedDict`:

`TypedDict` inherits `dict.__init__(self, *args, **kwargs)`, which accepts
all arguments.
The type resolution system finds this inherited signature **before**
looking for synthesized members.
So `own_synthesized_member()` is never called because a signature
already exists.

To force synthesis, you'd have to override Python’s inheritance
mechanism, which would break compatibility with the existing ecosystem.

This is why I went with ad-hoc validation. IMO it's the only viable
approach that respects Python’s
inheritance semantics while providing the required validation.

### Refactor of `Field`

**Before:**
```rust
struct Field<'db> {
    declared_ty: Type<'db>,
    default_ty: Option<Type<'db>>,     // NamedTuple and dataclass only
    init_only: bool,                   // dataclass only  
    init: bool,                        // dataclass only
    is_required: Option<bool>,         // TypedDict only
}
```

**After:**
```rust
struct Field<'db> {
    declared_ty: Type<'db>,
    kind: FieldKind<'db>,
}

enum FieldKind<'db> {
    NamedTuple { default_ty: Option<Type<'db>> },
    Dataclass { default_ty: Option<Type<'db>>, init_only: bool, init: bool },
    TypedDict { is_required: bool },
}
```

## Test Plan
Updated Markdown tests

---------

Co-authored-by: David Peter <mail@david-peter.de>
2025-08-25 14:45:52 +02:00
Dhruv Manilawala
376e3ff395 [ty] Limit argument expansion size for overload call evaluation (#20041)
## Summary

This PR limits the argument type expansion size for an overload call
evaluation to 512.

The limit chosen is somewhat arbitrary, but I took the 256 limit from Pyright
into account and doubled it to start with.
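A rough illustration of why expansion needs a cap (hypothetical overloads; only the combinatorial growth matters here):

```py
from typing import overload

@overload
def f(x: int, y: int) -> int: ...
@overload
def f(x: str, y: str) -> str: ...
def f(x, y):
    return x

def g(a: int | str, b: int | str) -> None:
    # Evaluating this call expands each union argument into its members,
    # producing 2 * 2 = 4 argument lists to check against the overloads.
    # With many expandable arguments the number of lists grows
    # multiplicatively, hence the cap of 512.
    f(a, b)
```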

Initially, I started out by trying to refactor the entire argument type
expansion to be lazy. Currently, expanding a single argument at any
position eagerly creates the combinations (argument lists) and returns
them as a `Vec<CallArguments>`. I thought we could make this lazier by
converting the return type of `expand` from
`Iterator<Item = Vec<CallArguments>>` to
`Iterator<Item = Iterator<Item = CallArguments>>`, but that's proving
difficult to implement, mainly because we **need** to keep the previous
expansion around to generate the next one, which is the main reason to
use `std::iter::successors` in the first place.

Another approach would be to eagerly expand all the argument types and
then use the `combinations` from `itertools` to generate the
combinations but we would need to find the "boundary" between arguments
lists produced from expanding argument at position 1 and position 2
because that's important for the algorithm.

Closes: https://github.com/astral-sh/ty/issues/868

## Test Plan

Add test case to demonstrate the limit along with the diagnostic
snapshot stating that the limit has been reached.
2025-08-25 09:43:04 +00:00
renovate[bot]
b57cc5be33 Update actions/checkout digest to ff7abcd (#20061)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| actions/checkout | action | digest | `08c6903` -> `ff7abcd` |


---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
2025-08-25 11:04:07 +05:30
renovate[bot]
3c1fe12259 Update Rust crate regex-automata to v0.4.10 (#20068)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
|
[regex-automata](https://redirect.github.com/rust-lang/regex/tree/master/regex-automata)
([source](https://redirect.github.com/rust-lang/regex)) |
workspace.dependencies | patch | `0.4.9` -> `0.4.10` |


### Release Notes

<details>
<summary>rust-lang/regex (regex-automata)</summary>

###
[`v0.4.10`](https://redirect.github.com/rust-lang/regex/compare/regex-automata-0.4.9...regex-automata-0.4.10)

[Compare
Source](https://redirect.github.com/rust-lang/regex/compare/regex-automata-0.4.9...regex-automata-0.4.10)

</details>


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-25 10:44:59 +05:30
renovate[bot]
e4289deb5a Update Rust crate regex to v1.11.2 (#20067)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [regex](https://redirect.github.com/rust-lang/regex) |
workspace.dependencies | patch | `1.11.1` -> `1.11.2` |


### Release Notes

<details>
<summary>rust-lang/regex (regex)</summary>

###
[`v1.11.2`](https://redirect.github.com/rust-lang/regex/blob/HEAD/CHANGELOG.md#1112-2025-08-24)

[Compare
Source](https://redirect.github.com/rust-lang/regex/compare/1.11.1...1.11.2)

This is a new patch release of `regex` with some minor fixes. A larger
number
of typo or lint fix patches were merged. Also, we now finally recommend
using
`std::sync::LazyLock`.

Improvements:

- [BUG
#&#8203;1217](https://redirect.github.com/rust-lang/regex/issues/1217):
  Switch recommendation from `once_cell` to `std::sync::LazyLock`.
- [BUG
#&#8203;1225](https://redirect.github.com/rust-lang/regex/issues/1225):
  Add `DFA::set_prefilter` to `regex-automata`.

Bug fixes:

- [BUG
#&#8203;1165](https://redirect.github.com/rust-lang/regex/pull/1150):
Remove `std` dependency from `perf-literal-multisubstring` crate
feature.
- [BUG
#&#8203;1165](https://redirect.github.com/rust-lang/regex/pull/1165):
  Clarify the meaning of `(?R)$` in the documentation.
- [BUG
#&#8203;1281](https://redirect.github.com/rust-lang/regex/pull/1281):
Remove `fuzz/` and `record/` directories from published crate on
crates.io.

</details>


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-25 10:44:47 +05:30
renovate[bot]
dbbcb7f452 Update cargo-bins/cargo-binstall action to v1.15.1 (#20075)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
|
[cargo-bins/cargo-binstall](https://redirect.github.com/cargo-bins/cargo-binstall)
| action | minor | `v1.14.4` -> `v1.15.1` |


### Release Notes

<details>
<summary>cargo-bins/cargo-binstall (cargo-bins/cargo-binstall)</summary>

###
[`v1.15.1`](https://redirect.github.com/cargo-bins/cargo-binstall/releases/tag/v1.15.1)

[Compare
Source](https://redirect.github.com/cargo-bins/cargo-binstall/compare/v1.15.0...v1.15.1)

*Binstall is a tool to fetch and install Rust-based executables as
binaries. It aims to be a drop-in replacement for `cargo install` in
most cases. Install it today with `cargo install cargo-binstall`, from
the binaries below, or if you already have it, upgrade with `cargo
binstall cargo-binstall`.*

##### In this release:

- fix failure to create settings manifest file
([#&#8203;2268](https://redirect.github.com/cargo-bins/cargo-binstall/issues/2268)
[#&#8203;2271](https://redirect.github.com/cargo-bins/cargo-binstall/issues/2271))
- fix race condition in creating, loading and writing of settings
manifest file
([#&#8203;2272](https://redirect.github.com/cargo-bins/cargo-binstall/issues/2272))
- fix infinite hang on `--self-install` due to the quickinstall consent
([#&#8203;2269](https://redirect.github.com/cargo-bins/cargo-binstall/issues/2269)
[#&#8203;2273](https://redirect.github.com/cargo-bins/cargo-binstall/issues/2273))

###
[`v1.15.0`](https://redirect.github.com/cargo-bins/cargo-binstall/releases/tag/v1.15.0)

[Compare
Source](https://redirect.github.com/cargo-bins/cargo-binstall/compare/v1.14.4...v1.15.0)

*Binstall is a tool to fetch and install Rust-based executables as
binaries. It aims to be a drop-in replacement for `cargo install` in
most cases. Install it today with `cargo install cargo-binstall`, from
the binaries below, or if you already have it, upgrade with `cargo
binstall cargo-binstall`.*

##### In this release:

- confirm on the first time using quickinstall
([#&#8203;2223](https://redirect.github.com/cargo-bins/cargo-binstall/issues/2223))

##### Other changes:

- upgrade dependencies

</details>


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-25 10:43:27 +05:30
renovate[bot]
c6dfdb1d39 Update Rust crate url to v2.5.7 (#20073)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [url](https://redirect.github.com/servo/rust-url) |
workspace.dependencies | patch | `2.5.4` -> `2.5.7` |


### Release Notes

<details>
<summary>servo/rust-url (url)</summary>

###
[`v2.5.5`](https://redirect.github.com/servo/rust-url/releases/tag/v2.5.5)

[Compare
Source](https://redirect.github.com/servo/rust-url/compare/v2.5.4...v2.5.5)

##### What's Changed

- ci: downgrade crates when building for Rust 1.67.0 by
[@&#8203;mxinden](https://redirect.github.com/mxinden) in
[https://github.com/servo/rust-url/pull/1003](https://redirect.github.com/servo/rust-url/pull/1003)
- ci: run unit tests with sanitizers by
[@&#8203;mxinden](https://redirect.github.com/mxinden) in
[https://github.com/servo/rust-url/pull/1002](https://redirect.github.com/servo/rust-url/pull/1002)
- fix small typo by [@&#8203;hkBst](https://redirect.github.com/hkBst)
in
[https://github.com/servo/rust-url/pull/1011](https://redirect.github.com/servo/rust-url/pull/1011)
- chore: fix clippy errors on main by
[@&#8203;dsherret](https://redirect.github.com/dsherret) in
[https://github.com/servo/rust-url/pull/1019](https://redirect.github.com/servo/rust-url/pull/1019)
- perf: remove heap allocation in parse\_query by
[@&#8203;dsherret](https://redirect.github.com/dsherret) in
[https://github.com/servo/rust-url/pull/1020](https://redirect.github.com/servo/rust-url/pull/1020)
- perf: slightly improve parsing a port by
[@&#8203;dsherret](https://redirect.github.com/dsherret) in
[https://github.com/servo/rust-url/pull/1022](https://redirect.github.com/servo/rust-url/pull/1022)
- perf: improve to\_file\_path() by
[@&#8203;dsherret](https://redirect.github.com/dsherret) in
[https://github.com/servo/rust-url/pull/1018](https://redirect.github.com/servo/rust-url/pull/1018)
- perf: make parse\_scheme slightly faster by
[@&#8203;dsherret](https://redirect.github.com/dsherret) in
[https://github.com/servo/rust-url/pull/1025](https://redirect.github.com/servo/rust-url/pull/1025)
- update LICENSE-MIT by
[@&#8203;wmjae](https://redirect.github.com/wmjae) in
[https://github.com/servo/rust-url/pull/1029](https://redirect.github.com/servo/rust-url/pull/1029)
- perf: url encode path segments in longer string slices by
[@&#8203;dsherret](https://redirect.github.com/dsherret) in
[https://github.com/servo/rust-url/pull/1026](https://redirect.github.com/servo/rust-url/pull/1026)
- Disable the default features on serde by
[@&#8203;rilipco](https://redirect.github.com/rilipco) in
[https://github.com/servo/rust-url/pull/1033](https://redirect.github.com/servo/rust-url/pull/1033)
- docs: base url relative join by
[@&#8203;tisonkun](https://redirect.github.com/tisonkun) in
[https://github.com/servo/rust-url/pull/1013](https://redirect.github.com/servo/rust-url/pull/1013)
- perf: remove heap allocation in parse\_host by
[@&#8203;dsherret](https://redirect.github.com/dsherret) in
[https://github.com/servo/rust-url/pull/1021](https://redirect.github.com/servo/rust-url/pull/1021)
- Update tests to Unicode 16.0 by
[@&#8203;hsivonen](https://redirect.github.com/hsivonen) in
[https://github.com/servo/rust-url/pull/1045](https://redirect.github.com/servo/rust-url/pull/1045)
- Add some some basic functions to `Mime` by
[@&#8203;mrobinson](https://redirect.github.com/mrobinson) in
[https://github.com/servo/rust-url/pull/1047](https://redirect.github.com/servo/rust-url/pull/1047)
- ran `cargo clippy --fix -- -Wclippy::use_self` by
[@&#8203;mrobinson](https://redirect.github.com/mrobinson) in
[https://github.com/servo/rust-url/pull/1048](https://redirect.github.com/servo/rust-url/pull/1048)
- Fix MSRV and clippy CI by
[@&#8203;Manishearth](https://redirect.github.com/Manishearth) in
[https://github.com/servo/rust-url/pull/1058](https://redirect.github.com/servo/rust-url/pull/1058)
- Update `Url::domain` docs to show that it includes subdomain by
[@&#8203;supercoolspy](https://redirect.github.com/supercoolspy) in
[https://github.com/servo/rust-url/pull/1057](https://redirect.github.com/servo/rust-url/pull/1057)
- set\_hostname should error when encountering colon ':' by
[@&#8203;edgul](https://redirect.github.com/edgul) in
[https://github.com/servo/rust-url/pull/1060](https://redirect.github.com/servo/rust-url/pull/1060)
- version bump to 2.5.5 by
[@&#8203;edgul](https://redirect.github.com/edgul) in
[https://github.com/servo/rust-url/pull/1061](https://redirect.github.com/servo/rust-url/pull/1061)

##### New Contributors

- [@&#8203;mxinden](https://redirect.github.com/mxinden) made their
first contribution in
[https://github.com/servo/rust-url/pull/1003](https://redirect.github.com/servo/rust-url/pull/1003)
- [@&#8203;hkBst](https://redirect.github.com/hkBst) made their first
contribution in
[https://github.com/servo/rust-url/pull/1011](https://redirect.github.com/servo/rust-url/pull/1011)
- [@&#8203;wmjae](https://redirect.github.com/wmjae) made their first
contribution in
[https://github.com/servo/rust-url/pull/1029](https://redirect.github.com/servo/rust-url/pull/1029)
- [@&#8203;rilipco](https://redirect.github.com/rilipco) made their
first contribution in
[https://github.com/servo/rust-url/pull/1033](https://redirect.github.com/servo/rust-url/pull/1033)
- [@&#8203;tisonkun](https://redirect.github.com/tisonkun) made their
first contribution in
[https://github.com/servo/rust-url/pull/1013](https://redirect.github.com/servo/rust-url/pull/1013)
- [@&#8203;supercoolspy](https://redirect.github.com/supercoolspy) made
their first contribution in
[https://github.com/servo/rust-url/pull/1057](https://redirect.github.com/servo/rust-url/pull/1057)

**Full Changelog**:
https://github.com/servo/rust-url/compare/v2.5.4...v2.5.5

</details>


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-25 10:42:38 +05:30
renovate[bot]
c65029f9a5 Update Rust crate syn to v2.0.106 (#20070)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [syn](https://redirect.github.com/dtolnay/syn) |
workspace.dependencies | patch | `2.0.104` -> `2.0.106` |


### Release Notes

<details>
<summary>dtolnay/syn (syn)</summary>

###
[`v2.0.106`](https://redirect.github.com/dtolnay/syn/releases/tag/2.0.106)

[Compare
Source](https://redirect.github.com/dtolnay/syn/compare/2.0.105...2.0.106)

- Replace `~const` syntax with `[const]` conditionally const syntax in
trait bounds
([#&#8203;1896](https://redirect.github.com/dtolnay/syn/issues/1896),
[rust-lang/rust#139858](https://redirect.github.com/rust-lang/rust/pull/139858))
- Support conditionally const impl Trait types
([#&#8203;1897](https://redirect.github.com/dtolnay/syn/issues/1897))
- Reject polarity modifier and lifetime binder used in the same trait
bound
([#&#8203;1899](https://redirect.github.com/dtolnay/syn/issues/1899),
[rust-lang/rust#127054](https://redirect.github.com/rust-lang/rust/pull/127054))
- Parse const trait bounds with bound lifetimes
([#&#8203;1902](https://redirect.github.com/dtolnay/syn/issues/1902))
- Parse bound lifetimes with lifetime bounds
([#&#8203;1903](https://redirect.github.com/dtolnay/syn/issues/1903))
- Allow type parameters and const parameters in trait bounds and generic
closures
([#&#8203;1904](https://redirect.github.com/dtolnay/syn/issues/1904),
[#&#8203;1907](https://redirect.github.com/dtolnay/syn/issues/1907),
[#&#8203;1908](https://redirect.github.com/dtolnay/syn/issues/1908),
[#&#8203;1909](https://redirect.github.com/dtolnay/syn/issues/1909))

###
[`v2.0.105`](https://redirect.github.com/dtolnay/syn/releases/tag/2.0.105)

[Compare
Source](https://redirect.github.com/dtolnay/syn/compare/2.0.104...2.0.105)

- Disallow "negative" inherent impls like `impl !T {}`
([#&#8203;1881](https://redirect.github.com/dtolnay/syn/issues/1881),
[rust-lang/rust#144386](https://redirect.github.com/rust-lang/rust/pull/144386))

</details>


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-25 10:41:48 +05:30
renovate[bot]
a0bba718f6 Update Rust crate tracing-indicatif to v0.3.13 (#20072)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
|
[tracing-indicatif](https://redirect.github.com/emersonford/tracing-indicatif)
| workspace.dependencies | patch | `0.3.12` -> `0.3.13` |


### Release Notes

<details>
<summary>emersonford/tracing-indicatif (tracing-indicatif)</summary>

###
[`v0.3.13`](https://redirect.github.com/emersonford/tracing-indicatif/blob/HEAD/CHANGELOG.md#0313---2025-08-15)

[Compare
Source](https://redirect.github.com/emersonford/tracing-indicatif/compare/0.3.12...0.3.13)

- eliminate panics on internal lock poison

</details>


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-25 10:41:33 +05:30
renovate[bot]
1eab4dbd95 Update Rust crate thiserror to v2.0.16 (#20071)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [thiserror](https://redirect.github.com/dtolnay/thiserror) |
workspace.dependencies | patch | `2.0.12` -> `2.0.16` |


### Release Notes

<details>
<summary>dtolnay/thiserror (thiserror)</summary>

###
[`v2.0.16`](https://redirect.github.com/dtolnay/thiserror/releases/tag/2.0.16)

[Compare
Source](https://redirect.github.com/dtolnay/thiserror/compare/2.0.15...2.0.16)

- Add to "no-std" crates.io category
([#&#8203;429](https://redirect.github.com/dtolnay/thiserror/issues/429))

###
[`v2.0.15`](https://redirect.github.com/dtolnay/thiserror/releases/tag/2.0.15)

[Compare
Source](https://redirect.github.com/dtolnay/thiserror/compare/2.0.14...2.0.15)

- Prevent `Error::provide` API becoming unavailable from a future new
compiler lint
([#&#8203;427](https://redirect.github.com/dtolnay/thiserror/issues/427))

###
[`v2.0.14`](https://redirect.github.com/dtolnay/thiserror/releases/tag/2.0.14)

[Compare
Source](https://redirect.github.com/dtolnay/thiserror/compare/2.0.13...2.0.14)

- Allow build-script cleanup failure with NFSv3 output directory to be
non-fatal
([#&#8203;426](https://redirect.github.com/dtolnay/thiserror/issues/426))

###
[`v2.0.13`](https://redirect.github.com/dtolnay/thiserror/releases/tag/2.0.13)

[Compare
Source](https://redirect.github.com/dtolnay/thiserror/compare/2.0.12...2.0.13)

- Documentation improvements

</details>


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-25 10:41:18 +05:30
renovate[bot]
3eb3c3572b Update astral-sh/setup-uv action to v6.6.0 (#20074)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [astral-sh/setup-uv](https://redirect.github.com/astral-sh/setup-uv) |
action | minor | `v6.4.3` -> `v6.6.0` |


### Release Notes

<details>
<summary>astral-sh/setup-uv (astral-sh/setup-uv)</summary>

###
[`v6.6.0`](https://redirect.github.com/astral-sh/setup-uv/releases/tag/v6.6.0):
🌈 Support for .tools-versions

[Compare
Source](https://redirect.github.com/astral-sh/setup-uv/compare/v6.5.0...v6.6.0)

##### Changes

This release adds support for [asdf](https://asdf-vm.com/)
`.tool-versions` in the `version-file` input

##### 🐛 Bug fixes

- Add log message before long API calls to GitHub
[@&#8203;eifinger](https://redirect.github.com/eifinger)
([#&#8203;530](https://redirect.github.com/astral-sh/setup-uv/issues/530))

##### 🚀 Enhancements

- Add support for .tools-versions
[@&#8203;eifinger](https://redirect.github.com/eifinger)
([#&#8203;531](https://redirect.github.com/astral-sh/setup-uv/issues/531))

##### 🧰 Maintenance

- Bump dependencies
[@&#8203;eifinger](https://redirect.github.com/eifinger)
([#&#8203;532](https://redirect.github.com/astral-sh/setup-uv/issues/532))
- chore: update known versions for 0.8.12
@&#8203;[github-actions\[bot\]](https://redirect.github.com/apps/github-actions)
([#&#8203;529](https://redirect.github.com/astral-sh/setup-uv/issues/529))
- chore: update known versions for 0.8.11
@&#8203;[github-actions\[bot\]](https://redirect.github.com/apps/github-actions)
([#&#8203;526](https://redirect.github.com/astral-sh/setup-uv/issues/526))
- chore: update known versions for 0.8.10
@&#8203;[github-actions\[bot\]](https://redirect.github.com/apps/github-actions)
([#&#8203;525](https://redirect.github.com/astral-sh/setup-uv/issues/525))

###
[`v6.5.0`](https://redirect.github.com/astral-sh/setup-uv/releases/tag/v6.5.0):
🌈 Better error messages, bug fixes and copilot agent settings

[Compare
Source](https://redirect.github.com/astral-sh/setup-uv/compare/v6.4.3...v6.5.0)

##### Changes

This release brings better error messages in case the GitHub API is
impacted, fixes a few bugs and allows to disable [problem
matchers](https://redirect.github.com/actions/toolkit/blob/main/docs/problem-matchers.md)
for better use in Copilot Agent workspaces.

##### 🐛 Bug fixes

- Improve error messages on GitHub API errors
[@&#8203;eifinger](https://redirect.github.com/eifinger)
([#&#8203;518](https://redirect.github.com/astral-sh/setup-uv/issues/518))
- Ignore backslashes and whitespace in requirements
[@&#8203;axm2](https://redirect.github.com/axm2)
([#&#8203;501](https://redirect.github.com/astral-sh/setup-uv/issues/501))

##### 🚀 Enhancements

- Add input add-problem-matchers
[@&#8203;eifinger](https://redirect.github.com/eifinger)
([#&#8203;517](https://redirect.github.com/astral-sh/setup-uv/issues/517))

##### 🧰 Maintenance

- chore: update known versions for 0.8.9
@&#8203;[github-actions\[bot\]](https://redirect.github.com/apps/github-actions)
([#&#8203;512](https://redirect.github.com/astral-sh/setup-uv/issues/512))
- chore: update known versions for 0.8.6-0.8.8
@&#8203;[github-actions\[bot\]](https://redirect.github.com/apps/github-actions)
([#&#8203;510](https://redirect.github.com/astral-sh/setup-uv/issues/510))
- chore: update known versions for 0.8.5
@&#8203;[github-actions\[bot\]](https://redirect.github.com/apps/github-actions)
([#&#8203;509](https://redirect.github.com/astral-sh/setup-uv/issues/509))
- chore: update known versions for 0.8.4
@&#8203;[github-actions\[bot\]](https://redirect.github.com/apps/github-actions)
([#&#8203;505](https://redirect.github.com/astral-sh/setup-uv/issues/505))
- chore: update known versions for 0.8.3
@&#8203;[github-actions\[bot\]](https://redirect.github.com/apps/github-actions)
([#&#8203;502](https://redirect.github.com/astral-sh/setup-uv/issues/502))

##### 📚 Documentation

- add note on caching to read disable-cache-pruning
[@&#8203;eifinger](https://redirect.github.com/eifinger)
([#&#8203;506](https://redirect.github.com/astral-sh/setup-uv/issues/506))

##### ⬆️ Dependency updates

- Bump actions/checkout from 4 to 5
@&#8203;[dependabot\[bot\]](https://redirect.github.com/apps/dependabot)
([#&#8203;514](https://redirect.github.com/astral-sh/setup-uv/issues/514))
- bump dependencies
[@&#8203;eifinger](https://redirect.github.com/eifinger)
([#&#8203;516](https://redirect.github.com/astral-sh/setup-uv/issues/516))
- Bump biome to v2
[@&#8203;eifinger](https://redirect.github.com/eifinger)
([#&#8203;515](https://redirect.github.com/astral-sh/setup-uv/issues/515))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-25 10:40:46 +05:30
renovate[bot]
41bb24a87e Update Rust crate serde_json to v1.0.143 (#20069) 2025-08-24 22:20:39 -04:00
renovate[bot]
87f0da139a Update Rust crate proc-macro2 to v1.0.101 (#20066) 2025-08-24 22:20:24 -04:00
renovate[bot]
f9bfc9ab5b Update Rust crate ordermap to v0.5.9 (#20065) 2025-08-24 22:20:17 -04:00
renovate[bot]
59f7102606 Update Rust crate bitflags to v2.9.3 (#20063) 2025-08-24 22:19:54 -04:00
renovate[bot]
862d2d0687 Update dependency ruff to v0.12.10 (#20062) 2025-08-24 22:19:44 -04:00
renovate[bot]
48edf46f3b Update Rust crate filetime to v0.2.26 (#20064) 2025-08-24 22:19:34 -04:00
Jelle Zijlstra
ec86a4e960 [ty] Add Top[] and Bottom[] special forms, replacing top_materialization_of() function (#20054)
Part of astral-sh/ty#994

## Summary

Add new special forms to `ty_extensions`, `Top[T]` and `Bottom[T]`.
Remove `ty_extensions.top_materialization` and
`ty_extensions.bottom_materialization`.

## Test Plan

Converted the existing `materialization.md` mdtest to the new syntax.
Added some tests for invalid use of the new special form.
2025-08-23 11:20:56 -07:00
Andrew Gallant
e7237652a9 [ty] Lightly refactor document symbols AST visitor
This makes use of early continue/return to keep rightward drift under
control. (I also find it easier to read.)
2025-08-23 12:53:41 -04:00
Andrew Gallant
205eae14d2 [ty] Rejigger workspace symbols for more efficient caching
In effect, we make the Salsa query aspect keyed only on whether we want
global symbols. We move everything else (hierarchical and querying) to
an aggregate step *after* the query.

This was a somewhat involved change since we want to return a flattened
list from visiting the source while also preserving enough information
to reform the symbols into a hierarchical structure that the LSP
expects. But I think overall the API has gotten simpler and we encode
more invariants into the type system. (For example, previously you got a
runtime assertion if you tried to provide a query string while enabling
hierarchical mode. But now that's prevented by construction.)
2025-08-23 12:53:41 -04:00
Andrew Gallant
f407f12f4c [ty] Parallelize workspace symbols
This is a pretty naive approach, but it makes cold times for listing
workspace symbols in home-assistant under 1s on my machine.

Courtesy of Micha:
https://github.com/astral-sh/ruff/pull/20030#discussion_r2292924171
2025-08-23 12:53:41 -04:00
Andrew Gallant
fb2d0af18c [ty] Optimize "workspace symbols" retrieval
Basically, this splits the implementation into two pieces:
the first piece does the traversal and finds *all* symbols
across the workspace. The second piece does filtering based
on a user provided query string. Only the first piece is
cached by Salsa.

This brings warm "workspace symbols" requests down from
500-600ms to 100-200ms.
2025-08-23 12:53:41 -04:00
Andrew Gallant
8ead02e0b1 [ty] Optimize query string matching
While this doesn't typically matter, when ty returns a very
large list of symbols, this can have an impact. Specifically,
when searching `async` in home-assistant, this gets times
closer to 500ms versus closer to 600ms before this change.
It looks like an overall ~50ms improvement (so around 10%),
but variance is all over the place and I didn't do any
statistical tests.

But this does make intuitive sense. Previously, we were
allocating intermediate strings, doing UTF-8 decoding and
consulting Unicode casing tables. Now we're just doing what
is likely a single DFA scan. In effect, we front load all
of the Unicode junk into regex compilation.
2025-08-23 12:53:41 -04:00
Andrew Gallant
330bb4efbf [ty] Add some unit tests for "query matches symbol"
There is a small amount of subtlety to this matching routine,
and it could be implemented in a faster way. So let's right some
tests for what we have to ensure we don't break anything when
we optimize it.
2025-08-23 12:53:41 -04:00
Andrew Gallant
ad8c98117a [ty] Move query filtering outside of symbol visitor
This is prep work for turning this into a Salsa query.
Specifically, we don't want the Salsa query to be
dependent upon the query string.
2025-08-23 12:53:41 -04:00
Andrew Gallant
06dbec8479 [ty] Add debug trace for workspace symbol elapsed time
Useful for ad hoc debugging, but it's also useful to have permanently to
enable serendipitous discovery of performance problems.
2025-08-23 12:53:41 -04:00
Andrew Gallant
85931ab594 [ty] Add a TODO for linting on todo! 2025-08-23 12:53:41 -04:00
Ibraheem Ahmed
b3cc733f06 [ty] Remove duplicate global lint registry (#20053)
## Summary

Looks like an oversight at some point led to two identical globals;
the one in `ty_project` just calls `ty_python_semantic::register_lints`.
2025-08-22 19:43:12 -04:00
Ibraheem Ahmed
7abc41727b [ty] Shrink size of AstNodeRef (#20028)
## Summary

Removes the `module_ptr` field from `AstNodeRef` in release mode, and
changes `NodeIndex` to a `NonZeroU32` to reduce the size of
`Option<AstNodeRef<_>>` fields.

I believe CI runs in debug mode, so this won't show up in the memory
report, but this reduces memory by ~2% in release mode.
2025-08-22 17:03:22 -04:00
chiri
886c4e4773 [flake8-use-pathlib] Fix PTH211 autofix (#20049)
## Summary
Part of #20009
2025-08-22 13:35:08 -05:00
Alex Waygood
bc6ea68733 [ty] Add precise iteration and unpacking inference for string literals and bytes literals (#20023)
## Summary

Previously we held off from doing this because we weren't sure that it
was worth the added complexity cost. But our code has changed in the
months since we made that initial decision, and I think the structure of
the code is such that it no longer really leads to much added complexity
to add precise inference when unpacking a string literal or a bytes
literal.

The improved inference we gain from this has real benefits to users (see
the mypy_primer report), and this PR doesn't appear to have a
performance impact.
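
A minimal sketch of the kind of unpacking this affects (the revealed types are my assumption of the new behaviour, not quoted from the PR):

```py
a, b = "ab"
reveal_type(a)  # presumably: Literal["a"]
reveal_type(b)  # presumably: Literal["b"]

x, y = b"ab"
reveal_type(x)  # presumably an int literal, since iterating bytes yields ints
```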

## Test plan

mdtests
2025-08-22 19:33:08 +01:00
Micha Reiser
796819e7a0 [ty] Disallow std::env and io methods in most ty crates (#20046)
## Summary

We use the `System` abstraction in ty to abstract away the host/system
on which ty runs.
This has a few benefits:

* Tests can run in full isolation using a memory system (that uses an
in-memory file system)
* The LSP has a custom implementation where `read_to_string` returns the
content as seen by the editor (e.g. unsaved changes) instead of always
returning the content as it is stored on disk
* We don't require any file system polyfills for wasm in the browser


However, it does require extra care that we don't accidentally use
`std::fs` or `std::env` (etc.) methods in ty's code base (which is very
easy).

This PR sets up Clippy and disallows the most common methods, instead
pointing users towards the corresponding `System` methods.

The setup is a bit awkward because clippy doesn't support inheriting
configurations. That means, a crate can only override the entire
workspace configuration or not at all.
The approach taken in this PR is:

* Configure the disallowed methods at the workspace level
* Allow `disallowed_methods` at the workspace level
* Enable the lint at the crate level using the warn attribute (in code)


The obvious downside is that it won't work if we ever want to disallow
other methods, but we can figure that out once we reach that point.

What about false positives: Just add an `allow` and move on with your
life :) This isn't something that we have to enforce strictly; the goal
is to catch accidental misuse.

## Test Plan

Clippy found a place where we incorrectly used `std::fs::read_to_string`
2025-08-22 11:13:47 -07:00
Vivek Dasari
5508e8e528 Add testing helper to compare stable vs preview snapshots (#19715)
## Summary
This PR implements a diff test helper `assert_diagnostics_diff` as
described in #19351. The diff file includes both the settings ( e.g.
`+linter.preview = enabled`) and the snapshot data itself.

The current implementation looks for each old diagnostic in the new
snapshot. This works when the preview behavior adds/removes a couple
diagnostics. This implementation does not work well when every
diagnostic is modified (e.g. a "fix" is added).
https://github.com/astral-sh/ruff/pull/19715#discussion_r2259410763 has
ideas for future improvements to this implementation.

The example usage in this PR writes the diff to `preview_diff` file
instead of `preview` file, which might be a useful convention to keep.


## Test Plan
- Included a unit test at:
https://github.com/astral-sh/ruff/pull/19715/files#diff-d49487fe3e8a8585529f62c2df2a2b0a4c44267a1f93d1e859dff1d9f8771d36R523
- Example usage of this new test helper:
https://github.com/astral-sh/ruff/pull/19715/files#diff-2a33ac11146d1794c01a29549a6041d3af6fb6f9b423a31ade12a88d1951b0c2R1
2025-08-22 12:49:34 -05:00
chiri
0be3e1fbbf [flake8-use-pathlib] Add autofix for PTH211 (#20009)
## Summary
Part of https://github.com/astral-sh/ruff/issues/2331
2025-08-22 12:38:37 -05:00
Micha Reiser
5d217b7f46 [ty] Add type as detail to completion items (#20047)
## Summary

@BurntSushi was so kind as to find me an easy task to do some coding
before I'm off to PTO.

This PR adds the type to completion items (see the little gray text at
the end of a completion item).



https://github.com/user-attachments/assets/c0a86061-fa12-47b4-b43c-3c646771a69d
2025-08-22 12:32:53 -04:00
Dylan
0b6ce1c788 [ruff] Handle empty t-strings in unnecessary-empty-iterable-within-deque-call (RUF037) (#20045)
Adds a method to `TStringValue` to detect whether the t-string is empty
_as an iterable_. Note the subtlety here that, unlike f-strings, an
empty t-string is still truthy (i.e. `bool(t"")==True`).

Closes #19951
2025-08-22 10:23:49 -05:00
Matthew Mckee
0e9d77e43a Fix incorrect lsp inlay hint type (#20044) 2025-08-22 17:12:49 +02:00
Carl Meyer
8b827c3c6c [ty] rename BareTypeAliasType to ManualPEP695TypeAliasType (#20037)
## Summary

Rename `TypeAliasType::Bare` to `TypeAliasType::ManualPEP695`, and
`BareTypeAliasType` to `ManualPEP695TypeAliasType`.

Why?

Both existing variants of `TypeAliasType` are specific to features added
in PEP 695 (which introduced both the `type` statement and
`types.TypeAliasType`), so it doesn't make sense to name one with the
name `PEP695` and not the other.

A "bare" type alias, in my mind, is a legacy type alias like `IntOrStr =
int | str`, which is "bare" in that there is nothing at all
distinguishing it as a type alias. I will want to use the "bare" name
for this variant, in a future PR.

The renamed variant here describes a type alias created with `IntOrStr =
types.TypeAliasType("IntOrStr", int | str)`, which is not "bare", it's
just "manually" instantiated instead of using the `type` statement
syntax sugar. (This is useful when using the `typing_extensions`
backport of `TypeAliasType` on older Python versions.)
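
For reference, a small sketch of the three flavours being distinguished here (my own illustration, not code from the PR):

```py
from typing import TypeAliasType

# "bare" legacy alias: nothing syntactically marks it as a type alias
IntOrStr = int | str

# "manual" alias: explicit TypeAliasType instantiation
IntOrStrManual = TypeAliasType("IntOrStrManual", int | str)

# PEP 695 `type` statement: syntax sugar for the manual form above
type IntOrStrStmt = int | str
```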

## Test Plan

Pure rename, existing tests pass.
2025-08-22 07:40:29 -07:00
Max Mynter
c22395dbc6 [ruff] Fix false positive for t-strings in default-factory-kwarg (RUF026) (#20032)
Closes #19993

## Summary
Recognize t strings as never being callable to avoid false positives on
RUF026.
2025-08-22 09:29:42 -05:00
Micha Reiser
11f521c768 [ty] Close signature help after ) (#20017) 2025-08-22 16:09:22 +02:00
Micha Reiser
c5e05df966 [ty] Cancel background tasks when shutdown is requested (#20039) 2025-08-22 10:20:13 +02:00
github-actions[bot]
7a44ea680e [ty] Sync vendored typeshed stubs (#20031)
Co-authored-by: typeshedbot <>
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2025-08-21 21:32:48 +00:00
Alex Waygood
f82025d919 [ty] Improve diagnostics for bad calls to functions (#20022) 2025-08-21 22:00:44 +01:00
Micha Reiser
365f521c37 [ty] Fix incorrect docstring in call signature completion (#20021)
## Summary

This PR fixes https://github.com/astral-sh/ty/issues/1071

The core issue is that `CallableType` is a salsa interned but
`Signature` (which `CallableType` stores) ignores the `Definition` in
its `Eq` and `Hash` implementation.

This PR tries to simplest fix by removing the custom `Eq` and `Hash`
implementation. The main downside of this fix is that it can increase
memory usage because `CallableType`s that are equal except for their
`Definition` are now interned separately.

The alternative is to remove `Definition` from `CallableType` and
instead call `bindings` directly on the callee (`call_expression.func`).
However, this would require addressing the TODO here:
39ee71c2a5/crates/ty_python_semantic/src/types.rs (L4582-L4586)

This is probably worth addressing anyway, but it is the more involved
fix. That's why I opted for removing the custom `Eq` implementation.

We already "ignore" the definition during normalization, thanks to
Alex's work in https://github.com/astral-sh/ruff/pull/19615

## Test Plan



https://github.com/user-attachments/assets/248d1cb1-12fd-4441-adab-b7e0866d23eb
2025-08-21 16:36:40 -04:00
Aria Desires
fc5321e000 [ty] fix GotoTargets for keyword args in nested function calls (#20013)
While implementing similar logic for initializers I noticed that this
code appeared to be walking the ancestors in the wrong direction, and so
if you have nested function calls it would always grab the outermost one
instead of the closest-ancestor.

The four copies of the test are because there's something really evil in
our caching that can't seem to be demonstrated in our cursor testing
framework, which I'm filing a followup for.
2025-08-21 20:19:52 +00:00
Dylan
c68ff8d90b Bump 0.12.10 (#20025) 2025-08-21 13:09:31 -05:00
Andrew Gallant
5931a5207d [ty] Stop running every mdtest twice
This was an accidental oversight introduced in commit
468eb37d75.
2025-08-21 13:37:08 -04:00
Brent Westbrook
692be72f5a Move diff rendering to ruff_db (#20006)
Summary
--

This is a preparatory PR in support of #19919. It moves our `Diff`
rendering code from `ruff_linter` to `ruff_db`, where we have direct
access to the `DiagnosticStylesheet` used by our other diagnostic
rendering code. As shown by the tests, this shouldn't cause any visible
changes. The colors aren't exactly the same, as I note in a TODO
comment, but I don't think there's any existing way to see those, even
in tests.

The `Diff` implementation is mostly unchanged. I just switched from a
Ruff-specific `SourceFile` to a `DiagnosticSource` (removing an
`expect_ruff_source_file` call) and updated the `LineStyle` struct and
other styling calls to use `fmt_styled` and our existing stylesheet.

In support of these changes, I added three styles to our stylesheet:
`insertion` and `deletion` for the corresponding diff operations, and
`underline`, which apparently we _can_ use, as I hoped on Discord. This
isn't supported in all terminals, though. It worked in ghostty but not
in st for me.

I moved the `calculate_print_width` function from the now-deleted
`diff.rs` to a method on `OneIndexed`, where it was available everywhere
we needed it. I'm not sure if that's desirable, or if my other changes
to the function are either (using `ilog10` instead of a loop). This does
make it `const` and slightly simplifies things in my opinion, but I'm
happy to revert it if preferred.

I also inlined a version of `show_nonprinting` from the
`ShowNonprinting` trait in `ruff_linter`:


f4be05a83b/crates/ruff_linter/src/text_helpers.rs (L3-L5)

This trait is now only used in `source_kind.rs`, so I'm not sure it's
worth having the trait or the macro-generated implementation (which is
only called once). This is obviously closely related to our unprintable
character handling in diagnostic rendering, but the usage seems
different enough not to try to combine them.


f4be05a83b/crates/ruff_db/src/diagnostic/render.rs (L990-L998)

We could also move the trait to another crate where we can use it in
`ruff_db` instead of inlining here, of course.

Finally, this PR makes `TextEmitter` a very thin wrapper around a
`DisplayDiagnosticsConfig`. It's still used in a few places, though,
unlike the other emitters we've replaced, so I figured it was worth
keeping around. It's a pretty nice API for setting all of the options on
the config and then passing that along to a `DisplayDiagnostics`.

Test Plan
--

Existing snapshot tests with diffs
2025-08-21 09:47:00 -04:00
Douglas Creager
14fe1228e7 [ty] Perform assignability etc checks using new Constraints trait (#19838)
"Why would you do this? This looks like you just replaced `bool` with an
overly complex trait"

Yes that's correct!

This should be a no-op refactoring. It replaces all of the logic in our
assignability, subtyping, equivalence, and disjointness methods to work
over an arbitrary `Constraints` trait instead of only working on `bool`.

The methods that `Constraints` provides look very much like what we get
from `bool`. But soon we will add a new impl of this trait, and some new
methods, that let us express "fuzzy" constraints that aren't always true
or false. (In particular, a constraint will express the upper and lower
bounds of the allowed specializations of a typevar.)

Even once we have that, most of the operations that we perform on
constraint sets will be the usual boolean operations, just on sets.
(`false` becomes empty/never; `true` becomes universe/always; `or`
becomes union; `and` becomes intersection; `not` becomes negation.) So
it's helpful to have this separate PR to refactor how we invoke those
operations without introducing the new functionality yet.
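
Purely as an illustration of that mapping (the real trait lives in Rust; this Python-flavoured analogue and its names are mine):

```py
class BoolConstraints:
    """Toy analogue of the `bool` impl of the trait described above."""

    def __init__(self, value: bool) -> None:
        self.value = value

    @classmethod
    def always(cls) -> "BoolConstraints":  # `true` becomes universe/always
        return cls(True)

    @classmethod
    def never(cls) -> "BoolConstraints":  # `false` becomes empty/never
        return cls(False)

    def and_(self, other: "BoolConstraints") -> "BoolConstraints":  # `and` becomes intersection
        return BoolConstraints(self.value and other.value)

    def or_(self, other: "BoolConstraints") -> "BoolConstraints":  # `or` becomes union
        return BoolConstraints(self.value or other.value)

    def negate(self) -> "BoolConstraints":  # `not` becomes negation
        return BoolConstraints(not self.value)

    def is_always(self) -> bool:
        return self.value

    def is_never(self) -> bool:
        return not self.value
```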

Note that we also have translations of `Option::is_some_and` and
`is_none_or`, and of `Iterator::any` and `all`, and that the `and`,
`or`, `when_any`, and `when_all` methods are meant to short-circuit,
just like the corresponding boolean operations. For constraint sets,
that depends on being able to implement the `is_always` and `is_never`
trait methods.

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2025-08-21 09:30:09 -04:00
Micha Reiser
045cba382a [ty] Use dedent in cursor tests (#20019) 2025-08-21 10:31:54 +02:00
Brent Westbrook
a5cbca156c Fix rust feature activation (#20012) 2025-08-21 09:26:06 +02:00
Dhruv Manilawala
d43a3d34dd [ty] Avoid unnecessary argument type expansion (#19999)
## Summary

Part of: https://github.com/astral-sh/ty/issues/868

This PR adds a heuristic to avoid argument type expansion if it's going
to eventually lead to no matching overload.

This is done by checking whether the non-expandable argument types are
assignable to the corresponding annotated parameter type. If one of them
is not assignable to all of the remaining overloads, then argument type
expansion isn't going to help.
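
A hypothetical illustration of the heuristic (not taken from the PR's test suite):

```py
from typing import overload

@overload
def f(x: int, flag: bool) -> int: ...
@overload
def f(x: str, flag: bool) -> str: ...
def f(x, flag): ...

def g(val: int | str, flag: object) -> None:
    # Expanding `val` into `int` and `str` cannot produce a matching overload,
    # because the non-expandable argument `flag` (typed `object`) is not
    # assignable to `bool` in any remaining overload, so the expansion can
    # presumably be skipped entirely.
    f(val, flag)  # presumably rejected: no overload matches
```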

## Test Plan

Add mdtest that would otherwise take a long time because of the number
of arguments that it would need to expand (30).
2025-08-21 06:13:11 +00:00
Aria Desires
99111961c0 [ty] Add link for namespaces being partial (#20015)
As requested
2025-08-20 21:28:57 -07:00
Aria Desires
859475f017 [ty] add docstrings to completions based on type (#20008)
This is a fairly simple but effective way to add docstrings to roughly 95%
of completions, based on initial experimentation.

Fixes https://github.com/astral-sh/ty/issues/1036

Although ironically this approach *does not* work specifically for
`print` and I haven't looked into why.
2025-08-20 17:00:09 -04:00
Igor Drokin
7b75aee21d [pyupgrade] Avoid reporting __future__ features as unnecessary when they are used (UP010) (#19769)
## Summary
Resolves #19561

Fixes the [unnecessary-future-import
(UP010)](https://docs.astral.sh/ruff/rules/unnecessary-future-import/)
rule to correctly identify when imported `__future__` features are actually
used in the code, preventing false positives.

I assume there is no way to check usage in `analyze::statements`,
because we don't have any usage bindings for imports. To determine
unused imports, we have to fully scan the file to create bindings and
then check usage, similar to [unused-import
(F401)](https://docs.astral.sh/ruff/rules/unused-import/#unused-import-f401).
So, `Rule::UnnecessaryFutureImport` was moved from the
`analyze::statements` stage to the `analyze::deferred_scopes` stage. This
required changing the logic of future-import handling to a
bindings-based approach.

Also, the diagnostic report was changed.
Before
```
  |
1 | from __future__ import nested_scopes, generators
  | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ UP010
```
after
```
  |
1 | from __future__ import nested_scopes, generators
  |                        ^^^^^^^^^^^^^ UP010
```

I believe this is the correct way, because `generators` may be used, but
`nested_scopes` is not.

### Special case
I've found out about some specific case.
```python
from __future__ import nested_scopes

nested_scopes = 1
```
Here we can treat `nested_scopes` as an unused import because the
variable `nested_scopes` shadows it and we can safely remove the future
import (my fix does it).

But
[F401](https://docs.astral.sh/ruff/rules/unused-import/#unused-import-f401)
is not triggered for such a case
([sandbox](https://play.ruff.rs/296d9c7e-0f02-4659-b0c0-78cc21f3de76))
```
from foo import print_function

print_function = 1
```
In my mind, `print_function` here is an unused import and should be
deleted (my IDE highlights it). What do you think?

## Test Plan

Added test cases and snapshots:
- Split test file into separate _0 and _1 files for appropriate checks.
- Added test cases to verify fixes when future module are used.

---------

Co-authored-by: Igor Drokin <drokinii1017@gmail.com>
2025-08-20 15:22:03 -04:00
chiri
d04dcd991b [flake8-use-pathlib] Add fixes for PTH102 and PTH103 (#19514)
## Summary

Part of https://github.com/astral-sh/ruff/issues/2331

## Test Plan

`cargo nextest run flake8_use_pathlib`
2025-08-20 14:36:07 -04:00
Leandro Braga
39ee71c2a5 [ty] correctly ignore field specifiers when not specified (#20002)
This commit corrects the type checker's behavior when handling
`dataclass_transform` decorators that don't explicitly specify
`field_specifiers`. According to [PEP 681 (Data Class
Transforms)](https://peps.python.org/pep-0681/#dataclass-transform-parameters),
when `field_specifiers` is not provided, it defaults to an empty tuple,
meaning no field specifiers are supported and
`dataclasses.field`/`dataclasses.Field` calls should be ignored.
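
A rough sketch of the situation described, using names of my own choosing:

```py
import dataclasses
from typing import dataclass_transform

@dataclass_transform()  # field_specifiers not given, so it defaults to ()
def my_model(cls):
    return cls

@my_model
class C:
    # Because no field specifiers are registered, this `dataclasses.field` call
    # is presumably ignored (treated as an ordinary default value) rather than
    # interpreted as a field specifier.
    x: int = dataclasses.field(default=1)
```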

Fixes https://github.com/astral-sh/ty/issues/980
2025-08-20 11:33:23 -07:00
Brent Westbrook
1a38831d53 Option::unwrap is now const (#20007)
Summary
--

I noticed while working on #20006 that we had a custom `unwrap` function
for `Option`. This has been const on stable since 1.83
([docs](https://doc.rust-lang.org/std/option/enum.Option.html#method.unwrap),
[release notes](https://blog.rust-lang.org/2024/11/28/Rust-1.83.0/)), so
I think it's safe to use now. I grepped a bit for related todos and
found this one for `AsciiCharSet` but no others.

Test Plan
--

Existing tests
2025-08-20 13:40:49 -04:00
Andrew Gallant
ddd4bab67c [ty] Re-arrange "list modules" implementation for Salsa caching
This basically splits `list_modules` into a higher level "aggregation"
routine and a lower level "get modules for one search path" routine.
This permits Salsa to cache the lower level components, e.g., many
search paths refer to directories that rarely change. This saves us
interaction with the system.

This did require a fair bit of surgery in terms of being careful about
adding file roots. Namely, now that we rely even more on file roots
existing for correct handling of cache invalidation, there were several
spots in our code that needed to be updated to add roots (that we
weren't previously doing). This feels Not Great, and it would be better
if we had some kind of abstraction that handled this for us. But it
isn't clear to me at this time what that looks like.
2025-08-20 10:41:47 -04:00
Andrew Gallant
468eb37d75 [ty] Test "list modules" versus "resolve module" in every mdtest
This ensures there is some level of consistency between the APIs.

This did require exposing a couple more things on `Module` for good
error messages. This also motivated a switch to an interned struct
instead of a tracked struct. This ensures that `list_modules` and
`resolve_modules` reuse the same `Module` values when the inputs are the
same.

Ref https://github.com/astral-sh/ruff/pull/19883#discussion_r2272520194
2025-08-20 10:27:54 -04:00
Andrew Gallant
2e9c241d7e [ty] Wire up "list modules" API to make module completions work
This makes `import <CURSOR>` and `from <CURSOR>` completions work.

This also makes `import os.<CURSOR>` and `from os.<CURSOR>`
completions work. In this case, we are careful to only offer
submodule completions.
2025-08-20 10:27:54 -04:00
Andrew Gallant
05478d5cc7 [ty] Tweak some completion tests
These tests were added as a regression check that a panic
didn't occur. So we were asserting a bit more than necessary.
In particular, these will soon return completions for modules,
which creates large snapshots that we don't need.

So modify these to just check there is sensible output that
doesn't panic.
2025-08-20 10:27:54 -04:00
Andrew Gallant
4db20f459c [ty] Add "list modules" implementation
The actual implementation wasn't too bad. It's not long
but pretty fiddly. I copied over the tests from the existing
module resolver and adapted them to work with this API. Then
I added a number of my own tests as well.
2025-08-20 10:27:54 -04:00
Andrew Gallant
ec7c2efef9 [ty] Lightly expose FileModule and NamespacePackage fields
This will make it easier to emit this info into snapshots for
testing.
2025-08-20 10:27:54 -04:00
Andrew Gallant
79b2754215 [ty] Add some more helper routines to ModulePath 2025-08-20 10:27:54 -04:00
Andrew Gallant
a0ddf1f7c4 [ty] Fix a bug when converting ModulePath to ModuleName
Previously, if the module was just `foo-stubs`, we'd skip over
stripping the `-stubs` suffix which would lead to us returning
`None`.

This function is now a little convoluted and could be simpler
if we did an intermediate allocation. But I kept the iterative
approach and added a special case to handle `foo-stubs`.
2025-08-20 10:27:54 -04:00
Andrew Gallant
5b00ec981b [ty] Split out another constructor for ModuleName
This makes it a little more flexible to call. For example,
we might have a `StmtImport` and not a `StmtImportFrom`.
2025-08-20 10:27:54 -04:00
Andrew Gallant
306ef3bb02 [ty] Add stub-file tests to existing module resolver
These tests capture existing behavior.

I added these when I stumbled upon what I thought was an
oddity: we prioritize `foo.pyi` over `foo.py`, but
prioritize `foo/__init__.py` over `foo.pyi`.

(I plan to investigate this more closely in follow-up
work. Particularly, to look at other type checkers. It
seems like we may want to change this to always prioritize
stubs.)
2025-08-20 10:27:54 -04:00
Andrew Gallant
a4cd13c6e2 [ty] Expose some routines in the module resolver
We'll want to use these when implementing the
"list modules" API.
2025-08-20 10:27:54 -04:00
Andrew Gallant
e0c98874e2 [ty] Add more path helper functions
This makes it easier to do exhaustive case analysis
on a `SearchPath` depending on whether it is a vendored
or system path.
2025-08-20 10:27:54 -04:00
Andrey
f4be05a83b [flake8-annotations] Remove unused import in example (ANN401) (#20000)
## Summary

Remove unused import in the  "Use instead" example.

## Test Plan

It's just a text description, no test needed
2025-08-20 09:19:18 -04:00
Aria Desires
1d2128f918 [ty] distinguish base conda from child conda (#19990)
This is a port of the logic in https://github.com/astral-sh/uv/pull/7691

The basic idea is we use CONDA_DEFAULT_ENV as a signal for whether
CONDA_PREFIX is just the ambient system conda install, or the user has
explicitly activated a custom one. If the former, then the conda is
treated like a system install (having lowest priority). If the latter,
the conda is treated like an activated venv (having priority over
everything but an Actual activated venv).

Fixes https://github.com/astral-sh/ty/issues/611
2025-08-20 09:07:42 -04:00
Micha Reiser
276405b44e [ty] Fix server hang (#19991) 2025-08-20 10:28:30 +02:00
Dhruv Manilawala
f019cfd15f [ty] Use specialized parameter type for overload filter (#19964)
## Summary

Closes: https://github.com/astral-sh/ty/issues/669

(This turned out to be simpler than I thought :))

## Test Plan

Update existing test cases.

### Ecosystem report

Most of them are basically because ty has now started inferring more
precise types for the return type of an overloaded call, and a lot of the
types are defined using type aliases; here are some examples:

<details><summary>Details</summary>
<p>

> attrs (https://github.com/python-attrs/attrs)
> + tests/test_make.py:146:14: error[unresolved-attribute] Type
`Literal[42]` has no attribute `default`
> - Found 555 diagnostics
> + Found 556 diagnostics

This is accurate now that we infer the type as `Literal[42]` instead of
`Unknown` (Pyright infers it as `int`)

> optuna (https://github.com/optuna/optuna)
> + optuna/_gp/search_space.py:181:53: error[invalid-argument-type]
Argument to function `_round_one_normalized_param` is incorrect:
Expected `tuple[int | float, int | float]`, found `tuple[Unknown |
ndarray[Unknown, <class 'float'>], Unknown | ndarray[Unknown, <class
'float'>]]`
> + optuna/_gp/search_space.py:181:83: error[invalid-argument-type]
Argument to function `_round_one_normalized_param` is incorrect:
Expected `int | float`, found `Unknown | ndarray[Unknown, <class
'float'>]`
> + tests/gp_tests/test_search_space.py:109:13:
error[invalid-argument-type] Argument to function
`_unnormalize_one_param` is incorrect: Expected `tuple[int | float, int
| float]`, found `Unknown | ndarray[Unknown, <class 'float'>]`
> + tests/gp_tests/test_search_space.py:110:13:
error[invalid-argument-type] Argument to function
`_unnormalize_one_param` is incorrect: Expected `int | float`, found
`Unknown | ndarray[Unknown, <class 'float'>]`
> - Found 559 diagnostics
> + Found 563 diagnostics

Same as above where ty is now inferring a more precise type like
`Unknown | ndarray[tuple[int, int], <class 'float'>]` instead of just
`Unknown` as before

> jinja (https://github.com/pallets/jinja)
> + src/jinja2/bccache.py:298:39: error[invalid-argument-type] Argument
to bound method `write_bytecode` is incorrect: Expected `IO[bytes]`,
found `_TemporaryFileWrapper[str]`
> - Found 186 diagnostics
> + Found 187 diagnostics

This requires support for type aliases to match the correct overload.

> hydra-zen (https://github.com/mit-ll-responsible-ai/hydra-zen)
> + src/hydra_zen/wrapper/_implementations.py:945:16:
error[invalid-return-type] Return type does not match returned value:
expected `DataClass_ | type[@Todo(type[T] for protocols)] | ListConfig |
DictConfig`, found `@Todo(unsupported type[X] special form) | (((...) ->
Any) & dict[Unknown, Unknown]) | (DataClass_ & dict[Unknown, Unknown]) |
dict[Any, Any] | (ListConfig & dict[Unknown, Unknown]) | (DictConfig &
dict[Unknown, Unknown]) | (((...) -> Any) & list[Unknown]) | (DataClass_
& list[Unknown]) | list[Any] | (ListConfig & list[Unknown]) |
(DictConfig & list[Unknown])`
> + tests/annotations/behaviors.py:60:28: error[call-non-callable]
Object of type `Path` is not callable
> + tests/annotations/behaviors.py:64:21: error[call-non-callable]
Object of type `Path` is not callable
> + tests/annotations/declarations.py:167:17: error[call-non-callable]
Object of type `Path` is not callable
> + tests/annotations/declarations.py:524:17:
error[unresolved-attribute] Type `<class 'int'>` has no attribute
`_target_`
> - Found 561 diagnostics
> + Found 566 diagnostics

Same as above, this requires support for type aliases to match the
correct overload.

> paasta (https://github.com/yelp/paasta)
> + paasta_tools/utils.py:4188:19: warning[redundant-cast] Value is
already of type `list[str]`
> - Found 888 diagnostics
> + Found 889 diagnostics

This is correct.

> colour (https://github.com/colour-science/colour)
> + colour/plotting/diagrams.py:448:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/diagrams.py:462:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/models.py:419:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:230:9: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:474:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:495:17: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:513:13: error[invalid-argument-type]
Argument to bound method `text` is incorrect: Expected `int | float`,
found `ndarray[@Todo(Support for `typing.TypeAlias`), dtype[Unknown]]`
> + colour/plotting/temperature.py:514:13: error[invalid-argument-type]
Argument to bound method `text` is incorrect: Expected `int | float`,
found `ndarray[@Todo(Support for `typing.TypeAlias`), dtype[Unknown]]`
> - Found 480 diagnostics
> + Found 488 diagnostics

Most of them are correct except for the last two diagnostics, where I'm
not sure what's happening: it's trying to index into an `np.ndarray` type
(which is inferred correctly), but I think it might be picking up an
incorrect overload for the `__getitem__` method.

Scipy's diagnostics also require support for type aliases to pick the
correct overload.

</p>
</details>
2025-08-20 09:39:05 +05:30
Eric Mark Martin
33030b34cd [ty] linear variance inference for PEP-695 type parameters (#18713)
## Summary

Implement linear-time variance inference for type variables
(https://github.com/astral-sh/ty/issues/488).

Inspired by Martin Huschenbett's [PyCon 2025
Talk](https://www.youtube.com/watch?v=7uixlNTOY4s&t=9705s).
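
As a rough illustration of what variance inference means for PEP 695 type parameters (my example, assuming the inferred variances follow the typing spec):

```py
class Box[T]:
    def get(self) -> T: ...               # T only in return position -> covariant

class Sink[T]:
    def put(self, value: T) -> None: ...  # T only in parameter position -> contravariant

class Cell[T]:
    def get(self) -> T: ...
    def set(self, value: T) -> None: ...  # both positions -> invariant
```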

## Test Plan

update tests, add new tests, including for mutually recursive classes

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-08-19 17:54:09 -07:00
Alex Waygood
656fc335f2 [ty] Strict validation of protocol members (#17750) 2025-08-19 22:45:41 +00:00
Dan Parizher
e0f4cec7a1 [pyupgrade] Handle nested Optionals (UP045) (#19770)
## Summary

Fixes #19746

---------

Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-08-19 18:12:15 -04:00
Alex Waygood
662d18bd05 [ty] Add precise inference for unpacking a TypeVar if the TypeVar has an upper bound with a precise tuple spec (#19985) 2025-08-19 22:11:30 +01:00
Aria Desires
c82e255ca8 [ty] Fix namespace packages that behave like partial stubs (#19994)
In implementing partial stubs I had observed that this continue in the
namespace package code seemed erroneous since the same continue for
partial stubs didn't work. Unfortunately I wasn't confident enough to
push on that hunch. Fortunately I remembered that hunch to make this an
easy fix.

The issue with the continue is that it bails out of the current
search-path without testing any .py files. This breaks when for example
`google` and `google-stubs`/`types-google` are both in the same
site-packages dir -- failing to find a module in `types-google` has us
completely skip over `google`!

Fixes https://github.com/astral-sh/ty/issues/520
2025-08-19 16:34:39 -04:00
Eric Jolibois
58efd19f11 [ty] apply KW_ONLY sentinel only to local fields (#19986)
fix https://github.com/astral-sh/ty/issues/1047

## Summary

This PR fixes how `KW_ONLY` is applied in dataclasses. Previously, the
sentinel leaked into subclasses and incorrectly marked their fields as
keyword-only; now it only affects fields declared in the same class.

```py
from dataclasses import dataclass, KW_ONLY

@dataclass
class D:
    x: int
    _: KW_ONLY
    y: str

@dataclass
class E(D):
    z: bytes

# This should work: x=1 (positional), z=b"foo" (positional), y="foo" (keyword-only)
E(1, b"foo", y="foo")

reveal_type(E.__init__)  # revealed: (self: E, x: int, z: bytes, *, y: str) -> None
```

## Test Plan

mdtests
2025-08-19 11:01:35 -07:00
Aria Desires
c6dcfe36d0 [ty] introduce multiline pretty printer (#19979)
Requires some iteration, but this includes the most tedious part --
threading a new concept of DisplaySettings through every type display
impl. Currently it only holds a boolean for multiline, but in the future
it could also take other things like "render to markdown" or "here's
your base indent if you make a newline".

For types which have exposed display functions I've left the old
signature as a compatibility polyfill to avoid having to audit
everywhere that prints types right off the bat (notably, I originally
tried doing multiline functions unconditionally, and a ton of things
churned that clearly weren't ready for multi-line, such as diagnostics).

The only real use of this API in this PR is to multiline render function
types in hovers, which is the highest impact (see snapshot changes).

Fixes https://github.com/astral-sh/ty/issues/1000
2025-08-19 17:31:44 +00:00
Avasam
59b078b1bf Update outdated links to https://typing.python.org/en/latest/source/stubs.html (#19992) 2025-08-19 18:12:08 +01:00
Andrew Gallant
5e943d3539 [ty] Ask the LSP client to watch all project search paths
This change rejiggers how we register globs for file watching with the
LSP client. Previously, we registered a few globs like `**/*.py`,
`**/pyproject.toml` and more. There were two problems with this
approach.

Firstly, it only watches files within the project root. Search paths may
be outside the project root, such as a virtualenv directory.

Secondly, there is variation in how tools interact with virtual
environments. In the case of uv, depending on its link mode, we might
not get any file change notifications after running `uv add foo` or
`uv remove foo`.

To remedy this, we instead just listen for file change notifications on
all files for all search paths. This simplifies the globs we use, but
does potentially increase the number of notifications we'll get.
However, given the somewhat simplistic interface supported by the LSP
protocol, I think this is unavoidable (unless we used our own file
watcher, which has its own considerable downsides). Moreover, this is
seemingly consistent with how `ty check --watch` works.

This also required moving file watcher registration to *after*
workspaces are initialized, or else we don't know what the right search
paths are.

This change is in service of #19883, where, in order for cache
invalidation to work right, the LSP client needs to send notifications
whenever a dependency is added or removed. This change should make that
possible.

I tried this patch with #19883 in addition to my work to activate Salsa
caching, and everything seems to work as I'd expect. That is,
completions no longer show stale results after a dependency is added or
removed.
2025-08-19 10:57:07 -04:00
renovate[bot]
0967e7e088 Update Rust crate glob to v0.3.3 (#19959)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [glob](https://redirect.github.com/rust-lang/glob) |
workspace.dependencies | patch | `0.3.2` -> `0.3.3` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>rust-lang/glob (glob)</summary>

###
[`v0.3.3`](https://redirect.github.com/rust-lang/glob/blob/HEAD/CHANGELOG.md#033---2025-08-11)

[Compare
Source](https://redirect.github.com/rust-lang/glob/compare/v0.3.2...v0.3.3)

- Optimize memory allocations
([#&#8203;147](https://redirect.github.com/rust-lang/glob/pull/147))
- Bump the MSRV to 1.63
([#&#8203;172](https://redirect.github.com/rust-lang/glob/pull/172))
- Fix spelling in pattern documentation
([#&#8203;164](https://redirect.github.com/rust-lang/glob/pull/164))
- Fix version numbers and some formatting
([#&#8203;157](https://redirect.github.com/rust-lang/glob/pull/157))
- Style fixes
([#&#8203;137](https://redirect.github.com/rust-lang/glob/pull/137))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-08-19 10:39:23 -04:00
Alex Waygood
600245478c [ty] Look for site-packages directories in <sys.prefix>/lib64/ as well as <sys.prefix>/lib/ on non-Windows systems (#19978) 2025-08-19 11:53:06 +00:00
Alex Waygood
e5c091b850 [ty] Fix protocol interface inference for stub protocols and subprotocols (#19950) 2025-08-19 10:31:11 +00:00
David Peter
10301f6190 [ty] Enable virtual terminal on Windows (#19984)
## Summary

Should hopefully fix https://github.com/astral-sh/ty/issues/1045
2025-08-19 09:13:03 +00:00
Alex Waygood
4242905b36 [ty] Detect NamedTuple classes where fields without default values follow fields with default values (#19945) 2025-08-19 08:56:08 +00:00
Aria Desires
c20d906503 [ty] improve goto/hover for definitions (#19976)
By computing the actual Definition for, well, definitions, we unlock a
bunch of richer machinery in the goto/hover subsystems for free.

Fixes https://github.com/astral-sh/ty/issues/1001
Fixes https://github.com/astral-sh/ty/issues/1004
2025-08-18 21:42:53 -04:00
Carl Meyer
a04375173c [ty] fix unpacking a type alias with detailed tuple spec (#19981)
## Summary

Fixes https://github.com/astral-sh/ty/issues/1046

We special-case iteration of certain types because they may have a more
detailed tuple-spec. Now that type aliases are a distinct type variant,
we need to handle them as well.
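
A minimal sketch of the kind of code this fixes, assuming the alias carries a precise tuple spec:

```py
type Pair = tuple[int, str]

def f(p: Pair) -> None:
    a, b = p
    reveal_type(a)  # presumably: int
    reveal_type(b)  # presumably: str
```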

I don't love that `Type::TypeAlias` means we have to remember to add a
case for it basically anywhere we are special-casing a certain kind of
type, but at the moment I don't have a better plan. It's another
argument for avoiding fallback cases in `Type` matches, which we usually
prefer; I've updated this match statement to be comprehensive.

## Test Plan

Added mdtest.
2025-08-18 17:54:05 -07:00
Alex Waygood
e6dcdd29f2 [ty] Add a Todo-type branch for type[P] where P is a protocol class (#19947) 2025-08-18 20:38:19 +00:00
Matthew Mckee
24f6d2dc13 [ty] Infer the correct type of Enum __eq__ and __ne__ comparisions (#19666)
## Summary

Resolves https://github.com/astral-sh/ty/issues/920
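
A small sketch of the comparisons in question (the revealed types are my assumption, not quoted from the PR):

```py
from enum import Enum

class Answer(Enum):
    NO = 0
    YES = 1

reveal_type(Answer.NO == Answer.YES)  # presumably: Literal[False]
reveal_type(Answer.NO != Answer.YES)  # presumably: Literal[True]
```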

## Test Plan

Update `enums.md`

---------

Co-authored-by: David Peter <mail@david-peter.de>
2025-08-18 19:45:44 +02:00
Alex Waygood
3314cf90ed [ty] Add more regression tests for tuple (#19974) 2025-08-18 18:30:05 +01:00
Aria Desires
0cb1abc1fc [ty] Implement partial stubs (#19931)
Fixes https://github.com/astral-sh/ty/issues/184
2025-08-18 13:14:13 -04:00
Brent Westbrook
f6491cacd1 Add full output format changes to the changelog (#19968)
Summary
--

I thought this might warrant a small blog-style writeup, especially
since we already got a question about it (#19966), but I'm happy to
switch back to a one-liner under `### Other changes` if preferred.

I'll copy whatever we add here to the release notes too.

Do we need a note at the top about the late addition?
2025-08-18 11:46:16 -04:00
Alex Waygood
e4f1b587cc Upgrade mypy_primer pin (#19967) 2025-08-18 13:27:54 +01:00
Alex Waygood
fbf24be8ae [ty] Detect illegal multiple inheritance with NamedTuple (#19943) 2025-08-18 12:03:01 +00:00
Micha Reiser
5e4fa9e442 [ty] Speedup tracing checks (#19965) 2025-08-18 12:56:06 +02:00
Micha Reiser
67529edad6 [ty] Short-circuit inlayhints request if disabled in settings (#19963) 2025-08-18 10:35:40 +00:00
Alex Waygood
4ac2b2c222 [ty] Have SemanticIndex::place_table() and SemanticIndex::use_def_map return references (#19944) 2025-08-18 11:30:52 +01:00
renovate[bot]
083bb85d9d Update actions/checkout to v5.0.0 (#19952)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-08-18 07:31:07 +00:00
Micha Reiser
c7af595fc1 [ty] Use debug builds for conformance tests and run them single threaded (#19938) 2025-08-18 07:20:49 +00:00
Micha Reiser
7d8f7c20da [ty] Log server version at info level (#19961) 2025-08-18 07:16:53 +00:00
renovate[bot]
76c933d10e Update dependency ruff to v0.12.9 (#19954)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-18 08:54:23 +02:00
renovate[bot]
d423191d94 Update Rust crate bitflags to v2.9.2 (#19957)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-18 08:54:09 +02:00
renovate[bot]
c8d155b2b9 Update Rust crate clap to v4.5.45 (#19958)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-18 08:53:51 +02:00
renovate[bot]
a5339a52c3 Update Rust crate libc to v0.2.175 (#19960)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-18 08:53:31 +02:00
renovate[bot]
48772c04d7 Update Rust crate anyhow to v1.0.99 (#19956)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-18 08:53:10 +02:00
renovate[bot]
510a07dee2 Update PyO3/maturin-action action to v1.49.4 (#19955)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-18 08:44:00 +02:00
gkowzan
47d44e5f7b Fix description of global config file discovery strategy (#19143) (#19188)
Contrary to docs, ruff uses etcetera's base strategy rather than the
native strategy.
2025-08-17 18:35:37 -05:00
Alex Waygood
ec3163781c [ty] Remove unused code (#19949) 2025-08-17 18:54:24 +01:00
Douglas Creager
b892e4548e [ty] Track when type variables are inferable or not (#19786)
`Type::TypeVar` now distinguishes whether the typevar in question is
inferable or not.

A typevar is _not inferable_ inside the body of the generic class or
function that binds it:

```py
def f[T](t: T) -> T:
    return t
```

The inferred type of `t` in the function body is `TypeVar(T,
NotInferable)`. This represents how e.g. assignability checks need to be
valid for all possible specializations of the typevar. Most of the
existing assignability/etc logic only applies to non-inferable typevars.

Outside of the function body, the typevar is _inferable_:

```py
f(4)
```

Here, the parameter type of `f` is `TypeVar(T, Inferable)`. This
represents how e.g. assignability doesn't need to hold for _all_
specializations; instead, we need to find the constraints under which
this specific assignability check holds.

This is in support of starting to perform specialization inference _as
part of_ performing the assignability check at the call site.

In the [[POPL2015][]] paper, this concept is called _monomorphic_ /
_polymorphic_, but I thought _non-inferable_ / _inferable_ would be
clearer for us.

Depends on #19784 

[POPL2015]: https://doi.org/10.1145/2676726.2676991

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-08-16 18:25:03 -04:00
Alex Waygood
9ac39cee98 [ty] Ban protocols from inheriting from non-protocol generic classes (#19941) 2025-08-16 19:38:43 +01:00
Alex Waygood
f4d8826428 [ty] Fix error message for invalidly providing type arguments to NamedTuple when it occurs in a type expression (#19940) 2025-08-16 17:45:15 +00:00
Micha Reiser
527a690a73 [ty] Fix example in environment docs (#19937) 2025-08-16 14:37:28 +00:00
Dan Parizher
f0e9c1d8f9 [isort] Handle multiple continuation lines after module docstring (I002) (#19818)
## Summary

Fixes #19815

---------

Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
2025-08-15 17:17:50 -04:00
Frazer McLean
2e1d6623cd [flake8-simplify] Implement fix for maxsplit without separator (SIM905) (#19851)
**Stacked on top of #19849; diff will include that PR until it is
merged.**

---

## Summary

As part of #19849, I noticed this fix could be implemented.
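
For context, a sketch of what "maxsplit without separator" means at runtime (this is CPython behaviour; whether the autofix covers exactly these spellings is my assumption):

```py
# With no separator, str.split splits on runs of whitespace.
"a  b c".split(maxsplit=1)  # ["a", "b c"]
"a  b c".split(None, 1)     # ["a", "b c"]
```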

## Test Plan

Tests added based on CPython behaviour.
2025-08-15 15:18:06 -04:00
Dan Parizher
2dc2f68b0f [pycodestyle] Make E731 fix unsafe instead of display-only for class assignments (#19700)
## Summary

Fixes #19650

---------

Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-08-15 19:09:55 +00:00
Alex Waygood
26d6c3831f [ty] Represent NamedTuple as an opaque special form, not a class (#19915) 2025-08-15 18:20:14 +01:00
Alex Waygood
9ced219ffc [ty] Remove incorrect type narrowing for if type(x) is C[int] (#19926) 2025-08-15 17:52:14 +01:00
Micha Reiser
f344dda82c Bump Rust MSRV to 1.87 (#19924) 2025-08-15 17:55:38 +02:00
Alex Waygood
6de84ed56e Add else-branch narrowing for if type(a) is A when A is @final (#19925) 2025-08-15 14:52:30 +01:00
github-actions[bot]
bd4506aac5 [ty] Sync vendored typeshed stubs (#19923)
Close and reopen this PR to trigger CI

---------

Co-authored-by: typeshedbot <>
Co-authored-by: Carl Meyer <carl@astral.sh>
2025-08-14 18:09:35 -07:00
Shunsuke Shibayama
0e5577ab56 [ty] fix lazy snapshot sweeping in nested scopes (#19908)
## Summary

This PR closes astral-sh/ty#955.

## Test Plan

New test cases in `narrowing/conditionals/nested.md`.
2025-08-14 17:52:52 -07:00
Andrii Turov
957320c0f1 [ty] Add diagnostics for invalid await expressions (#19711)
## Summary

This PR adds a new lint, `invalid-await`, for all sorts of reasons why
an object may not be `await`able, as discussed in astral-sh/ty#919.
Precisely, `__await__` is guarded against being missing, possibly
unbound, or improperly defined (expects additional arguments or doesn't
return an iterator).

Of course, diagnostics need to be fine-tuned. If `__await__` cannot be
called with no extra arguments, it indicates an error (or a quirk?) in
the method signature, not at the call site. Without any doubt, such an
object is not `Awaitable`, but I feel like talking about arguments for
an *implicit* call is a bit leaky.
I didn't reference any actual diagnostic messages in the lint
definition, because I want to hear feedback first.

Also, there's no mention of the actual required method signature for
`__await__` anywhere in the docs. The only reference I had is the
`typing` stub. I basically ended up linking `[Awaitable]` to ["must
implement
`__await__`"](https://docs.python.org/3/library/collections.abc.html#collections.abc.Awaitable),
which is insufficient on its own.

## Test Plan

The following code was tested:
```python
import asyncio
import typing


class Awaitable:
    def __await__(self) -> typing.Generator[typing.Any, None, int]:
        yield None
        return 5


class NoDunderMethod:
    pass


class InvalidAwaitArgs:
    def __await__(self, value: int) -> int:
        return value


class InvalidAwaitReturn:
    def __await__(self) -> int:
        return 5


class InvalidAwaitReturnImplicit:
    def __await__(self):
        pass


async def main() -> None:
    result = await Awaitable()  # valid
    result = await NoDunderMethod()  # `__await__` is missing
    result = await InvalidAwaitReturn()  # `__await__` returns `int`, which is not a valid iterator 
    result = await InvalidAwaitArgs()  # `__await__` expects additional arguments and cannot be called implicitly
    result = await InvalidAwaitReturnImplicit()  # `__await__` returns `Unknown`, which is not a valid iterator


asyncio.run(main())
```

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-08-14 14:38:33 -07:00
Alex Waygood
f6093452ed [ty] Synthesize read-only properties for all declared members on NamedTuple classes (#19899) 2025-08-14 21:25:45 +00:00
Alex Waygood
82350a398e [ty] Remove use of ClassBase::try_from_type from super() machinery (#19902) 2025-08-14 22:14:31 +01:00
Micha Reiser
ce938fe205 [ty] Speedup project file discovery (#19913) 2025-08-14 19:38:39 +01:00
Brent Westbrook
7f8f1ab2c1 [pyflakes] Add secondary annotation showing previous definition (F811) (#19900)
## Summary

This is a second attempt at a first use of a new diagnostic feature
after #19886. I'll blame rustc for this one because it also has a
similar diagnostic:

<img width="735" height="335" alt="image"
src="https://github.com/user-attachments/assets/572fe1c3-1742-4ce4-b575-1d9196ff0932"
/>

We end up with a very similar diagnostic:

<img width="764" height="401" alt="image"
src="https://github.com/user-attachments/assets/01eaf0c7-2567-467b-a5d8-a27206b2c74c"
/>

## Test Plan

New snapshots and manual tests above
2025-08-14 13:23:43 -04:00
Brent Westbrook
ef422460de Bump 0.12.9 (#19917) 2025-08-14 11:54:44 -04:00
justin
dc2e8ab377 [ty] support kw_only=True for dataclass() and field() (#19677)
## Summary
https://github.com/astral-sh/ty/issues/111

adds support for `@dataclass(kw_only=True)`
(https://docs.python.org/3/library/dataclasses.html)
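
For illustration, a minimal sketch of what this enables (a hypothetical `Point`
class, not taken from the PR's tests):

```python
from dataclasses import dataclass, field

@dataclass(kw_only=True)
class Point:
    x: int
    y: int = 0
    z: int = field(default=0, kw_only=True)  # per-field kw_only is also supported

p = Point(x=1, y=2, z=3)  # OK
Point(1, 2, 3)            # error: fields must be passed as keyword arguments
```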

## Test Plan
- new mdtests
- triaged conformance diffs (notes here:
https://diffswarm.dev/d-01k2gknwyq82f6x17zqf3apjxc)
- `mypy_primer` no-op
2025-08-14 08:02:55 -07:00
ffgan
9aaa82d037 Feature/build riscv64 bin (#19819) 2025-08-14 16:11:14 +02:00
Alex Waygood
3288ac2dfb [ty] Add caching to CodeGeneratorKind::matches() (#19912) 2025-08-14 11:54:11 +01:00
Dhruv Manilawala
1167ed61cf [ty] Rename functionArgumentNames to callArgumentNames inlay hint setting (#19911)
## Summary

This PR renames `ty.inlayHints.functionArgumentNames` to
`ty.inlayHints.callArgumentNames` which would contain both function
calls and class initialization calls i.e., it represents a generic call
expression.
2025-08-14 14:21:38 +05:30
Dhruv Manilawala
2ee47d87b6 [ty] Default ty.inlayHints.* server settings to true (#19910)
## Summary

This PR changes the default of `ty.inlayHints.*` settings to `true`.

I somehow missed this in my initial PR.

This is marked as `internal` because it's not yet released.
2025-08-14 14:12:03 +05:30
Alex Waygood
d324cedfc2 [ty] Remove py-fuzzer skips for seeds that are no longer slow (#19906) 2025-08-14 00:23:45 +01:00
Carl Meyer
5a570c8e6d [ty] fix deferred name loading in PEP695 generic classes/functions (#19888)
## Summary

For PEP 695 generic functions and classes, there is an extra "type
params scope" (a child of the outer scope, and wrapping the body scope)
in which the type parameters are defined; class bases and function
parameter/return annotations are resolved in that type-params scope.

This PR fixes some longstanding bugs in how we resolve name loads from
inside these PEP 695 type parameter scopes, and also defers type
inference of PEP 695 typevar bounds/constraints/default, so we can
handle cycles without panicking.

We were previously treating these type-param scopes as lazy nested
scopes, which is wrong. In fact they are eager nested scopes; the class
`C` here inherits `int`, not `str`, and previously we got that wrong:

```py
Base = int

class C[T](Base): ...

Base = str
```

But certain syntactic positions within type param scopes (typevar
bounds/constraints/defaults) are lazy at runtime, and we should use
deferred name resolution for them. This also means they can have cycles;
in order to handle that without panicking in type inference, we need to
actually defer their type inference until after we have constructed the
`TypeVarInstance`.

PEP 695 does specify that typevar bounds and constraints cannot be
generic, and that typevar defaults can only reference prior typevars,
not later ones. This reduces the scope of (valid from the type-system
perspective) cycles somewhat, although cycles are still possible (e.g.
`class C[T: list[C]]`). And this is a type-system-only restriction; from
the runtime perspective an "invalid" case like `class C[T: T]` actually
works fine.

I debated whether to implement the PEP 695 restrictions as a way to
avoid some cycles up-front, but I ended up deciding against that; I'd
rather model the runtime name-resolution semantics accurately, and
implement the PEP 695 restrictions as a separate diagnostic on top.
(This PR doesn't yet implement those diagnostics, thus some `# TODO:
error` in the added tests.)

Introducing the possibility of cyclic typevars made typevar display
potentially stack overflow. For now I've handled this by simply removing
typevar details (bounds/constraints/default) from typevar display. This
impacts display of two kinds of types. If you `reveal_type(T)` on an
unbound `T` you now get just `typing.TypeVar` instead of
`typing.TypeVar("T", ...)` where `...` is the bound/constraints/default.
This matches pyright and mypy; pyrefly uses `type[TypeVar[T]]` which
seems a bit confusing, but does include the name. (We could easily
include the name without cycle issues, if there's a syntax we like for
that.)

It also means that displaying a generic function type like `def f[T:
int](x: T) -> T: ...` now displays as `f[T](x: T) -> T` instead of `f[T:
int](x: T) -> T`. This matches pyright and pyrefly; mypy does include
bound/constraints/defaults of typevars in function/callable type
display. If we wanted to add this, we would either need to thread a
visitor through all the type display code, or add a `decycle` type
transformation that replaced recursive reoccurrence of a type with a
marker.

## Test Plan

Added mdtests and modified existing tests to improve their correctness.

After this PR, there's only a single remaining py-fuzzer seed in the
0-500 range that panics! (Before this PR, there were 10; the fuzzer
likes to generate cyclic PEP 695 syntax.)

## Ecosystem report

It's all just the changes to `TypeVar` display.
2025-08-13 15:51:59 -07:00
Douglas Creager
baadb5a78d [ty] Add some additional type safety to CycleDetector (#19903)
This PR adds a type tag to the `CycleDetector` visitor (and its
aliases).

There are some places where we implement e.g. an equivalence check by
making a disjointness check. Both `is_equivalent_to` and
`is_disjoint_from` use a `PairVisitor` to handle cycles, but they should
not use the same visitor. I was finding it tedious to remember when it
was appropriate to pass on a visitor and when not to. This adds a
`PhantomData` type tag to ensure that we can't pass on one method's
visitor to a different method.

For `has_relation` and `apply_type_mapping`, we have an existing type
that we can use as the tag. For the other methods, I've added empty
structs (`Normalized`, `IsDisjointFrom`, `IsEquivalentTo`) to use as
tags.
2025-08-13 17:32:35 -04:00
Roman Kitaev
df0648aae0 [flake8-blind-except] Fix BLE001 false-positive on raise ... from None (#19755)
## Summary

- Refactored `BLE001` logic for clarity and minor speed-up.
- Improved documentation and comments (previously, `BLE001` docs claimed
it catches bare `except:`s, but it doesn't).
- Fixed a false-positive bug with `from None` cause:

```python
# somefile.py

try:
    pass
except BaseException as e:
    raise e from None
```

### main branch
```
somefile.py:3:8: BLE001 Do not catch blind exception: `BaseException`
  |
1 | try:
2 |     pass
3 | except BaseException as e:
  |        ^^^^^^^^^^^^^ BLE001
4 |     raise e from None
  |

Found 1 error.
```

### this change

```shell
cargo run -p ruff -- check somefile.py --no-cache --select=BLE001
```

```
All checks passed!
```

## Test Plan

- Added a test case to cover `raise X from Y` clause
- Added a test case to cover `raise X from None` clause
2025-08-13 13:01:47 -04:00
Aria Desires
f0b03c3e86 [ty] resolve docstrings for modules (#19898)
This also reintroduces the `ResolvedDefinition::Module` variant because
reverse-engineering it in several places is a bit confusing. In an ideal
world we wouldn't have `ResolvedDefinition::FileWithRange` as it kinda
kills the ability to do richer analysis, so I want to chip away at its
scope wherever I can (currently it's used to point at asname parts of
import statements when doing `ImportAliasResolution::PreserveAliases`,
and also keyword arguments).

This also makes a kind of odd change to allow a hover to *only* produce
a docstring. This works around an oddity where hovering over a module
name in an import fails to resolve to a `ty` even though hovering over
uses of that imported name *does*.

The two fixed tests reflect the two interesting cases here.
2025-08-13 12:24:01 -04:00
Alex Waygood
9f6146a13d [ty] Add precise inference for indexing, slicing and unpacking NamedTuple instances (#19560)
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-08-13 15:19:44 +00:00
Brent Westbrook
11d2cb6d56 Add rule code to GitLab description (#19896)
## Summary

Fixes #19881. While I was here, I also made a couple of related tweaks
to the output format. First, we don't need to strip the `SyntaxError: `
prefix anymore since that's not added directly to the diagnostic message
after #19644. Second, we can use `secondary_code_or_id` to fall back on
the lint ID for syntax errors, which changes the `check_name` from
`syntax-error` to `invalid-syntax`. And then the main change requested
in the issue, prepending the `check_name` to the description.

## Test Plan

Existing tests and a new screenshot from GitLab:

<img width="362" height="113" alt="image"
src="https://github.com/user-attachments/assets/97654ad4-a639-4489-8c90-8661c7355097"
/>
2025-08-13 11:19:26 -04:00
Aria Desires
d59282ebb5 [ty] render docstrings in hover (#19882)
This PR has several components:

* Introduce a Docstring String wrapper type that has render_plaintext
and render_markdown methods, to force docstring handlers to pick a
rendering format
* Implement [PEP-257](https://peps.python.org/pep-0257/) docstring
trimming for it (see the Python sketch after this list)
* The markdown rendering just renders the content in a plaintext
codeblock for now (followup work)
* Introduce a `DefinitionsOrTargets` type representing the partial
evaluation of `GotoTarget::get_definition_targets` to ideally stop at
getting `ResolvedDefinitions`
* Add `declaration_targets`, `definition_targets`, and `docstring`
methods to `DefinitionsOrTargets` for the 3 usecases we have for this
operation
* `docstring` is of course the key addition here, it uses the same basic
logic that `signature_help` was using: first check the goto-declaration
for docstrings, then check the goto-definition for docstrings.
* Refactor `signature_help` to use the new APIs instead of implementing
it itself
* Not fixed in this PR: an issue I found where `signature_help` will
erroneously cache docs between functions that have the same type (hover
docs don't have this bug)
* A handful of new tests and additions to tests to add docstrings in
various places and see which get caught
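
As a rough illustration of the PEP-257 trimming rules referenced above (the
implementation in this PR is in Rust; this Python sketch only mirrors the
algorithm described in the PEP):

```python
import sys

def trim_docstring(docstring: str) -> str:
    """Strip leading/trailing blank lines and common indentation (PEP 257)."""
    if not docstring:
        return ""
    lines = docstring.expandtabs().splitlines()
    # Minimum indentation of all lines after the first.
    indent = sys.maxsize
    for line in lines[1:]:
        stripped = line.lstrip()
        if stripped:
            indent = min(indent, len(line) - len(stripped))
    # The first line is special: it may start right after the opening quotes.
    trimmed = [lines[0].strip()]
    if indent < sys.maxsize:
        trimmed.extend(line[indent:].rstrip() for line in lines[1:])
    # Drop trailing and leading blank lines.
    while trimmed and not trimmed[-1]:
        trimmed.pop()
    while trimmed and not trimmed[0]:
        trimmed.pop(0)
    return "\n".join(trimmed)
```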


Examples of it working with stdlib, third party, and local definitions:
<img width="597" height="120" alt="Screenshot 2025-08-12 at 2 13 55 PM"
src="https://github.com/user-attachments/assets/eae54efd-882e-4b50-b5b4-721595224232"
/>
<img width="598" height="281" alt="Screenshot 2025-08-12 at 2 14 06 PM"
src="https://github.com/user-attachments/assets/5c9740d5-a06b-4c22-9349-da6eb9a9ba5a"
/>
<img width="327" height="180" alt="Screenshot 2025-08-12 at 2 14 18 PM"
src="https://github.com/user-attachments/assets/3b5647b9-2cdd-4c5b-bb7d-da23bff1bcb5"
/>

Notably modules don't work yet (followup work):
<img width="224" height="83" alt="Screenshot 2025-08-12 at 2 14 37 PM"
src="https://github.com/user-attachments/assets/7e9dcb70-a10e-46d9-a85c-9fe52c3b7e7b"
/>

Notably we don't show docs for an item if you hover its actual
definition (followup work, but also, not the most important):
<img width="324" height="69" alt="Screenshot 2025-08-12 at 2 16 54 PM"
src="https://github.com/user-attachments/assets/d4ddcdd8-c3fc-4120-ac93-cefdf57933b4"
/>
2025-08-13 14:59:20 +00:00
Carl Meyer
e12747a903 [ty] simplify return type of place_from_declarations (#19884)
## Summary

A [passing
comment](https://github.com/astral-sh/ruff/pull/19711#issuecomment-3169312014)
led me to explore why we didn't report a class attribute as possibly
unbound if it was a method and defined in two different conditional
branches.
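
A hypothetical example of the pattern (not the exact test case added here):

```python
def coin_flip() -> bool: ...

class C:
    if coin_flip():
        def method(self) -> int:
            return 1
    elif coin_flip():
        def method(self) -> int:
            return 2

C().method()  # now reported as possibly unbound: neither branch may have run
```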

I found that the reason was our handling of "conflicting declarations" in
`place_from_declarations`, which returned a `Result` that would be `Err` in
case of conflicting declarations.

But we only actually care about conflicting declarations when we are
actually doing type inference on that scope and might emit a diagnostic
about it. And in all cases (including that one), we want to otherwise
proceed with the union of the declared types, as if there was no
conflict.

In several cases we were failing to handle the union of declared types
in the same way as a normal declared type if there was a declared-types
conflict. The `Result` return type made this mistake really easy to
make, as we'd match on e.g. `Ok(Place::Type(...))` and do one thing,
then match on `Err(...)` and do another, even though really both of
those cases should be handled the same.

This PR refactors `place_from_declarations` to instead return a struct
which always represents the declared type we should use in the same way,
as well as carrying the conflicting declared types, if any. This struct
has a method to allow us to explicitly ignore the declared-types
conflict (which is what we want in most cases), as well as a method to
get the declared type and the conflict information, in the case where we
want to emit a diagnostic on the conflict.

## Test Plan

Existing CI; added a test showing that we now understand a
multiply-conditionally-defined method as possibly-unbound.

This does trigger issues on a couple new fuzzer seeds, but the issues
are just new instances of an already-known (and rarely occurring)
problem which I already plan to address in a future PR, so I think it's
OK to land as-is.

I happened to build this initially on top of
https://github.com/astral-sh/ruff/pull/19711, which adds invalid-await
diagnostics, so I also updated some invalid-syntax tests to not await on
an invalid type, since the purpose of those tests is to check the
syntactic location of the `await`, not the validity of the awaited type.
2025-08-13 14:17:08 +00:00
Alex Waygood
5725c4b17f [ty] Various minor cleanups to tuple internals (#19891) 2025-08-13 13:46:22 +00:00
Alex Waygood
2f3c7ad1fc [ty] Improve sys.version_info special casing (#19894) 2025-08-13 14:39:13 +01:00
Brent Westbrook
79c949f0f7 Don't cache files with diagnostics (#19869)
Summary
--

To take advantage of the new diagnostics, we need to update our caching
model to include all of the information supported by `ruff_db`'s
diagnostic type. Instead of trying to serialize all of this information,
Micha suggested simply not caching files with diagnostics, like we
already do for files with syntax errors. This PR is an attempt at that
approach.

This has the added benefit of trimming down our `Rule` derives since
this was the last place the `FromStr`/`strum_macros::EnumString`
implementation was used, as well as the (de)serialization macros and
`CacheKey`.

Test Plan
--

Existing tests, with their input updated not to include a diagnostic,
plus a new test showing that files with lint diagnostics are not cached.

Benchmarks
--

In addition to tests, we wanted to check that this doesn't degrade
performance too much. I posted part of this new analysis in
https://github.com/astral-sh/ruff/issues/18198#issuecomment-3175048672,
but I'll duplicate it here. In short, there's not much difference
between `main` and this branch for projects with few diagnostics
(`home-assistant`, `airflow`), as expected. The difference for projects
with many diagnostics (`cpython`) is quite a bit bigger (~300 ms vs ~220
ms), but most projects that run ruff regularly are likely to have very
few diagnostics, so this may not be a problem practically.

I guess GitHub isn't really rendering this as I intended, but the extra
separator line is meant to separate the benchmarks on `main` (above the
line) from this branch (below the line).

| Command | Mean [ms] | Min [ms] | Max [ms] |
|:--------------------------------------------------------------|----------:|---------:|---------:|
| `ruff check cpython --no-cache --isolated --exit-zero` | 322.0 | 317.5 | 326.2 |
| `ruff check cpython --isolated --exit-zero` | 217.3 | 209.8 | 237.9 |
| `ruff check home-assistant --no-cache --isolated --exit-zero` | 279.5 | 277.0 | 283.6 |
| `ruff check home-assistant --isolated --exit-zero` | 37.2 | 35.7 | 40.6 |
| `ruff check airflow --no-cache --isolated --exit-zero` | 133.1 | 130.4 | 146.4 |
| `ruff check airflow --isolated --exit-zero` | 34.7 | 32.9 | 41.6 |
|:--------------------------------------------------------------|----------:|---------:|---------:|
| `ruff check cpython --no-cache --isolated --exit-zero` | 330.1 | 324.5 | 333.6 |
| `ruff check cpython --isolated --exit-zero` | 309.2 | 306.1 | 314.7 |
| `ruff check home-assistant --no-cache --isolated --exit-zero` | 288.6 | 279.4 | 302.3 |
| `ruff check home-assistant --isolated --exit-zero` | 39.8 | 36.9 | 42.4 |
| `ruff check airflow --no-cache --isolated --exit-zero` | 134.5 | 131.3 | 140.6 |
| `ruff check airflow --isolated --exit-zero` | 39.1 | 37.2 | 44.3 |

I had Claude adapt one of the
[scripts](https://github.com/sharkdp/hyperfine/blob/master/scripts/plot_whisker.py)
from the hyperfine repo to make this plot, so it's not quite perfect,
but maybe it's still useful. The table is probably more reliable for
close comparisons. I'll put more details about the benchmarks below for
the sake of future reproducibility.

<img width="4472" height="2368" alt="image"
src="https://github.com/user-attachments/assets/1c42d13e-818a-44e7-b34c-247340a936d7"
/>

<details><summary>Benchmark details</summary>
<p>

The versions of each project:
- CPython: 6322edd260e8cad4b09636e05ddfb794a96a0451, the 3.10 branch
from the contributing docs
- `home-assistant`: 5585376b406f099fb29a970b160877b57e5efcb0
- `airflow`: 29a1cb0cfde9d99b1774571688ed86cb60123896

The last two are just the main branches at the time I cloned the repos.

I don't think our Ruff config should be applied since I used
`--isolated`, but these are cloned into my copy of Ruff at
`crates/ruff_linter/resources/test`, and I trimmed the
`./target/release/` prefix from each of the commands, but these are
builds of Ruff in release mode.

And here's the script with the `hyperfine` invocation:

```shell
#!/bin/bash

cargo build --release --bin ruff

# git clone --depth 1 https://github.com/home-assistant/core crates/ruff_linter/resources/test/home-assistant
# git clone --depth 1 https://github.com/apache/airflow crates/ruff_linter/resources/test/airflow

bin=./target/release/ruff
resources=./crates/ruff_linter/resources/test
cpython=$resources/cpython
home_assistant=$resources/home-assistant
airflow=$resources/airflow

base=${1:-bench}

hyperfine --warmup 10 --export-json $base.json --export-markdown $base.md \
		  "$bin check $cpython --no-cache --isolated --exit-zero" \
		  "$bin check $cpython --isolated --exit-zero" \
		  "$bin check $home_assistant --no-cache --isolated --exit-zero" \
		  "$bin check $home_assistant --isolated --exit-zero" \
		  "$bin check $airflow --no-cache --isolated --exit-zero" \
		  "$bin check $airflow --isolated --exit-zero"
```

I ran this once on `main` (`baseline` in the graph, top half of the
table) and once on this branch (`nocache` and bottom of the table).

</p>
</details>
2025-08-12 15:28:44 -04:00
Carl Meyer
13bdba5d28 [ty] support recursive type aliases (#19805)
## Summary

Support recursive type aliases by adding a `Type::TypeAlias` type
variant, which allows referring to a type alias directly as a type
without eagerly unpacking it to its value.
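
For example, an alias like this (a minimal hypothetical case using the PEP 695
`type` statement, not taken from the added tests) can now refer to itself:

```python
type Json = None | bool | int | float | str | list[Json] | dict[str, Json]

def parse(text: str) -> Json: ...

def dump(value: Json) -> str: ...
```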

We still unpack type aliases when they are added to intersections and
unions, so that we can simplify the intersection/union appropriately
based on the unpacked value of the type alias.

This introduces new possible recursive types, and so also requires
expanding our usage of recursion-detecting visitors in Type methods. The
use of these visitors is still not fully comprehensive in this PR, and
will require further expansion to support recursion in more kinds of
types (I already have further work on this locally), but I think it may
be better to do this incrementally in multiple PRs.

## Test Plan

Added some recursive type-alias tests and made them pass.
2025-08-12 09:03:10 -07:00
Alex Waygood
d76fd103ae [ty] Remove unsafe salsa::Update implementations in tuple.rs (#19880) 2025-08-12 15:53:34 +01:00
Matthew Mckee
ad28b80f96 [ty] Function argument inlay hints (#19269) 2025-08-12 13:56:54 +00:00
Alex Waygood
3458f365da [ty] Remove Salsa interning for TypedDictType (#19879) 2025-08-12 14:35:26 +01:00
Harutaka Kawamura
94cfdf4b40 Fix lint.future-annotations link (#19876) 2025-08-12 14:45:06 +02:00
Alex Waygood
498a04804d [ty] Reduce memory usage of TupleSpec and TupleType (#19872) 2025-08-12 12:51:16 +01:00
Ibraheem Ahmed
f34b65b7a0 [ty] Track heap usage of salsa structs (#19790)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-08-12 13:28:44 +02:00
Micha Reiser
6a05d46ef6 Update salsa to pull in tracked struct changes (#19843) 2025-08-12 13:17:46 +02:00
Carl Meyer
28820db1cd [ty] simplify CycleDetector::visit signature (#19873)
## Summary

After https://github.com/astral-sh/ruff/pull/19871, I realized that now
that we are passing around shared references to `CycleDetector`
visitors, we can now also simplify the `visit` callback signature; we
don't need to smuggle a single visitor reference through it anymore.
This is a pretty minor simplification, and it doesn't really make
anything shorter since I typically used a very short name (`v`) for the
smuggled reference, but I think it reduces cognitive overhead in reading
these `visit` usages; the extra variable would likely be confusing
otherwise for a reader.

## Test Plan

Existing CI.
2025-08-11 17:12:26 -07:00
Carl Meyer
ea1aa9ebfe [ty] use interior mutability in type visitors (#19871)
## Summary

Type visitors are conceptually immutable, they just internally track the
types they've seen (and some maintain a cache of results.) Passing
around mutable visitors everywhere can get us into borrow-checker
trouble in some cases, where we need to recursively pass along the
visitor inside more than one closure with non-disjoint lifetime.

Use interior mutability (via `RefCell` and `Cell`) inside the visitors
instead, to allow us to pass around shared references.

## Test Plan

Existing tests.
2025-08-11 15:42:53 -07:00
Anh-Dung Nguyen
e72f10be2d [ty] Fix tool name is None when no ty path is given in ty_benchmark (#19870)
## Summary

When running `ty_benchmark`, I found that the ty tool name shows up as
`None` when no `ty_path` is given, because `str(None) == 'None'`.
<img width="1011" height="168" alt="image"
src="https://github.com/user-attachments/assets/cf3e6d98-2329-48e9-b180-c72e4f01ccb6"
/>

## Test Plan
Minor fix, tested locally
<img width="1105" height="218" alt="image"
src="https://github.com/user-attachments/assets/173128c9-dcfa-49f1-a58d-1b39a6c6b53b"
/>
2025-08-11 21:26:30 +00:00
Alex Waygood
d2fbf2af8f [ty] Remove Type::Tuple (#19669) 2025-08-11 22:03:32 +01:00
Micha Reiser
2abd683376 [ty] Short circuit ReachabilityConstraints::analyze_single for dynamic types (#19867) 2025-08-11 21:58:34 +02:00
Douglas Creager
dc84645c36 [ty] Use separate Rust types for bound and unbound type variables (#19796)
This PR creates separate Rust types for bound and unbound type
variables, as proposed in https://github.com/astral-sh/ty/issues/926.

Closes https://github.com/astral-sh/ty/issues/926

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-08-11 15:29:58 -04:00
Alex Waygood
f3f4db7104 [ty] Add static-frame as a walltime benchmark (#19844) 2025-08-11 15:38:56 +01:00
Matthew Mckee
5063a73d7f [ty] Update goto range for attribute access to only target the attribute (#19848) 2025-08-11 16:24:14 +02:00
Sneha Prabhu
6bc52f2855 Add AIR301 rule (#17707)

## Summary

Add "airflow.secrets.cache.SecretCache" →
"airflow.sdk.cache.SecretCache" rule

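A minimal before/after sketch (illustrative only; the exact import forms the
rule matches may differ):

```python
# Flagged by AIR301:
from airflow.secrets.cache import SecretCache

# Suggested replacement:
from airflow.sdk.cache import SecretCache
```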

## Test Plan


---------

Co-authored-by: Wei Lee <weilee.rx@gmail.com>
2025-08-11 09:14:43 -04:00
Brent Westbrook
c433865801 Avoid underflow in default ranges before a BOM (#19839)
Summary
--

This fixes a regression caused by the BOM handling in #19806. Most
diagnostics already account for the BOM in their ranges, but those that
use `TextRange::default` to mean the beginning of the file do not,
causing an underflow in `RenderableAnnotation::new` when subtracting the
BOM-shifted `snippet_start` from the annotation range.

I ran into this when trying to run benchmarks on CPython in preparation
for caching work. The file `cpython/Lib/test/bad_coding2.py` was causing
a crash because it had a default-range `I002` diagnostic, with a BOM.


7cc3f1ebe9/crates/ruff_linter/src/rules/isort/rules/add_required_imports.rs (L122-L126)

The fix here is just to saturate to zero instead of panicking. I
considered adding a `TextRange::saturating_sub` method, but I wasn't
sure it was worth it for this one use. I'm happy to do that if
preferred, though.

Saturating seemed easier than shifting the affected annotations over,
but that could be another solution.

Test Plan
--

A new `ruff_db` test that reproduced the issue and manual testing
against the CPython file mentioned above
2025-08-11 08:52:27 -04:00
renovate[bot]
5b6d0d17f1 Update actions/download-artifact digest to de96f46 (#19852)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-08-11 06:34:09 +00:00
renovate[bot]
5124cb393f Update docker/login-action action to v3.5.0 (#19860)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-11 08:33:58 +02:00
renovate[bot]
11eb8d8f9f Update rui314/setup-mold digest to 7344740 (#19853)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-11 08:33:39 +02:00
renovate[bot]
37617d1e37 Update cargo-bins/cargo-binstall action to v1.14.4 (#19855)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-11 08:33:30 +02:00
renovate[bot]
14f6a3f133 Update actions/cache action to v4.2.4 (#19854)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-11 08:32:58 +02:00
renovate[bot]
ec65ca379d Update Rust crate hashbrown to v0.15.5 (#19858)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-11 08:26:45 +02:00
renovate[bot]
02c0db6781 Update Rust crate camino to v1.1.11 (#19857)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-11 08:25:44 +02:00
renovate[bot]
18f2b27a55 Update Rust crate proc-macro2 to v1.0.96 (#19859)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-11 08:25:34 +02:00
renovate[bot]
618692cfd2 Update dependency ruff to v0.12.8 (#19856)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-11 08:25:19 +02:00
Frazer McLean
b8a9b1994b SIM905: Fix handling of U+001C..U+001F whitespace (#19849)
Fixes #19845

## Summary

The linked issue explains it well, Rust and Python do not agree on what
whitespace is for the purposes of `str.split`.
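
For example (illustrative of the mismatch described above):

```python
# Python treats U+001C..U+001F as whitespace for str.split() ...
print("a\x1cb".split())  # ['a', 'b']
print("\x1c".isspace())  # True
# ... whereas Rust's char::is_whitespace() returns false for these code
# points, which is what caused the false behavior here.
```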
2025-08-11 03:43:04 +00:00
Frazer McLean
4d8ccb6125 RUF064: offer a safe fix for multi-digit zeros (#19847)
Fixes #19010

## Summary

See #19010. `0` was not considered a violation, but `000` was. The
latter will now be fixed to `0o000`.
2025-08-10 20:35:27 +00:00
Brent Westbrook
8230b79829 Clean up unused rendering code in ruff_linter (#19832)
## Summary

This is a follow-up to
https://github.com/astral-sh/ruff/pull/19415#discussion_r2263456740 to
remove some unused code. As Micha noticed,
`GroupedEmitter::with_show_source` was only used in local unit tests[^1]
and was safe to remove. This allowed deleting `MessageCodeFrame` and a
lot more helper code previously shared with the `full` output format.

I also moved some other code from `text.rs` and `message/mod.rs` into
`grouped.rs` that is now only used for the `grouped` format. With a
little refactoring of the `concise` rendering logic in `ruff_db`, we
could probably remove `RuleCodeAndBody` too. The only difference I see
from the `concise` output is whether we print the filename next to the
row and column or not:

```shell
> ruff check --output-format concise
try.py:1:8: F401 [*] `math` imported but unused
> ruff check --output-format grouped
try.py:
  1:8 F401 [*] `math` imported but unused
```

But I didn't try to do that here.

## Test Plan

Existing tests, with the source code no longer displayed. I also deleted
one test, as it was now a duplicate of the `default` test.

[^1]: "Local unit tests" as opposed to all of our linter snapshot tests,
as is the case for `TextEmitter::with_show_fix_diff`. We also want to
expose that to users eventually
(https://github.com/astral-sh/ruff/issues/7352), which I don't believe
is the case for the `grouped` format.
2025-08-09 14:20:48 -04:00
Alex Waygood
5a116e48c3 [ty] Add Salsa caching to TupleType::to_class_type (#19840) 2025-08-09 09:29:26 +01:00
Douglas Creager
3a542a80f6 [ty] Handle cycles when finding implicit attributes (#19833)
The [minimal
reproduction](https://gist.github.com/dcreager/fc53c59b30d7ce71d478dcb2c1c56444)
of https://github.com/astral-sh/ty/issues/948 is an example of a class
with implicit attributes whose types end up depending on themselves. Our
existing cycle detection for `infer_expression_types` is usually enough
to handle this situation correctly, but when there are very many of
these implicit attributes, we get a combinatorial explosion of running
time and memory usage.

Adding a separate cycle handler for `ClassLiteral::implicit_attribute`
lets us catch and recover from this situation earlier.

Closes https://github.com/astral-sh/ty/issues/948
2025-08-08 17:01:17 -04:00
Aria Desires
4be6fc0979 [ty] fix goto-definition on imports (#19834)
The stub mapper wasn't being passed into this codepath. It is now being
used. A previously messed up test result I intentionally checked in was
subsequently fixed.
2025-08-08 16:46:28 -04:00
Aria Desires
7cc3f1ebe9 [ty] Implement stdlib stub mapping (#19529)
by using essentially the same logic for system site-packages, on the
assumption that system site-packages are always a subdir of the stdlib
we were looking for.
2025-08-08 15:52:15 -04:00
Dan Parizher
0ec4801b0d [flake8-comprehensions] Fix false positive for C420 with attribute, subscript, or slice assignment targets (#19513)
## Summary

Fixes #19511
2025-08-08 15:02:30 -04:00
Eric Jolibois
0095ff4c1a [ty] Implement module-level __getattr__ support (#19791)
fix https://github.com/astral-sh/ty/issues/943

## Summary

Add module-level `__getattr__` support for ty's type checker, fixing
issue https://github.com/astral-sh/ty/issues/943.
Module-level `__getattr__` functions ([PEP
562](https://peps.python.org/pep-0562/)) are now respected when
resolving dynamic attributes, matching the behavior of mypy and pyright.
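
A minimal sketch of the pattern that is now understood (hypothetical module
names):

```python
# stuff.py
def __getattr__(name: str) -> int:
    if name == "answer":
        return 42
    raise AttributeError(name)

# main.py
import stuff

reveal_type(stuff.answer)  # revealed: int  (resolved via the module-level __getattr__)
```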

## Implementation

Thanks @sharkdp for the guidance in
https://github.com/astral-sh/ty/issues/943#issuecomment-3157566579
- Adds module-specific `__getattr__` resolution in
`ModuleLiteral.static_member()`
- Maintains proper attribute precedence: explicit attributes >
submodules > `__getattr__`

## Test Plan
- New mdtest covering basic functionality, type annotations, attribute
precedence, and edge cases
(run `cargo nextest run -p ty_python_semantic mdtest__import_module_getattr`)
- All new tests pass, verifying `__getattr__` is called correctly and
returns proper types
  - Existing test suite passes, ensuring no regressions introduced
2025-08-08 10:39:37 -07:00
Brent Westbrook
44755e6e86 Move full diagnostic rendering to ruff_db (#19415)
## Summary

This PR switches the `full` output format in Ruff over to use the
rendering code
in `ruff_db`. As proposed in the design doc, this involves a lot of
changes to the snapshot output.

I also had to comment out this assertion with a TODO to replace it after
https://github.com/astral-sh/ruff/issues/19688 because many of Ruff's
"file-level" annotations aren't actually file-level. They just happen to
occur at the start of the file, especially in tests with very short
snippets.


529d81daca/crates/ruff_annotate_snippets/src/renderer/display_list.rs (L1204-L1208)

I broke up the snapshot commits at the end into several blocks, but I
don't think it's enough to help with review. The first few (notebooks,
syntax errors, and test rules) are small enough to look at, but I
couldn't really think of other categories beyond that. I'm happy to
break those up or pick out specific examples beyond what I have below,
if that would help.

The minimal code changes are in this
[range](abd28f1e77),
with the snapshot commits following. Moving the `FullRenderer` and
updating the `EmitterFlags` aren't strictly necessary either. I even
dropped the renderer commit this morning but figured it made sense to
keep it since we have the `full` module for tests. I don't feel strongly
either way.

## Test Plan

I did actually click through all 1700 snapshots individually instead of
accepting them all at once, although I moved through them quickly. There
are a
few main categories:

### Lint diagnostics

```diff
-unused.py:8:19: F401 [*] `pathlib` imported but unused
+F401 [*] `pathlib` imported but unused
+  --> unused.py:8:19
    |
  7 | # Unused, _not_ marked as required (due to the alias).
  8 | import pathlib as non_alias
-   |                   ^^^^^^^^^ F401
+   |                   ^^^^^^^^^
  9 |
 10 | # Unused, marked as required.
    |
-   = help: Remove unused import: `pathlib`
+help: Remove unused import: `pathlib`
```

- The filename and line numbers are moved to the second line
- The second noqa code next to the underline is removed

### Syntax errors

These are much like the above.

```diff
-    -:1:16: invalid-syntax: Expected one or more symbol names after import
+    invalid-syntax: Expected one or more symbol names after import
+     --> -:1:16
       |
     1 | from foo import
       |                ^
```

One thing I noticed while reviewing some of these, but I don't think is
strictly syntax-error-related, is that some of the new diagnostics have
a little less context after the error. I don't think this is a problem,
but it's one small discrepancy I hadn't noticed before. Here's a minor
example:

```diff
-syntax_errors.py:1:15: invalid-syntax: Expected one or more symbol names after import
+invalid-syntax: Expected one or more symbol names after import
+ --> syntax_errors.py:1:15
   |
 1 | from os import
   |               ^
 2 |
 3 | if call(foo
-4 |     def bar():
   |
```

And one of the biggest examples:

```diff
-E30_syntax_error.py:18:11: invalid-syntax: Expected ')', found newline
+invalid-syntax: Expected ')', found newline
+  --> E30_syntax_error.py:18:11
    |
 16 |         pass
 17 |
 18 | foo = Foo(
    |           ^
-19 |
-20 |
-21 | def top(
    |
```

Similarly, a few of the lint diagnostics showed that the cut indicator
calculation for overly long lines is also slightly different, but I
think that's okay too.

### Full-file diagnostics

```diff
-comment.py:1:1: I002 [*] Missing required import: `from __future__ import annotations`
+I002 [*] Missing required import: `from __future__ import annotations`
+--> comment.py:1:1
+help: Insert required import: `from __future__ import annotations`
+
```

As noted above, these will be much more rare after #19688 too. This case
isn't a true full-file diagnostic and will render a snippet in the
future, but you can see that we're now rendering the help message that
would have been discarded before. In contrast, this is a true full-file
diagnostic and should still look like this after #19688:

```diff
-__init__.py:1:1: A005 Module `logging` shadows a Python standard-library module
+A005 Module `logging` shadows a Python standard-library module
+--> __init__.py:1:1
```

### Jupyter notebooks

There's nothing particularly different about these, just showing off the
cell index again.

```diff
-    Jupyter.ipynb:cell 3:1:7: F821 Undefined name `x`
+    F821 Undefined name `x`
+     --> Jupyter.ipynb:cell 3:1:7
       |
     1 | print(x)
-      |       ^ F821
+      |       ^
       |
```
2025-08-08 12:56:23 -04:00
Alex Waygood
8489816edc [ty] Improve ability to solve TypeVars when they appear in unions (#19829) 2025-08-08 17:50:37 +01:00
Micha Reiser
6b0eadfb4d Update salsa (#19827) 2025-08-08 17:51:51 +02:00
Brent Westbrook
8199154d54 [ty] Fix a few more diagnostic differences from Ruff (#19806)
## Summary

Fixes the remaining range reporting differences between the `ruff_db`
diagnostic rendering and Ruff's existing rendering, as noted in
https://github.com/astral-sh/ruff/pull/19415#issuecomment-3160525595.

This PR is structured as a series of three pairs. The first commit in
each pair adds a test showing the previous behavior, followed by a fix
and the updated snapshot. It's quite a small PR, but that might be
helpful just for the contrast.

You can also look at [this
range](052e656c6c..c3ea51030d)
of commits from #19415 to see the impact on real Ruff diagnostics. I
spun these commits out of that PR.

## Test Plan

New `ruff_db` tests
2025-08-08 11:31:19 -04:00
ember91
50e1ecc086 [pylint] Use lowercase hex characters to match the formatter (PLE2513) (#19808)
Previously, the `PLE2513` fix replaced ESC and SUB characters with uppercase
hexadecimal escapes such as `\x1B`, while the formatter writes them as
lowercase `\x1b`. The fix now emits lowercase escapes to match the formatter.


---------

Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-08-08 12:25:11 +00:00
Micha Reiser
fd35435281 [ty] Improve performance of subtyping and assignability checks for protocols (#19824) 2025-08-08 13:05:12 +02:00
Dhruv Manilawala
fc72ff4a94 [ty] Send a single request for registrations/unregistrations (#19822)
## Summary

This is a small refactor to update the server to send a single request
to perform registrations and unregistrations of dynamic capabilities.

## Test Plan

Existing E2E test cases pass, add a new test case to verify multiple
registrations.
2025-08-08 08:42:48 +00:00
Jack O'Connor
827456f977 [ty] more cases for the class body global fallback 2025-08-07 17:30:27 -07:00
Shunsuke Shibayama
462adfd0e6 [ty] fix incorrect member narrowing (#19802)
## Summary

Reported in:
https://github.com/astral-sh/ruff/pull/19795#issuecomment-3161981945

If a root expression is reassigned, narrowing on the member should be
invalidated, but there was an oversight in the current implementation.

This PR fixes that, and also removes some unnecessary handling.
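
A hypothetical illustration of the behavior (not the exact test case added to
`nested.md`):

```python
class C:
    x: int | None = None

c = C()
if c.x is not None:
    reveal_type(c.x)  # revealed: int
    c = C()           # the root expression `c` is reassigned...
    reveal_type(c.x)  # ...so narrowing on `c.x` is invalidated: int | None
```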

## Test Plan

New test cases in `narrow/conditionals/nested.md`.
2025-08-07 16:04:07 -07:00
Dylan
f51a228f04 Bump 0.12.8 (#19813) 2025-08-07 13:52:16 -05:00
Andrew Gallant
d5e1b7983e [ty] Fix static assertion size check (#19814)
A `Segment` has a `Box` in it, which has a platform dependent size.
Restrict the check to only 64-bit targets.
2025-08-07 13:38:16 -05:00
Micha Reiser
7dfde3b929 Update Rust toolchain to 1.89 (#19807) 2025-08-07 18:21:50 +02:00
Dhruv Manilawala
b22586fa0e [ty] Add ty.inlayHints.variableTypes server option (#19780)
## Summary

This PR adds a new `ty.inlayHints.variableTypes` server setting to
configure ty to include / exclude inlay hints at variable position.

Currently, we only support inlay hints at this position so this option
basically translates to enabling / disabling inlay hints for now :)

The VS Code extension PR is
https://github.com/astral-sh/ty-vscode/pull/112.

closes: astral-sh/ty#472

## Test Plan

Add E2E tests.
2025-08-07 19:16:51 +05:30
Alex Waygood
c401a6d86e [ty] Add failing tests for tuple subclasses (#19803) 2025-08-07 13:11:15 +00:00
Dhruv Manilawala
7b6abfb030 [ty] Add ty.experimental.rename server setting (#19800)
## Summary

This PR is a follow-up from https://github.com/astral-sh/ruff/pull/19551
and adds a new `ty.experimental.rename` setting to conditionally
register for the rename capability. The complementary PR in ty VS Code
extension is https://github.com/astral-sh/ty-vscode/pull/111.

This is done using dynamic registration after the settings have been
resolved. The experimental group is part of the global settings because
they're applied for all workspaces that are managed by the client.

## Test Plan

Add E2E tests.

In VS Code, with the following setting:
```json
{
	"ty.experimental.rename": "true",
	"python.languageServer": "None"
}
```

I get the relevant log entry:
```
2025-08-07 16:05:40.598709000 DEBUG client_response{id=3 method="client/registerCapability"}: Registered rename capability
```

And, I'm able to rename a symbol. Once I set it to `false`, then I can
see this log entry:

```
2025-08-07 16:08:39.027876000 DEBUG Rename capability is disabled in the client settings
```

And, I don't see the "Rename Symbol" open in the VS Code dropdown.


https://github.com/user-attachments/assets/501659df-ba96-4252-bf51-6f22acb4920b
2025-08-07 12:54:58 +00:00
UnboundVariable
b005cdb7ff [ty] Implemented support for "rename" language server feature (#19551)
This PR adds support for the "rename" language server feature. It builds
upon existing functionality used for "go to references".

The "rename" feature involves two language server requests. The first is
a "prepare rename" request that determines whether renaming should be
possible for the identifier at the current offset. The second is a
"rename" request that returns a list of file ranges where the rename
should be applied.

Care must be taken when attempting to rename symbols that span files,
especially if the symbols are defined in files that are not part of the
project. We don't want to modify code in the user's Python environment
or in the vendored stub files.

I found a few bugs in the "go to references" feature when implementing
"rename", and those bug fixes are included in this PR.

---------

Co-authored-by: UnboundVariable <unbound@gmail.com>
2025-08-07 15:58:18 +05:30
Micha Reiser
b96aa4605b [ty] Reduce size of member table (#19572) 2025-08-07 11:16:04 +02:00
Dhruv Manilawala
cc97579c3b [ty] Move server capabilities creation (#19798) 2025-08-07 04:28:08 +00:00
Matthew Mckee
ef1802b94f [ty] Repurpose FunctionType.into_bound_method_type to return BoundMethodType (#19793)
## Summary

As per our naming scheme (at least for callable types) this should
return a `BoundMethodType`, or be renamed, but it makes more sense to
change the return type.

I also ensure `ClassType.into_callable` returns a `Type::Callable` in
the changed branch.

Ideally we could return a `CallableType` from these `into_callable`
functions (and rename to `into_callable_type` but because of unions we
cannot do this.
2025-08-06 15:24:59 -07:00
David Peter
98df62db79 [ty] Validate writes to TypedDict keys (#19782)
## Summary

Validates writes to `TypedDict` keys, for example:

```py
class Person(TypedDict):
    name: str
    age: int | None


def f(person: Person):
    person["naem"] = "Alice"  # error: [invalid-key]

    person["age"] = "42"  # error: [invalid-assignment]
```

The new specialized `invalid-assignment` diagnostic looks like this:

<img width="1160" height="279" alt="image"
src="https://github.com/user-attachments/assets/51259455-3501-4829-a84e-df26ff90bd89"
/>

## Ecosystem analysis

As far as I can tell, all true positives!

There are some extremely long diagnostic messages. We should truncate
our display of overload sets somehow.

## Test Plan

New Markdown tests
2025-08-06 15:19:13 -07:00
Matthew Mckee
65b39f2ca9 [ty] Add support for using the test command emitted when a mdtest fails (#19794)
## Summary

When seeing a failed test like 

```bash
is_subtype_of.md - Subtype relation - Callable - Class literals - Classes with `__new_… (1e9782853227c019)

  crates/ty_python_semantic/resources/mdtest/type_properties/is_subtype_of.md:1810 unexpected error: [unresolved-reference] "Name `Aa` used when not defined"

To rerun this specific test, set the environment variable: MDTEST_TEST_FILTER='is_subtype_of.md - Subtype relation - Callable - Class literals - Classes with `__new_… (1e9782853227c019)'
MDTEST_TEST_FILTER='is_subtype_of.md - Subtype relation - Callable - Class literals - Classes with `__new_… (1e9782853227c019)' cargo test -p ty_python_semantic --test mdtest -- mdtest__type_properties_is_subtype_of
```

running the following now works

```bash
MDTEST_TEST_FILTER='is_subtype_of.md - Subtype relation - Callable - Class literals - Classes with `__new_… (1e9782853227c019)' cargo test -p ty_python_semantic --test mdtest -- mdtest__type_properties_is_subtype_of
```


## Test Plan

Do we have tests for the test runner? :)
2025-08-06 15:02:10 -07:00
Douglas Creager
585ce12ace [ty] typing.Self is bound by the method, not the class (#19784)
This fixes our logic for binding a legacy typevar with its binding
context. (To recap, a legacy typevar starts out "unbound" when it is
first created, and each time it's used in a generic class or function,
we "bind" it with the corresponding `Definition`.)

We treat `typing.Self` the same as a legacy typevar, and so we apply
this binding logic to it too. Before, we were using the enclosing class
as its binding context. But that's not correct — it's the method where
`typing.Self` is used that binds the typevar. (Each invocation of the
method will find a new specialization of `Self` based on the specific
instance type containing the invoked method.)
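
For illustration (a hypothetical example of why the binding context matters):

```python
from typing import Self

class Base:
    def clone(self) -> Self:
        return self

class Sub(Base): ...

reveal_type(Sub().clone())  # Sub, not Base: `Self` is specialized per
                            # invocation, from the instance it is called on
```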

This required plumbing through some additional state to the
`in_type_expression` method.

This also revealed that we weren't handling `Self`-typed instance
attributes correctly (but were coincidentally not getting the expected
false positive diagnostics).
2025-08-06 17:26:17 -04:00
Ibraheem Ahmed
21ac16db85 [ty] Avoid overcounting shared memory usage (#19773)
## Summary

Use a global tracker to avoid double counting `Arc` instances.
2025-08-06 15:32:02 -04:00
Dan Parizher
745742e414 [pylint] Mark PLC0207 fixes as unsafe when *args unpacking is present (#19679)
## Summary

Fixes #19660
2025-08-06 14:19:49 -04:00
Dhruv Manilawala
ec5660d786 [ty] Avoid warning for old settings schema too aggressively (#19787)
## Summary

This PR avoids warning users too aggressively: it checks the structure of the
initialization and workspace options and skips the warning if they conform to
the old schema.

## Test Plan



https://github.com/user-attachments/assets/9ade9dc4-90cb-4fd4-abd0-4bc4177df3db
2025-08-06 16:16:59 +00:00
David Peter
b96929ee19 [ty] Disallow typing.TypedDict in type expressions (#19777)
## Summary

Disallow `typing.TypedDict` in type expressions.

Related reference: https://github.com/python/mypy/issues/11030
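
For example (a minimal sketch of what is now rejected; the exact diagnostic
wording may differ):

```python
from typing import TypedDict

def f(x: TypedDict) -> None: ...  # error: not allowed in a type expression

class Movie(TypedDict):  # using it as a base class is still fine
    title: str
```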

## Test Plan

New Markdown tests, checked ecosystem and conformance test impact.
2025-08-06 15:58:35 +02:00
Dhruv Manilawala
fa711fa40f [ty] Warn users if server received unknown options (#19779)
## Summary

This PR updates the client settings handling to recognize unknown
options provided by the user and show a warning popup along with a
warning log message.

## Test Plan

Add E2E tests.
2025-08-06 13:11:13 +00:00
Dhruv Manilawala
1f29a04e9a [ty] Support LSP client settings (#19614)
## Summary

This PR implements support for providing LSP client settings.

The complementary PR in the ty VS Code extension:
astral-sh/ty-vscode#106.

Notes for the previous iteration of this PR is in
https://github.com/astral-sh/ruff/pull/19614#issuecomment-3136477864
(click on "Details").

Specifically, this PR splits the client settings into 3 distinct groups.
Keep in mind that these groups are not visible to the user, they're
merely an implementation detail. The groups are:
1. `GlobalOptions` - these are the options that are global to the
language server and will be the same for all the workspaces that are
handled by the server
2. `WorkspaceOptions` - these are the options that are specific to a
workspace and will be applied only when running any logic for that
workspace
3. `InitializationOptions` - these are the options that can be specified
during initialization

The initialization options are a superset that contains both the global
and workspace options flattened into a 1-dimensional structure. This
means that the user can specify any and all fields present in
`GlobalOptions` and `WorkspaceOptions` in the initialization options in
addition to the fields that are _specific_ to initialization options.

From the current set of available settings, the following are only available
during initialization because they are required at that time, are static
during the runtime of the server, and changing their values requires a
restart to take effect:
- `logLevel`
- `logFile`

And, following are available under `GlobalOptions`:
- `diagnosticMode`

And, following under `WorkspaceOptions`:
- `disableLanguageServices`
- `pythonExtension` (Python environment information that is populated by
the ty VS Code extension)

### `workspace/configuration`

This request allows server to ask the client for configuration to a
specific workspace. But, this is only supported by the client that has
the `workspace.configuration` client capability set to `true`. What to
do for clients that don't support pulling configurations?

In that case, the settings needs to be provided in the initialization
options and updating the values of those settings can only be done by
restarting the server. With the way this is implemented, this means that
if the client does not support pulling workspace configuration then
there's no way to specify settings specific to a workspace. Earlier,
this would've been possible by providing an array of client options with
an additional field which specifies which workspace the options belong
to but that adds complexity and clients that actually do not support
`workspace/configuration` would usually not support multiple workspaces
either.

Now, for the clients that do support this, the server will initiate the
request to get the configuration for all the workspaces at the start of
the server. Once the server receives these options, it will resolve them
for each workspace as follows:
1. Combine the client options sent during initialization with the
options specific to the workspace creating the final client options
that's specific to this workspace
2. Create a global options by combining the global options from (1) for
all workspaces which in turn will also combine the global options sent
during initialization

The global options are resolved into the global settings and are
available on the `Session` which is initialized with the default global
settings. The workspace options are resolved into the workspace settings
and are available on the respective `Workspace`.

The `SessionSnapshot` contains the global settings while the document
snapshot contains the workspace settings. We could add the global
settings to the document snapshot but that's currently not needed.

### Document diagnostic dynamic registration

Currently, the document diagnostic server capability is created based on
the `diagnosticMode` sent during initialization. But, that wouldn't
provide us with the complete picture. This means the server needs to
defer registering the document diagnostic capability at a later point
once the settings have been resolved.

This is done using dynamic registration for clients that support it. For
clients that do not support dynamic registration for document diagnostic
capability, the server advertises itself as always supporting workspace
diagnostics and work done progress token.

This dynamic registration now allows us to change the server capability
for workspace diagnostics based on the resolved `diagnosticMode` value.
In the future, once `workspace/didChangeConfiguration` is supported, we
can avoid the server restart when users have changed any client
settings.

## Test Plan

Add integration tests and recorded videos on the user experience in
various editors:

### VS Code

For VS Code users, the settings experience is unchanged because the
extension defines it's own interface on how the user can specify the
server setting. This means everything is under the `ty.*` namespace as
usual.


https://github.com/user-attachments/assets/c2e5ba5c-7617-406e-a09d-e397ce9c3b93

### Zed

For Zed, the settings experience has changed. Users can specify settings
during initialization:

```json
{
  "lsp": {
    "ty": {
      "initialization_options": {
        "logLevel": "debug",
        "logFile": "~/.cache/ty.log",
        "diagnosticMode": "workspace",
        "disableLanguageServices": true
      }
    },
  }
}
```

Or, can specify the options under the `settings` key:

```json
{
  "lsp": {
    "ty": {
      "settings": {
        "ty": {
          "diagnosticMode": "openFilesOnly",
          "disableLanguageServices": true
        }
      },
      "initialization_options": {
        "logLevel": "debug",
        "logFile": "~/.cache/ty.log"
      }
    },
  }
}
```

The `logLevel` and `logFile` setting still needs to go under the
initialization options because they're required by the server during
initialization.

We can remove the nesting of the settings under the "ty" namespace by
updating the return type of
db9ea0cdfd/src/tychecker.rs (L45-L49)
to be wrapped inside `ty` directly so that users can avoid doing the
double nesting.

There's one issue here which is that if the `diagnosticMode` is
specified in both the initialization option and settings key, then the
resolution is a bit different - if either of them is set to be
`workspace`, then it wins which means that in the following
configuration, the diagnostic mode is `workspace`:

```json
{
  "lsp": {
    "ty": {
      "settings": {
        "ty": {
          "diagnosticMode": "openFilesOnly"
        }
      },
      "initialization_options": {
        "diagnosticMode": "workspace"
      }
    },
  }
}
```

This behavior is mainly a result of combining global options from
various workspace configuration results. Users should not be able to
provide global options in multiple workspaces but that restriction
cannot be done on the server side. The ty VS Code extension restricts
these global settings to only be set in the user settings and not in
workspace settings but we do not control extensions in other editors.


https://github.com/user-attachments/assets/8e2d6c09-18e6-49e5-ab78-6cf942fe1255

### Neovim

Same as in Zed.

### Other

Other editors that do not support `workspace/configuration`, the users
would need to provide the server settings during initialization.
2025-08-06 18:37:21 +05:30
Alex Waygood
529d81daca [ty] Improve subscript narrowing for "safe mutable classes" (#19781)
## Summary

This PR improves the `is_safe_mutable_class` function in `infer.rs` in
several ways:
- It uses `KnownClass::to_instance()` for all "safe mutable classes".
Previously, we were using `SpecialFormType::instance_fallback()` for
some variants -- I'm not totally sure why. Switching to
`KnownClass::to_instance()` for all "safe mutable classes" fixes a
number of TODOs in the `assignment.md` mdtest suite
- Rather than eagerly calling `.to_instance(db)` on all "safe mutable
classes" every time `is_safe_mutable_class` is called, we now only call
it lazily on each element, allowing us to short-circuit more
effectively.
- I removed the entry entirely for `TypedDict` from the list of "safe
mutable classes", as it's not correct.
`SpecialFormType::TypedDict.instance_fallback(db)` just returns an
instance type representing "any instance of `typing._SpecialForm`",
which I don't think was the intent of this code. No tests fail as a
result of removing this entry, as we already check separately whether an
object is an inhabitant of a `TypedDict` type (and consider that object
safe-mutable if so!).

## Test Plan

mdtests updated
2025-08-06 12:26:25 +01:00
David Peter
4887bdf205 [ty] Infer types for key-based access on TypedDicts (#19763)
## Summary

This PR adds type inference for key-based access on `TypedDict`s and a
new diagnostic for invalid subscript accesses:

```py
class Person(TypedDict):
    name: str
    age: int | None

alice = Person(name="Alice", age=25)

reveal_type(alice["name"])  # revealed: str
reveal_type(alice["age"])  # revealed: int | None

alice["naem"]  # Unknown key "naem" - did you mean "name"?
```

## Test Plan

Updated Markdown tests
2025-08-06 09:36:33 +02:00
Dan Parizher
e917d309f1 [flake8_import_conventions] Avoid false positives for NFKC-normalized __debug__ import aliases in ICN001 (#19411)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-08-06 06:42:51 +00:00
Matthew Mckee
18ad2848e3 Display generic function signature properly (#19544)
## Summary

Resolves https://github.com/astral-sh/ty/issues/817

## Test Plan

Update mdtest

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-08-05 16:35:08 -07:00
Brent Westbrook
5bfffe1aa7 [ty] Remap Jupyter notebook cell indices in ruff_db (#19698)
## Summary

This PR remaps ranges in Jupyter notebooks from simple `row:column`
indices in the concatenated source code to `cell:row:col` to match
Ruff's output. This change is unlikely to land upstream in
`annotate-snippets`, but I didn't see a good way around it.

The remapping logic is taken nearly verbatim from here:


cd6bf1457d/crates/ruff_linter/src/message/text.rs (L212-L222)
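
As a rough illustration of the idea (not the actual `ruff_db` code, and
the helper name below is made up): given the number of lines in each
cell of the concatenated source, a global row can be mapped back to a
`cell:row` pair like this:

```py
def remap_row(row: int, cell_line_counts: list[int]) -> tuple[int, int]:
    """Map a 1-based row in the concatenated source to (cell, row-in-cell)."""
    for cell, count in enumerate(cell_line_counts, start=1):
        if row <= count:
            return cell, row
        row -= count
    raise ValueError("row is past the end of the notebook")

# With two 3-line cells, concatenated row 4 is line 1 of cell 2:
print(remap_row(4, [3, 3]))  # (2, 1)
```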


## Test Plan

New `full` rendering test for a notebook

I was mainly focused on Ruff, but in local tests this also works for ty:

```
error[invalid-assignment]: Object of type `Literal[1]` is not assignable to `str`
 --> Untitled.ipynb:cell 1:3:1
  |
1 | import math
2 |
3 | x: str = 1
  | ^
  |
info: rule `invalid-assignment` is enabled by default

error[invalid-assignment]: Object of type `Literal[1]` is not assignable to `str`
 --> Untitled.ipynb:cell 2:3:1
  |
1 | import math
2 |
3 | x: str = 1
  | ^
  |
info: rule `invalid-assignment` is enabled by default
```

This isn't a duplicate diagnostic, just an unimaginative example:

```py
# cell 1
import math

x: str = 1
# cell 2
import math

x: str = 1
```
2025-08-05 14:10:35 -04:00
Brent Westbrook
b324ae1be3 Hide empty snippets for full-file diagnostics (#19653)
Summary
--

This is the other commit I wanted to spin off from #19415, currently
stacked on #19644.

This PR suppresses blank snippets for empty ranges at the very beginning
of a file, and for empty ranges in non-existent files. Ruff includes
empty ranges for IO errors, for example.


f4e93b6335/crates/ruff_linter/src/message/text.rs (L100-L110)

The diagnostics now look like this (new snapshot test):

```
error[test-diagnostic]: main diagnostic message
--> example.py:1:1                             
```

Instead of [^*]

```
error[test-diagnostic]: main diagnostic message
--> example.py:1:1
 |
 |
```

Test Plan
--

A new `ruff_db` test showing the expected output format

[^*]: This doesn't correspond precisely to the example in the PR because
of some details of the diagnostic builder helper methods in `ruff_db`,
but you can see another example in the current version of the summary in
#19415.
2025-08-05 11:20:31 -04:00
Brent Westbrook
2db4e5dbea Use fixed hash width for ty_server diagnostics (#19766)
Summary
--

Fixes a snapshot test failure I saw in #19653 locally and in Windows CI
by
padding the hex ID to 16 digits to match the regex in
`filter_result_id`.


78e5fe0a51/crates/ty_server/tests/e2e/pull_diagnostics.rs (L380-L384)
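
For illustration only, the general idea of fixed-width hex formatting
(the names below are hypothetical, not the actual `ty_server` code):

```py
# Pad an id to a fixed 16-hex-digit width so it always matches a
# `[0-9a-f]{16}`-style filter regex, regardless of the id's magnitude.
result_id = 0x1A2B
padded = f"{result_id:016x}"
assert padded == "0000000000001a2b" and len(padded) == 16
```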

Test Plan
--

I applied this to the branch from #19653 locally and saw that the tests
now
pass. I couldn't reproduce this failure directly on `main` or this
branch,
though.
2025-08-05 10:55:17 -04:00
Alex Waygood
4090297a11 [ty] Fix more false positives related to Generic or Protocol being subscripted with a ParamSpec or TypeVarTuple (#19764) 2025-08-05 15:45:56 +01:00
Simon Lamon
934fd37d2b [ty] Diagnostics for async context managers (#19704)
## Summary

Implements diagnostics for async context managers. Fixes
https://github.com/astral-sh/ty/issues/918.
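
As a rough sketch of the kind of code these diagnostics are aimed at
(assuming one of the checks flags using an object that only defines
`__aenter__`/`__aexit__` in a plain, non-async `with` statement):

```py
class AsyncResource:
    async def __aenter__(self) -> "AsyncResource":
        return self

    async def __aexit__(self, *exc_info: object) -> None:
        return None

def use_resource() -> None:
    # Only the async context manager protocol is implemented, so a plain
    # `with` here should be diagnosed; it needs `async with` in an `async def`.
    with AsyncResource():
        ...
```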

## Test Plan

Mdtests have been added.
2025-08-05 07:41:37 -07:00
Brent Westbrook
78e5fe0a51 Allow hiding the diagnostic severity in ruff_db (#19644)
## Summary

This PR is a spin-off from https://github.com/astral-sh/ruff/pull/19415.
It enables replacing the severity and lint name in a ty-style
diagnostic:

```
error[unused-import]: `os` imported but unused
```

with the noqa code and optional fix availability icon for a Ruff
diagnostic:

```
F401 [*] `os` imported but unused
F821 Undefined name `a`
```

or nothing at all for a Ruff syntax error:

```
SyntaxError: Expected one or more symbol names after import
```

Ruff adds the `SyntaxError` prefix to these messages manually.

Initially (d912458), I just passed a `hide_severity` flag through a
bunch of calls to get it into `annotate-snippets`, but after looking at
it again today, I think reusing the `None` severity/level gave a nicer
result. As I note in a lengthy code comment, I think all of this code
should be temporary and reverted when Ruff gets real severities, so
hopefully it's okay if it feels a little hacky.

I think the main visible downside of this approach is that we can't
style the asterisk in the fix availabilty icon in cyan, as in Ruff's
current output. It's part of the message in this PR and any styling gets
overwritten in `annotate-snippets`.

<img width="400" height="342" alt="image"
src="https://github.com/user-attachments/assets/57542ec9-a81c-4a01-91c7-bd6d7ec99f99"
/>

Hmm, I guess reusing `Level::None` also means the `F401` isn't red
anymore. Maybe my initial approach was better after all. In any case,
the rest of the PR should be basically the same, it just depends how we
want to toggle the severity.

## Test Plan

New `ruff_db` tests. These snapshots should be compared to the two tests
just above them (`hide_severity_output` vs `output` and
`hide_severity_syntax_errors` against `syntax_errors`).
2025-08-05 09:56:18 -04:00
David Peter
94947cbf65 [ty] Fix merge base calculation for typing-conformance workflow (#19761)
## Summary

Use `$GITHUB_SHA` (the merged state of `feature` + `main` branch)
instead of `{{ github.event.pull_request.head.sha }}` (just the latest
`feature` commit) for building the "new" version of `ty` in the typing
conformance workflow.

## Test Plan

None.
2025-08-05 14:32:47 +02:00
Alex Waygood
7dccb6a98c [ty] Improve effectiveness of KnownClass fast paths in instance.rs (#19762) 2025-08-05 13:26:14 +01:00
David Peter
948f3f856c [ty] Fix attribute access on TypedDicts (#19758)
## Summary

This PR fixes a few inaccuracies in attribute access on `TypedDict`s. It
also changes the return type of `type(person)` to `type[dict[str,
object]]` if `person: Person` is an inhabitant of a `TypedDict`
`Person`. We still use `type[Person]` as the *meta type* of Person,
however (see reasoning
[here](https://github.com/astral-sh/ruff/pull/19733#discussion_r2253297926)).
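
A small example mirroring the description above:

```py
from typing import TypedDict, reveal_type

class Person(TypedDict):
    name: str

def f(person: Person) -> None:
    reveal_type(type(person))  # revealed: type[dict[str, object]]
```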

## Test Plan

Updated Markdown tests.
2025-08-05 13:59:10 +02:00
Alex Waygood
3af0b31de3 [ty] Speedup the known_class_doesnt_fallback_to_unknown_unexpectedly_on_low_python_version test (#19760) 2025-08-05 11:55:11 +00:00
David Peter
7df7be5c7d [ty] Keep track of type qualifiers in stub declarations without right-hand side (#19756)
## Summary

closes https://github.com/astral-sh/ty/issues/937
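
For context, a hypothetical stub declaration of the kind the title
refers to - no right-hand side, but a type qualifier that should be
preserved:

```pyi
from typing import Final

MAX_RETRIES: Final[int]  # declaration without a right-hand side; `Final` must not be dropped
```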

## Test Plan

Regression test
2025-08-05 12:07:05 +02:00
Dhruv Manilawala
2d2841e20d [ty] Fix typing repository commit output in CI (#19754)
## Summary

This PR fixes the issue mentioned in
https://github.com/astral-sh/ruff/pull/19736#issuecomment-3151903662
~~but I can't test it without merging it on `main` because GitHub
Actions still picks up the old version of the workflow file.~~ and has
been tested by manually triggering the workflow; refer to the comment on
this PR
(https://github.com/astral-sh/ruff/pull/19754#issuecomment-3153894179),
which has the commit hash.
2025-08-05 14:53:55 +05:30
David Peter
14fbc2b167 [ty] New Type variant for TypedDict (#19733)
## Summary

This PR adds a new `Type::TypedDict` variant. Before this PR, we treated
`TypedDict`-based types as dynamic Todo-types, and I originally planned
to make this change a no-op. And we do in fact still treat that new
variant similar to a dynamic type when it comes to type properties such
as assignability and subtyping. But then I somehow tricked myself into
implementing some of the things correctly, so here we are. The two main
behavioral changes are: (1) we now also detect generic `TypedDict`s,
which removes a few false positives in the ecosystem, and (2) we now
support *attribute* access (not key-based indexing!) on these types,
i.e. we infer proper types for something like
`MyTypedDict.__required_keys__`. Nothing exciting yet, but gets the
infrastructure into place.

Note that with this PR, the type of (the type) `MyTypedDict` itself is
still represented as a `Type::ClassLiteral` or `Type::GenericAlias` (in
case `MyTypedDict` is generic). Only inhabitants of `MyTypedDict`
(instances of `dict` at runtime) are represented by `Type::TypedDict`.
We may want to revisit this decision in the future, if this turns out to
be too error-prone. Right now, we need to use `.is_typed_dict(db)` in
all the right places to distinguish between actual (generic) classes and
`TypedDict`s. But so far, it seemed unnecessary to add additional `Type`
variants for these as well.

part of https://github.com/astral-sh/ty/issues/154
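
A small sketch of the attribute access mentioned above (the exact
inferred type is up to ty; this just mirrors the `__required_keys__`
example from the summary):

```py
from typing import TypedDict

class MyTypedDict(TypedDict):
    x: int

# Attribute access (not key-based indexing) on the class now gets a proper
# type instead of a dynamic Todo type:
reveal_type(MyTypedDict.__required_keys__)  # a frozenset of key names
```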

## Ecosystem impact

The new diagnostics on `cloud-init` look like true positives to me.

## Test Plan

Updated and new Markdown tests
2025-08-05 11:19:49 +02:00
Shunsuke Shibayama
351121c5c5 [ty] fix incorrect lazy scope narrowing (#19744)
## Summary

This is a follow-up to #19321.

Narrowing constraints introduced in a class scope were not applied even
when they could be applied in lazy nested scopes. This PR fixes this so
that they are now applied.
Conversely, there were cases where narrowing constraints were being
applied in places where they should not be; this is also fixed.

## Test Plan

Some TODOs in `narrow/conditionals/nested.md` now work correctly.
2025-08-04 20:32:08 -07:00
Shunsuke Shibayama
64bcc8db2f [ty] fix lookup order of class variables before they are defined (#19743)
## Summary

This is a follow-up to #19321.

If we try to access a class variable before it is defined, the variable
is looked up in the global scope, rather than in any enclosing scopes.

Closes https://github.com/astral-sh/ty/issues/875.
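
For illustration, this matches the runtime behavior (minimal sketch):

```py
x = "global"

def f() -> str:
    x = "enclosing"  # not what the class-body read below resolves to

    class C:
        # `x` is also assigned later in this class body, so this read falls
        # back to the *global* `x`, not the enclosing function's `x`.
        y = x
        x = "class"

    return C.y

print(f())  # global
```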

## Test Plan

New tests in `narrow/conditionals/nested.md`.
2025-08-04 20:21:28 -07:00
Roman Kitaev
b0f01ba514 [flake8-blind-except] Change BLE001 to correctly parse exception tuples (#19747)
## Summary

This PR enhances the `BLE001` rule to correctly detect blind exception
handling in tuple exceptions. Previously, the rule only checked single
exception types, but Python allows catching multiple exceptions using
tuples like `except (Exception, ValueError):`.

## Test Plan

With this change, the following check reports a violation (whereas on the main branch it does not):

```bash
cargo run -p ruff -- check somefile.py --no-cache --select=BLE001
```

```python
# somefile.py

try:
    1/0
except (ValueError, Exception) as e:
    print(e)
```

```
somefile.py:3:21: BLE001 Do not catch blind exception: `Exception`
  |
1 | try:
2 |     1/0
3 | except (ValueError, Exception) as e:
  |                     ^^^^^^^^^ BLE001
4 |     print(e)
  |

Found 1 error.
```
2025-08-04 21:12:45 +00:00
Alex Waygood
3a9341f7be [ty] Remove false positives when subscripting Generic or Protocol with a ParamSpec or TypeVarTuple (#19749) 2025-08-04 21:42:46 +01:00
David Peter
739c94f95a [ty] Support as-patterns in reachability analysis (#19728)
## Summary

Support `as` patterns in reachability analysis:

```py
from typing import assert_never


def f(subject: str | int):
    match subject:
        case int() as x:
            pass
        case str():
            pass
        case _:
            assert_never(subject)  # would previously emit an error
```

Note that we still don't support inferring correct types for the bound
name (`x`).

Closes https://github.com/astral-sh/ty/issues/928

## Test Plan

New Markdown tests
2025-08-04 20:13:50 +02:00
Alex Waygood
af8587eabf [ty] Link directly to typing conformance test suite when commenting the diff (#19736) 2025-08-04 15:51:42 +01:00
Alex Waygood
41207ec901 [ty] Infer type[tuple[int, str]] as the meta-type of tuple[int, str] (#19741) 2025-08-04 13:10:47 +00:00
Alex Waygood
bc6e8b58ce [ty] Return Option<TupleType> from infer_tuple_type_expression (#19735)
## Summary

This PR reduces the virality of some of the `Todo` types in
`infer_tuple_type_expression`. Rather than inferring `Todo`, we instead
infer `tuple[Todo, ...]`. This reflects the fact that whatever the
contents of the slice in a `tuple[]` type expression, we would always
infer some kind of tuple type as the result of the type expression. Any
tuple type should be assignable to `tuple[Todo, ...]`, so this shouldn't
introduce any new false positives; this can be seen in the ecosystem
report.

As a result of the change, we are now able to enforce in the signature
of `Type::infer_tuple_type_expression` that it returns an
`Option<TupleType<'db>>`, which is more strongly typed and expresses
clearly the invariant that a tuple type expression should always be
inferred as a `tuple` type. To enable this, it was necessary to refactor
several `TupleType` constructors in `tuple.rs` so that they return
`Option<TupleType>` rather than `Type`; this means that callers of these
constructor functions are now free to either propagate the
`Option<TupleType<'db>>` or convert it to a `Type<'db>`.

## Test Plan

Mdtests updated.
2025-08-04 13:48:19 +01:00
Micha Reiser
e4d6b54a16 [ty] Fix failing test on windows (#19742) 2025-08-04 14:39:36 +02:00
Micha Reiser
17ee2a28ba [ty] Fix workspace diagnostics being recomputed (#19689) 2025-08-04 13:49:38 +02:00
Leandro Braga
de77b29798 [ty] clear the terminal screen in watch mode (#19712)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-08-04 13:45:37 +02:00
Micha Reiser
f473f6b6e5 [ty] Implement long-polling for workspace diagnostics (#19670) 2025-08-04 10:26:38 +00:00
Alex Waygood
736c4ab05a Remove myself as a codeowner from some ty crates (#19738) 2025-08-04 10:23:13 +00:00
Micha Reiser
8289432252 [ty] Always refresh diagnostics after a watched files change (#19697) 2025-08-04 12:19:18 +02:00
Micha Reiser
808c94d509 [ty] Implement streaming for workspace diagnostics (#19657) 2025-08-04 09:34:29 +00:00
Micha Reiser
b95d22c08e Don't flag pyrefly pragmas as unused code (ERA001) (#19731) 2025-08-04 10:15:37 +02:00
Micha Reiser
f3e66dd503 Revert "Update NPM Development dependencies" (#19730) 2025-08-04 07:33:58 +00:00
Micha Reiser
6516db7835 [ty] Add progress bar to watch (#19729) 2025-08-04 09:31:13 +02:00
renovate[bot]
03c873765e Update NPM Development dependencies (#19723)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-08-04 07:03:55 +00:00
renovate[bot]
c90707875e Update react monorepo to v19.1.1 (#19720)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 06:27:11 +00:00
renovate[bot]
8e20e589f1 Update dependency react-resizable-panels to v3.0.4 (#19717)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 08:21:56 +02:00
renovate[bot]
113e32b956 Update docker/metadata-action action to v5.8.0 (#19722)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 08:21:37 +02:00
renovate[bot]
fdc18eefc3 Update Swatinem/rust-cache action to v2.8.0 (#19725)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [Swatinem/rust-cache](https://redirect.github.com/Swatinem/rust-cache)
| action | minor | `v2.7.8` -> `v2.8.0` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>Swatinem/rust-cache (Swatinem/rust-cache)</summary>

###
[`v2.8.0`](https://redirect.github.com/Swatinem/rust-cache/releases/tag/v2.8.0)

[Compare
Source](https://redirect.github.com/Swatinem/rust-cache/compare/v2.7.8...v2.8.0)

#### What's Changed

- Add cache-workspace-crates feature by
[@&#8203;jbransen](https://redirect.github.com/jbransen) in
[https://github.com/Swatinem/rust-cache/pull/246](https://redirect.github.com/Swatinem/rust-cache/pull/246)
- Feat: support warpbuild cache provider by
[@&#8203;stegaBOB](https://redirect.github.com/stegaBOB) in
[https://github.com/Swatinem/rust-cache/pull/247](https://redirect.github.com/Swatinem/rust-cache/pull/247)

#### New Contributors

- [@&#8203;jbransen](https://redirect.github.com/jbransen) made their
first contribution in
[https://github.com/Swatinem/rust-cache/pull/246](https://redirect.github.com/Swatinem/rust-cache/pull/246)
- [@&#8203;stegaBOB](https://redirect.github.com/stegaBOB) made their
first contribution in
[https://github.com/Swatinem/rust-cache/pull/247](https://redirect.github.com/Swatinem/rust-cache/pull/247)

**Full Changelog**:
https://github.com/Swatinem/rust-cache/compare/v2.7.8...v2.8.0

</details>


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 08:38:05 +05:30
renovate[bot]
5f40651ae7 Update Rust crate notify to v8.2.0 (#19724)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [notify](https://redirect.github.com/notify-rs/notify) |
workspace.dependencies | minor | `8.1.0` -> `8.2.0` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>notify-rs/notify (notify)</summary>

###
[`v8.2.0`](https://redirect.github.com/notify-rs/notify/blob/HEAD/CHANGELOG.md#notify-820-2025-08-03)

[Compare
Source](https://redirect.github.com/notify-rs/notify/compare/notify-8.1.0...notify-8.2.0)

- FEATURE: notify user if inotify's `max_user_watches` has been reached
[#&#8203;698]
- FIX: `INotifyWatcher` ignore events with unknown watch descriptors
(instead of `EventMask::Q_OVERFLOW`) [#&#8203;700]

[#&#8203;698]: https://redirect.github.com/notify-rs/notify/pull/698

[#&#8203;700]: https://redirect.github.com/notify-rs/notify/pull/700

</details>


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 08:37:47 +05:30
renovate[bot]
ea031a3b39 Update taiki-e/install-action action to v2.57.6 (#19726)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
|
[taiki-e/install-action](https://redirect.github.com/taiki-e/install-action)
| action | minor | `v2.56.19` -> `v2.57.6` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>taiki-e/install-action (taiki-e/install-action)</summary>

###
[`v2.57.6`](https://redirect.github.com/taiki-e/install-action/blob/HEAD/CHANGELOG.md#100---2021-12-30)

[Compare
Source](https://redirect.github.com/taiki-e/install-action/compare/v2.57.5...v2.57.6)

Initial release


[2.28.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.13...v2.28.14

[2.28.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.12...v2.28.13

[2.28.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.11...v2.28.12

[2.28.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.10...v2.28.11

[2.28.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.9...v2.28.10

[2.28.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.8...v2.28.9

[2.28.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.7...v2.28.8

[2.28.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.6...v2.28.7

[2.28.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.5...v2.28.6

[2.28.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.4...v2.28.5

[2.28.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.3...v2.28.4

[2.28.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.2...v2.28.3

[2.28.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.1...v2.28.2

[2.28.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.0...v2.28.1

[2.28.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.15...v2.28.0

[2.27.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.14...v2.27.15

[2.27.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.13...v2.27.14

[2.27.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.12...v2.27.13

[2.27.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.11...v2.27.12

[2.27.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.10...v2.27.11

[2.27.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.9...v2.27.10

[2.27.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.8...v2.27.9

[2.27.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.7...v2.27.8

[2.27.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.6...v2.27.7

[2.27.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.5...v2.27.6

[2.27.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.4...v2.27.5

[2.27.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.3...v2.27.4

[2.27.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.2...v2.27.3

[2.27.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.1...v2.27.2

[2.27.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.0...v2.27.1

[2.27.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.20...v2.27.0

[2.26.20]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.19...v2.26.20

[2.26.19]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.18...v2.26.19

[2.26.18]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.17...v2.26.18

[2.26.17]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.16...v2.26.17

[2.26.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.15...v2.26.16

[2.26.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.14...v2.26.15

[2.26.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.13...v2.26.14

[2.26.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.12...v2.26.13

[2.26.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.11...v2.26.12

[2.26.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.10...v2.26.11

[2.26.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.9...v2.26.10

[2.26.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.8...v2.26.9

[2.26.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.7...v2.26.8

[2.26.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.6...v2.26.7

[2.26.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.5...v2.26.6

[2.26.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.4...v2.26.5

[2.26.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.3...v2.26.4

[2.26.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.2...v2.26.3

[2.26.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.1...v2.26.2

[2.26.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.0...v2.26.1

[2.26.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.11...v2.26.0

[2.25.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.10...v2.25.11

[2.25.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.9...v2.25.10

[2.25.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.8...v2.25.9

[2.25.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.7...v2.25.8

[2.25.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.6...v2.25.7

[2.25.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.5...v2.25.6

[2.25.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.4...v2.25.5

[2.25.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.3...v2.25.4

[2.25.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.2...v2.25.3

[2.25.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.1...v2.25.2

[2.25.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.0...v2.25.1

[2.25.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.24.4...v2.25.0

[2.24.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.24.3...v2.24.4

[2.24.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.24.2...v2.24.3

[2.24.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.24.1...v2.24.2

[2.24.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.24.0...v2.24.1

[2.24.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.9...v2.24.0

[2.23.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.8...v2.23.9

[2.23.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.7...v2.23.8

[2.23.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.6...v2.23.7

[2.23.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.5...v2.23.6

[2.23.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.4...v2.23.5

[2.23.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.3...v2.23.4

[2.23.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.2...v2.23.3

[2.23.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.1...v2.23.2

[2.23.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.0...v2.23.1

[2.23.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.10...v2.23.0

[2.22.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.9...v2.22.10

[2.22.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.8...v2.22.9

[2.22.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.7...v2.22.8

[2.22.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.6...v2.22.7

[2.22.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.5...v2.22.6

[2.22.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.4...v2.22.5

[2.22.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.3...v2.22.4

[2.22.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.2...v2.22.3

[2.22.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.1...v2.22.2

[2.22.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.0...v2.22.1

[2.22.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.27...v2.22.0

[2.21.27]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.26...v2.21.27

[2.21.26]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.25...v2.21.26

[2.21.25]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.24...v2.21.25

[2.21.24]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.23...v2.21.24

[2.21.23]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.22...v2.21.23

[2.21.22]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.21...v2.21.22

[2.21.21]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.20...v2.21.21

[2.21.20]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.19...v2.21.20

[2.21.19]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.18...v2.21.19

[2.21.18]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.17...v2.21.18

[2.21.17]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.16...v2.21.17

[2.21.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.15...v2.21.16

[2.21.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.14...v2.21.15

[2.21.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.13...v2.21.14

[2.21.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.12...v2.21.13

[2.21.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.11...v2.21.12

[2.21.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.10...v2.21.11

[2.21.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.9...v2.21.10

[2.21.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.8...v2.21.9

[2.21.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.7...v2.21.8

[2.21.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.6...v2.21.7

[2.21.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.5...v2.21.6

[2.21.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.4...v2.21.5

[2.21.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.3...v2.21.4

[2.21.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.2...v2.21.3

[2.21.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.1...v2.21.2

[2.21.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.0...v2.21.1

[2.21.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.17...v2.21.0

[2.20.17]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.16...v2.20.17

[2.20.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.15...v2.20.16

[2.20.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.14...v2.20.15

[2.20.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.13...v2.20.14

[2.20.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.12...v2.20.13

[2.20.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.11...v2.20.12

[2.20.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.10...v2.20.11

[2.20.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.9...v2.20.10

[2.20.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.8...v2.20.9

[2.20.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.7...v2.20.8

[2.20.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.6...v2.20.7

[2.20.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.5...v2.20.6

[2.20.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.4...v2.20.5

[2.20.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.3...v2.20.4

[2.20.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.2...v2.20.3

[2.20.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.1...v2.20.2

[2.20.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.0...v2.20.1

[2.20.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.19.4...v2.20.0

[2.19.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.19.3...v2.19.4

[2.19.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.19.2...v2.19.3

[2.19.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.19.1...v2.19.2

[2.19.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.19.0...v2.19.1

[2.19.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.17...v2.19.0

[2.18.17]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.16...v2.18.17

[2.18.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.15...v2.18.16

[2.18.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.14...v2.18.15

[2.18.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.13...v2.18.14

[2.18.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.12...v2.18.13

[2.18.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.11...v2.18.12

[2.18.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.10...v2.18.11

[2.18.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.9...v2.18.10

[2.18.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.8...v2.18.9

[2.18.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.7...v2.18.8

[2.18.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.6...v2.18.7

[2.18.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.5...v2.18.6

[2.18.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.4...v2.18.5

[2.18.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.3...v2.18.4

[2.18.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.2...v2.18.3

[2.18.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.1...v2.18.2

[2.18.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.0...v2.18.1

[2.18.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.17.8...v2.18.0

[2.17.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.17.7...v2.17.8

[2.17.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.17.6...v2.17.7

[2.17.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.17.5...v2.17.6

[2.17.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.17.4...v2.17.5

[2.17.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.17.3...v2.17.4

[2.17.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.17.2...v2.17.3

[2.17.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.17.1...v2.17.2

[2.17.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.17.0...v2.17.1

[2.17.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.16.5...v2.17.0

[2.16.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.16.4...v2.16.5

[2.16.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.16.3...v2.16.4

[2.16.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.16.2...v2.16.3

[2.16.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.16.1...v2.16.2

[2.16.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.16.0...v2.16.1

[2.16.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.15.6...v2.16.0

[2.15.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.15.5...v2.15.6

[2.15.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.15.4...v2.15.5

[2.15.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.15.3...v2.15.4

[2.15.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.15.2...v2.15.3

[2.15.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.15.1...v2.15.2

[2.15.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.15.0...v2.15.1

[2.15.0]: https://redirect.github.com/taiki-e/install-action/compar

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 08:37:14 +05:30
renovate[bot]
93b64daa4a Update pre-commit hook astral-sh/ruff-pre-commit to v0.12.7 (#19719)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
|
[astral-sh/ruff-pre-commit](https://redirect.github.com/astral-sh/ruff-pre-commit)
| repository | patch | `v0.12.5` -> `v0.12.7` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

Note: The `pre-commit` manager in Renovate is not supported by the
`pre-commit` maintainers or community. Please do not report any problems
there, instead [create a Discussion in the Renovate
repository](https://redirect.github.com/renovatebot/renovate/discussions/new)
if you have any questions.

---

### Release Notes

<details>
<summary>astral-sh/ruff-pre-commit (astral-sh/ruff-pre-commit)</summary>

###
[`v0.12.7`](https://redirect.github.com/astral-sh/ruff-pre-commit/releases/tag/v0.12.7)

[Compare
Source](https://redirect.github.com/astral-sh/ruff-pre-commit/compare/v0.12.6...v0.12.7)

See: https://github.com/astral-sh/ruff/releases/tag/0.12.7

###
[`v0.12.6`](https://redirect.github.com/astral-sh/ruff-pre-commit/releases/tag/v0.12.6)

[Compare
Source](https://redirect.github.com/astral-sh/ruff-pre-commit/compare/v0.12.5...v0.12.6)

See: https://github.com/astral-sh/ruff/releases/tag/0.12.7

Ruff's 0.12.6 release was yanked. See the linked release notes for more
information.

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 08:02:49 +05:30
renovate[bot]
74376375e4 Update dependency ruff to v0.12.7 (#19718)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| [ruff](https://docs.astral.sh/ruff)
([source](https://redirect.github.com/astral-sh/ruff),
[changelog](https://redirect.github.com/astral-sh/ruff/blob/main/CHANGELOG.md))
| `==0.12.5` -> `==0.12.7` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/ruff/0.12.7?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/ruff/0.12.5/0.12.7?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>astral-sh/ruff (ruff)</summary>

###
[`v0.12.7`](https://redirect.github.com/astral-sh/ruff/blob/HEAD/CHANGELOG.md#0127)

This is a follow-up release to 0.12.6. Because of an issue in the
package metadata, 0.12.6 failed to publish fully to PyPI and has been
yanked. Similarly, there is no GitHub release or Git tag for 0.12.6. The
contents of the 0.12.7 release are identical to 0.12.6, except for the
updated metadata.

###
[`v0.12.6`](https://redirect.github.com/astral-sh/ruff/blob/HEAD/CHANGELOG.md#0126)

##### Preview features

- \[`flake8-commas`] Add support for trailing comma checks in type
parameter lists (`COM812`, `COM819`)
([#&#8203;19390](https://redirect.github.com/astral-sh/ruff/pull/19390))
- \[`pylint`] Implement auto-fix for `missing-maxsplit-arg` (`PLC0207`)
([#&#8203;19387](https://redirect.github.com/astral-sh/ruff/pull/19387))
- \[`ruff`] Offer fixes for `RUF039` in more cases
([#&#8203;19065](https://redirect.github.com/astral-sh/ruff/pull/19065))

##### Bug fixes

- Support `.pyi` files in ruff analyze graph
([#&#8203;19611](https://redirect.github.com/astral-sh/ruff/pull/19611))
- \[`flake8-pyi`] Preserve inline comment in ellipsis removal (`PYI013`)
([#&#8203;19399](https://redirect.github.com/astral-sh/ruff/pull/19399))
- \[`perflint`] Ignore rule if target is `global` or `nonlocal`
(`PERF401`)
([#&#8203;19539](https://redirect.github.com/astral-sh/ruff/pull/19539))
- \[`pyupgrade`] Fix `UP030` to avoid modifying double curly braces in
format strings
([#&#8203;19378](https://redirect.github.com/astral-sh/ruff/pull/19378))
- \[`refurb`] Ignore decorated functions for `FURB118`
([#&#8203;19339](https://redirect.github.com/astral-sh/ruff/pull/19339))
- \[`refurb`] Mark `int` and `bool` cases for `Decimal.from_float` as
safe fixes (`FURB164`)
([#&#8203;19468](https://redirect.github.com/astral-sh/ruff/pull/19468))
- \[`ruff`] Fix `RUF033` for named default expressions
([#&#8203;19115](https://redirect.github.com/astral-sh/ruff/pull/19115))

##### Rule changes

- \[`flake8-blind-except`] Change `BLE001` to permit
`logging.critical(..., exc_info=True)`
([#&#8203;19520](https://redirect.github.com/astral-sh/ruff/pull/19520))

##### Performance

- Add support for specifying minimum dots in detected string imports
([#&#8203;19538](https://redirect.github.com/astral-sh/ruff/pull/19538))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 08:01:53 +05:30
renovate[bot]
30f52d8cf5 Update cargo-bins/cargo-binstall action to v1.14.3 (#19716)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
|
[cargo-bins/cargo-binstall](https://redirect.github.com/cargo-bins/cargo-binstall)
| action | patch | `v1.14.2` -> `v1.14.3` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>cargo-bins/cargo-binstall (cargo-bins/cargo-binstall)</summary>

###
[`v1.14.3`](https://redirect.github.com/cargo-bins/cargo-binstall/releases/tag/v1.14.3)

[Compare
Source](https://redirect.github.com/cargo-bins/cargo-binstall/compare/v1.14.2...v1.14.3)

*Binstall is a tool to fetch and install Rust-based executables as
binaries. It aims to be a drop-in replacement for `cargo install` in
most cases. Install it today with `cargo install cargo-binstall`, from
the binaries below, or if you already have it, upgrade with `cargo
binstall cargo-binstall`.*

##### In this release:

- Fix race condition in target detections
([#&#8203;2238](https://redirect.github.com/cargo-bins/cargo-binstall/issues/2238))

##### Other changes:

- Upgrade dependencies

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 08:01:21 +05:30
renovate[bot]
77bc32b9b9 Update Rust crate serde_json to v1.0.142 (#19721)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [serde_json](https://redirect.github.com/serde-rs/json) |
workspace.dependencies | patch | `1.0.141` -> `1.0.142` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>serde-rs/json (serde_json)</summary>

###
[`v1.0.142`](https://redirect.github.com/serde-rs/json/releases/tag/v1.0.142)

[Compare
Source](https://redirect.github.com/serde-rs/json/compare/v1.0.141...v1.0.142)

- impl Default for \&Value
([#&#8203;1265](https://redirect.github.com/serde-rs/json/issues/1265),
thanks [@&#8203;aatifsyed](https://redirect.github.com/aatifsyed))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 08:00:58 +05:30
Dan Parizher
1a368b0bf9 [flake8-simplify] Fix raw string handling in SIM905 for embedded quotes (#19591)
## Summary

When splitting triple-quoted, raw strings, one has to take care before attempting to turn each item into its own single-quoted literal.
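
A rough illustration of the hazard (my own sketch, not taken from the PR or its fixtures):

```py
# A raw, triple-quoted string whose items themselves contain double quotes.
items = r"""say "hi" there""".split()
print(items)  # ['say', '"hi"', 'there']

# A naive SIM905-style rewrite that wraps every item in double quotes would
# produce broken literals for '"hi"'; the fix has to choose a quote style
# (or add escapes) per item.
```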

Fixes #19577

---------

Co-authored-by: dylwil3 <dylwil3@gmail.com>
2025-08-03 11:31:28 -05:00
Jérémy Scanvic
134435415e Change 'associative' to 'commutative' in docs describing union (#19706)
Thanks for the great tool!

I noticed a small typo [in the
docs](https://docs.astral.sh/ruff/rules/none-not-at-end-of-union/): it's
[commutativity](https://en.wikipedia.org/wiki/Commutative_property) that makes the order not matter in
type unions, not
[associativity](https://en.wikipedia.org/wiki/Associative_property)
which is something different.

I made the change in this PR.
2025-08-03 16:30:56 +00:00
cristian64
bc6e105c18 Include column numbers in GitLab output format. (#19708) 2025-08-03 12:37:01 +00:00
Micha Reiser
6bd413df6c [ty] Update salsa (#19710) 2025-08-03 09:18:10 +00:00
Nathaniel Roman
85bd961fd3 [ty] resolve file symlinks in src walk (#19674)
Co-authored-by: Nathaniel Roman <nroman@openai.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-08-01 22:52:04 +02:00
Douglas Creager
d37911685f [ty] Correctly instantiate generic class that inherits __init__ from generic base class (#19693)
This is subtle, and the root cause became more apparent with #19604,
since we now have many more cases of superclasses and subclasses using
different typevars. The issue is easiest to see in the following:

```py
class C[T]:
    def __init__(self, t: T) -> None: ...

class D[U](C[T]):
    pass

reveal_type(C(1))  # revealed: C[int]
reveal_type(D(1))  # should be: D[int]
```

When instantiating a generic class, the `__init__` method inherits the
generic context of that class. This lets our call binding machinery
infer a specialization for that context.

Prior to this PR, the instantiation of `C` worked just fine. Its
`__init__` method would inherit the `[T]` generic context, and we would
infer `{T = int}` as the specialization based on the argument
parameters.

It didn't work for `D`. The issue is that the `__init__` method was
inheriting the generic context of the class where `__init__` was defined
(here, `C` and `[T]`). At the call site, we would then infer `{T = int}`
as the specialization — but that wouldn't help us specialize `D[U]`,
since `D` does not have `T` in its generic context!

Instead, the `__init__` method should inherit the generic context of the
class that we are performing the lookup on (here, `D` and `[U]`). That
lets us correctly infer `{U = int}` as the specialization, which we can
successfully apply to `D[U]`.

(Note that `__init__` refers to `C`'s typevars in its signature, but
that's okay; our member lookup logic already applies the `T = U`
specialization when returning a member of `C` while performing a lookup
on `D`, transforming its signature from `(Self, T) -> None` to `(Self,
U) -> None`.)

Closes https://github.com/astral-sh/ty/issues/588
2025-08-01 15:29:18 -04:00
GiGaGon
580577e667 [refurb] Make example error out-of-the-box (FURB157) (#19695)
## Summary

Part of #18972

This PR makes [verbose-decimal-constructor
(FURB157)](https://docs.astral.sh/ruff/rules/verbose-decimal-constructor/#verbose-decimal-constructor-furb157)'s
example error out-of-the-box.

[Old example](https://play.ruff.rs/0930015c-ad45-4490-800e-66ed057bfe34)
```py
Decimal("0")
Decimal(float("Infinity"))
```

[New example](https://play.ruff.rs/516e5992-322f-4203-afe7-46d8cad53368)
```py
from decimal import Decimal

Decimal("0")
Decimal(float("Infinity"))
```

Imports were also added to the "Use Instead" section.
2025-08-01 12:55:48 -05:00
Dan Parizher
dce25da19a [flake8-errmsg] Exclude typing.cast from EM101 (#19656)
## Summary

Fixes #19596
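
One plausible reading of the fix, sketched here purely as an assumption (the exact pattern from the linked issue is not shown in this message): the string literal appears as the type argument to `typing.cast()`, not as the exception message, so EM101 should not fire on it.

```py
from typing import cast


def check(value: object) -> None:
    # Assumed example: the "str" literal is a type expression passed to
    # typing.cast(), not a raw exception message, so it is now excluded
    # from EM101.
    raise TypeError(cast("str", value))
```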
2025-08-01 13:37:44 -04:00
Douglas Creager
06cd249a9b [ty] Track different uses of legacy typevars, including context when rendering typevars (#19604)
This PR introduces a few related changes:

- We now keep track of each time a legacy typevar is bound in a
different generic context (e.g. class, function), and internally create
a new `TypeVarInstance` for each usage. This means the rest of the code
can now assume that salsa-equivalent `TypeVarInstance`s refer to the
same typevar, even taking into account that legacy typevars can be used
more than once.

- We also go ahead and track the binding context of PEP 695 typevars.
That's _much_ easier to track since we have the binding context right
there during type inference.

- With that in place, we can now include the name of the binding context
when rendering typevars (e.g. `T@f` instead of `T`)
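
As a minimal sketch of what that last point enables (the exact output format is an assumption based on the `T@f` example above; `typing.reveal_type` requires Python 3.11+):

```py
from typing import TypeVar, reveal_type

T = TypeVar("T")


def f(x: T) -> T:
    # With the binding context tracked, ty can render this typevar as
    # `T@f` rather than a bare `T`.
    reveal_type(x)
    return x


def g(y: T) -> T:
    # The same legacy typevar bound by a different function would be
    # rendered as `T@g`, so the two uses stay distinguishable.
    reveal_type(y)
    return y
```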
2025-08-01 12:20:32 -04:00
David Peter
48d5bd13fa [ty] Initial test suite for TypedDict (#19686)
## Summary

Adds an initial set of tests based on the highest-priority items in
https://github.com/astral-sh/ty/issues/154. This is certainly not yet
exhaustive (required/non-required, `total`, and other things are
missing), but will be useful to measure progress on this feature.
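
For context, a small example of the kind of code such tests exercise (illustrative only, not copied from the test suite):

```py
from typing import TypedDict


class Person(TypedDict):
    name: str
    age: int


alice: Person = {"name": "Alice", "age": 30}   # OK: matches the declared items
bob: Person = {"name": "Bob"}                  # error: required key "age" is missing
carol: Person = {"name": "Carol", "age": "?"}  # error: "age" must be an int
```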

## Test Plan

Checked intended behavior against runtime and other type checkers.
2025-08-01 16:56:02 +02:00
Alex Waygood
e7e7b7bf21 [ty] Improve debuggability of protocol types (#19662) 2025-08-01 15:16:13 +01:00
Alex Waygood
57e2e8664f [ty] Simplify lifetime requirements for PySlice trait (#19687) 2025-08-01 15:13:47 +01:00
Alex Waygood
18aae21b9a [ty] Improve isinstance() truthiness analysis for generic types (#19668) 2025-08-01 14:44:22 +01:00
GiGaGon
d8151f0239 [refurb] Make example error out-of-the-box (FURB164) (#19673)
<!--
Thank you for contributing to Ruff/ty! To help us out with reviewing,
please consider the following:

- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title? (Please prefix
with `[ty]` for ty pull
  requests.)
- Does this pull request include references to any relevant issues?
-->

## Summary

<!-- What's the purpose of the change? What does it do, and why? -->

Part of #18972

This PR makes [unnecessary-from-float
(FURB164)](https://docs.astral.sh/ruff/rules/unnecessary-from-float/#unnecessary-from-float-furb164)'s
example error out-of-the-box.

[Old example](https://play.ruff.rs/807ef72f-9671-408d-87ab-8b8bad65b33f)
```py
Decimal.from_float(4.2)
Decimal.from_float(float("inf"))
Fraction.from_float(4.2)
Fraction.from_decimal(Decimal("4.2"))
```

[New example](https://play.ruff.rs/303680d1-8a68-4b6c-a5fd-d79c56eb0f88)
```py
from decimal import Decimal
from fractions import Fraction

Decimal.from_float(4.2)
Decimal.from_float(float("inf"))
Fraction.from_float(4.2)
Fraction.from_decimal(Decimal("4.2"))
```

The "Use instead" section also had imports added, and one of the fixed
examples was slightly wrong and needed modification.

## Test Plan

<!-- How was it tested? -->

N/A, no functionality/tests affected
2025-08-01 07:49:58 -05:00
Hunter Hogan
2ee56735e2 Fix link: unused_import.rs (#19648) 2025-08-01 08:47:52 +00:00
David Peter
ade6a4262a [ty] Remove Specialization::display (full) (#19682)
## Summary

This seems to be unused
2025-08-01 10:44:11 +02:00
David Peter
d43e6fb9c6 [ty] Remove KnownModule::is_enum (#19681)
## Summary

Changes the visibility of `KnownModule` and removes an unneeded
function.
2025-08-01 10:31:12 +02:00
Matthew Mckee
b30d97e5e0 [ty] Support __setitem__ and improve __getitem__ related diagnostics (#19578)
## Summary

Adds validation to subscript assignment expressions.

```py
class Foo: ...

class Bar:
    __setattr__ = None

class Baz:
    def __setitem__(self, index: str, value: int) -> None:
        pass

# We now emit a diagnostic on these statements
Foo()[1] = 2
Bar()[1] = 2
Baz()[1] = 2

```

Also improves error messages on invalid `__getitem__` expressions

## Test Plan

Update mdtests and add more to `subscript/instance.md`

---------

Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
Co-authored-by: David Peter <mail@david-peter.de>
2025-08-01 09:23:27 +02:00
github-actions[bot]
5c5d50d57a [ty] Sync vendored typeshed stubs (#19676)
Close and reopen this PR to trigger CI

---------

Co-authored-by: typeshedbot <>
2025-08-01 08:43:42 +02:00
Dan Parizher
b3a26a50ad [flake8-use-pathlib] Expand PTH201 to check all PurePath subclasses (#19440)
## Summary

Fixes #19437
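
A hedged sketch of what the expanded check presumably covers, inferred from the rule (`path-constructor-current-directory`) and the PR title:

```py
from pathlib import Path, PurePath, PurePosixPath

p1 = Path(".")           # already flagged: prefer `Path()`
p2 = PurePath(".")       # presumably now flagged as well
p3 = PurePosixPath(".")  # presumably now flagged as well
```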
2025-07-31 22:18:07 -04:00
GiGaGon
6a2d358d7a [refurb] Make example error out-of-the-box (FURB180) (#19672)
<!--
Thank you for contributing to Ruff/ty! To help us out with reviewing,
please consider the following:

- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title? (Please prefix
with `[ty]` for ty pull
  requests.)
- Does this pull request include references to any relevant issues?
-->

## Summary

<!-- What's the purpose of the change? What does it do, and why? -->

Part of #18972

This PR makes [meta-class-abc-meta
(FURB180)](https://docs.astral.sh/ruff/rules/meta-class-abc-meta/#meta-class-abc-meta-furb180)'s
example error out-of-the-box.

[Old example](https://play.ruff.rs/6beca1be-45cd-4e5a-aafa-6a0584c10d64)
```py
class C(metaclass=ABCMeta):
    pass
```

[New example](https://play.ruff.rs/bbad34da-bf07-44e6-9f34-53337e8f57d4)
```py
import abc


class C(metaclass=abc.ABCMeta):
    pass
```

The "Use instead" section as also modified similarly.

## Test Plan

<!-- How was it tested? -->

N/A, no functionality/tests affected
2025-07-31 17:14:15 -04:00
Dan Parizher
b07def07c9 [pyupgrade] Prevent infinite loop with I002 (UP010, UP035) (#19413)
## Summary

Fixes #18729 and fixes #16802

## Test Plan

Manually verified via CLI that Ruff no longer enters an infinite loop by
running:
```sh
echo 1 | ruff --isolated check - --select I002,UP010 --fix
```
with `required-imports = ["from __future__ import generator_stop"]` set
in the config, confirming “All checks passed!” and no snapshots were
generated.

---------

Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-07-31 15:17:27 -04:00
Alex Waygood
2ab1502e51 [ty] Improve the Display for generic type[] types (#19667) 2025-07-31 19:45:01 +01:00
Alex Waygood
a3f28baab4 [ty] Refactor TypeInferenceBuilder::infer_subscript_expression_types (#19658) 2025-07-31 13:38:43 +00:00
Brent Westbrook
a71513bae1 Fix tests on 32-bit architectures (#19652)
Summary
--

Fixes #19640. I'm not sure these are the exact fixes we really want, but
I
reproduced the issue in a 32-bit Docker container and tracked down the
causes,
so I figured I'd open a PR.

As I commented on the issue, the `goto_references` test depends on the
iteration
order of the files in an `FxHashSet` in `Indexed`. In this case, we can
just
sort the output in test code.

Similarly, the tuple case depended on the order of overloads inserted in
an
`FxHashMap`. `FxIndexMap` seemed like a convenient drop-in replacement,
but I
don't know if that will have other detrimental effects. I did have to
change the
assertion for the tuple test, but I think it should now be stable across
architectures.

Test Plan
--

Running the tests in the aforementioned Docker container
2025-07-31 08:52:19 -04:00
Alex Waygood
d2d4b115e3 [ty] Move pandas-stubs to bad.txt (#19659) 2025-07-31 12:33:24 +01:00
Alex Waygood
27b03a9d7b [ty] Remove special casing for string-literal-in-tuple __contains__ (#19642) 2025-07-31 11:28:03 +01:00
Harshil
32c454bb56 Update pre-commit's ruff id (#19654) 2025-07-31 07:17:04 +02:00
Ibraheem Ahmed
f6b7418def Update salsa (#19449)
## Summary

Pulls in a bunch of salsa micro-optimizations.
2025-07-30 15:31:46 -04:00
Ibraheem Ahmed
8f8c39c435 Simplify get_size2 usage (#19643)
## Summary

These were added in the 0.5.0 release.
2025-07-30 15:31:37 -04:00
Matthew Mckee
4739bc8d14 [ty] Fix incorrect diagnostic when calling __setitem__ (#19645)
## Summary

Resolves https://github.com/astral-sh/ty/issues/862 by not emitting a
diagnostic.

## Test Plan

Add test to show we don't emit the diagnostic
2025-07-30 20:34:52 +02:00
Alex Waygood
7b4103bcb6 [ty] Remove special casing for tuple addition (#19636) 2025-07-30 16:25:42 +00:00
Jim Hoekstra
38049aae12 fix missing-required-imports introducing syntax error after docstring ending with backslash (#19505)
Issue: https://github.com/astral-sh/ruff/issues/19498

## Summary


[missing-required-import](https://docs.astral.sh/ruff/rules/missing-required-import/)
inserts the missing import on the line immediately following the last
line of the docstring. However, if the docstring is immediately followed
by a continuation token (i.e. backslash) then this leads to a syntax
error because Python interprets the docstring and the inserted import to
be on the same line.

The proposed solution in this PR is to check if the first token after a
file docstring is a continuation character, and if so, to advance an
additional line before inserting the missing import.

## Test Plan

Added a unit test, and the following example was verified manually:

Given this simple test Python file:

```python
"Hello, World!"\

print(__doc__)
```

and this ruff linting configuration in the `pyproject.toml` file:

```toml
[tool.ruff.lint]
select = ["I"]

[tool.ruff.lint.isort]
required-imports = ["import sys"]
```

Without the changes in this PR, the ruff linter would try to insert the
missing import in line 2, resulting in a syntax error, and report the
following:

`error: Fix introduced a syntax error. Reverting all changes.`

With the changes in this PR, ruff correctly advances one more line
before adding the missing import, resulting in the following output:

```python
"Hello, World!"\

import sys

print(__doc__)
```

---------

Co-authored-by: Jim Hoekstra <jim.hoekstra@pacmed.nl>
2025-07-30 12:12:46 -04:00
Alex Waygood
ec3d5ebda2 [ty] Upcast heterogeneous and mixed tuples to homogeneous tuples where it's necessary to solve a TypeVar (#19635)
## Summary

This PR improves our generics solver such that we are able to solve the
`TypeVar` in this snippet to `int | str` (the union of the elements in
the heterogeneous tuple) by upcasting the heterogeneous tuple to its
pure-homogeneous-tuple supertype:

```py
def f[T](x: tuple[T, ...]) -> T:
    return x[0]

def g(x: tuple[int, str]):
    reveal_type(f(x))
```

## Test Plan

Mdtests. Some TODOs remain in the mdtest regarding solving `TypeVar`s
for mixed tuples, but I think this PR on its own is a significant step
forward for our generics solver when it comes to tuple types.

---------

Co-authored-by: Douglas Creager <dcreager@dcreager.net>
2025-07-30 17:12:21 +01:00
Micha Reiser
d797592f70 [ty] Fix server panic in workspace diagnostics request handler when typing (#19631) 2025-07-30 16:40:42 +01:00
David Peter
eb02aa5676 [ty] Async for loops and async iterables (#19634)
## Summary

Add support for `async for` loops and async iterables.
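
A minimal sketch of the kind of code this enables checking (not taken from the PR's test suite):

```py
from typing import AsyncIterator


async def numbers() -> AsyncIterator[int]:
    yield 1
    yield 2


async def total() -> int:
    s = 0
    # ty now resolves the `__aiter__`/`__anext__` protocol here and can
    # infer `n` as `int`.
    async for n in numbers():
        s += n
    return s
```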

part of https://github.com/astral-sh/ty/issues/151

## Ecosystem impact

```diff
- boostedblob/listing.py:445:54: warning[unused-ignore-comment] Unused blanket `type: ignore` directive
```

This is correct. We now find a true positive in the `# type: ignore`'d
code.

All of the other ecosystem hits are of the type

```diff
trio (https://github.com/python-trio/trio)
+ src/trio/_core/_tests/test_guest_mode.py:532:24: error[not-iterable] Object of type `MemorySendChannel[int] | MemoryReceiveChannel[int]` may not be iterable
```

The message is correct, because only `MemoryReceiveChannel` has an
`__aiter__` method, but `MemorySendChannel` does not. What's not correct
is our inferred type here. It should be `MemoryReceiveChannel[int]`, not
the union of the two. This is due to missing unpacking support for tuple
subclasses, which @AlexWaygood is working on. I don't think this should
block merging this PR, because those wrong types are already there,
without this PR.

## Test Plan

New Markdown tests and snapshot tests for diagnostics.
2025-07-30 17:40:24 +02:00
Dan Parizher
e593761232 [ruff] Parenthesize generator expressions in f-strings (RUF010) (#19434)
## Summary

Fixes #19433
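
A guess at the pattern involved, based on the rule (explicit-f-string-type-conversion) and the PR title rather than the linked issue:

```py
values = [1, 2, 3]

# RUF010 replaces the explicit str(...) conversion with a !s conversion...
before = f"{str(v for v in values)}"

# ...and for a generator expression the replacement must be parenthesized
# to remain valid syntax:
after = f"{(v for v in values)!s}"
```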

---------

Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
2025-07-30 15:02:31 +00:00
Brent Westbrook
8979271ea8 Always expand tabs to four spaces in diagnostics (#19618)
## Summary

I was a bit stuck on some snapshot differences I was seeing in #19415,
but @BurntSushi pointed out that `annotate-snippets` already normalizes
tabs on its own, which was very helpful! Instead of applying this change
directly to the other branch, I wanted to try applying it in
`ruff_linter` first. This should very slightly reduce the number of
changes in #19415 proper.

It looks like `annotate-snippets` always expands a tab to four spaces,
whereas I think we were aligning to tab stops:

```diff
  6 | spam(ham[1], { eggs: 2})
  7 | #: E201:1:6
- 8 | spam(   ham[1], {eggs: 2})
-   |      ^^^ E201
+ 8 | spam(    ham[1], {eggs: 2})
+   |      ^^^^ E201
```

```diff
61 | #: E203:2:15 E702:2:16
 62 | if x == 4:
-63 |     print(x, y) ; x, y = y, x
-   |                ^ E203
+63 |     print(x, y)    ; x, y = y, x
+   |                ^^^^ E203
```

```diff
 E27.py:15:6: E271 [*] Multiple spaces after keyword
    |
-13 | True        and False
+13 | True        and    False
 14 | #: E271
 15 | a and  b
    |      ^^ E271
```

I don't think this is too bad and has the major benefit of allowing us
to pass the non-tab-expanded range to `annotate-snippets` in #19415,
where it's also displayed in the header. Ruff doesn't have this problem
currently because it uses its own concise diagnostic output as the
header for full diagnostics, where the pre-expansion range is used
directly.

## Test Plan

Existing tests with a few snapshot updates
2025-07-30 11:00:36 -04:00
Andrew Gallant
d1a286226c [ty] Update module resolution diagram to account for typeshed VERSIONS file
This does unfortunately add a fair bit of complexity to the flow
diagram.

Ref https://github.com/astral-sh/ruff/pull/19620#issuecomment-3133684294
2025-07-30 10:34:58 -04:00
Micha Reiser
1ba32684da Fix copy and line separator colors in dark mode (#19630) 2025-07-30 15:08:31 +01:00
Micha Reiser
70d4b271da [ty] Remove AssertUnwindSafe requirement from ProgressReporter (#19637) 2025-07-30 12:46:44 +00:00
Alex Waygood
feaedb1812 [ty] Synthesize precise __getitem__ overloads for tuple subclasses (#19493) 2025-07-30 11:25:44 +00:00
Micha Reiser
6237ecb4db [ty] Add progress reporting to workspace diagnostics (#19616) 2025-07-30 10:27:34 +00:00
Micha Reiser
2a5ace6e55 [ty] Implement diagnostic caching (#19605) 2025-07-30 11:04:34 +01:00
David Peter
4ecf1d205a [ty] Support async/await, async with and yield from (#19595)
## Summary

- Add support for the return types of `async` functions
- Add type inference for `await` expressions
- Add support for `async with` / async context managers
- Add support for `yield from` expressions
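
A minimal sketch covering the `await` and `async with` items (illustrative only, not from the PR's test suite):

```py
import asyncio


class Resource:
    async def __aenter__(self) -> "Resource":
        return self

    async def __aexit__(self, *args: object) -> None:
        return None


async def use_resource() -> None:
    # `async with` now checks the `__aenter__`/`__aexit__` protocol and
    # infers `r` as `Resource`.
    async with Resource() as r:
        # `await` expressions get proper return-type inference.
        await asyncio.sleep(0)
```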

This PR is generally lacking proper error handling in some cases (e.g.
illegal `__await__` attributes). I'm planning to work on this in a
follow-up.

part of https://github.com/astral-sh/ty/issues/151

closes https://github.com/astral-sh/ty/issues/736

## Ecosystem

There are a lot of true positives on `prefect` which look similar to:
```diff
prefect (https://github.com/PrefectHQ/prefect)
+ src/integrations/prefect-aws/tests/workers/test_ecs_worker.py:406:12: error[unresolved-attribute] Type `str` has no attribute `status_code`
```

This is due to a wrong return type annotation
[here](e926b8c4c1/src/integrations/prefect-aws/tests/workers/test_ecs_worker.py (L355-L391)).

```diff
mitmproxy (https://github.com/mitmproxy/mitmproxy)
+ test/mitmproxy/addons/test_clientplayback.py:18:1: error[invalid-argument-type] Argument to function `asynccontextmanager` is incorrect: Expected `(...) -> AsyncIterator[Unknown]`, found `def tcp_server(handle_conn, **server_args) -> Unknown | tuple[str, int]`
```


[This](a4d794c59a/test/mitmproxy/addons/test_clientplayback.py (L18-L19))
is a true positive. That function should return
`AsyncIterator[Address]`, not `Address`.

I looked through almost all of the other new diagnostics and they all
look like known problems or true positives.

## Typing conformance

The typing conformance diff looks good.

## Test Plan

New Markdown tests
2025-07-30 11:51:21 +02:00
Brent Westbrook
c5ac998892 Bump 0.12.7 (#19627)
## Test Plan

- [x] Download the [sdist
artifact](https://github.com/astral-sh/ruff/actions/runs/16608501774/artifacts/3643617012)
and check that the LICENSE is present
2025-07-29 18:18:42 -04:00
Brent Westbrook
04a8f64cd7 Revert license and license-files changes in pyproject.toml (#19624)
Summary
--

This partially reverts commit 13634ff433
after issues in the release today.

Test Plan
--

```shell
uv build --sdist
tar -tzf dist/ruff-0.12.6.tar.gz | grep ruff-0.12.6/LICENSE
```

which finds the license now.
2025-07-29 17:27:55 -04:00
Brent Westbrook
6e00adf308 Bump 0.12.6 (#19622) 2025-07-29 16:31:01 -04:00
Brent Westbrook
864196b988 Add Checker::context method, deduplicate Unicode checks (#19609)
Summary
--

This PR adds a `Checker::context` method that returns the underlying
`LintContext` to unify `Candidate::into_diagnostic` and
`Candidate::report_diagnostic` in our ambiguous Unicode character
checks. This avoids some duplication and also avoids collecting a `Vec`
of `Candidate`s only to iterate over it later.

Test Plan
--

Existing tests
2025-07-29 16:07:55 -04:00
Thomas Mattone
ae26fa020c [flake8-pyi] Preserve inline comment in ellipsis removal (PYI013) (#19399)
## Summary

Fixes #19385.

Based on [unnecessary-placeholder
(PIE790)](https://docs.astral.sh/ruff/rules/unnecessary-placeholder/)
behavior, [ellipsis-in-non-empty-class-body
(PYI013)](https://docs.astral.sh/ruff/rules/ellipsis-in-non-empty-class-body/)
now safely preserves the inline comment on ellipsis removal.

## Test Plan

A new test class was added:

```python
class NonEmptyChildWithInlineComment:
    value: int
    ... # preserve me
```

with the following expected fix:

```python
class NonEmptyChildWithInlineComment:
    value: int
    # preserve me
```
2025-07-29 15:06:04 -04:00
Andrew Gallant
88a679945c [ty] Add flow diagram for import resolution
The diagram is written in the Dot language, which can
be converted to SVG (or any other image) by GraphViz.

I thought it was a good idea to write this down in
preparation for adding routines that list modules.
Code reuse is likely to be difficult and I wanted to
be sure I understood how it worked.
2025-07-29 14:49:20 -04:00
Andrew Gallant
941be52358 [ty] Add comments to some core resolver functions
Some of the contracts were a little tricky to discover from just the
parameter types, so I added some docs (and fixed what I believe was one
typo).
2025-07-29 14:49:20 -04:00
Andrew Gallant
13624ce17f [ty] Add missing ticks and use consistent quoting
This irked me while I was reading the code, so I just tried to fix what
I could see.
2025-07-29 14:49:20 -04:00
Andrew Gallant
edb2f8e997 [ty] Reflow some long lines
I mostly just did this because the long string literals were annoying
me. And these can make rustfmt give up on formatting.

I also re-flowed some long comment lines while I was here.
2025-07-29 14:49:20 -04:00
Andrew Gallant
5e6ad849ff [ty] Unexport helper function
I'm not sure if this used to be used elsewhere, but it no longer is.
And it looks like an internal-only helper function, so just un-export
it.

And note that `ModuleNameIngredient` is also un-exported, so this
function isn't really usable outside of its defining module anyway.
2025-07-29 14:49:20 -04:00
Andrew Gallant
865a9b3424 [ty] Remove offset from CompletionTargetTokens::Unknown
At some point, the surrounding code was refactored so that the
cursor offset was always passed around, so storing it here is
no longer necessary.
2025-07-29 14:49:20 -04:00
Dan Parizher
d449c541cb [pyupgrade] Fix UP030 to avoid modifying double curly braces in format strings (#19378)
## Summary

Fixes #19348
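
A sketch of the pattern presumably at issue, inferred from the rule (format-literals) and the PR title:

```py
# "{0}" and "{1}" are explicit positional fields that UP030 rewrites to
# implicit "{}"; the doubled braces are escaped literal braces and must be
# left untouched by the fix.
before = "{0} = {{{1}}}".format("x", 42)
after = "{} = {{{}}}".format("x", 42)
print(before, after)  # x = {42} x = {42}
```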

---------

Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-07-29 18:35:54 +00:00
हिमांशु
f7c6a6b2d0 [ty] fix a typo (#19621)
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2025-07-29 17:53:44 +00:00
justin
656273bf3d [ty] synthesize __replace__ for dataclasses (>=3.13) (#19545)
## Summary
https://github.com/astral-sh/ty/issues/111

adds support for the new `copy.replace` and `__replace__` protocol
[added in 3.13](https://docs.python.org/3/whatsnew/3.13.html#copy)
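
A minimal sketch of the protocol being supported (requires Python 3.13+, where dataclasses synthesize `__replace__` and `copy.replace()` dispatches to it):

```py
import copy
from dataclasses import dataclass


@dataclass(frozen=True)
class Point:
    x: int
    y: int


p = Point(1, 2)
q = copy.replace(p, y=3)  # Point(x=1, y=3); ty can now check these keyword arguments
```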

- docs: https://docs.python.org/3/library/copy.html#object.__replace__
- some discussion on pyright/mypy implementations:
https://discuss.python.org/t/dataclass-transform-and-replace/69067



### Burndown
- [x] add tests
- [x] implement `__replace__`
- [ ]
[collections.namedtuple()](https://docs.python.org/3/library/collections.html#collections.namedtuple)
- [x]
[dataclasses.dataclass](https://docs.python.org/3/library/dataclasses.html#dataclasses.dataclass)

## Test Plan
new mdtests

---------

Co-authored-by: David Peter <mail@david-peter.de>
2025-07-29 17:32:01 +02:00
3188 changed files with 121081 additions and 71799 deletions

.github/CODEOWNERS vendored
View File

@@ -19,6 +19,10 @@
# ty
/crates/ty* @carljm @MichaReiser @AlexWaygood @sharkdp @dcreager
/crates/ruff_db/ @carljm @MichaReiser @AlexWaygood @sharkdp @dcreager
/crates/ruff_db/ @carljm @MichaReiser @sharkdp @dcreager
/crates/ty_project/ @carljm @MichaReiser @sharkdp @dcreager
/crates/ty_server/ @carljm @MichaReiser @sharkdp @dcreager
/crates/ty/ @carljm @MichaReiser @sharkdp @dcreager
/crates/ty_wasm/ @carljm @MichaReiser @sharkdp @dcreager
/scripts/ty_benchmark/ @carljm @MichaReiser @AlexWaygood @sharkdp @dcreager
/crates/ty_python_semantic @carljm @AlexWaygood @sharkdp @dcreager

View File

@@ -49,7 +49,7 @@ jobs:
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build sdist"
uses: PyO3/maturin-action@e10f6c464b90acceb5f640d31beda6d586ba7b4a # v1.49.3
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
command: sdist
args: --out dist
@@ -79,7 +79,7 @@ jobs:
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels - x86_64"
uses: PyO3/maturin-action@e10f6c464b90acceb5f640d31beda6d586ba7b4a # v1.49.3
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
target: x86_64
args: --release --locked --out dist
@@ -121,7 +121,7 @@ jobs:
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels - aarch64"
uses: PyO3/maturin-action@e10f6c464b90acceb5f640d31beda6d586ba7b4a # v1.49.3
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
target: aarch64
args: --release --locked --out dist
@@ -177,7 +177,7 @@ jobs:
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels"
uses: PyO3/maturin-action@e10f6c464b90acceb5f640d31beda6d586ba7b4a # v1.49.3
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
target: ${{ matrix.platform.target }}
args: --release --locked --out dist
@@ -230,7 +230,7 @@ jobs:
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels"
uses: PyO3/maturin-action@e10f6c464b90acceb5f640d31beda6d586ba7b4a # v1.49.3
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
target: ${{ matrix.target }}
manylinux: auto
@@ -292,6 +292,8 @@ jobs:
maturin_docker_options: -e JEMALLOC_SYS_WITH_LG_PAGE=16
- target: arm-unknown-linux-musleabihf
arch: arm
- target: riscv64gc-unknown-linux-gnu
arch: riscv64
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
@@ -304,7 +306,7 @@ jobs:
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels"
uses: PyO3/maturin-action@e10f6c464b90acceb5f640d31beda6d586ba7b4a # v1.49.3
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
target: ${{ matrix.platform.target }}
manylinux: auto
@@ -319,7 +321,7 @@ jobs:
githubToken: ${{ github.token }}
install: |
apt-get update
apt-get install -y --no-install-recommends python3 python3-pip
apt-get install -y --no-install-recommends python3 python3-pip libatomic1
pip3 install -U pip
run: |
pip3 install ${{ env.PACKAGE_NAME }} --no-index --find-links dist/ --force-reinstall
@@ -370,7 +372,7 @@ jobs:
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels"
uses: PyO3/maturin-action@e10f6c464b90acceb5f640d31beda6d586ba7b4a # v1.49.3
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
target: ${{ matrix.target }}
manylinux: musllinux_1_2
@@ -435,7 +437,7 @@ jobs:
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels"
uses: PyO3/maturin-action@e10f6c464b90acceb5f640d31beda6d586ba7b4a # v1.49.3
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
target: ${{ matrix.platform.target }}
manylinux: musllinux_1_2


@@ -40,7 +40,7 @@ jobs:
- uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
- uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -63,7 +63,7 @@ jobs:
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
uses: docker/metadata-action@c1e51972afc2121e065aed6d45c65596fe445f3f # v5.8.0
with:
images: ${{ env.RUFF_BASE_IMG }}
# Defining this makes sure the org.opencontainers.image.version OCI label becomes the actual release version and not the branch name
@@ -123,7 +123,7 @@ jobs:
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
uses: docker/metadata-action@c1e51972afc2121e065aed6d45c65596fe445f3f # v5.8.0
with:
images: ${{ env.RUFF_BASE_IMG }}
# Order is on purpose such that the label org.opencontainers.image.version has the first pattern with the full version
@@ -131,7 +131,7 @@ jobs:
type=pep440,pattern={{ version }},value=${{ fromJson(inputs.plan).announcement_tag }}
type=pep440,pattern={{ major }}.{{ minor }},value=${{ fromJson(inputs.plan).announcement_tag }}
- uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
- uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -169,7 +169,7 @@ jobs:
steps:
- uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
- uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -219,7 +219,7 @@ jobs:
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
uses: docker/metadata-action@c1e51972afc2121e065aed6d45c65596fe445f3f # v5.8.0
# ghcr.io prefers index level annotations
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: index
@@ -266,7 +266,7 @@ jobs:
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
uses: docker/metadata-action@c1e51972afc2121e065aed6d45c65596fe445f3f # v5.8.0
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: index
with:
@@ -276,7 +276,7 @@ jobs:
type=pep440,pattern={{ version }},value=${{ fromJson(inputs.plan).announcement_tag }}
type=pep440,pattern={{ major }}.{{ minor }},value=${{ fromJson(inputs.plan).announcement_tag }}
- uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
- uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}


@@ -38,7 +38,8 @@ jobs:
fuzz: ${{ steps.check_fuzzer.outputs.changed }}
# Flag that is set to "true" when code related to ty changes.
ty: ${{ steps.check_ty.outputs.changed }}
# Flag that is set to "true" when code related to the py-fuzzer folder changes.
py-fuzzer: ${{ steps.check_py_fuzzer.outputs.changed }}
# Flag that is set to "true" when code related to the playground changes.
playground: ${{ steps.check_playground.outputs.changed }}
steps:
@@ -68,7 +69,6 @@ jobs:
':crates/ruff_text_size/**' \
':crates/ruff_python_ast/**' \
':crates/ruff_python_parser/**' \
':python/py-fuzzer/**' \
':.github/workflows/ci.yaml' \
; then
echo "changed=false" >> "$GITHUB_OUTPUT"
@@ -138,6 +138,18 @@ jobs:
echo "changed=true" >> "$GITHUB_OUTPUT"
fi
- name: Check if the py-fuzzer code changed
id: check_py_fuzzer
env:
MERGE_BASE: ${{ steps.merge_base.outputs.sha }}
run: |
if git diff --quiet "${MERGE_BASE}...HEAD" -- 'python/py-fuzzer/**' \
; then
echo "changed=false" >> "$GITHUB_OUTPUT"
else
echo "changed=true" >> "$GITHUB_OUTPUT"
fi
- name: Check if there was any code related change
id: check_code
env:
@@ -238,13 +250,13 @@ jobs:
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
uses: rui314/setup-mold@702b1908b5edf30d71a8d1666b724e0f0c6fa035 # v1
uses: rui314/setup-mold@7344740a9418dcdcb481c7df83d9fbd1d5072d7d # v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@c99cc51b309eee71a866715cfa08c922f11cf898 # v2.56.19
uses: taiki-e/install-action@6064345e6658255e90e9500fdf9a06ab77e6909c # v2.57.6
with:
tool: cargo-nextest
- name: "Install cargo insta"
uses: taiki-e/install-action@c99cc51b309eee71a866715cfa08c922f11cf898 # v2.56.19
uses: taiki-e/install-action@6064345e6658255e90e9500fdf9a06ab77e6909c # v2.57.6
with:
tool: cargo-insta
- name: ty mdtests (GitHub annotations)
@@ -296,13 +308,13 @@ jobs:
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
uses: rui314/setup-mold@702b1908b5edf30d71a8d1666b724e0f0c6fa035 # v1
uses: rui314/setup-mold@7344740a9418dcdcb481c7df83d9fbd1d5072d7d # v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@c99cc51b309eee71a866715cfa08c922f11cf898 # v2.56.19
uses: taiki-e/install-action@6064345e6658255e90e9500fdf9a06ab77e6909c # v2.57.6
with:
tool: cargo-nextest
- name: "Install cargo insta"
uses: taiki-e/install-action@c99cc51b309eee71a866715cfa08c922f11cf898 # v2.56.19
uses: taiki-e/install-action@6064345e6658255e90e9500fdf9a06ab77e6909c # v2.57.6
with:
tool: cargo-insta
- name: "Run tests"
@@ -325,7 +337,7 @@ jobs:
- name: "Install Rust toolchain"
run: rustup show
- name: "Install cargo nextest"
uses: taiki-e/install-action@c99cc51b309eee71a866715cfa08c922f11cf898 # v2.56.19
uses: taiki-e/install-action@6064345e6658255e90e9500fdf9a06ab77e6909c # v2.57.6
with:
tool: cargo-nextest
- name: "Run tests"
@@ -381,7 +393,7 @@ jobs:
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
uses: rui314/setup-mold@702b1908b5edf30d71a8d1666b724e0f0c6fa035 # v1
uses: rui314/setup-mold@7344740a9418dcdcb481c7df83d9fbd1d5072d7d # v1
- name: "Build"
run: cargo build --release --locked
@@ -406,7 +418,7 @@ jobs:
MSRV: ${{ steps.msrv.outputs.value }}
run: rustup default "${MSRV}"
- name: "Install mold"
uses: rui314/setup-mold@702b1908b5edf30d71a8d1666b724e0f0c6fa035 # v1
uses: rui314/setup-mold@7344740a9418dcdcb481c7df83d9fbd1d5072d7d # v1
- name: "Build tests"
shell: bash
env:
@@ -429,7 +441,7 @@ jobs:
- name: "Install Rust toolchain"
run: rustup show
- name: "Install cargo-binstall"
uses: cargo-bins/cargo-binstall@808dcb1b503398677d089d3216c51ac7cc11e7ab # v1.14.2
uses: cargo-bins/cargo-binstall@0dca8cf8dfb40cb77a29cece06933ce674674523 # v1.15.1
with:
tool: cargo-fuzz@0.11.2
- name: "Install cargo-fuzz"
@@ -443,7 +455,7 @@ jobs:
needs:
- cargo-test-linux
- determine_changes
if: ${{ !contains(github.event.pull_request.labels.*.name, 'no-test') && needs.determine_changes.outputs.parser == 'true' }}
if: ${{ !contains(github.event.pull_request.labels.*.name, 'no-test') && (needs.determine_changes.outputs.parser == 'true' || needs.determine_changes.outputs.py-fuzzer == 'true') }}
timeout-minutes: 20
env:
FORCE_COLOR: 1
@@ -451,7 +463,7 @@ jobs:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- uses: astral-sh/setup-uv@e92bafb6253dcd438e0484186d7669ea7a8ca1cc # v6.4.3
- uses: astral-sh/setup-uv@4959332f0f014c5280e7eac8b70c90cb574c9f9b # v6.6.0
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
name: Download Ruff binary to test
id: download-cached-binary
@@ -633,7 +645,7 @@ jobs:
- cargo-test-linux
- determine_changes
# Only runs on pull requests, since that is the only way we can find the base version for comparison.
if: ${{ !contains(github.event.pull_request.labels.*.name, 'no-test') && github.event_name == 'pull_request' && needs.determine_changes.outputs.ty == 'true' }}
if: ${{ !contains(github.event.pull_request.labels.*.name, 'no-test') && github.event_name == 'pull_request' && (needs.determine_changes.outputs.ty == 'true' || needs.determine_changes.outputs.py-fuzzer == 'true') }}
timeout-minutes: 20
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
@@ -652,7 +664,7 @@ jobs:
branch: ${{ github.event.pull_request.base.ref }}
workflow: "ci.yaml"
check_artifacts: true
- uses: astral-sh/setup-uv@e92bafb6253dcd438e0484186d7669ea7a8ca1cc # v6.4.3
- uses: astral-sh/setup-uv@4959332f0f014c5280e7eac8b70c90cb574c9f9b # v6.6.0
- name: Fuzz
env:
FORCE_COLOR: 1
@@ -682,7 +694,7 @@ jobs:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- uses: cargo-bins/cargo-binstall@808dcb1b503398677d089d3216c51ac7cc11e7ab # v1.14.2
- uses: cargo-bins/cargo-binstall@0dca8cf8dfb40cb77a29cece06933ce674674523 # v1.15.1
- run: cargo binstall --no-confirm cargo-shear
- run: cargo shear
@@ -703,7 +715,7 @@ jobs:
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels"
uses: PyO3/maturin-action@e10f6c464b90acceb5f640d31beda6d586ba7b4a # v1.49.3
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
args: --out dist
- name: "Test wheel"
@@ -722,13 +734,13 @@ jobs:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- uses: astral-sh/setup-uv@e92bafb6253dcd438e0484186d7669ea7a8ca1cc # v6.4.3
- uses: astral-sh/setup-uv@4959332f0f014c5280e7eac8b70c90cb574c9f9b # v6.6.0
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
- uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
with:
node-version: 22
- name: "Cache pre-commit"
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: ~/.cache/pre-commit
key: pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
@@ -765,7 +777,7 @@ jobs:
- name: "Install Rust toolchain"
run: rustup show
- name: Install uv
uses: astral-sh/setup-uv@e92bafb6253dcd438e0484186d7669ea7a8ca1cc # v6.4.3
uses: astral-sh/setup-uv@4959332f0f014c5280e7eac8b70c90cb574c9f9b # v6.6.0
- name: "Install Insiders dependencies"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
run: uv pip install -r docs/requirements-insiders.txt --system
@@ -897,13 +909,13 @@ jobs:
persist-credentials: false
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
- uses: astral-sh/setup-uv@e92bafb6253dcd438e0484186d7669ea7a8ca1cc # v6.4.3
- uses: astral-sh/setup-uv@4959332f0f014c5280e7eac8b70c90cb574c9f9b # v6.6.0
- name: "Install Rust toolchain"
run: rustup show
- name: "Install codspeed"
uses: taiki-e/install-action@c99cc51b309eee71a866715cfa08c922f11cf898 # v2.56.19
uses: taiki-e/install-action@6064345e6658255e90e9500fdf9a06ab77e6909c # v2.57.6
with:
tool: cargo-codspeed
@@ -930,13 +942,13 @@ jobs:
persist-credentials: false
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
- uses: astral-sh/setup-uv@e92bafb6253dcd438e0484186d7669ea7a8ca1cc # v6.4.3
- uses: astral-sh/setup-uv@4959332f0f014c5280e7eac8b70c90cb574c9f9b # v6.6.0
- name: "Install Rust toolchain"
run: rustup show
- name: "Install codspeed"
uses: taiki-e/install-action@c99cc51b309eee71a866715cfa08c922f11cf898 # v2.56.19
uses: taiki-e/install-action@6064345e6658255e90e9500fdf9a06ab77e6909c # v2.57.6
with:
tool: cargo-codspeed


@@ -34,11 +34,11 @@ jobs:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- uses: astral-sh/setup-uv@e92bafb6253dcd438e0484186d7669ea7a8ca1cc # v6.4.3
- uses: astral-sh/setup-uv@4959332f0f014c5280e7eac8b70c90cb574c9f9b # v6.6.0
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
uses: rui314/setup-mold@702b1908b5edf30d71a8d1666b724e0f0c6fa035 # v1
uses: rui314/setup-mold@7344740a9418dcdcb481c7df83d9fbd1d5072d7d # v1
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
- name: Build ruff
# A debug build means the script runs slower once it gets started,


@@ -11,6 +11,7 @@ on:
- "crates/ruff_python_parser"
- ".github/workflows/mypy_primer.yaml"
- ".github/workflows/mypy_primer_comment.yaml"
- "scripts/mypy_primer.sh"
- "Cargo.lock"
- "!**.md"
@@ -38,7 +39,7 @@ jobs:
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@e92bafb6253dcd438e0484186d7669ea7a8ca1cc # v6.4.3
uses: astral-sh/setup-uv@4959332f0f014c5280e7eac8b70c90cb574c9f9b # v6.6.0
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
with:
@@ -81,9 +82,9 @@ jobs:
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@e92bafb6253dcd438e0484186d7669ea7a8ca1cc # v6.4.3
uses: astral-sh/setup-uv@4959332f0f014c5280e7eac8b70c90cb574c9f9b # v6.6.0
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
with:
workspaces: "ruff"


@@ -22,7 +22,7 @@ jobs:
id-token: write
steps:
- name: "Install uv"
uses: astral-sh/setup-uv@e92bafb6253dcd438e0484186d7669ea7a8ca1cc # v6.4.3
uses: astral-sh/setup-uv@4959332f0f014c5280e7eac8b70c90cb574c9f9b # v6.6.0
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
pattern: wheels-*


@@ -61,7 +61,7 @@ jobs:
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@09d2acae674a48949e3602304ab46fd20ae0c42f
- uses: actions/checkout@ff7abcd0c3c05ccf6adc123a8cd1fd4fb30fb493
with:
persist-credentials: false
submodules: recursive
@@ -124,19 +124,19 @@ jobs:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
BUILD_MANIFEST_NAME: target/distrib/global-dist-manifest.json
steps:
- uses: actions/checkout@09d2acae674a48949e3602304ab46fd20ae0c42f
- uses: actions/checkout@ff7abcd0c3c05ccf6adc123a8cd1fd4fb30fb493
with:
persist-credentials: false
submodules: recursive
- name: Install cached dist
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0
with:
name: cargo-dist-cache
path: ~/.cargo/bin/
- run: chmod +x ~/.cargo/bin/dist
# Get all the local artifacts for the global tasks to use (for e.g. checksums)
- name: Fetch local artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0
with:
pattern: artifacts-*
path: target/distrib/
@@ -175,19 +175,19 @@ jobs:
outputs:
val: ${{ steps.host.outputs.manifest }}
steps:
- uses: actions/checkout@09d2acae674a48949e3602304ab46fd20ae0c42f
- uses: actions/checkout@ff7abcd0c3c05ccf6adc123a8cd1fd4fb30fb493
with:
persist-credentials: false
submodules: recursive
- name: Install cached dist
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0
with:
name: cargo-dist-cache
path: ~/.cargo/bin/
- run: chmod +x ~/.cargo/bin/dist
# Fetch artifacts from scratch-storage
- name: Fetch artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0
with:
pattern: artifacts-*
path: target/distrib/
@@ -251,13 +251,13 @@ jobs:
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@09d2acae674a48949e3602304ab46fd20ae0c42f
- uses: actions/checkout@ff7abcd0c3c05ccf6adc123a8cd1fd4fb30fb493
with:
persist-credentials: false
submodules: recursive
# Create a GitHub Release while uploading all files to it
- name: "Download GitHub Artifacts"
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0
with:
pattern: artifacts-*
path: artifacts


@@ -65,7 +65,7 @@ jobs:
run: |
git config --global user.name typeshedbot
git config --global user.email '<>'
- uses: astral-sh/setup-uv@e92bafb6253dcd438e0484186d7669ea7a8ca1cc # v6.4.3
- uses: astral-sh/setup-uv@4959332f0f014c5280e7eac8b70c90cb574c9f9b # v6.6.0
- name: Sync typeshed stubs
run: |
rm -rf "ruff/${VENDORED_TYPESHED}"
@@ -117,7 +117,7 @@ jobs:
with:
persist-credentials: true
ref: ${{ env.UPSTREAM_BRANCH}}
- uses: astral-sh/setup-uv@e92bafb6253dcd438e0484186d7669ea7a8ca1cc # v6.4.3
- uses: astral-sh/setup-uv@4959332f0f014c5280e7eac8b70c90cb574c9f9b # v6.6.0
- name: Setup git
run: |
git config --global user.name typeshedbot
@@ -155,7 +155,7 @@ jobs:
with:
persist-credentials: true
ref: ${{ env.UPSTREAM_BRANCH}}
- uses: astral-sh/setup-uv@e92bafb6253dcd438e0484186d7669ea7a8ca1cc # v6.4.3
- uses: astral-sh/setup-uv@4959332f0f014c5280e7eac8b70c90cb574c9f9b # v6.6.0
- name: Setup git
run: |
git config --global user.name typeshedbot


@@ -33,7 +33,7 @@ jobs:
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@e92bafb6253dcd438e0484186d7669ea7a8ca1cc # v6.4.3
uses: astral-sh/setup-uv@4959332f0f014c5280e7eac8b70c90cb574c9f9b # v6.6.0
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
with:


@@ -29,7 +29,7 @@ jobs:
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@e92bafb6253dcd438e0484186d7669ea7a8ca1cc # v6.4.3
uses: astral-sh/setup-uv@4959332f0f014c5280e7eac8b70c90cb574c9f9b # v6.6.0
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
with:


@@ -24,6 +24,7 @@ env:
CARGO_TERM_COLOR: always
RUSTUP_MAX_RETRIES: 10
RUST_BACKTRACE: 1
CONFORMANCE_SUITE_COMMIT: d4f39b27a4a47aac8b6d4019e1b0b5b3156fabdc
jobs:
typing_conformance:
@@ -40,13 +41,10 @@ jobs:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
repository: python/typing
ref: d4f39b27a4a47aac8b6d4019e1b0b5b3156fabdc
ref: ${{ env.CONFORMANCE_SUITE_COMMIT }}
path: typing
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@e92bafb6253dcd438e0484186d7669ea7a8ca1cc # v6.4.3
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
with:
workspaces: "ruff"
@@ -56,6 +54,9 @@ jobs:
- name: Compute diagnostic diff
shell: bash
env:
# TODO: Remove this once we've fixed the remaining panics in the conformance suite.
TY_MAX_PARALLELISM: 1
run: |
RUFF_DIR="$GITHUB_WORKSPACE/ruff"
@@ -64,17 +65,16 @@ jobs:
cd ruff
echo "new commit"
git checkout -b new_commit "${{ github.event.pull_request.head.sha }}"
git rev-list --format=%s --max-count=1 new_commit
cargo build --release --bin ty
mv target/release/ty ty-new
git rev-list --format=%s --max-count=1 "$GITHUB_SHA"
cargo build --bin ty
mv target/debug/ty ty-new
echo "old commit (merge base)"
MERGE_BASE="$(git merge-base "$GITHUB_SHA" "origin/$GITHUB_BASE_REF")"
git checkout -b old_commit "$MERGE_BASE"
echo "old commit (merge base)"
git rev-list --format=%s --max-count=1 old_commit
cargo build --release --bin ty
mv target/release/ty ty-old
cargo build --bin ty
mv target/debug/ty ty-old
)
(
@@ -95,6 +95,7 @@ jobs:
fi
echo ${{ github.event.number }} > pr-number
echo "${CONFORMANCE_SUITE_COMMIT}" > conformance-suite-commit
- name: Upload diff
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
@@ -107,3 +108,9 @@ jobs:
with:
name: pr-number
path: pr-number
- name: Upload conformance suite commit
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: conformance-suite-commit
path: conformance-suite-commit


@@ -32,6 +32,14 @@ jobs:
echo "pr-number=$(<pr-number)" >> "$GITHUB_OUTPUT"
fi
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
name: Download typing conformance suite commit
with:
name: conformance-suite-commit
run_id: ${{ github.event.workflow_run.id || github.event.inputs.workflow_run_id }}
if_no_artifact_found: ignore
allow_forks: true
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
name: "Download typing_conformance results"
id: download-typing_conformance_diff
@@ -61,7 +69,14 @@ jobs:
# subsequent runs
echo '<!-- generated-comment typing_conformance_diagnostics_diff -->' >> comment.txt
echo '## Diagnostic diff on typing conformance tests' >> comment.txt
if [[ -f conformance-suite-commit ]]
then
echo "## Diagnostic diff on [typing conformance tests](https://github.com/python/typing/tree/$(<conformance-suite-commit)/conformance)" >> comment.txt
else
echo "conformance-suite-commit file not found"
echo "## Diagnostic diff on typing conformance tests" >> comment.txt
fi
if [ -s "pr/typing_conformance_diagnostics_diff/typing_conformance_diagnostics.diff" ]; then
echo '<details>' >> comment.txt
echo '<summary>Changes were detected when running ty on typing conformance tests</summary>' >> comment.txt


@@ -81,7 +81,7 @@ repos:
pass_filenames: false # This makes it a lot faster
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.12.5
rev: v0.12.7
hooks:
- id: ruff-format
- id: ruff-check


@@ -1,5 +1,169 @@
# Changelog
## 0.12.11
### Preview features
- \[`airflow`\] Extend `AIR311` and `AIR312` rules ([#20082](https://github.com/astral-sh/ruff/pull/20082))
- \[`airflow`\] Replace wrong path `airflow.io.storage` with `airflow.io.store` (`AIR311`) ([#20081](https://github.com/astral-sh/ruff/pull/20081))
- \[`flake8-async`\] Implement `blocking-http-call-httpx-in-async-function` (`ASYNC212`) ([#20091](https://github.com/astral-sh/ruff/pull/20091))
- \[`flake8-logging-format`\] Add auto-fix for f-string logging calls (`G004`) ([#19303](https://github.com/astral-sh/ruff/pull/19303))
- \[`flake8-use-pathlib`\] Add autofix for `PTH211` ([#20009](https://github.com/astral-sh/ruff/pull/20009))
- \[`flake8-use-pathlib`\] Make `PTH100` fix unsafe because it can change behavior ([#20100](https://github.com/astral-sh/ruff/pull/20100))
### Bug fixes
- \[`pyflakes`, `pylint`\] Fix false positives caused by `__class__` cell handling (`F841`, `PLE0117`) ([#20048](https://github.com/astral-sh/ruff/pull/20048))
- \[`pyflakes`\] Fix `allowed-unused-imports` matching for top-level modules (`F401`) ([#20115](https://github.com/astral-sh/ruff/pull/20115))
- \[`ruff`\] Fix false positive for t-strings in `default-factory-kwarg` (`RUF026`) ([#20032](https://github.com/astral-sh/ruff/pull/20032))
- \[`ruff`\] Preserve relative whitespace in multi-line expressions (`RUF033`) ([#19647](https://github.com/astral-sh/ruff/pull/19647))
### Rule changes
- \[`ruff`\] Handle empty t-strings in `unnecessary-empty-iterable-within-deque-call` (`RUF037`) ([#20045](https://github.com/astral-sh/ruff/pull/20045))
### Documentation
- Fix incorrect `D413` links in docstrings convention FAQ ([#20089](https://github.com/astral-sh/ruff/pull/20089))
- \[`flake8-use-pathlib`\] Update links to the table showing the correspondence between `os` and `pathlib` ([#20103](https://github.com/astral-sh/ruff/pull/20103))
## 0.12.10
### Preview features
- \[`flake8-simplify`\] Implement fix for `maxsplit` without separator (`SIM905`) ([#19851](https://github.com/astral-sh/ruff/pull/19851))
- \[`flake8-use-pathlib`\] Add fixes for `PTH102` and `PTH103` ([#19514](https://github.com/astral-sh/ruff/pull/19514))
### Bug fixes
- \[`isort`\] Handle multiple continuation lines after module docstring (`I002`) ([#19818](https://github.com/astral-sh/ruff/pull/19818))
- \[`pyupgrade`\] Avoid reporting `__future__` features as unnecessary when they are used (`UP010`) ([#19769](https://github.com/astral-sh/ruff/pull/19769))
- \[`pyupgrade`\] Handle nested `Optional`s (`UP045`) ([#19770](https://github.com/astral-sh/ruff/pull/19770))
### Rule changes
- \[`pycodestyle`\] Make `E731` fix unsafe instead of display-only for class assignments ([#19700](https://github.com/astral-sh/ruff/pull/19700))
- \[`pyflakes`\] Add secondary annotation showing previous definition (`F811`) ([#19900](https://github.com/astral-sh/ruff/pull/19900))
### Documentation
- Fix description of global config file discovery strategy ([#19188](https://github.com/astral-sh/ruff/pull/19188))
- Update outdated links to <https://typing.python.org/en/latest/source/stubs.html> ([#19992](https://github.com/astral-sh/ruff/pull/19992))
- \[`flake8-annotations`\] Remove unused import in example (`ANN401`) ([#20000](https://github.com/astral-sh/ruff/pull/20000))
## 0.12.9
### Preview features
- \[`airflow`\] Add check for `airflow.secrets.cache.SecretCache` (`AIR301`) ([#17707](https://github.com/astral-sh/ruff/pull/17707))
- \[`ruff`\] Offer a safe fix for multi-digit zeros (`RUF064`) ([#19847](https://github.com/astral-sh/ruff/pull/19847))
### Bug fixes
- \[`flake8-blind-except`\] Fix `BLE001` false-positive on `raise ... from None` ([#19755](https://github.com/astral-sh/ruff/pull/19755))
- \[`flake8-comprehensions`\] Fix false positive for `C420` with attribute, subscript, or slice assignment targets ([#19513](https://github.com/astral-sh/ruff/pull/19513))
- \[`flake8-simplify`\] Fix handling of U+001C..U+001F whitespace (`SIM905`) ([#19849](https://github.com/astral-sh/ruff/pull/19849))
### Rule changes
- \[`pylint`\] Use lowercase hex characters to match the formatter (`PLE2513`) ([#19808](https://github.com/astral-sh/ruff/pull/19808))
### Documentation
- Fix `lint.future-annotations` link ([#19876](https://github.com/astral-sh/ruff/pull/19876))
### Other changes
- Build `riscv64` binaries for release ([#19819](https://github.com/astral-sh/ruff/pull/19819))
- Add rule code to error description in GitLab output ([#19896](https://github.com/astral-sh/ruff/pull/19896))
- Improve rendering of the `full` output format ([#19415](https://github.com/astral-sh/ruff/pull/19415))
Below is an example diff for [`F401`](https://docs.astral.sh/ruff/rules/unused-import/):
```diff
-unused.py:8:19: F401 [*] `pathlib` imported but unused
+F401 [*] `pathlib` imported but unused
+ --> unused.py:8:19
|
7 | # Unused, _not_ marked as required (due to the alias).
8 | import pathlib as non_alias
- | ^^^^^^^^^ F401
+ | ^^^^^^^^^
9 |
10 | # Unused, marked as required.
|
- = help: Remove unused import: `pathlib`
+help: Remove unused import: `pathlib`
```
For now, the primary difference is the movement of the filename, line number, and column information to a second line in the header. This new representation will allow us to make further additions to Ruff's diagnostics, such as adding sub-diagnostics and multiple annotations to the same snippet.
## 0.12.8
### Preview features
- \[`flake8-use-pathlib`\] Expand `PTH201` to check all `PurePath` subclasses ([#19440](https://github.com/astral-sh/ruff/pull/19440))
### Bug fixes
- \[`flake8-blind-except`\] Change `BLE001` to correctly parse exception tuples ([#19747](https://github.com/astral-sh/ruff/pull/19747))
- \[`flake8-errmsg`\] Exclude `typing.cast` from `EM101` ([#19656](https://github.com/astral-sh/ruff/pull/19656))
- \[`flake8-simplify`\] Fix raw string handling in `SIM905` for embedded quotes ([#19591](https://github.com/astral-sh/ruff/pull/19591))
- \[`flake8-import-conventions`\] Avoid false positives for NFKC-normalized `__debug__` import aliases in `ICN001` ([#19411](https://github.com/astral-sh/ruff/pull/19411))
- \[`isort`\] Fix syntax error after docstring ending with backslash (`I002`) ([#19505](https://github.com/astral-sh/ruff/pull/19505))
- \[`pylint`\] Mark `PLC0207` fixes as unsafe when `*args` unpacking is present ([#19679](https://github.com/astral-sh/ruff/pull/19679))
- \[`pyupgrade`\] Prevent infinite loop with `I002` (`UP010`, `UP035`) ([#19413](https://github.com/astral-sh/ruff/pull/19413))
- \[`ruff`\] Parenthesize generator expressions in f-strings (`RUF010`) ([#19434](https://github.com/astral-sh/ruff/pull/19434))
### Rule changes
- \[`eradicate`\] Don't flag `pyrefly` pragmas as unused code (`ERA001`) ([#19731](https://github.com/astral-sh/ruff/pull/19731))
### Documentation
- Replace "associative" with "commutative" in docs for `RUF036` ([#19706](https://github.com/astral-sh/ruff/pull/19706))
- Fix copy and line separator colors in dark mode ([#19630](https://github.com/astral-sh/ruff/pull/19630))
- Fix link to `typing` documentation ([#19648](https://github.com/astral-sh/ruff/pull/19648))
- \[`refurb`\] Make more examples error out-of-the-box ([#19695](https://github.com/astral-sh/ruff/pull/19695),[#19673](https://github.com/astral-sh/ruff/pull/19673),[#19672](https://github.com/astral-sh/ruff/pull/19672))
### Other changes
- Include column numbers in GitLab output format ([#19708](https://github.com/astral-sh/ruff/pull/19708))
- Always expand tabs to four spaces in diagnostics ([#19618](https://github.com/astral-sh/ruff/pull/19618))
- Update pre-commit's `ruff` id ([#19654](https://github.com/astral-sh/ruff/pull/19654))
## 0.12.7
This is a follow-up release to 0.12.6. Because of an issue in the package metadata, 0.12.6 failed to publish fully to PyPI and has been yanked. Similarly, there is no GitHub release or Git tag for 0.12.6. The contents of the 0.12.7 release are identical to 0.12.6, except for the updated metadata.
## 0.12.6
### Preview features
- \[`flake8-commas`\] Add support for trailing comma checks in type parameter lists (`COM812`, `COM819`) ([#19390](https://github.com/astral-sh/ruff/pull/19390))
- \[`pylint`\] Implement auto-fix for `missing-maxsplit-arg` (`PLC0207`) ([#19387](https://github.com/astral-sh/ruff/pull/19387))
- \[`ruff`\] Offer fixes for `RUF039` in more cases ([#19065](https://github.com/astral-sh/ruff/pull/19065))
### Bug fixes
- Support `.pyi` files in ruff analyze graph ([#19611](https://github.com/astral-sh/ruff/pull/19611))
- \[`flake8-pyi`\] Preserve inline comment in ellipsis removal (`PYI013`) ([#19399](https://github.com/astral-sh/ruff/pull/19399))
- \[`perflint`\] Ignore rule if target is `global` or `nonlocal` (`PERF401`) ([#19539](https://github.com/astral-sh/ruff/pull/19539))
- \[`pyupgrade`\] Fix `UP030` to avoid modifying double curly braces in format strings ([#19378](https://github.com/astral-sh/ruff/pull/19378))
- \[`refurb`\] Ignore decorated functions for `FURB118` ([#19339](https://github.com/astral-sh/ruff/pull/19339))
- \[`refurb`\] Mark `int` and `bool` cases for `Decimal.from_float` as safe fixes (`FURB164`) ([#19468](https://github.com/astral-sh/ruff/pull/19468))
- \[`ruff`\] Fix `RUF033` for named default expressions ([#19115](https://github.com/astral-sh/ruff/pull/19115))
### Rule changes
- \[`flake8-blind-except`\] Change `BLE001` to permit `logging.critical(..., exc_info=True)` ([#19520](https://github.com/astral-sh/ruff/pull/19520))
### Performance
- Add support for specifying minimum dots in detected string imports ([#19538](https://github.com/astral-sh/ruff/pull/19538))
## 0.12.5
### Preview features

Cargo.lock (generated)

File diff suppressed because it is too large.

@@ -5,7 +5,7 @@ resolver = "2"
[workspace.package]
# Please update rustfmt.toml when bumping the Rust edition
edition = "2024"
rust-version = "1.86"
rust-version = "1.87"
homepage = "https://docs.astral.sh/ruff"
documentation = "https://docs.astral.sh/ruff"
repository = "https://github.com/astral-sh/ruff"
@@ -23,6 +23,7 @@ ruff_graph = { path = "crates/ruff_graph" }
ruff_index = { path = "crates/ruff_index" }
ruff_linter = { path = "crates/ruff_linter" }
ruff_macros = { path = "crates/ruff_macros" }
ruff_memory_usage = { path = "crates/ruff_memory_usage" }
ruff_notebook = { path = "crates/ruff_notebook" }
ruff_options_metadata = { path = "crates/ruff_options_metadata" }
ruff_python_ast = { path = "crates/ruff_python_ast" }
@@ -40,6 +41,7 @@ ruff_text_size = { path = "crates/ruff_text_size" }
ruff_workspace = { path = "crates/ruff_workspace" }
ty = { path = "crates/ty" }
ty_combine = { path = "crates/ty_combine" }
ty_ide = { path = "crates/ty_ide" }
ty_project = { path = "crates/ty_project", default-features = false }
ty_python_semantic = { path = "crates/ty_python_semantic" }
@@ -83,7 +85,7 @@ etcetera = { version = "0.10.0" }
fern = { version = "0.7.0" }
filetime = { version = "0.2.23" }
getrandom = { version = "0.3.1" }
get-size2 = { version = "0.6.0", features = [
get-size2 = { version = "0.6.2", features = [
"derive",
"smallvec",
"hashbrown",
@@ -141,7 +143,12 @@ regex-automata = { version = "0.4.9" }
rustc-hash = { version = "2.0.0" }
rustc-stable-hash = { version = "0.1.2" }
# When updating salsa, make sure to also update the revision in `fuzz/Cargo.toml`
salsa = { git = "https://github.com/salsa-rs/salsa", rev = "dba66f1a37acca014c2402f231ed5b361bd7d8fe" }
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "a3ffa22cb26756473d56f867aedec3fd907c4dd9", default-features = false, features = [
"compact_str",
"macros",
"salsa_unstable",
"inventory",
] }
schemars = { version = "0.8.16" }
seahash = { version = "4.1.0" }
serde = { version = "1.0.197", features = ["derive"] }
@@ -208,6 +215,8 @@ unexpected_cfgs = { level = "warn", check-cfg = [
[workspace.lints.clippy]
pedantic = { level = "warn", priority = -2 }
# Enabled at the crate level
disallowed_methods = "allow"
# Allowed pedantic lints
char_lit_as_u8 = "allow"
collapsible_else_if = "allow"
@@ -246,6 +255,7 @@ unused_peekable = "warn"
# Diagnostics are not actionable: Enable once https://github.com/rust-lang/rust-clippy/issues/13774 is resolved.
large_stack_arrays = "allow"
[profile.release]
# Note that we set these explicitly, and these values
# were chosen based on a trade-off between compile times


@@ -148,8 +148,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
# For a specific version.
curl -LsSf https://astral.sh/ruff/0.12.5/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.12.5/install.ps1 | iex"
curl -LsSf https://astral.sh/ruff/0.12.11/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.12.11/install.ps1 | iex"
```
You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
@@ -182,7 +182,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.12.5
rev: v0.12.11
hooks:
# Run the linter.
- id: ruff-check


@@ -24,3 +24,20 @@ ignore-interior-mutability = [
# The expression is read-only.
"ruff_python_ast::hashable::HashableExpr",
]
disallowed-methods = [
{ path = "std::env::var", reason = "Use System::env_var instead in ty crates" },
{ path = "std::env::current_dir", reason = "Use System::current_directory instead in ty crates" },
{ path = "std::fs::read_to_string", reason = "Use System::read_to_string instead in ty crates" },
{ path = "std::fs::metadata", reason = "Use System::path_metadata instead in ty crates" },
{ path = "std::fs::canonicalize", reason = "Use System::canonicalize_path instead in ty crates" },
{ path = "dunce::canonicalize", reason = "Use System::canonicalize_path instead in ty crates" },
{ path = "std::fs::read_dir", reason = "Use System::read_directory instead in ty crates" },
{ path = "std::fs::write", reason = "Use WritableSystem::write_file instead in ty crates" },
{ path = "std::fs::create_dir_all", reason = "Use WritableSystem::create_directory_all instead in ty crates" },
{ path = "std::fs::File::create_new", reason = "Use WritableSystem::create_new_file instead in ty crates" },
# Path methods that have System trait equivalents
{ path = "std::path::Path::exists", reason = "Use System::path_exists instead in ty crates" },
{ path = "std::path::Path::is_dir", reason = "Use System::is_directory instead in ty crates" },
{ path = "std::path::Path::is_file", reason = "Use System::is_file instead in ty crates" },
]


@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.12.5"
version = "0.12.11"
publish = true
authors = { workspace = true }
edition = { workspace = true }
@@ -85,7 +85,7 @@ dist = true
[target.'cfg(target_os = "windows")'.dependencies]
mimalloc = { workspace = true }
[target.'cfg(all(not(target_os = "windows"), not(target_os = "openbsd"), not(target_os = "aix"), not(target_os = "android"), any(target_arch = "x86_64", target_arch = "aarch64", target_arch = "powerpc64")))'.dependencies]
[target.'cfg(all(not(target_os = "windows"), not(target_os = "openbsd"), not(target_os = "aix"), not(target_os = "android"), any(target_arch = "x86_64", target_arch = "aarch64", target_arch = "powerpc64", target_arch = "riscv64")))'.dependencies]
tikv-jemallocator = { workspace = true }
[lints]


@@ -13,25 +13,16 @@ use itertools::Itertools;
use log::{debug, error};
use rayon::iter::ParallelIterator;
use rayon::iter::{IntoParallelIterator, ParallelBridge};
use ruff_linter::codes::Rule;
use rustc_hash::FxHashMap;
use tempfile::NamedTempFile;
use ruff_cache::{CacheKey, CacheKeyHasher};
use ruff_db::diagnostic::Diagnostic;
use ruff_diagnostics::Fix;
use ruff_linter::message::create_lint_diagnostic;
use ruff_linter::package::PackageRoot;
use ruff_linter::{VERSION, warn_user};
use ruff_macros::CacheKey;
use ruff_notebook::NotebookIndex;
use ruff_source_file::SourceFileBuilder;
use ruff_text_size::{TextRange, TextSize};
use ruff_workspace::Settings;
use ruff_workspace::resolver::Resolver;
use crate::diagnostics::Diagnostics;
/// [`Path`] that is relative to the package root in [`PackageCache`].
pub(crate) type RelativePath = Path;
/// [`PathBuf`] that is relative to the package root in [`PackageCache`].
@@ -298,13 +289,8 @@ impl Cache {
});
}
pub(crate) fn update_lint(
&self,
path: RelativePathBuf,
key: &FileCacheKey,
data: LintCacheData,
) {
self.update(path, key, ChangeData::Lint(data));
pub(crate) fn set_linted(&self, path: RelativePathBuf, key: &FileCacheKey, yes: bool) {
self.update(path, key, ChangeData::Linted(yes));
}
pub(crate) fn set_formatted(&self, path: RelativePathBuf, key: &FileCacheKey) {
@@ -339,42 +325,15 @@ pub(crate) struct FileCache {
}
impl FileCache {
/// Convert the file cache into `Diagnostics`, using `path` as file name.
pub(crate) fn to_diagnostics(&self, path: &Path) -> Option<Diagnostics> {
self.data.lint.as_ref().map(|lint| {
let diagnostics = if lint.messages.is_empty() {
Vec::new()
} else {
let file = SourceFileBuilder::new(path.to_string_lossy(), &*lint.source).finish();
lint.messages
.iter()
.map(|msg| {
create_lint_diagnostic(
&msg.body,
msg.suggestion.as_ref(),
msg.range,
msg.fix.clone(),
msg.parent,
file.clone(),
msg.noqa_offset,
msg.rule,
)
})
.collect()
};
let notebook_indexes = if let Some(notebook_index) = lint.notebook_index.as_ref() {
FxHashMap::from_iter([(path.to_string_lossy().to_string(), notebook_index.clone())])
} else {
FxHashMap::default()
};
Diagnostics::new(diagnostics, notebook_indexes)
})
/// Return whether or not the file in the cache was linted and found to have no diagnostics.
pub(crate) fn linted(&self) -> bool {
self.data.linted
}
}
#[derive(Debug, Default, bincode::Decode, bincode::Encode)]
struct FileCacheData {
lint: Option<LintCacheData>,
linted: bool,
formatted: bool,
}
@@ -410,88 +369,6 @@ pub(crate) fn init(path: &Path) -> Result<()> {
Ok(())
}
#[derive(bincode::Decode, Debug, bincode::Encode, PartialEq)]
pub(crate) struct LintCacheData {
/// Imports made.
// pub(super) imports: ImportMap,
/// Diagnostic messages.
pub(super) messages: Vec<CacheMessage>,
/// Source code of the file.
///
/// # Notes
///
/// This will be empty if `messages` is empty.
pub(super) source: String,
/// Notebook index if this file is a Jupyter Notebook.
#[bincode(with_serde)]
pub(super) notebook_index: Option<NotebookIndex>,
}
impl LintCacheData {
pub(crate) fn from_diagnostics(
diagnostics: &[Diagnostic],
notebook_index: Option<NotebookIndex>,
) -> Self {
let source = if let Some(msg) = diagnostics.first() {
msg.expect_ruff_source_file().source_text().to_owned()
} else {
String::new() // No messages, no need to keep the source!
};
let messages = diagnostics
.iter()
// Parse the kebab-case rule name into a `Rule`. This will fail for syntax errors, so
// this also serves to filter them out, but we shouldn't be caching files with syntax
// errors anyway.
.filter_map(|msg| Some((msg.name().parse().ok()?, msg)))
.map(|(rule, msg)| {
// Make sure that all messages use the same source file.
assert_eq!(
msg.expect_ruff_source_file(),
diagnostics.first().unwrap().expect_ruff_source_file(),
"message uses a different source file"
);
CacheMessage {
rule,
body: msg.body().to_string(),
suggestion: msg.first_help_text().map(ToString::to_string),
range: msg.expect_range(),
parent: msg.parent(),
fix: msg.fix().cloned(),
noqa_offset: msg.noqa_offset(),
}
})
.collect();
Self {
messages,
source,
notebook_index,
}
}
}
/// On disk representation of a diagnostic message.
#[derive(bincode::Decode, Debug, bincode::Encode, PartialEq)]
pub(super) struct CacheMessage {
/// The rule for the cached diagnostic.
#[bincode(with_serde)]
rule: Rule,
/// The message body to display to the user, to explain the diagnostic.
body: String,
/// The message to display to the user, to explain the suggested fix.
suggestion: Option<String>,
/// Range into the message's [`FileCache::source`].
#[bincode(with_serde)]
range: TextRange,
#[bincode(with_serde)]
parent: Option<TextSize>,
#[bincode(with_serde)]
fix: Option<Fix>,
#[bincode(with_serde)]
noqa_offset: Option<TextSize>,
}
pub(crate) trait PackageCaches {
fn get(&self, package_root: &Path) -> Option<&Cache>;
@@ -579,15 +456,15 @@ struct Change {
#[derive(Debug)]
enum ChangeData {
Lint(LintCacheData),
Linted(bool),
Formatted,
}
impl ChangeData {
fn apply(self, data: &mut FileCacheData) {
match self {
ChangeData::Lint(new_lint) => {
data.lint = Some(new_lint);
ChangeData::Linted(yes) => {
data.linted = yes;
}
ChangeData::Formatted => {
data.formatted = true;
@@ -612,7 +489,6 @@ mod tests {
use test_case::test_case;
use ruff_cache::CACHE_DIR_NAME;
use ruff_db::diagnostic::Diagnostic;
use ruff_linter::package::PackageRoot;
use ruff_linter::settings::LinterSettings;
use ruff_linter::settings::flags;
@@ -620,7 +496,7 @@ mod tests {
use ruff_python_ast::{PySourceType, PythonVersion};
use ruff_workspace::Settings;
use crate::cache::{self, FileCache, FileCacheData, FileCacheKey};
use crate::cache::{self, ChangeData, FileCache, FileCacheData, FileCacheKey};
use crate::cache::{Cache, RelativePathBuf};
use crate::commands::format::{FormatCommandError, FormatMode, FormatResult, format_path};
use crate::diagnostics::{Diagnostics, lint_path};
@@ -647,7 +523,7 @@ mod tests {
assert_eq!(cache.changes.lock().unwrap().len(), 0);
let mut paths = Vec::new();
let mut parse_errors = Vec::new();
let mut paths_with_diagnostics = Vec::new();
let mut expected_diagnostics = Diagnostics::default();
for entry in fs::read_dir(&package_root).unwrap() {
let entry = entry.unwrap();
@@ -671,7 +547,7 @@ mod tests {
continue;
}
let diagnostics = lint_path(
let mut diagnostics = lint_path(
&path,
Some(PackageRoot::root(&package_root)),
&settings.linter,
@@ -681,8 +557,15 @@ mod tests {
UnsafeFixes::Enabled,
)
.unwrap();
if diagnostics.inner.iter().any(Diagnostic::is_invalid_syntax) {
parse_errors.push(path.clone());
if diagnostics.inner.is_empty() {
// We won't load a notebook index from the cache for files without diagnostics,
// so remove them from `expected_diagnostics` too. This allows us to keep the
// full equality assertion below.
diagnostics
.notebook_indexes
.remove(&path.to_string_lossy().to_string());
} else {
paths_with_diagnostics.push(path.clone());
}
paths.push(path);
expected_diagnostics += diagnostics;
@@ -695,11 +578,11 @@ mod tests {
let cache = Cache::open(package_root.clone(), &settings);
assert_ne!(cache.package.files.len(), 0);
parse_errors.sort();
paths_with_diagnostics.sort();
for path in &paths {
if parse_errors.binary_search(path).is_ok() {
continue; // We don't cache parsing errors.
if paths_with_diagnostics.binary_search(path).is_ok() {
continue; // We don't cache files with diagnostics.
}
let relative_path = cache.relative_path(path).unwrap();
@@ -733,7 +616,7 @@ mod tests {
#[test]
fn cache_adds_file_on_lint() {
let source: &[u8] = b"a = 1\n\n__all__ = list([\"a\", \"b\"])\n";
let source: &[u8] = b"a = 1\n\n__all__ = list([\"a\"])\n";
let test_cache = TestCache::new("cache_adds_file_on_lint");
let cache = test_cache.open();
@@ -757,7 +640,7 @@ mod tests {
#[test]
fn cache_adds_files_on_lint() {
let source: &[u8] = b"a = 1\n\n__all__ = list([\"a\", \"b\"])\n";
let source: &[u8] = b"a = 1\n\n__all__ = list([\"a\"])\n";
let test_cache = TestCache::new("cache_adds_files_on_lint");
let cache = test_cache.open();
@@ -782,6 +665,40 @@ mod tests {
cache.persist().unwrap();
}
#[test]
fn cache_does_not_add_file_on_lint_with_diagnostic() {
let source: &[u8] = b"a = 1\n\n__all__ = list([\"a\", \"b\"])\n";
let test_cache = TestCache::new("cache_does_not_add_file_on_lint_with_diagnostic");
let cache = test_cache.open();
test_cache.write_source_file("source.py", source);
assert_eq!(cache.changes.lock().unwrap().len(), 0);
cache.persist().unwrap();
let cache = test_cache.open();
let results = test_cache
.lint_file_with_cache("source.py", &cache)
.expect("Failed to lint test file");
assert_eq!(results.inner.len(), 1, "Expected one F822 diagnostic");
assert_eq!(
cache.changes.lock().unwrap().len(),
1,
"Files with diagnostics still trigger change events"
);
assert!(
cache
.changes
.lock()
.unwrap()
.last()
.is_some_and(|change| matches!(change.new_data, ChangeData::Linted(false))),
"Files with diagnostics are marked as unlinted"
);
cache.persist().unwrap();
}
#[test]
fn cache_adds_files_on_format() {
let source: &[u8] = b"a = 1\n\n__all__ = list([\"a\", \"b\"])\n";
@@ -812,7 +729,7 @@ mod tests {
#[test]
fn cache_invalidated_on_file_modified_time() {
let source: &[u8] = b"a = 1\n\n__all__ = list([\"a\", \"b\"])\n";
let source: &[u8] = b"a = 1\n\n__all__ = list([\"a\"])\n";
let test_cache = TestCache::new("cache_invalidated_on_file_modified_time");
let cache = test_cache.open();
@@ -869,7 +786,7 @@ mod tests {
file.set_permissions(perms)
}
let source: &[u8] = b"a = 1\n\n__all__ = list([\"a\", \"b\"])\n";
let source: &[u8] = b"a = 1\n\n__all__ = list([\"a\"])\n";
let test_cache = TestCache::new("cache_invalidated_on_permission_change");
let cache = test_cache.open();
@@ -922,7 +839,7 @@ mod tests {
);
// Now actually lint a file.
let source: &[u8] = b"a = 1\n\n__all__ = list([\"a\", \"b\"])\n";
let source: &[u8] = b"a = 1\n\n__all__ = list([\"a\"])\n";
test_cache.write_source_file("new.py", source);
let new_path_key = RelativePathBuf::from("new.py");
assert_eq!(cache.changes.lock().unwrap().len(), 0);
@@ -945,7 +862,7 @@ mod tests {
#[test]
fn format_updates_cache_entry() {
let source: &[u8] = b"a = 1\n\n__all__ = list([\"a\", \"b\"])\n";
let source: &[u8] = b"a = 1\n\n__all__ = list([\"a\"])\n";
let test_cache = TestCache::new("format_updates_cache_entry");
let cache = test_cache.open();
@@ -979,7 +896,7 @@ mod tests {
panic!("Cache entry for `source.py` is missing.");
};
assert!(file_cache.data.lint.is_some());
assert!(file_cache.data.linted);
assert!(file_cache.data.formatted);
}
@@ -1029,7 +946,7 @@ mod tests {
panic!("Cache entry for `source.py` is missing.");
};
assert_eq!(file_cache.data.lint, None);
assert!(!file_cache.data.linted);
assert!(file_cache.data.formatted);
}


@@ -20,15 +20,21 @@ use ruff_linter::settings::types::UnsafeFixes;
use ruff_linter::settings::{LinterSettings, flags};
use ruff_linter::source_kind::{SourceError, SourceKind};
use ruff_linter::{IOError, Violation, fs};
use ruff_notebook::{Notebook, NotebookError, NotebookIndex};
use ruff_notebook::{NotebookError, NotebookIndex};
use ruff_python_ast::{PySourceType, SourceType, TomlSourceType};
use ruff_source_file::SourceFileBuilder;
use ruff_text_size::TextRange;
use ruff_workspace::Settings;
use rustc_hash::FxHashMap;
use crate::cache::{Cache, FileCacheKey, LintCacheData};
use crate::cache::{Cache, FileCache, FileCacheKey};
/// A collection of [`Diagnostic`]s and additional information needed to render them.
///
/// Note that `notebook_indexes` may be empty if there are no diagnostics because the
/// `NotebookIndex` isn't cached in this case. This isn't a problem for any current uses as of
/// 2025-08-12, which are all related to diagnostic rendering, but could be surprising if used
/// differently in the future.
#[derive(Debug, Default, PartialEq)]
pub(crate) struct Diagnostics {
pub(crate) inner: Vec<Diagnostic>,
@@ -193,19 +199,9 @@ pub(crate) fn lint_path(
let cache_key = FileCacheKey::from_path(path).context("Failed to create cache key")?;
let cached_diagnostics = cache
.get(relative_path, &cache_key)
.and_then(|entry| entry.to_diagnostics(path));
if let Some(diagnostics) = cached_diagnostics {
// `FixMode::Apply` and `FixMode::Diff` rely on side-effects (writing to disk,
// and writing the diff to stdout, respectively). If a file has diagnostics, we
// need to avoid reading from and writing to the cache in these modes.
if match fix_mode {
flags::FixMode::Generate => true,
flags::FixMode::Apply | flags::FixMode::Diff => {
diagnostics.inner.is_empty() && diagnostics.fixed.is_empty()
}
} {
return Ok(diagnostics);
}
.is_some_and(FileCache::linted);
if cached_diagnostics {
return Ok(Diagnostics::default());
}
// Stash the file metadata for later so when we update the cache it reflects the prerun
@@ -322,31 +318,21 @@ pub(crate) fn lint_path(
(result, transformed, fixed)
};
let has_error = result.has_syntax_errors();
let diagnostics = result.diagnostics;
if let Some((cache, relative_path, key)) = caching {
// We don't cache parsing errors.
if !has_error {
// `FixMode::Apply` and `FixMode::Diff` rely on side-effects (writing to disk,
// and writing the diff to stdout, respectively). If a file has diagnostics, we
// need to avoid reading from and writing to the cache in these modes.
if match fix_mode {
flags::FixMode::Generate => true,
flags::FixMode::Apply | flags::FixMode::Diff => {
diagnostics.is_empty() && fixed.is_empty()
}
} {
cache.update_lint(
relative_path.to_owned(),
&key,
LintCacheData::from_diagnostics(
&diagnostics,
transformed.as_ipy_notebook().map(Notebook::index).cloned(),
),
);
}
}
// `FixMode::Apply` and `FixMode::Diff` rely on side-effects (writing to disk,
// and writing the diff to stdout, respectively). If a file has diagnostics
// with fixes, we need to avoid reading from and writing to the cache in these
// modes.
let use_fixes = match fix_mode {
flags::FixMode::Generate => true,
flags::FixMode::Apply | flags::FixMode::Diff => fixed.is_empty(),
};
// We don't cache files with diagnostics.
let linted = diagnostics.is_empty() && use_fixes;
cache.set_linted(relative_path.to_owned(), &key, linted);
}
let notebook_indexes = if let SourceKind::IpyNotebook(notebook) = transformed {


@@ -19,7 +19,8 @@ static GLOBAL: mimalloc::MiMalloc = mimalloc::MiMalloc;
any(
target_arch = "x86_64",
target_arch = "aarch64",
target_arch = "powerpc64"
target_arch = "powerpc64",
target_arch = "riscv64"
)
))]
#[global_allocator]


@@ -15,8 +15,7 @@ use ruff_db::diagnostic::{
use ruff_linter::fs::relativize_path;
use ruff_linter::logging::LogLevel;
use ruff_linter::message::{
Emitter, EmitterContext, GithubEmitter, GitlabEmitter, GroupedEmitter, SarifEmitter,
TextEmitter,
Emitter, EmitterContext, GithubEmitter, GroupedEmitter, SarifEmitter, TextEmitter,
};
use ruff_linter::notify_user;
use ruff_linter::settings::flags::{self};
@@ -296,7 +295,11 @@ impl Printer {
GithubEmitter.emit(writer, &diagnostics.inner, &context)?;
}
OutputFormat::Gitlab => {
GitlabEmitter::default().emit(writer, &diagnostics.inner, &context)?;
let config = DisplayDiagnosticConfig::default()
.format(DiagnosticFormat::Gitlab)
.preview(preview);
let value = DisplayDiagnostics::new(&context, &config, &diagnostics.inner);
write!(writer, "{value}")?;
}
OutputFormat::Pylint => {
let config = DisplayDiagnosticConfig::default()


@@ -115,12 +115,13 @@ fn stdin_error() {
success: false
exit_code: 1
----- stdout -----
-:1:8: F401 [*] `os` imported but unused
F401 [*] `os` imported but unused
--> -:1:8
|
1 | import os
| ^^ F401
| ^^
|
= help: Remove unused import: `os`
help: Remove unused import: `os`
Found 1 error.
[*] 1 fixable with the `--fix` option.
@@ -139,12 +140,13 @@ fn stdin_filename() {
success: false
exit_code: 1
----- stdout -----
F401.py:1:8: F401 [*] `os` imported but unused
F401 [*] `os` imported but unused
--> F401.py:1:8
|
1 | import os
| ^^ F401
| ^^
|
= help: Remove unused import: `os`
help: Remove unused import: `os`
Found 1 error.
[*] 1 fixable with the `--fix` option.
@@ -174,19 +176,21 @@ import bar # unused import
success: false
exit_code: 1
----- stdout -----
bar.py:2:8: F401 [*] `bar` imported but unused
F401 [*] `bar` imported but unused
--> bar.py:2:8
|
2 | import bar # unused import
| ^^^ F401
| ^^^
|
= help: Remove unused import: `bar`
help: Remove unused import: `bar`
foo.py:2:8: F401 [*] `foo` imported but unused
F401 [*] `foo` imported but unused
--> foo.py:2:8
|
2 | import foo # unused import
| ^^^ F401
| ^^^
|
= help: Remove unused import: `foo`
help: Remove unused import: `foo`
Found 2 errors.
[*] 2 fixable with the `--fix` option.
@@ -208,12 +212,13 @@ fn check_warn_stdin_filename_with_files() {
success: false
exit_code: 1
----- stdout -----
F401.py:1:8: F401 [*] `os` imported but unused
F401 [*] `os` imported but unused
--> F401.py:1:8
|
1 | import os
| ^^ F401
| ^^
|
= help: Remove unused import: `os`
help: Remove unused import: `os`
Found 1 error.
[*] 1 fixable with the `--fix` option.
@@ -234,12 +239,13 @@ fn stdin_source_type_py() {
success: false
exit_code: 1
----- stdout -----
TCH.py:1:8: F401 [*] `os` imported but unused
F401 [*] `os` imported but unused
--> TCH.py:1:8
|
1 | import os
| ^^ F401
| ^^
|
= help: Remove unused import: `os`
help: Remove unused import: `os`
Found 1 error.
[*] 1 fixable with the `--fix` option.
@@ -471,10 +477,11 @@ fn stdin_fix_jupyter() {
"nbformat_minor": 5
}
----- stderr -----
Jupyter.ipynb:cell 3:1:7: F821 Undefined name `x`
F821 Undefined name `x`
--> Jupyter.ipynb:cell 3:1:7
|
1 | print(x)
| ^ F821
| ^
|
Found 3 errors (2 fixed, 1 remaining).
@@ -569,19 +576,21 @@ fn stdin_override_parser_ipynb() {
success: false
exit_code: 1
----- stdout -----
Jupyter.py:cell 1:1:8: F401 [*] `os` imported but unused
F401 [*] `os` imported but unused
--> Jupyter.py:cell 1:1:8
|
1 | import os
| ^^ F401
| ^^
|
= help: Remove unused import: `os`
help: Remove unused import: `os`
Jupyter.py:cell 3:1:8: F401 [*] `sys` imported but unused
F401 [*] `sys` imported but unused
--> Jupyter.py:cell 3:1:8
|
1 | import sys
| ^^^ F401
| ^^^
|
= help: Remove unused import: `sys`
help: Remove unused import: `sys`
Found 2 errors.
[*] 2 fixable with the `--fix` option.
@@ -605,12 +614,13 @@ fn stdin_override_parser_py() {
success: false
exit_code: 1
----- stdout -----
F401.ipynb:1:8: F401 [*] `os` imported but unused
F401 [*] `os` imported but unused
--> F401.ipynb:1:8
|
1 | import os
| ^^ F401
| ^^
|
= help: Remove unused import: `os`
help: Remove unused import: `os`
Found 1 error.
[*] 1 fixable with the `--fix` option.
@@ -633,12 +643,13 @@ fn stdin_fix_when_not_fixable_should_still_print_contents() {
print(sys.version)
----- stderr -----
-:3:4: F634 If test is a tuple, which is always `True`
F634 If test is a tuple, which is always `True`
--> -:3:4
|
1 | import sys
2 |
3 | if (1, 2):
| ^^^^^^ F634
| ^^^^^^
4 | print(sys.version)
|
@@ -798,7 +809,8 @@ fn stdin_parse_error() {
success: false
exit_code: 1
----- stdout -----
-:1:16: SyntaxError: Expected one or more symbol names after import
invalid-syntax: Expected one or more symbol names after import
--> -:1:16
|
1 | from foo import
| ^
@@ -818,14 +830,16 @@ fn stdin_multiple_parse_error() {
success: false
exit_code: 1
----- stdout -----
-:1:16: SyntaxError: Expected one or more symbol names after import
invalid-syntax: Expected one or more symbol names after import
--> -:1:16
|
1 | from foo import
| ^
2 | bar =
|
-:2:6: SyntaxError: Expected an expression
invalid-syntax: Expected an expression
--> -:2:6
|
1 | from foo import
2 | bar =
@@ -847,7 +861,8 @@ fn parse_error_not_included() {
success: false
exit_code: 1
----- stdout -----
-:1:6: SyntaxError: Expected an expression
invalid-syntax: Expected an expression
--> -:1:6
|
1 | foo =
| ^
@@ -867,10 +882,11 @@ fn full_output_preview() {
success: false
exit_code: 1
----- stdout -----
-:1:1: E741 Ambiguous variable name: `l`
E741 Ambiguous variable name: `l`
--> -:1:1
|
1 | l = 1
| ^ E741
| ^
|
Found 1 error.
@@ -895,10 +911,11 @@ preview = true
success: false
exit_code: 1
----- stdout -----
-:1:1: E741 Ambiguous variable name: `l`
E741 Ambiguous variable name: `l`
--> -:1:1
|
1 | l = 1
| ^ E741
| ^
|
Found 1 error.
@@ -916,10 +933,11 @@ fn full_output_format() {
success: false
exit_code: 1
----- stdout -----
-:1:1: E741 Ambiguous variable name: `l`
E741 Ambiguous variable name: `l`
--> -:1:1
|
1 | l = 1
| ^ E741
| ^
|
Found 1 error.
@@ -1406,7 +1424,9 @@ fn redirect_direct() {
success: false
exit_code: 1
----- stdout -----
-:1:1: RUF950 Hey this is a test rule that was redirected from another.
RUF950 Hey this is a test rule that was redirected from another.
--> -:1:1
Found 1 error.
----- stderr -----
@@ -1438,7 +1458,9 @@ fn redirect_prefix() {
success: false
exit_code: 1
----- stdout -----
-:1:1: RUF950 Hey this is a test rule that was redirected from another.
RUF950 Hey this is a test rule that was redirected from another.
--> -:1:1
Found 1 error.
----- stderr -----
@@ -1455,7 +1477,9 @@ fn deprecated_direct() {
success: false
exit_code: 1
----- stdout -----
-:1:1: RUF920 Hey this is a deprecated test rule.
RUF920 Hey this is a deprecated test rule.
--> -:1:1
Found 1 error.
----- stderr -----
@@ -1472,8 +1496,12 @@ fn deprecated_multiple_direct() {
success: false
exit_code: 1
----- stdout -----
-:1:1: RUF920 Hey this is a deprecated test rule.
-:1:1: RUF921 Hey this is another deprecated test rule.
RUF920 Hey this is a deprecated test rule.
--> -:1:1
RUF921 Hey this is another deprecated test rule.
--> -:1:1
Found 2 errors.
----- stderr -----
@@ -1491,8 +1519,12 @@ fn deprecated_indirect() {
success: false
exit_code: 1
----- stdout -----
-:1:1: RUF920 Hey this is a deprecated test rule.
-:1:1: RUF921 Hey this is another deprecated test rule.
RUF920 Hey this is a deprecated test rule.
--> -:1:1
RUF921 Hey this is another deprecated test rule.
--> -:1:1
Found 2 errors.
----- stderr -----
@@ -1638,22 +1670,23 @@ fn check_input_from_argfile() -> Result<()> {
(file_a_path.display().to_string().as_str(), "/path/to/a.py"),
]}, {
assert_cmd_snapshot!(cmd
.pass_stdin(""), @r###"
.pass_stdin(""), @r"
success: false
exit_code: 1
----- stdout -----
/path/to/a.py:1:8: F401 [*] `os` imported but unused
F401 [*] `os` imported but unused
--> /path/to/a.py:1:8
|
1 | import os
| ^^ F401
| ^^
|
= help: Remove unused import: `os`
help: Remove unused import: `os`
Found 1 error.
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
");
});
Ok(())
@@ -1669,8 +1702,12 @@ fn check_hints_hidden_unsafe_fixes() {
success: false
exit_code: 1
----- stdout -----
-:1:1: RUF901 [*] Hey this is a stable test rule with a safe fix.
-:1:1: RUF902 Hey this is a stable test rule with an unsafe fix.
RUF901 [*] Hey this is a stable test rule with a safe fix.
--> -:1:1
RUF902 Hey this is a stable test rule with an unsafe fix.
--> -:1:1
Found 2 errors.
[*] 1 fixable with the `--fix` option (1 hidden fix can be enabled with the `--unsafe-fixes` option).
@@ -1687,7 +1724,9 @@ fn check_hints_hidden_unsafe_fixes_with_no_safe_fixes() {
success: false
exit_code: 1
----- stdout -----
-:1:1: RUF902 Hey this is a stable test rule with an unsafe fix.
RUF902 Hey this is a stable test rule with an unsafe fix.
--> -:1:1
Found 1 error.
No fixes available (1 hidden fix can be enabled with the `--unsafe-fixes` option).
@@ -1705,8 +1744,12 @@ fn check_no_hint_for_hidden_unsafe_fixes_when_disabled() {
success: false
exit_code: 1
----- stdout -----
-:1:1: RUF901 [*] Hey this is a stable test rule with a safe fix.
-:1:1: RUF902 Hey this is a stable test rule with an unsafe fix.
RUF901 [*] Hey this is a stable test rule with a safe fix.
--> -:1:1
RUF902 Hey this is a stable test rule with an unsafe fix.
--> -:1:1
Found 2 errors.
[*] 1 fixable with the --fix option.
@@ -1725,7 +1768,9 @@ fn check_no_hint_for_hidden_unsafe_fixes_with_no_safe_fixes_when_disabled() {
success: false
exit_code: 1
----- stdout -----
-:1:1: RUF902 Hey this is a stable test rule with an unsafe fix.
RUF902 Hey this is a stable test rule with an unsafe fix.
--> -:1:1
Found 1 error.
----- stderr -----
@@ -1742,8 +1787,12 @@ fn check_shows_unsafe_fixes_with_opt_in() {
success: false
exit_code: 1
----- stdout -----
-:1:1: RUF901 [*] Hey this is a stable test rule with a safe fix.
-:1:1: RUF902 [*] Hey this is a stable test rule with an unsafe fix.
RUF901 [*] Hey this is a stable test rule with a safe fix.
--> -:1:1
RUF902 [*] Hey this is a stable test rule with an unsafe fix.
--> -:1:1
Found 2 errors.
[*] 2 fixable with the --fix option.
@@ -1764,7 +1813,9 @@ fn fix_applies_safe_fixes_by_default() {
# fix from stable-test-rule-safe-fix
----- stderr -----
-:1:1: RUF902 Hey this is a stable test rule with an unsafe fix.
RUF902 Hey this is a stable test rule with an unsafe fix.
--> -:1:1
Found 2 errors (1 fixed, 1 remaining).
No fixes available (1 hidden fix can be enabled with the `--unsafe-fixes` option).
");
@@ -1801,7 +1852,9 @@ fn fix_does_not_apply_display_only_fixes() {
----- stdout -----
def add_to_list(item, some_list=[]): ...
----- stderr -----
-:1:1: RUF903 Hey this is a stable test rule with a display only fix.
RUF903 Hey this is a stable test rule with a display only fix.
--> -:1:1
Found 1 error.
");
}
@@ -1819,7 +1872,9 @@ fn fix_does_not_apply_display_only_fixes_with_unsafe_fixes_enabled() {
----- stdout -----
def add_to_list(item, some_list=[]): ...
----- stderr -----
-:1:1: RUF903 Hey this is a stable test rule with a display only fix.
RUF903 Hey this is a stable test rule with a display only fix.
--> -:1:1
Found 1 error.
");
}
@@ -1836,7 +1891,9 @@ fn fix_only_unsafe_fixes_available() {
----- stdout -----
----- stderr -----
-:1:1: RUF902 Hey this is a stable test rule with an unsafe fix.
RUF902 Hey this is a stable test rule with an unsafe fix.
--> -:1:1
Found 1 error.
No fixes available (1 hidden fix can be enabled with the `--unsafe-fixes` option).
");
@@ -1972,8 +2029,12 @@ extend-unsafe-fixes = ["RUF901"]
success: false
exit_code: 1
----- stdout -----
-:1:1: RUF901 Hey this is a stable test rule with a safe fix.
-:1:1: RUF902 Hey this is a stable test rule with an unsafe fix.
RUF901 Hey this is a stable test rule with a safe fix.
--> -:1:1
RUF902 Hey this is a stable test rule with an unsafe fix.
--> -:1:1
Found 2 errors.
No fixes available (2 hidden fixes can be enabled with the `--unsafe-fixes` option).
@@ -2004,8 +2065,12 @@ extend-safe-fixes = ["RUF902"]
success: false
exit_code: 1
----- stdout -----
-:1:1: RUF901 [*] Hey this is a stable test rule with a safe fix.
-:1:1: RUF902 [*] Hey this is a stable test rule with an unsafe fix.
RUF901 [*] Hey this is a stable test rule with a safe fix.
--> -:1:1
RUF902 [*] Hey this is a stable test rule with an unsafe fix.
--> -:1:1
Found 2 errors.
[*] 2 fixable with the `--fix` option.
@@ -2038,8 +2103,12 @@ extend-safe-fixes = ["RUF902"]
success: false
exit_code: 1
----- stdout -----
-:1:1: RUF901 [*] Hey this is a stable test rule with a safe fix.
-:1:1: RUF902 Hey this is a stable test rule with an unsafe fix.
RUF901 [*] Hey this is a stable test rule with a safe fix.
--> -:1:1
RUF902 Hey this is a stable test rule with an unsafe fix.
--> -:1:1
Found 2 errors.
[*] 1 fixable with the `--fix` option (1 hidden fix can be enabled with the `--unsafe-fixes` option).
@@ -2074,13 +2143,27 @@ extend-safe-fixes = ["RUF9"]
success: false
exit_code: 1
----- stdout -----
-:1:1: RUF900 Hey this is a stable test rule.
-:1:1: RUF901 Hey this is a stable test rule with a safe fix.
-:1:1: RUF902 [*] Hey this is a stable test rule with an unsafe fix.
-:1:1: RUF903 Hey this is a stable test rule with a display only fix.
-:1:1: RUF920 Hey this is a deprecated test rule.
-:1:1: RUF921 Hey this is another deprecated test rule.
-:1:1: RUF950 Hey this is a test rule that was redirected from another.
RUF900 Hey this is a stable test rule.
--> -:1:1
RUF901 Hey this is a stable test rule with a safe fix.
--> -:1:1
RUF902 [*] Hey this is a stable test rule with an unsafe fix.
--> -:1:1
RUF903 Hey this is a stable test rule with a display only fix.
--> -:1:1
RUF920 Hey this is a deprecated test rule.
--> -:1:1
RUF921 Hey this is another deprecated test rule.
--> -:1:1
RUF950 Hey this is a test rule that was redirected from another.
--> -:1:1
Found 7 errors.
[*] 1 fixable with the `--fix` option (1 hidden fix can be enabled with the `--unsafe-fixes` option).
@@ -2141,10 +2224,11 @@ def log(x, base) -> float:
success: false
exit_code: 1
----- stdout -----
-:2:5: D417 Missing argument description in the docstring for `log`: `base`
D417 Missing argument description in the docstring for `log`: `base`
--> -:2:5
|
2 | def log(x, base) -> float:
| ^^^ D417
| ^^^
3 | """Calculate natural log of a value
|
@@ -2177,14 +2261,15 @@ select = ["RUF017"]
success: false
exit_code: 1
----- stdout -----
-:3:1: RUF017 Avoid quadratic list summation
RUF017 Avoid quadratic list summation
--> -:3:1
|
1 | x = [1, 2, 3]
2 | y = [4, 5, 6]
3 | sum([x, y], [])
| ^^^^^^^^^^^^^^^ RUF017
| ^^^^^^^^^^^^^^^
|
= help: Replace with `functools.reduce`
help: Replace with `functools.reduce`
Found 1 error.
No fixes available (1 hidden fix can be enabled with the `--unsafe-fixes` option).
@@ -2217,14 +2302,15 @@ unfixable = ["RUF"]
success: false
exit_code: 1
----- stdout -----
-:3:1: RUF017 Avoid quadratic list summation
RUF017 Avoid quadratic list summation
--> -:3:1
|
1 | x = [1, 2, 3]
2 | y = [4, 5, 6]
3 | sum([x, y], [])
| ^^^^^^^^^^^^^^^ RUF017
| ^^^^^^^^^^^^^^^
|
= help: Replace with `functools.reduce`
help: Replace with `functools.reduce`
Found 1 error.
@@ -2246,10 +2332,11 @@ fn pyproject_toml_stdin_syntax_error() {
success: false
exit_code: 1
----- stdout -----
pyproject.toml:1:9: RUF200 Failed to parse pyproject.toml: unclosed table, expected `]`
RUF200 Failed to parse pyproject.toml: unclosed table, expected `]`
--> pyproject.toml:1:9
|
1 | [project
| ^ RUF200
| ^
|
Found 1 error.
@@ -2271,11 +2358,12 @@ fn pyproject_toml_stdin_schema_error() {
success: false
exit_code: 1
----- stdout -----
pyproject.toml:2:8: RUF200 Failed to parse pyproject.toml: invalid type: integer `1`, expected a string
RUF200 Failed to parse pyproject.toml: invalid type: integer `1`, expected a string
--> pyproject.toml:2:8
|
1 | [project]
2 | name = 1
| ^ RUF200
| ^
|
Found 1 error.
@@ -2363,11 +2451,12 @@ fn pyproject_toml_stdin_schema_error_fix() {
[project]
name = 1
----- stderr -----
pyproject.toml:2:8: RUF200 Failed to parse pyproject.toml: invalid type: integer `1`, expected a string
RUF200 Failed to parse pyproject.toml: invalid type: integer `1`, expected a string
--> pyproject.toml:2:8
|
1 | [project]
2 | name = 1
| ^ RUF200
| ^
|
Found 1 error.

View File

@@ -4996,6 +4996,37 @@ fn flake8_import_convention_invalid_aliases_config_module_name() -> Result<()> {
Ok(())
}
#[test]
fn flake8_import_convention_nfkc_normalization() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(
&ruff_toml,
r#"
[lint.flake8-import-conventions.aliases]
"test.module" = "_𝘥𝘦𝘣𝘶𝘨"
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_toml)
, @r"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: Invalid alias for module 'test.module': alias normalizes to '__debug__', which is not allowed.
");});
Ok(())
}
#[test]
fn flake8_import_convention_unused_aliased_import() {
assert_cmd_snapshot!(
@@ -5389,7 +5420,7 @@ fn walrus_before_py38() {
success: false
exit_code: 1
----- stdout -----
test.py:1:2: SyntaxError: Cannot use named assignment expression (`:=`) on Python 3.7 (syntax was added in Python 3.8)
test.py:1:2: invalid-syntax: Cannot use named assignment expression (`:=`) on Python 3.7 (syntax was added in Python 3.8)
Found 1 error.
----- stderr -----
@@ -5435,15 +5466,15 @@ match 2:
print("it's one")
"#
),
@r###"
@r"
success: false
exit_code: 1
----- stdout -----
test.py:2:1: SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
test.py:2:1: invalid-syntax: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
Found 1 error.
----- stderr -----
"###
"
);
// syntax error on 3.9 with preview
@@ -5464,7 +5495,7 @@ match 2:
success: false
exit_code: 1
----- stdout -----
test.py:2:1: SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
test.py:2:1: invalid-syntax: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
Found 1 error.
----- stderr -----
@@ -5492,7 +5523,7 @@ fn cache_syntax_errors() -> Result<()> {
success: false
exit_code: 1
----- stdout -----
main.py:1:1: SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
main.py:1:1: invalid-syntax: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
----- stderr -----
"
@@ -5505,7 +5536,7 @@ fn cache_syntax_errors() -> Result<()> {
success: false
exit_code: 1
----- stdout -----
main.py:1:1: SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
main.py:1:1: invalid-syntax: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
----- stderr -----
"
@@ -5557,15 +5588,15 @@ fn cookiecutter_globbing() -> Result<()> {
.args(STDIN_BASE_OPTIONS)
.arg("--select=F811")
.current_dir(tempdir.path()), @r"
success: false
exit_code: 1
----- stdout -----
{{cookiecutter.repo_name}}/tests/maintest.py:3:8: F811 [*] Redefinition of unused `foo` from line 1
Found 1 error.
[*] 1 fixable with the `--fix` option.
success: false
exit_code: 1
----- stdout -----
{{cookiecutter.repo_name}}/tests/maintest.py:3:8: F811 [*] Redefinition of unused `foo` from line 1: `foo` redefined here
Found 1 error.
[*] 1 fixable with the `--fix` option.
----- stderr -----
");
----- stderr -----
");
});
Ok(())
@@ -5618,7 +5649,7 @@ fn semantic_syntax_errors() -> Result<()> {
success: false
exit_code: 1
----- stdout -----
main.py:1:3: SyntaxError: assignment expression cannot rebind comprehension variable
main.py:1:3: invalid-syntax: assignment expression cannot rebind comprehension variable
main.py:1:20: F821 Undefined name `foo`
----- stderr -----
@@ -5632,7 +5663,7 @@ fn semantic_syntax_errors() -> Result<()> {
success: false
exit_code: 1
----- stdout -----
main.py:1:3: SyntaxError: assignment expression cannot rebind comprehension variable
main.py:1:3: invalid-syntax: assignment expression cannot rebind comprehension variable
main.py:1:20: F821 Undefined name `foo`
----- stderr -----
@@ -5651,7 +5682,7 @@ fn semantic_syntax_errors() -> Result<()> {
success: false
exit_code: 1
----- stdout -----
-:1:3: SyntaxError: assignment expression cannot rebind comprehension variable
-:1:3: invalid-syntax: assignment expression cannot rebind comprehension variable
Found 1 error.
----- stderr -----
@@ -5770,3 +5801,32 @@ fn future_annotations_preview_warning() {
",
);
}
#[test]
fn up045_nested_optional_flatten_all() {
let contents = "\
from typing import Optional
nested_optional: Optional[Optional[Optional[str]]] = None
";
assert_cmd_snapshot!(
Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args(["--select", "UP045", "--diff", "--target-version", "py312"])
.arg("-")
.pass_stdin(contents),
@r"
success: false
exit_code: 1
----- stdout -----
@@ -1,2 +1,2 @@
from typing import Optional
-nested_optional: Optional[Optional[Optional[str]]] = None
+nested_optional: str | None = None
----- stderr -----
Would fix 1 error.
",
);
}

View File

@@ -95,6 +95,6 @@ is stricter, which could affect the suggested fix. See [this FAQ section](https:
## References
- [Python documentation: `import`](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement)
- [Python documentation: `importlib.util.find_spec`](https://docs.python.org/3/library/importlib.html#importlib.util.find_spec)
- [Typing documentation: interface conventions](https://typing.python.org/en/latest/source/libraries.html#library-interface-public-and-private-symbols)
- [Typing documentation: interface conventions](https://typing.python.org/en/latest/spec/distributing.html#library-interface-public-and-private-symbols)
----- stderr -----

View File

@@ -18,6 +18,6 @@ exit_code: 1
----- stdout -----
##vso[task.logissue type=error;sourcepath=[TMP]/input.py;linenumber=1;columnnumber=8;code=F401;]`os` imported but unused
##vso[task.logissue type=error;sourcepath=[TMP]/input.py;linenumber=2;columnnumber=5;code=F821;]Undefined name `y`
##vso[task.logissue type=error;sourcepath=[TMP]/input.py;linenumber=3;columnnumber=1;]SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
##vso[task.logissue type=error;sourcepath=[TMP]/input.py;linenumber=3;columnnumber=1;code=invalid-syntax;]Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
----- stderr -----

View File

@@ -18,7 +18,7 @@ exit_code: 1
----- stdout -----
input.py:1:8: F401 [*] `os` imported but unused
input.py:2:5: F821 Undefined name `y`
input.py:3:1: SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
input.py:3:1: invalid-syntax: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
Found 3 errors.
[*] 1 fixable with the `--fix` option.

View File

@@ -16,25 +16,28 @@ info:
success: false
exit_code: 1
----- stdout -----
input.py:1:8: F401 [*] `os` imported but unused
F401 [*] `os` imported but unused
--> input.py:1:8
|
1 | import os # F401
| ^^ F401
| ^^
2 | x = y # F821
3 | match 42: # invalid-syntax
|
= help: Remove unused import: `os`
help: Remove unused import: `os`
input.py:2:5: F821 Undefined name `y`
F821 Undefined name `y`
--> input.py:2:5
|
1 | import os # F401
2 | x = y # F821
| ^ F821
| ^
3 | match 42: # invalid-syntax
4 | case _: ...
|
input.py:3:1: SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
invalid-syntax: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
--> input.py:3:1
|
1 | import os # F401
2 | x = y # F821

View File

@@ -18,6 +18,6 @@ exit_code: 1
----- stdout -----
::error title=Ruff (F401),file=[TMP]/input.py,line=1,col=8,endLine=1,endColumn=10::input.py:1:8: F401 `os` imported but unused
::error title=Ruff (F821),file=[TMP]/input.py,line=2,col=5,endLine=2,endColumn=6::input.py:2:5: F821 Undefined name `y`
::error title=Ruff,file=[TMP]/input.py,line=3,col=1,endLine=3,endColumn=6::input.py:3:1: SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
::error title=Ruff (invalid-syntax),file=[TMP]/input.py,line=3,col=1,endLine=3,endColumn=6::input.py:3:1: invalid-syntax: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
----- stderr -----

View File

@@ -19,42 +19,60 @@ exit_code: 1
[
{
"check_name": "F401",
"description": "`os` imported but unused",
"description": "F401: `os` imported but unused",
"severity": "major",
"fingerprint": "4dbad37161e65c72",
"location": {
"lines": {
"begin": 1,
"end": 1
},
"path": "input.py"
},
"severity": "major"
"path": "input.py",
"positions": {
"begin": {
"line": 1,
"column": 8
},
"end": {
"line": 1,
"column": 10
}
}
}
},
{
"check_name": "F821",
"description": "Undefined name `y`",
"description": "F821: Undefined name `y`",
"severity": "major",
"fingerprint": "7af59862a085230",
"location": {
"lines": {
"begin": 2,
"end": 2
},
"path": "input.py"
},
"severity": "major"
"path": "input.py",
"positions": {
"begin": {
"line": 2,
"column": 5
},
"end": {
"line": 2,
"column": 6
}
}
}
},
{
"check_name": "syntax-error",
"description": "Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)",
"check_name": "invalid-syntax",
"description": "invalid-syntax: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)",
"severity": "major",
"fingerprint": "e558cec859bb66e8",
"location": {
"lines": {
"begin": 3,
"end": 3
},
"path": "input.py"
},
"severity": "major"
"path": "input.py",
"positions": {
"begin": {
"line": 3,
"column": 1
},
"end": {
"line": 3,
"column": 6
}
}
}
}
]
----- stderr -----

View File

@@ -19,7 +19,7 @@ exit_code: 1
input.py:
1:8 F401 [*] `os` imported but unused
2:5 F821 Undefined name `y`
3:1 SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
3:1 invalid-syntax: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
Found 3 errors.
[*] 1 fixable with the `--fix` option.

View File

@@ -18,6 +18,6 @@ exit_code: 1
----- stdout -----
{"cell":null,"code":"F401","end_location":{"column":10,"row":1},"filename":"[TMP]/input.py","fix":{"applicability":"safe","edits":[{"content":"","end_location":{"column":1,"row":2},"location":{"column":1,"row":1}}],"message":"Remove unused import: `os`"},"location":{"column":8,"row":1},"message":"`os` imported but unused","noqa_row":1,"url":"https://docs.astral.sh/ruff/rules/unused-import"}
{"cell":null,"code":"F821","end_location":{"column":6,"row":2},"filename":"[TMP]/input.py","fix":null,"location":{"column":5,"row":2},"message":"Undefined name `y`","noqa_row":2,"url":"https://docs.astral.sh/ruff/rules/undefined-name"}
{"cell":null,"code":null,"end_location":{"column":6,"row":3},"filename":"[TMP]/input.py","fix":null,"location":{"column":1,"row":3},"message":"SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)","noqa_row":null,"url":null}
{"cell":null,"code":"invalid-syntax","end_location":{"column":6,"row":3},"filename":"[TMP]/input.py","fix":null,"location":{"column":1,"row":3},"message":"Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)","noqa_row":null,"url":null}
----- stderr -----

View File

@@ -69,7 +69,7 @@ exit_code: 1
},
{
"cell": null,
"code": null,
"code": "invalid-syntax",
"end_location": {
"column": 6,
"row": 3
@@ -80,7 +80,7 @@ exit_code: 1
"column": 1,
"row": 3
},
"message": "SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)",
"message": "Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)",
"noqa_row": null,
"url": null
}

View File

@@ -26,7 +26,7 @@ exit_code: 1
<failure message="Undefined name `y`">line 2, col 5, Undefined name `y`</failure>
</testcase>
<testcase name="org.ruff.invalid-syntax" classname="[TMP]/input" line="3" column="1">
<failure message="SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)">line 3, col 1, SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)</failure>
<failure message="Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)">line 3, col 1, Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)</failure>
</testcase>
</testsuite>
</testsuites>

View File

@@ -18,6 +18,6 @@ exit_code: 1
----- stdout -----
input.py:1: [F401] `os` imported but unused
input.py:2: [F821] Undefined name `y`
input.py:3: [invalid-syntax] SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
input.py:3: [invalid-syntax] Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
----- stderr -----

View File

@@ -90,7 +90,7 @@ exit_code: 1
}
}
},
"message": "SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)"
"message": "Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)"
}
],
"severity": "WARNING",

View File

@@ -83,9 +83,9 @@ exit_code: 1
}
],
"message": {
"text": "SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)"
"text": "Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)"
},
"ruleId": null
"ruleId": "invalid-syntax"
}
],
"tool": {
@@ -95,7 +95,7 @@ exit_code: 1
"rules": [
{
"fullDescription": {
"text": "## What it does\nChecks for unused imports.\n\n## Why is this bad?\nUnused imports add a performance overhead at runtime, and risk creating\nimport cycles. They also increase the cognitive load of reading the code.\n\nIf an import statement is used to check for the availability or existence\nof a module, consider using `importlib.util.find_spec` instead.\n\nIf an import statement is used to re-export a symbol as part of a module's\npublic interface, consider using a \"redundant\" import alias, which\ninstructs Ruff (and other tools) to respect the re-export, and avoid\nmarking it as unused, as in:\n\n```python\nfrom module import member as member\n```\n\nAlternatively, you can use `__all__` to declare a symbol as part of the module's\ninterface, as in:\n\n```python\n# __init__.py\nimport some_module\n\n__all__ = [\"some_module\"]\n```\n\n## Fix safety\n\nFixes to remove unused imports are safe, except in `__init__.py` files.\n\nApplying fixes to `__init__.py` files is currently in preview. The fix offered depends on the\ntype of the unused import. Ruff will suggest a safe fix to export first-party imports with\neither a redundant alias or, if already present in the file, an `__all__` entry. If multiple\n`__all__` declarations are present, Ruff will not offer a fix. Ruff will suggest an unsafe fix\nto remove third-party and standard library imports -- the fix is unsafe because the module's\ninterface changes.\n\n## Example\n\n```python\nimport numpy as np # unused import\n\n\ndef area(radius):\n return 3.14 * radius**2\n```\n\nUse instead:\n\n```python\ndef area(radius):\n return 3.14 * radius**2\n```\n\nTo check the availability of a module, use `importlib.util.find_spec`:\n\n```python\nfrom importlib.util import find_spec\n\nif find_spec(\"numpy\") is not None:\n print(\"numpy is installed\")\nelse:\n print(\"numpy is not installed\")\n```\n\n## Preview\nWhen [preview](https://docs.astral.sh/ruff/preview/) is enabled,\nthe criterion for determining whether an import is first-party\nis stricter, which could affect the suggested fix. See [this FAQ section](https://docs.astral.sh/ruff/faq/#how-does-ruff-determine-which-of-my-imports-are-first-party-third-party-etc) for more details.\n\n## Options\n- `lint.ignore-init-module-imports`\n- `lint.pyflakes.allowed-unused-imports`\n\n## References\n- [Python documentation: `import`](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement)\n- [Python documentation: `importlib.util.find_spec`](https://docs.python.org/3/library/importlib.html#importlib.util.find_spec)\n- [Typing documentation: interface conventions](https://typing.python.org/en/latest/source/libraries.html#library-interface-public-and-private-symbols)\n"
"text": "## What it does\nChecks for unused imports.\n\n## Why is this bad?\nUnused imports add a performance overhead at runtime, and risk creating\nimport cycles. They also increase the cognitive load of reading the code.\n\nIf an import statement is used to check for the availability or existence\nof a module, consider using `importlib.util.find_spec` instead.\n\nIf an import statement is used to re-export a symbol as part of a module's\npublic interface, consider using a \"redundant\" import alias, which\ninstructs Ruff (and other tools) to respect the re-export, and avoid\nmarking it as unused, as in:\n\n```python\nfrom module import member as member\n```\n\nAlternatively, you can use `__all__` to declare a symbol as part of the module's\ninterface, as in:\n\n```python\n# __init__.py\nimport some_module\n\n__all__ = [\"some_module\"]\n```\n\n## Fix safety\n\nFixes to remove unused imports are safe, except in `__init__.py` files.\n\nApplying fixes to `__init__.py` files is currently in preview. The fix offered depends on the\ntype of the unused import. Ruff will suggest a safe fix to export first-party imports with\neither a redundant alias or, if already present in the file, an `__all__` entry. If multiple\n`__all__` declarations are present, Ruff will not offer a fix. Ruff will suggest an unsafe fix\nto remove third-party and standard library imports -- the fix is unsafe because the module's\ninterface changes.\n\n## Example\n\n```python\nimport numpy as np # unused import\n\n\ndef area(radius):\n return 3.14 * radius**2\n```\n\nUse instead:\n\n```python\ndef area(radius):\n return 3.14 * radius**2\n```\n\nTo check the availability of a module, use `importlib.util.find_spec`:\n\n```python\nfrom importlib.util import find_spec\n\nif find_spec(\"numpy\") is not None:\n print(\"numpy is installed\")\nelse:\n print(\"numpy is not installed\")\n```\n\n## Preview\nWhen [preview](https://docs.astral.sh/ruff/preview/) is enabled,\nthe criterion for determining whether an import is first-party\nis stricter, which could affect the suggested fix. See [this FAQ section](https://docs.astral.sh/ruff/faq/#how-does-ruff-determine-which-of-my-imports-are-first-party-third-party-etc) for more details.\n\n## Options\n- `lint.ignore-init-module-imports`\n- `lint.pyflakes.allowed-unused-imports`\n\n## References\n- [Python documentation: `import`](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement)\n- [Python documentation: `importlib.util.find_spec`](https://docs.python.org/3/library/importlib.html#importlib.util.find_spec)\n- [Typing documentation: interface conventions](https://typing.python.org/en/latest/spec/distributing.html#library-interface-public-and-private-symbols)\n"
},
"help": {
"text": "`{name}` imported but unused; consider using `importlib.util.find_spec` to test for availability"

View File

@@ -1,3 +1,5 @@
#![expect(clippy::needless_doctest_main)]
//! A library for formatting of text or programming code snippets.
//!
//! Its primary purpose is to build an ASCII-graphical representation of the snippet

View File

@@ -193,9 +193,14 @@ impl DisplaySet<'_> {
stylesheet: &Stylesheet,
buffer: &mut StyledBuffer,
) -> fmt::Result {
let hide_severity = annotation.annotation_type.is_none();
let color = get_annotation_style(&annotation.annotation_type, stylesheet);
let formatted_len = if let Some(id) = &annotation.id {
2 + id.len() + annotation_type_len(&annotation.annotation_type)
if hide_severity {
id.len()
} else {
2 + id.len() + annotation_type_len(&annotation.annotation_type)
}
} else {
annotation_type_len(&annotation.annotation_type)
};
@@ -209,18 +214,66 @@ impl DisplaySet<'_> {
if formatted_len == 0 {
self.format_label(line_offset, &annotation.label, stylesheet, buffer)
} else {
let id = match &annotation.id {
Some(id) => format!("[{id}]"),
None => String::new(),
};
buffer.append(
line_offset,
&format!("{}{}", annotation_type_str(&annotation.annotation_type), id),
*color,
);
// TODO(brent) All of this complicated checking of `hide_severity` should be reverted
// once we have real severities in Ruff. This code is trying to account for two
// different cases:
//
// - main diagnostic message
// - subdiagnostic message
//
// In the first case, signaled by `hide_severity = true`, we want to print the ID (the
// noqa code for a ruff lint diagnostic, e.g. `F401`, or `invalid-syntax` for a syntax
// error) without brackets. Instead, for subdiagnostics, we actually want to print the
// severity (usually `help`) regardless of the `hide_severity` setting. This is signaled
// by an ID of `None`.
//
// With real severities these should be reported more like in ty:
//
// ```
// error[F401]: `math` imported but unused
// error[invalid-syntax]: Cannot use `match` statement on Python 3.9...
// ```
//
// instead of the current versions intended to mimic the old Ruff output format:
//
// ```
// F401 `math` imported but unused
// invalid-syntax: Cannot use `match` statement on Python 3.9...
// ```
//
// Note that the `invalid-syntax` colon is added manually in `ruff_db`, not here. We
// could eventually add a colon to Ruff lint diagnostics (`F401:`) and then make the
// colon below unconditional again.
//
// This also applies to the hard-coded `stylesheet.error()` styling of the
// hidden-severity `id`. This should just be `*color` again later, but for now we don't
// want an unformatted `id`, which is what `get_annotation_style` returns for
// `DisplayAnnotationType::None`.
let annotation_type = annotation_type_str(&annotation.annotation_type);
if let Some(id) = annotation.id {
if hide_severity {
buffer.append(line_offset, &format!("{id} "), *stylesheet.error());
} else {
buffer.append(line_offset, &format!("{annotation_type}[{id}]"), *color);
}
} else {
buffer.append(line_offset, annotation_type, *color);
}
if annotation.is_fixable {
buffer.append(line_offset, "[", stylesheet.none);
buffer.append(line_offset, "*", stylesheet.help);
buffer.append(line_offset, "]", stylesheet.none);
// In the hide-severity case, we need a space instead of the colon and space below.
if hide_severity {
buffer.append(line_offset, " ", stylesheet.none);
}
}
if !is_annotation_empty(annotation) {
buffer.append(line_offset, ": ", stylesheet.none);
if annotation.id.is_none() || !hide_severity {
buffer.append(line_offset, ": ", stylesheet.none);
}
self.format_label(line_offset, &annotation.label, stylesheet, buffer)?;
}
Ok(())
@@ -249,11 +302,15 @@ impl DisplaySet<'_> {
let lineno_color = stylesheet.line_no();
buffer.puts(line_offset, lineno_width, header_sigil, *lineno_color);
buffer.puts(line_offset, lineno_width + 4, path, stylesheet.none);
if let Some((col, row)) = pos {
buffer.append(line_offset, ":", stylesheet.none);
buffer.append(line_offset, col.to_string().as_str(), stylesheet.none);
if let Some(Position { row, col, cell }) = pos {
if let Some(cell) = cell {
buffer.append(line_offset, ":", stylesheet.none);
buffer.append(line_offset, &format!("cell {cell}"), stylesheet.none);
}
buffer.append(line_offset, ":", stylesheet.none);
buffer.append(line_offset, row.to_string().as_str(), stylesheet.none);
buffer.append(line_offset, ":", stylesheet.none);
buffer.append(line_offset, col.to_string().as_str(), stylesheet.none);
}
Ok(())
}
@@ -768,6 +825,7 @@ pub(crate) struct Annotation<'a> {
pub(crate) annotation_type: DisplayAnnotationType,
pub(crate) id: Option<&'a str>,
pub(crate) label: Vec<DisplayTextFragment<'a>>,
pub(crate) is_fixable: bool,
}
/// A single line used in `DisplayList`.
@@ -833,6 +891,13 @@ impl DisplaySourceAnnotation<'_> {
}
}
#[derive(Debug, PartialEq)]
pub(crate) struct Position {
row: usize,
col: usize,
cell: Option<usize>,
}
/// Raw line - a line which does not have the `lineno` part and is not considered
/// a part of the snippet.
#[derive(Debug, PartialEq)]
@@ -841,7 +906,7 @@ pub(crate) enum DisplayRawLine<'a> {
/// slice in the project structure.
Origin {
path: &'a str,
pos: Option<(usize, usize)>,
pos: Option<Position>,
header_type: DisplayHeaderType,
},
@@ -920,6 +985,13 @@ pub(crate) enum DisplayAnnotationType {
Help,
}
impl DisplayAnnotationType {
#[inline]
const fn is_none(&self) -> bool {
matches!(self, Self::None)
}
}
impl From<snippet::Level> for DisplayAnnotationType {
fn from(at: snippet::Level) -> Self {
match at {
@@ -1015,11 +1087,12 @@ fn format_message<'m>(
title,
footer,
snippets,
is_fixable,
} = message;
let mut sets = vec![];
let body = if !snippets.is_empty() || primary {
vec![format_title(level, id, title)]
vec![format_title(level, id, title, is_fixable)]
} else {
format_footer(level, id, title)
};
@@ -1060,12 +1133,18 @@ fn format_message<'m>(
sets
}
fn format_title<'a>(level: crate::Level, id: Option<&'a str>, label: &'a str) -> DisplayLine<'a> {
fn format_title<'a>(
level: crate::Level,
id: Option<&'a str>,
label: &'a str,
is_fixable: bool,
) -> DisplayLine<'a> {
DisplayLine::Raw(DisplayRawLine::Annotation {
annotation: Annotation {
annotation_type: DisplayAnnotationType::from(level),
id,
label: format_label(Some(label), Some(DisplayTextStyle::Emphasis)),
is_fixable,
},
source_aligned: false,
continuation: false,
@@ -1084,6 +1163,7 @@ fn format_footer<'a>(
annotation_type: DisplayAnnotationType::from(level),
id,
label: format_label(Some(line), None),
is_fixable: false,
},
source_aligned: true,
continuation: i != 0,
@@ -1118,6 +1198,28 @@ fn format_snippet<'m>(
let main_range = snippet.annotations.first().map(|x| x.range.start);
let origin = snippet.origin;
let need_empty_header = origin.is_some() || is_first;
let is_file_level = snippet.annotations.iter().any(|ann| ann.is_file_level);
if is_file_level {
// TODO(brent) enable this assertion again once we set `is_file_level` for individual rules.
// It's causing too many false positives currently when the default is to make any
// annotation with a default range file-level. See
// https://github.com/astral-sh/ruff/issues/19688.
//
// assert!(
// snippet.source.is_empty(),
// "Non-empty file-level snippet that won't be rendered: {:?}",
// snippet.source
// );
let header = format_header(origin, main_range, &[], is_first, snippet.cell_index);
return DisplaySet {
display_lines: header.map_or_else(Vec::new, |header| vec![header]),
margin: Margin::new(0, 0, 0, 0, term_width, 0),
};
}
let cell_index = snippet.cell_index;
let mut body = format_body(
snippet,
need_empty_header,
@@ -1126,7 +1228,13 @@ fn format_snippet<'m>(
anonymized_line_numbers,
cut_indicator,
);
let header = format_header(origin, main_range, &body.display_lines, is_first);
let header = format_header(
origin,
main_range,
&body.display_lines,
is_first,
cell_index,
);
if let Some(header) = header {
body.display_lines.insert(0, header);
@@ -1146,6 +1254,7 @@ fn format_header<'a>(
main_range: Option<usize>,
body: &[DisplayLine<'_>],
is_first: bool,
cell_index: Option<usize>,
) -> Option<DisplayLine<'a>> {
let display_header = if is_first {
DisplayHeaderType::Initial
@@ -1169,20 +1278,31 @@ fn format_header<'a>(
..
} = item
{
if main_range >= range.0 && main_range < range.1 + max(*end_line as usize, 1) {
// At the very end of the `main_range`, report the location as the first character
// in the next line instead of falling back to the default location of `1:1`. This
// is another divergence from upstream.
let end_of_range = range.1 + max(*end_line as usize, 1);
if main_range >= range.0 && main_range < end_of_range {
let char_column = text[0..(main_range - range.0).min(text.len())]
.chars()
.count();
col = char_column + 1;
line_offset = lineno.unwrap_or(1);
break;
} else if main_range == end_of_range {
line_offset = lineno.map_or(1, |line| line + 1);
break;
}
}
}
return Some(DisplayLine::Raw(DisplayRawLine::Origin {
path,
pos: Some((line_offset, col)),
pos: Some(Position {
row: line_offset,
col,
cell: cell_index,
}),
header_type: display_header,
}));
}
@@ -1472,6 +1592,7 @@ fn format_body<'m>(
annotation_type,
id: None,
label: format_label(annotation.label, None),
is_fixable: false,
},
range,
annotation_type: DisplayAnnotationType::from(annotation.level),
@@ -1511,6 +1632,7 @@ fn format_body<'m>(
annotation_type,
id: None,
label: vec![],
is_fixable: false,
},
range,
annotation_type: DisplayAnnotationType::from(annotation.level),
@@ -1580,6 +1702,7 @@ fn format_body<'m>(
annotation_type,
id: None,
label: format_label(annotation.label, None),
is_fixable: false,
},
range,
annotation_type: DisplayAnnotationType::from(annotation.level),
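One effect of the `Position` change in this file: the origin line can now carry a Jupyter cell index between the path and the row/column pair, which is what produces headers like `--> Jupyter.ipynb:cell 3:1:7` in the snapshots earlier in this diff. A standalone, hypothetical restatement of that formatting (not the actual rendering code, which writes into a `StyledBuffer`):

fn format_origin(path: &str, row: usize, col: usize, cell: Option<usize>) -> String {
    let mut origin = format!("--> {path}");
    if let Some(cell) = cell {
        // The optional notebook cell comes first, e.g. `:cell 3`.
        origin.push_str(&format!(":cell {cell}"));
    }
    origin.push_str(&format!(":{row}:{col}"));
    origin
}

// format_origin("Jupyter.ipynb", 1, 7, Some(3)) == "--> Jupyter.ipynb:cell 3:1:7"
// format_origin("F401.py", 1, 8, None)          == "--> F401.py:1:8"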

View File

@@ -22,6 +22,7 @@ pub struct Message<'a> {
pub(crate) title: &'a str,
pub(crate) snippets: Vec<Snippet<'a>>,
pub(crate) footer: Vec<Message<'a>>,
pub(crate) is_fixable: bool,
}
impl<'a> Message<'a> {
@@ -49,6 +50,15 @@ impl<'a> Message<'a> {
self.footer.extend(footer);
self
}
/// Whether or not the diagnostic for this message is fixable.
///
/// This is rendered as a `[*]` indicator after the `id` in an annotation header, if the
/// annotation also has `Level::None`.
pub fn is_fixable(mut self, yes: bool) -> Self {
self.is_fixable = yes;
self
}
}
/// Structure containing the slice of text to be annotated and
@@ -65,6 +75,10 @@ pub struct Snippet<'a> {
pub(crate) annotations: Vec<Annotation<'a>>,
pub(crate) fold: bool,
/// The optional cell index in a Jupyter notebook, used for reporting source locations along
/// with the ranges on `annotations`.
pub(crate) cell_index: Option<usize>,
}
impl<'a> Snippet<'a> {
@@ -75,6 +89,7 @@ impl<'a> Snippet<'a> {
source,
annotations: vec![],
fold: false,
cell_index: None,
}
}
@@ -103,6 +118,12 @@ impl<'a> Snippet<'a> {
self.fold = fold;
self
}
/// Attach a Jupyter notebook cell index.
pub fn cell_index(mut self, index: Option<usize>) -> Self {
self.cell_index = index;
self
}
}
/// An annotation for a [`Snippet`].
@@ -114,6 +135,7 @@ pub struct Annotation<'a> {
pub(crate) range: Range<usize>,
pub(crate) label: Option<&'a str>,
pub(crate) level: Level,
pub(crate) is_file_level: bool,
}
impl<'a> Annotation<'a> {
@@ -121,6 +143,11 @@ impl<'a> Annotation<'a> {
self.label = Some(label);
self
}
pub fn is_file_level(mut self, yes: bool) -> Self {
self.is_file_level = yes;
self
}
}
/// Types of annotations.
@@ -145,6 +172,7 @@ impl Level {
title,
snippets: vec![],
footer: vec![],
is_fixable: false,
}
}
@@ -154,6 +182,7 @@ impl Level {
range: span,
label: None,
level: self,
is_file_level: false,
}
}
}
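Putting the new builder hooks in this file together, a caller constructing a message could chain them onto the existing builders. This is a sketch only: entry points such as `Level::Error.title(...)`, `Message::id`, `Snippet::source`, and `Level::Error.span(...)` are assumed from the upstream annotate-snippets API and are not shown in this diff; only `is_fixable`, `cell_index`, and `is_file_level` are new here.

fn example_message() -> Message<'static> {
    Level::Error
        .title("`os` imported but unused")
        .id("F401")
        // New in this diff: renders the `[*]` fixable indicator next to the id
        // when the severity is hidden.
        .is_fixable(true)
        .snippet(
            Snippet::source("import os\n")
                // New in this diff: reported as `cell 1` in the origin line.
                .cell_index(Some(1))
                .annotation(Level::Error.span(7..9)),
        )
}

// `Annotation::is_file_level(true)` would be chained on the annotation instead when the
// diagnostic targets the whole file rather than a specific range.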

View File

@@ -86,5 +86,5 @@ walltime = ["ruff_db/os", "ty_project", "divan"]
[target.'cfg(target_os = "windows")'.dev-dependencies]
mimalloc = { workspace = true }
[target.'cfg(all(not(target_os = "windows"), not(target_os = "openbsd"), any(target_arch = "x86_64", target_arch = "aarch64", target_arch = "powerpc64")))'.dev-dependencies]
[target.'cfg(all(not(target_os = "windows"), not(target_os = "openbsd"), any(target_arch = "x86_64", target_arch = "aarch64", target_arch = "powerpc64", target_arch = "riscv64")))'.dev-dependencies]
tikv-jemallocator = { workspace = true }

View File

@@ -21,7 +21,8 @@ static GLOBAL: mimalloc::MiMalloc = mimalloc::MiMalloc;
any(
target_arch = "x86_64",
target_arch = "aarch64",
target_arch = "powerpc64"
target_arch = "powerpc64",
target_arch = "riscv64"
)
))]
#[global_allocator]

View File

@@ -18,7 +18,8 @@ static GLOBAL: mimalloc::MiMalloc = mimalloc::MiMalloc;
any(
target_arch = "x86_64",
target_arch = "aarch64",
target_arch = "powerpc64"
target_arch = "powerpc64",
target_arch = "riscv64"
)
))]
#[global_allocator]

View File

@@ -26,7 +26,8 @@ static GLOBAL: mimalloc::MiMalloc = mimalloc::MiMalloc;
any(
target_arch = "x86_64",
target_arch = "aarch64",
target_arch = "powerpc64"
target_arch = "powerpc64",
target_arch = "riscv64"
)
))]
#[global_allocator]
@@ -42,7 +43,8 @@ static GLOBAL: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;
any(
target_arch = "x86_64",
target_arch = "aarch64",
target_arch = "powerpc64"
target_arch = "powerpc64",
target_arch = "riscv64"
)
))]
#[unsafe(export_name = "_rjem_malloc_conf")]
@@ -77,8 +79,11 @@ fn benchmark_linter(mut group: BenchmarkGroup, settings: &LinterSettings) {
b.iter_batched(
|| parsed.clone(),
|parsed| {
// Assert that file contains no parse errors
assert!(parsed.has_valid_syntax());
let path = case.path();
let result = lint_only(
lint_only(
&path,
None,
settings,
@@ -86,10 +91,7 @@ fn benchmark_linter(mut group: BenchmarkGroup, settings: &LinterSettings) {
&SourceKind::Python(case.code().to_string()),
PySourceType::from(path.as_path()),
ParseSource::Precomputed(parsed),
);
// Assert that file contains no parse errors
assert!(!result.has_syntax_errors());
)
},
criterion::BatchSize::SmallInput,
);

View File

@@ -20,7 +20,8 @@ static GLOBAL: mimalloc::MiMalloc = mimalloc::MiMalloc;
any(
target_arch = "x86_64",
target_arch = "aarch64",
target_arch = "powerpc64"
target_arch = "powerpc64",
target_arch = "riscv64"
)
))]
#[global_allocator]

View File

@@ -351,6 +351,41 @@ fn benchmark_many_tuple_assignments(criterion: &mut Criterion) {
});
}
fn benchmark_tuple_implicit_instance_attributes(criterion: &mut Criterion) {
setup_rayon();
criterion.bench_function("ty_micro[many_tuple_assignments]", |b| {
b.iter_batched_ref(
|| {
// This is a regression benchmark for a case that used to hang:
// https://github.com/astral-sh/ty/issues/765
setup_micro_case(
r#"
from typing import Any
class A:
foo: tuple[Any, ...]
class B(A):
def __init__(self, parent: "C", x: tuple[Any]):
self.foo = parent.foo + x
class C(A):
def __init__(self, parent: B, x: tuple[Any]):
self.foo = parent.foo + x
"#,
)
},
|case| {
let Case { db, .. } = case;
let result = db.check();
assert_eq!(result.len(), 0);
},
BatchSize::SmallInput,
);
});
}
fn benchmark_complex_constrained_attributes_1(criterion: &mut Criterion) {
setup_rayon();
@@ -415,9 +450,6 @@ fn benchmark_complex_constrained_attributes_2(criterion: &mut Criterion) {
r#"
class C:
def f(self: "C"):
self.a = ""
self.b = ""
if isinstance(self.a, str):
return
@@ -431,6 +463,56 @@ fn benchmark_complex_constrained_attributes_2(criterion: &mut Criterion) {
return
if isinstance(self.b, str):
return
if isinstance(self.b, str):
return
if isinstance(self.b, str):
return
self.a = ""
self.b = ""
"#,
)
},
|case| {
let Case { db, .. } = case;
let result = db.check();
assert_eq!(result.len(), 0);
},
BatchSize::SmallInput,
);
});
}
fn benchmark_complex_constrained_attributes_3(criterion: &mut Criterion) {
setup_rayon();
criterion.bench_function("ty_micro[complex_constrained_attributes_3]", |b| {
b.iter_batched_ref(
|| {
// This is a regression test for https://github.com/astral-sh/ty/issues/758
setup_micro_case(
r#"
class GridOut:
def __init__(self: "GridOut") -> None:
self._buffer = b""
def _read_size_or_line(self: "GridOut", size: int = -1):
if size > self._position:
size = self._position
pass
if size == 0:
return bytes()
while size > 0:
if self._buffer:
buf = self._buffer
self._buffer = b""
else:
buf = b""
if len(buf) > size:
self._buffer = buf
self._position -= len(self._buffer)
"#,
)
},
@@ -630,8 +712,10 @@ criterion_group!(
micro,
benchmark_many_string_assignments,
benchmark_many_tuple_assignments,
benchmark_tuple_implicit_instance_attributes,
benchmark_complex_constrained_attributes_1,
benchmark_complex_constrained_attributes_2,
benchmark_complex_constrained_attributes_3,
benchmark_many_enum_members,
);
criterion_group!(project, anyio, attrs, hydra, datetype);

View File

@@ -218,6 +218,24 @@ static TANJUN: std::sync::LazyLock<Benchmark<'static>> = std::sync::LazyLock::ne
)
});
static STATIC_FRAME: std::sync::LazyLock<Benchmark<'static>> = std::sync::LazyLock::new(|| {
Benchmark::new(
RealWorldProject {
name: "static-frame",
repository: "https://github.com/static-frame/static-frame",
commit: "34962b41baca5e7f98f5a758d530bff02748a421",
paths: vec![SystemPath::new("static_frame")],
// N.B. `arraykit` is installed as a dependency during mypy_primer runs,
// but it takes much longer to be installed in a Codspeed run than it does in a mypy_primer run
// (seems to be built from source on the Codspeed CI runners for some reason).
dependencies: vec!["numpy"],
max_dep_date: "2025-08-09",
python_version: PythonVersion::PY311,
},
500,
)
});
#[track_caller]
fn run_single_threaded(bencher: Bencher, benchmark: &Benchmark) {
bencher
@@ -232,7 +250,7 @@ fn small(bencher: Bencher, benchmark: &Benchmark) {
run_single_threaded(bencher, benchmark);
}
#[bench(args=[&*COLOUR_SCIENCE, &*PANDAS], sample_size=1, sample_count=3)]
#[bench(args=[&*COLOUR_SCIENCE, &*PANDAS, &*STATIC_FRAME], sample_size=1, sample_count=3)]
fn medium(bencher: Bencher, benchmark: &Benchmark) {
run_single_threaded(bencher, benchmark);
}

View File

@@ -14,6 +14,7 @@ license = { workspace = true }
ruff_annotate_snippets = { workspace = true }
ruff_cache = { workspace = true, optional = true }
ruff_diagnostics = { workspace = true }
ruff_memory_usage = { workspace = true }
ruff_notebook = { workspace = true }
ruff_python_ast = { workspace = true, features = ["get-size"] }
ruff_python_parser = { workspace = true }
@@ -33,16 +34,17 @@ glob = { workspace = true }
ignore = { workspace = true, optional = true }
matchit = { workspace = true }
path-slash = { workspace = true }
pathdiff = { workspace = true }
quick-junit = { workspace = true, optional = true }
rustc-hash = { workspace = true }
salsa = { workspace = true }
schemars = { workspace = true, optional = true }
serde = { workspace = true, optional = true }
serde_json = { workspace = true, optional = true }
similar = { workspace = true }
thiserror = { workspace = true }
tracing = { workspace = true }
tracing-subscriber = { workspace = true, optional = true }
unicode-width = { workspace = true }
zip = { workspace = true }
[target.'cfg(target_arch="wasm32")'.dependencies]
@@ -52,7 +54,7 @@ web-time = { version = "1.1.0" }
etcetera = { workspace = true, optional = true }
[dev-dependencies]
insta = { workspace = true }
insta = { workspace = true, features = ["filters"] }
tempfile = { workspace = true }
[features]

View File

@@ -21,7 +21,7 @@ mod stylesheet;
/// characteristics in the inputs given to the tool. Typically, but not always,
/// a characteristic is a deficiency. An example of a characteristic that is
/// _not_ a deficiency is the `reveal_type` diagnostic for our type checker.
#[derive(Debug, Clone, Eq, PartialEq, get_size2::GetSize)]
#[derive(Debug, Clone, Eq, PartialEq, Hash, get_size2::GetSize)]
pub struct Diagnostic {
/// The actual diagnostic.
///
@@ -212,7 +212,7 @@ impl Diagnostic {
/// The type returned implements the `std::fmt::Display` trait. In most
/// cases, just converting it to a string (or printing it) will do what
/// you want.
pub fn concise_message(&self) -> ConciseMessage {
pub fn concise_message(&self) -> ConciseMessage<'_> {
let main = self.inner.message.as_str();
let annotation = self
.primary_annotation()
@@ -254,6 +254,11 @@ impl Diagnostic {
.find(|ann| ann.is_primary)
}
/// Returns a mutable borrow of all annotations of this diagnostic.
pub fn annotations_mut(&mut self) -> impl Iterator<Item = &mut Annotation> {
Arc::make_mut(&mut self.inner).annotations.iter_mut()
}
/// Returns the "primary" span of this diagnostic if one exists.
///
/// When there are multiple primary spans, then the first one that was
@@ -310,11 +315,21 @@ impl Diagnostic {
&self.inner.subs
}
/// Returns a mutable borrow of the sub-diagnostics of this diagnostic.
pub fn sub_diagnostics_mut(&mut self) -> impl Iterator<Item = &mut SubDiagnostic> {
Arc::make_mut(&mut self.inner).subs.iter_mut()
}
/// Returns the fix for this diagnostic if it exists.
pub fn fix(&self) -> Option<&Fix> {
self.inner.fix.as_ref()
}
#[cfg(test)]
pub(crate) fn fix_mut(&mut self) -> Option<&mut Fix> {
Arc::make_mut(&mut self.inner).fix.as_mut()
}
/// Set the fix for this diagnostic.
pub fn set_fix(&mut self, fix: Fix) {
debug_assert!(
@@ -366,6 +381,16 @@ impl Diagnostic {
self.inner.secondary_code.as_ref()
}
/// Returns the secondary code for the diagnostic if it exists, or the lint name otherwise.
///
/// This is a common pattern for Ruff diagnostics, which want to use the noqa code in general,
/// but fall back on the `invalid-syntax` identifier for syntax errors, which don't have
/// secondary codes.
pub fn secondary_code_or_id(&self) -> &str {
self.secondary_code()
.map_or_else(|| self.inner.id.as_str(), SecondaryCode::as_str)
}
/// Set the secondary code for this diagnostic.
pub fn set_secondary_code(&mut self, code: SecondaryCode) {
Arc::make_mut(&mut self.inner).secondary_code = Some(code);
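A small, hypothetical illustration of the fallback that `secondary_code_or_id` encodes above: a lint diagnostic reports its noqa code, while a syntax error, which has no secondary code, falls back to its diagnostic id (hence the `invalid-syntax` identifiers in the snapshots earlier in this diff).

fn code_or_id<'a>(secondary_code: Option<&'a str>, id: &'a str) -> &'a str {
    // Mirrors Diagnostic::secondary_code_or_id, but over plain strings for illustration.
    secondary_code.unwrap_or(id)
}

// code_or_id(Some("F401"), "<lint id>") == "F401"
// code_or_id(None, "invalid-syntax")    == "invalid-syntax"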
@@ -479,7 +504,7 @@ impl Diagnostic {
}
}
#[derive(Debug, Clone, Eq, PartialEq, get_size2::GetSize)]
#[derive(Debug, Clone, Eq, PartialEq, Hash, get_size2::GetSize)]
struct DiagnosticInner {
id: DiagnosticId,
severity: Severity,
@@ -555,7 +580,7 @@ impl Eq for RenderingSortKey<'_> {}
/// Currently, the order in which sub-diagnostics are rendered relative to one
/// another (for a single parent diagnostic) is the order in which they were
/// attached to the diagnostic.
#[derive(Debug, Clone, Eq, PartialEq, get_size2::GetSize)]
#[derive(Debug, Clone, Eq, PartialEq, Hash, get_size2::GetSize)]
pub struct SubDiagnostic {
/// Like with `Diagnostic`, we box the `SubDiagnostic` to make it
/// pointer-sized.
@@ -611,6 +636,11 @@ impl SubDiagnostic {
&self.inner.annotations
}
/// Returns a mutable borrow of the annotations of this sub-diagnostic.
pub fn annotations_mut(&mut self) -> impl Iterator<Item = &mut Annotation> {
self.inner.annotations.iter_mut()
}
/// Returns a shared borrow of the "primary" annotation of this diagnostic
/// if one exists.
///
@@ -644,7 +674,7 @@ impl SubDiagnostic {
/// The type returned implements the `std::fmt::Display` trait. In most
/// cases, just converting it to a string (or printing it) will do what
/// you want.
pub fn concise_message(&self) -> ConciseMessage {
pub fn concise_message(&self) -> ConciseMessage<'_> {
let main = self.inner.message.as_str();
let annotation = self
.primary_annotation()
@@ -659,7 +689,7 @@ impl SubDiagnostic {
}
}
#[derive(Debug, Clone, Eq, PartialEq, get_size2::GetSize)]
#[derive(Debug, Clone, Eq, PartialEq, Hash, get_size2::GetSize)]
struct SubDiagnosticInner {
severity: SubDiagnosticSeverity,
message: DiagnosticMessage,
@@ -687,7 +717,7 @@ struct SubDiagnosticInner {
///
/// Messages attached to annotations should also be as brief and specific as
/// possible. Long messages could negatively impact the quality of rendering.
#[derive(Debug, Clone, Eq, PartialEq, get_size2::GetSize)]
#[derive(Debug, Clone, Eq, PartialEq, Hash, get_size2::GetSize)]
pub struct Annotation {
/// The span of this annotation, corresponding to some subsequence of the
/// user's input that we want to highlight.
@@ -702,6 +732,11 @@ pub struct Annotation {
is_primary: bool,
/// The diagnostic tags associated with this annotation.
tags: Vec<DiagnosticTag>,
/// Whether this annotation is a file-level or full-file annotation.
///
/// When set, rendering will only include the file's name and (optional) range. Everything else
/// is omitted, including any file snippet or message.
is_file_level: bool,
}
impl Annotation {
@@ -720,6 +755,7 @@ impl Annotation {
message: None,
is_primary: true,
tags: Vec::new(),
is_file_level: false,
}
}
@@ -736,6 +772,7 @@ impl Annotation {
message: None,
is_primary: false,
tags: Vec::new(),
is_file_level: false,
}
}
@@ -801,13 +838,28 @@ impl Annotation {
pub fn push_tag(&mut self, tag: DiagnosticTag) {
self.tags.push(tag);
}
/// Set whether or not this annotation is file-level.
///
/// File-level annotations are only rendered with their file name and range, if available. This
/// is intended for backwards compatibility with Ruff diagnostics, which historically used
/// `TextRange::default` to indicate a file-level diagnostic. In the new diagnostic model, a
/// [`Span`] with a range of `None` should be used instead, as mentioned in the `Span`
/// documentation.
///
/// TODO(brent) update this usage in Ruff and remove `is_file_level` entirely. See
/// <https://github.com/astral-sh/ruff/issues/19688>, especially my first comment, for more
/// details.
pub fn set_file_level(&mut self, yes: bool) {
self.is_file_level = yes;
}
}
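As a hedged usage sketch (editorial, not part of the diff): marking an annotation as file-level happens before it is attached to the diagnostic, mirroring the `file_level` test further down in this diff. The `env`, `diagnostic`, and `example.py` names come from those test helpers and are illustrative only.

```
// Minimal sketch, reusing the builder-style test helpers shown later in this diff.
let span = env.path("example.py").with_range(TextRange::default());
let mut annotation = Annotation::primary(span);
annotation.set_file_level(true); // render only `--> example.py:1:1`, with no source snippet
diagnostic.annotate(annotation);
```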
/// Tags that can be associated with an annotation.
///
/// These tags are used to provide additional information about the annotation
/// and are passed through to the language server protocol.
#[derive(Debug, Clone, Eq, PartialEq, get_size2::GetSize)]
#[derive(Debug, Clone, Eq, PartialEq, Hash, get_size2::GetSize)]
pub enum DiagnosticTag {
/// Unused or unnecessary code. Used for unused parameters, unreachable code, etc.
Unnecessary,
@@ -1016,7 +1068,7 @@ impl std::fmt::Display for DiagnosticId {
///
/// This enum presents a unified interface to these two types for the sake of creating [`Span`]s and
/// emitting diagnostics from both ty and ruff.
#[derive(Debug, Clone, PartialEq, Eq, get_size2::GetSize)]
#[derive(Debug, Clone, PartialEq, Eq, Hash, get_size2::GetSize)]
pub enum UnifiedFile {
Ty(File),
Ruff(SourceFile),
@@ -1067,7 +1119,7 @@ enum DiagnosticSource {
impl DiagnosticSource {
/// Returns this input as a `SourceCode` for convenient querying.
fn as_source_code(&self) -> SourceCode {
fn as_source_code(&self) -> SourceCode<'_, '_> {
match self {
DiagnosticSource::Ty(input) => SourceCode::new(input.text.as_str(), &input.line_index),
DiagnosticSource::Ruff(source) => SourceCode::new(source.source_text(), source.index()),
@@ -1080,7 +1132,7 @@ impl DiagnosticSource {
/// It consists of a `File` and an optional range into that file. When the
/// range isn't present, it semantically implies that the diagnostic refers to
/// the entire file. For example, when the file should be executable but isn't.
#[derive(Debug, Clone, PartialEq, Eq, get_size2::GetSize)]
#[derive(Debug, Clone, PartialEq, Eq, Hash, get_size2::GetSize)]
pub struct Span {
file: UnifiedFile,
range: Option<TextRange>,
@@ -1158,7 +1210,7 @@ impl From<crate::files::FileRange> for Span {
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Ord, PartialOrd, get_size2::GetSize)]
#[derive(Debug, Clone, Copy, PartialEq, Eq, Ord, PartialOrd, Hash, get_size2::GetSize)]
pub enum Severity {
Info,
Warning,
@@ -1193,7 +1245,7 @@ impl Severity {
/// This type only exists to add an additional `Help` severity that isn't present in `Severity` or
/// used for main diagnostics. If we want to add `Severity::Help` in the future, this type could be
/// deleted and the two combined again.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Ord, PartialOrd, get_size2::GetSize)]
#[derive(Debug, Clone, Copy, PartialEq, Eq, Ord, PartialOrd, Hash, get_size2::GetSize)]
pub enum SubDiagnosticSeverity {
Help,
Info,
@@ -1247,6 +1299,10 @@ pub struct DisplayDiagnosticConfig {
hide_severity: bool,
/// Whether to show the availability of a fix in a diagnostic.
show_fix_status: bool,
/// Whether to show the diff for an available fix after the main diagnostic.
///
/// This currently only applies to `DiagnosticFormat::Full`.
show_fix_diff: bool,
/// The lowest applicability that should be shown when reporting diagnostics.
fix_applicability: Applicability,
}
@@ -1294,6 +1350,14 @@ impl DisplayDiagnosticConfig {
}
}
/// Whether to show a diff for an available fix after the main diagnostic.
pub fn show_fix_diff(self, yes: bool) -> DisplayDiagnosticConfig {
DisplayDiagnosticConfig {
show_fix_diff: yes,
..self
}
}
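A hedged sketch of how the new option composes with the existing builder methods; only the `Default` impl and the `show_fix_diff` method shown in this diff are assumed.

```
// Minimal sketch: enable fix diffs for the `Full` output format.
let config = DisplayDiagnosticConfig::default()
    .show_fix_diff(true); // render a diff of the fix after each diagnostic
```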
/// Set the lowest fix applicability that should be shown.
///
/// In other words, an applicability of `Safe` (the default) would suppress showing fixes or fix
@@ -1317,6 +1381,7 @@ impl Default for DisplayDiagnosticConfig {
preview: false,
hide_severity: false,
show_fix_status: false,
show_fix_diff: false,
fix_applicability: Applicability::Safe,
}
}
@@ -1370,6 +1435,11 @@ pub enum DiagnosticFormat {
/// Print diagnostics in the format expected by JUnit.
#[cfg(feature = "junit")]
Junit,
/// Print diagnostics in the JSON format used by GitLab [Code Quality] reports.
///
/// [Code Quality]: https://docs.gitlab.com/ee/ci/testing/code_quality.html#implement-a-custom-tool
#[cfg(feature = "serde")]
Gitlab,
}
/// A representation of the kinds of messages inside a diagnostic.
@@ -1428,7 +1498,7 @@ impl std::fmt::Display for ConciseMessage<'_> {
/// In most cases, callers shouldn't need to use this. Instead, there is
/// a blanket trait implementation for `IntoDiagnosticMessage` for
/// anything that implements `std::fmt::Display`.
#[derive(Clone, Debug, Eq, PartialEq, get_size2::GetSize)]
#[derive(Clone, Debug, Eq, PartialEq, Hash, get_size2::GetSize)]
pub struct DiagnosticMessage(Box<str>);
impl DiagnosticMessage {

View File

@@ -2,15 +2,15 @@ use std::borrow::Cow;
use std::collections::BTreeMap;
use std::path::Path;
use full::FullRenderer;
use ruff_annotate_snippets::{
Annotation as AnnotateAnnotation, Level as AnnotateLevel, Message as AnnotateMessage,
Renderer as AnnotateRenderer, Snippet as AnnotateSnippet,
Snippet as AnnotateSnippet,
};
use ruff_notebook::{Notebook, NotebookIndex};
use ruff_source_file::{LineIndex, OneIndexed, SourceCode};
use ruff_text_size::{TextLen, TextRange, TextSize};
use crate::diagnostic::stylesheet::DiagnosticStylesheet;
use crate::{
Db,
files::File,
@@ -31,6 +31,8 @@ mod azure;
mod concise;
mod full;
#[cfg(feature = "serde")]
mod gitlab;
#[cfg(feature = "serde")]
mod json;
#[cfg(feature = "serde")]
mod json_lines;
@@ -111,37 +113,7 @@ impl std::fmt::Display for DisplayDiagnostics<'_> {
ConciseRenderer::new(self.resolver, self.config).render(f, self.diagnostics)?;
}
DiagnosticFormat::Full => {
let stylesheet = if self.config.color {
DiagnosticStylesheet::styled()
} else {
DiagnosticStylesheet::plain()
};
let mut renderer = if self.config.color {
AnnotateRenderer::styled()
} else {
AnnotateRenderer::plain()
}
.cut_indicator("");
renderer = renderer
.error(stylesheet.error)
.warning(stylesheet.warning)
.info(stylesheet.info)
.note(stylesheet.note)
.help(stylesheet.help)
.line_no(stylesheet.line_no)
.emphasis(stylesheet.emphasis)
.none(stylesheet.none);
for diag in self.diagnostics {
let resolved = Resolved::new(self.resolver, diag);
let renderable = resolved.to_renderable(self.config.context);
for diag in renderable.diagnostics.iter() {
writeln!(f, "{}", renderer.render(diag.to_annotate()))?;
}
writeln!(f)?;
}
FullRenderer::new(self.resolver, self.config).render(f, self.diagnostics)?;
}
DiagnosticFormat::Azure => {
AzureRenderer::new(self.resolver).render(f, self.diagnostics)?;
@@ -166,6 +138,10 @@ impl std::fmt::Display for DisplayDiagnostics<'_> {
DiagnosticFormat::Junit => {
junit::JunitRenderer::new(self.resolver).render(f, self.diagnostics)?;
}
#[cfg(feature = "serde")]
DiagnosticFormat::Gitlab => {
gitlab::GitlabRenderer::new(self.resolver).render(f, self.diagnostics)?;
}
}
Ok(())
@@ -191,9 +167,13 @@ struct Resolved<'a> {
impl<'a> Resolved<'a> {
/// Creates a new resolved set of diagnostics.
fn new(resolver: &'a dyn FileResolver, diag: &'a Diagnostic) -> Resolved<'a> {
fn new(
resolver: &'a dyn FileResolver,
diag: &'a Diagnostic,
config: &DisplayDiagnosticConfig,
) -> Resolved<'a> {
let mut diagnostics = vec![];
diagnostics.push(ResolvedDiagnostic::from_diagnostic(resolver, diag));
diagnostics.push(ResolvedDiagnostic::from_diagnostic(resolver, config, diag));
for sub in &diag.inner.subs {
diagnostics.push(ResolvedDiagnostic::from_sub_diagnostic(resolver, sub));
}
@@ -223,12 +203,14 @@ struct ResolvedDiagnostic<'a> {
id: Option<String>,
message: String,
annotations: Vec<ResolvedAnnotation<'a>>,
is_fixable: bool,
}
impl<'a> ResolvedDiagnostic<'a> {
/// Resolve a single diagnostic.
fn from_diagnostic(
resolver: &'a dyn FileResolver,
config: &DisplayDiagnosticConfig,
diag: &'a Diagnostic,
) -> ResolvedDiagnostic<'a> {
let annotations: Vec<_> = diag
@@ -236,18 +218,45 @@ impl<'a> ResolvedDiagnostic<'a> {
.annotations
.iter()
.filter_map(|ann| {
let path = ann.span.file.path(resolver);
let path = ann
.span
.file
.relative_path(resolver)
.to_str()
.unwrap_or_else(|| ann.span.file.path(resolver));
let diagnostic_source = ann.span.file.diagnostic_source(resolver);
ResolvedAnnotation::new(path, &diagnostic_source, ann)
ResolvedAnnotation::new(path, &diagnostic_source, ann, resolver)
})
.collect();
let id = Some(diag.inner.id.to_string());
let message = diag.inner.message.as_str().to_string();
let id = if config.hide_severity {
// Either the rule code alone (e.g. `F401`), or the lint id with a colon (e.g.
// `invalid-syntax:`). When Ruff gets real severities, we should put the colon back in
// `DisplaySet::format_annotation` for both cases, but this is a small hack to improve
// the formatting of syntax errors for now. This should also be kept consistent with the
// concise formatting.
Some(diag.secondary_code().map_or_else(
|| format!("{id}:", id = diag.inner.id),
|code| code.to_string(),
))
} else {
Some(diag.inner.id.to_string())
};
let level = if config.hide_severity {
AnnotateLevel::None
} else {
diag.inner.severity.to_annotate()
};
ResolvedDiagnostic {
level: diag.inner.severity.to_annotate(),
level,
id,
message,
message: diag.inner.message.as_str().to_string(),
annotations,
is_fixable: diag
.fix()
.is_some_and(|fix| fix.applies(config.fix_applicability)),
}
}
@@ -261,9 +270,14 @@ impl<'a> ResolvedDiagnostic<'a> {
.annotations
.iter()
.filter_map(|ann| {
let path = ann.span.file.path(resolver);
let path = ann
.span
.file
.relative_path(resolver)
.to_str()
.unwrap_or_else(|| ann.span.file.path(resolver));
let diagnostic_source = ann.span.file.diagnostic_source(resolver);
ResolvedAnnotation::new(path, &diagnostic_source, ann)
ResolvedAnnotation::new(path, &diagnostic_source, ann, resolver)
})
.collect();
ResolvedDiagnostic {
@@ -271,6 +285,7 @@ impl<'a> ResolvedDiagnostic<'a> {
id: None,
message: diag.inner.message.as_str().to_string(),
annotations,
is_fixable: false,
}
}
@@ -301,20 +316,49 @@ impl<'a> ResolvedDiagnostic<'a> {
&prev.diagnostic_source.as_source_code(),
context,
prev.line_end,
prev.notebook_index.as_ref(),
)
.get();
let this_context_begins = context_before(
&ann.diagnostic_source.as_source_code(),
context,
ann.line_start,
ann.notebook_index.as_ref(),
)
.get();
// For notebooks, check whether the end of the
// previous annotation and the start of the current
// annotation are in different cells.
let prev_cell_index = prev.notebook_index.as_ref().map(|notebook_index| {
let prev_end = prev
.diagnostic_source
.as_source_code()
.line_column(prev.range.end());
notebook_index.cell(prev_end.line).unwrap_or_default().get()
});
let this_cell_index = ann.notebook_index.as_ref().map(|notebook_index| {
let this_start = ann
.diagnostic_source
.as_source_code()
.line_column(ann.range.start());
notebook_index
.cell(this_start.line)
.unwrap_or_default()
.get()
});
let in_different_cells = prev_cell_index != this_cell_index;
// The boundary case here is when `prev_context_ends`
// is exactly one less than `this_context_begins`. In
// that case, the context windows are adjacent and we
// should fall through below to add this annotation to
// the existing snippet.
if this_context_begins.saturating_sub(prev_context_ends) > 1 {
//
// For notebooks, also check that the context windows
// are in the same cell. Windows from different cells
// should never be considered adjacent.
if in_different_cells || this_context_begins.saturating_sub(prev_context_ends) > 1 {
snippet_by_path
.entry(path)
.or_default()
@@ -338,6 +382,7 @@ impl<'a> ResolvedDiagnostic<'a> {
id: self.id.as_deref(),
message: &self.message,
snippets_by_input,
is_fixable: self.is_fixable,
}
}
}
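The snippet-merging rule above (merge adjacent context windows unless they sit in different notebook cells) can be restated with plain integers; the toy function below is editorial and not part of the crate.

```
// Toy restatement of the snippet-merging rule with plain integers.
fn starts_new_snippet(
    prev_context_ends: usize,
    this_context_begins: usize,
    in_different_cells: bool,
) -> bool {
    // Windows that touch or overlap (gap of at most one line) stay in one snippet,
    // unless the annotations live in different notebook cells.
    in_different_cells || this_context_begins.saturating_sub(prev_context_ends) > 1
}

fn main() {
    assert!(!starts_new_snippet(10, 11, false)); // adjacent windows: merged
    assert!(starts_new_snippet(10, 13, false)); // gap of two lines: new snippet
    assert!(starts_new_snippet(10, 11, true)); // different cells: always split
}
```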
@@ -357,6 +402,8 @@ struct ResolvedAnnotation<'a> {
line_end: OneIndexed,
message: Option<&'a str>,
is_primary: bool,
is_file_level: bool,
notebook_index: Option<NotebookIndex>,
}
impl<'a> ResolvedAnnotation<'a> {
@@ -369,6 +416,7 @@ impl<'a> ResolvedAnnotation<'a> {
path: &'a str,
diagnostic_source: &DiagnosticSource,
ann: &'a Annotation,
resolver: &'a dyn FileResolver,
) -> Option<ResolvedAnnotation<'a>> {
let source = diagnostic_source.as_source_code();
let (range, line_start, line_end) = match (ann.span.range(), ann.message.is_some()) {
@@ -402,6 +450,8 @@ impl<'a> ResolvedAnnotation<'a> {
line_end,
message: ann.get_message(),
is_primary: ann.is_primary,
is_file_level: ann.is_file_level,
notebook_index: resolver.notebook_index(&ann.span.file),
})
}
}
@@ -436,6 +486,10 @@ struct RenderableDiagnostic<'r> {
/// should be from the same file, and none of the snippets inside of a
/// collection should overlap with one another or be directly adjacent.
snippets_by_input: Vec<RenderableSnippets<'r>>,
/// Whether or not the diagnostic is fixable.
///
/// This is rendered as a `[*]` indicator after the diagnostic ID.
is_fixable: bool,
}
impl RenderableDiagnostic<'_> {
@@ -448,7 +502,7 @@ impl RenderableDiagnostic<'_> {
.iter()
.map(|snippet| snippet.to_annotate(path))
});
let mut message = self.level.title(self.message);
let mut message = self.level.title(self.message).is_fixable(self.is_fixable);
if let Some(id) = self.id {
message = message.id(id);
}
@@ -530,17 +584,27 @@ struct RenderableSnippet<'r> {
/// Whether this snippet contains at least one primary
/// annotation.
has_primary: bool,
/// The cell index in a Jupyter notebook, if this snippet refers to a notebook.
///
/// This is used for rendering annotations with offsets like `cell 1:2:3` instead of simple row
/// and column numbers.
cell_index: Option<usize>,
}
impl<'r> RenderableSnippet<'r> {
/// Creates a new snippet with one or more annotations that is ready to be
/// renderer.
/// rendered.
///
/// The first line of the snippet is the smallest line number on which one
/// of the annotations begins, minus the context window size. The last line
/// is the largest line number on which one of the annotations ends, plus
/// the context window size.
///
/// For Jupyter notebooks, the context window may also be truncated at cell
/// boundaries. If multiple annotations are present, and they point to
/// different cells, these will have already been split into separate
/// snippets by `ResolvedDiagnostic::to_renderable`.
///
/// Callers should guarantee that the `input` on every `ResolvedAnnotation`
/// given is identical.
///
@@ -557,19 +621,19 @@ impl<'r> RenderableSnippet<'r> {
"creating a renderable snippet requires a non-zero number of annotations",
);
let diagnostic_source = &anns[0].diagnostic_source;
let notebook_index = anns[0].notebook_index.as_ref();
let source = diagnostic_source.as_source_code();
let has_primary = anns.iter().any(|ann| ann.is_primary);
let line_start = context_before(
&source,
context,
anns.iter().map(|ann| ann.line_start).min().unwrap(),
);
let line_end = context_after(
&source,
context,
anns.iter().map(|ann| ann.line_end).max().unwrap(),
);
let content_start_index = anns.iter().map(|ann| ann.line_start).min().unwrap();
let line_start = context_before(&source, context, content_start_index, notebook_index);
let start = source.line_column(anns[0].range.start());
let cell_index = notebook_index
.map(|notebook_index| notebook_index.cell(start.line).unwrap_or_default().get());
let content_end_index = anns.iter().map(|ann| ann.line_end).max().unwrap();
let line_end = context_after(&source, context, content_end_index, notebook_index);
let snippet_start = source.line_start(line_start);
let snippet_end = source.line_end(line_end);
@@ -577,6 +641,22 @@ impl<'r> RenderableSnippet<'r> {
.as_source_code()
.slice(TextRange::new(snippet_start, snippet_end));
// Strip the BOM from the beginning of the snippet, if present. Doing this here saves us the
// trouble of updating the annotation ranges in `replace_unprintable`, and also allows us to
// check that the BOM is at the very beginning of the file, not just the beginning of the
// snippet.
const BOM: char = '\u{feff}';
let bom_len = BOM.text_len();
let (snippet, snippet_start) =
if snippet_start == TextSize::ZERO && snippet.starts_with(BOM) {
(
&snippet[bom_len.to_usize()..],
snippet_start + TextSize::new(bom_len.to_u32()),
)
} else {
(snippet, snippet_start)
};
let annotations = anns
.iter()
.map(|ann| RenderableAnnotation::new(snippet_start, ann))
@@ -585,14 +665,20 @@ impl<'r> RenderableSnippet<'r> {
let EscapedSourceCode {
text: snippet,
annotations,
} = replace_whitespace_and_unprintable(snippet, annotations)
.fix_up_empty_spans_after_line_terminator();
} = replace_unprintable(snippet, annotations).fix_up_empty_spans_after_line_terminator();
let line_start = notebook_index.map_or(line_start, |notebook_index| {
notebook_index
.cell_row(line_start)
.unwrap_or(OneIndexed::MIN)
});
RenderableSnippet {
snippet,
line_start,
annotations,
has_primary,
cell_index,
}
}
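The BOM handling above only strips a leading U+FEFF when the snippet starts at offset zero of the file; a standalone sketch of the same idea using only the standard library (the source text is hypothetical):

```
// Standalone sketch of stripping a leading BOM from the first snippet of a file.
fn main() {
    const BOM: char = '\u{feff}';
    let source = "\u{feff}import foo";
    let snippet_start = 0usize;
    // Only strip when the snippet really starts at the beginning of the file.
    let stripped = if snippet_start == 0 {
        source.strip_prefix(BOM).unwrap_or(source)
    } else {
        source
    };
    assert_eq!(stripped, "import foo");
}
```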
@@ -606,6 +692,7 @@ impl<'r> RenderableSnippet<'r> {
.iter()
.map(RenderableAnnotation::to_annotate),
)
.cell_index(self.cell_index)
}
}
@@ -620,6 +707,8 @@ struct RenderableAnnotation<'r> {
message: Option<&'r str>,
/// Whether this annotation is considered "primary" or not.
is_primary: bool,
/// Whether this annotation applies to an entire file, rather than a snippet within it.
is_file_level: bool,
}
impl<'r> RenderableAnnotation<'r> {
@@ -632,11 +721,16 @@ impl<'r> RenderableAnnotation<'r> {
/// lifetime parameter here refers to the lifetime of the resolver that
/// created the given `ResolvedAnnotation`.
fn new(snippet_start: TextSize, ann: &'_ ResolvedAnnotation<'r>) -> RenderableAnnotation<'r> {
let range = ann.range - snippet_start;
// This should only ever saturate if a BOM is present _and_ the annotation range points
// before the BOM (i.e. at offset 0). In Ruff this typically results from the use of
// `TextRange::default()` for a diagnostic range instead of a range relative to file
// contents.
let range = ann.range.checked_sub(snippet_start).unwrap_or(ann.range);
RenderableAnnotation {
range,
message: ann.message,
is_primary: ann.is_primary,
is_file_level: ann.is_file_level,
}
}
@@ -662,7 +756,7 @@ impl<'r> RenderableAnnotation<'r> {
if let Some(message) = self.message {
ann = ann.label(message);
}
ann
ann.is_file_level(self.is_file_level)
}
}
@@ -789,7 +883,15 @@ pub struct Input {
///
/// The line number returned is guaranteed to be less than
/// or equal to `start`.
fn context_before(source: &SourceCode<'_, '_>, len: usize, start: OneIndexed) -> OneIndexed {
///
/// In Jupyter notebooks, lines outside the cell containing
/// `start` will be omitted.
fn context_before(
source: &SourceCode<'_, '_>,
len: usize,
start: OneIndexed,
notebook_index: Option<&NotebookIndex>,
) -> OneIndexed {
let mut line = start.saturating_sub(len);
// Trim leading empty lines.
while line < start {
@@ -798,6 +900,17 @@ fn context_before(source: &SourceCode<'_, '_>, len: usize, start: OneIndexed) ->
}
line = line.saturating_add(1);
}
if let Some(index) = notebook_index {
let content_start_cell = index.cell(start).unwrap_or(OneIndexed::MIN);
while line < start {
if index.cell(line).unwrap_or(OneIndexed::MIN) == content_start_cell {
break;
}
line = line.saturating_add(1);
}
}
line
}
@@ -807,7 +920,15 @@ fn context_before(source: &SourceCode<'_, '_>, len: usize, start: OneIndexed) ->
/// The line number returned is guaranteed to be greater
/// than or equal to `start` and no greater than the
/// number of lines in `source`.
fn context_after(source: &SourceCode<'_, '_>, len: usize, start: OneIndexed) -> OneIndexed {
///
/// In Jupyter notebooks, lines outside the cell containing
/// `start` will be omitted.
fn context_after(
source: &SourceCode<'_, '_>,
len: usize,
start: OneIndexed,
notebook_index: Option<&NotebookIndex>,
) -> OneIndexed {
let max_lines = OneIndexed::from_zero_indexed(source.line_count());
let mut line = start.saturating_add(len).min(max_lines);
// Trim trailing empty lines.
@@ -817,6 +938,17 @@ fn context_after(source: &SourceCode<'_, '_>, len: usize, start: OneIndexed) ->
}
line = line.saturating_sub(1);
}
if let Some(index) = notebook_index {
let content_end_cell = index.cell(start).unwrap_or(OneIndexed::MIN);
while line > start {
if index.cell(line).unwrap_or(OneIndexed::MIN) == content_end_cell {
break;
}
line = line.saturating_sub(1);
}
}
line
}
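A toy illustration of the window arithmetic in `context_before`/`context_after`, using plain integers instead of `OneIndexed` and ignoring the empty-line and notebook-cell trimming:

```
// Toy illustration of the context-window bounds (no empty-line or cell trimming).
fn main() {
    let (context, first_annotated, last_annotated, line_count) = (2usize, 5usize, 6usize, 7usize);
    let window_start = first_annotated.saturating_sub(context); // 3
    let window_end = (last_annotated + context).min(line_count); // clamped to the file length
    assert_eq!((window_start, window_end), (3, 7));
}
```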
@@ -828,13 +960,18 @@ fn relativize_path<'p>(cwd: &SystemPath, path: &'p str) -> &'p str {
path
}
/// Given some source code and annotation ranges, this routine replaces tabs
/// with ASCII whitespace, and unprintable characters with printable
/// representations of them.
/// Given some source code and annotation ranges, this routine replaces
/// unprintable characters with printable representations of them.
///
/// The source code and annotations returned are updated to reflect changes made
/// to the source code (if any).
fn replace_whitespace_and_unprintable<'r>(
///
/// We don't need to normalize whitespace, such as converting tabs to spaces,
/// because `annotate-snippets` handles that internally. Similarly, it's safe to
/// modify the annotation ranges by inserting 3-byte Unicode replacements
/// because `annotate-snippets` will account for their actual width when
/// rendering and displaying the column to the user.
fn replace_unprintable<'r>(
source: &'r str,
mut annotations: Vec<RenderableAnnotation<'r>>,
) -> EscapedSourceCode<'r> {
@@ -866,48 +1003,22 @@ fn replace_whitespace_and_unprintable<'r>(
}
};
const TAB_SIZE: usize = 4;
let mut width = 0;
let mut column = 0;
let mut last_end = 0;
let mut result = String::new();
for (index, c) in source.char_indices() {
let old_width = width;
match c {
'\n' | '\r' => {
width = 0;
column = 0;
}
'\t' => {
let tab_offset = TAB_SIZE - (column % TAB_SIZE);
width += tab_offset;
column += tab_offset;
// normalize `\r` line endings but don't double `\r\n`
if c == '\r' && !source[index + 1..].starts_with("\n") {
result.push_str(&source[last_end..index]);
result.push('\n');
last_end = index + 1;
} else if let Some(printable) = unprintable_replacement(c) {
result.push_str(&source[last_end..index]);
let tab_width =
u32::try_from(width - old_width).expect("small width because of tab size");
result.push_str(&source[last_end..index]);
let len = printable.text_len().to_u32();
update_ranges(result.text_len().to_usize(), len);
update_ranges(result.text_len().to_usize(), tab_width);
for _ in 0..tab_width {
result.push(' ');
}
last_end = index + 1;
}
_ => {
width += unicode_width::UnicodeWidthChar::width(c).unwrap_or(0);
column += 1;
if let Some(printable) = unprintable_replacement(c) {
result.push_str(&source[last_end..index]);
let len = printable.text_len().to_u32();
update_ranges(result.text_len().to_usize(), len);
result.push(printable);
last_end = index + 1;
}
}
result.push(printable);
last_end = index + 1;
}
}
@@ -2517,6 +2628,13 @@ watermelon
self.config = config;
}
/// Show a diff for the fix when rendering.
pub(super) fn show_fix_diff(&mut self, yes: bool) {
let mut config = std::mem::take(&mut self.config);
config = config.show_fix_diff(yes);
self.config = config;
}
/// The lowest fix applicability to show when rendering.
pub(super) fn fix_applicability(&mut self, applicability: Applicability) {
let mut config = std::mem::take(&mut self.config);
@@ -2544,7 +2662,12 @@ watermelon
/// of the corresponding line minus one. (The "minus one" is because
/// otherwise, the span will end where the next line begins, and this
/// confuses `ruff_annotate_snippets` as of 2025-03-13.)
fn span(&self, path: &str, line_offset_start: &str, line_offset_end: &str) -> Span {
pub(super) fn span(
&self,
path: &str,
line_offset_start: &str,
line_offset_end: &str,
) -> Span {
let span = self.path(path);
let file = span.expect_ty_file();
@@ -2567,7 +2690,7 @@ watermelon
}
/// Like `span`, but only attaches a file path.
fn path(&self, path: &str) -> Span {
pub(super) fn path(&self, path: &str) -> Span {
let file = system_path_to_file(&self.db, path).unwrap();
Span::from(file)
}
@@ -2681,7 +2804,7 @@ watermelon
///
/// See the docs on `TestEnvironment::span` for the meaning of
/// `path`, `line_offset_start` and `line_offset_end`.
fn secondary(
pub(super) fn secondary(
mut self,
path: &str,
line_offset_start: &str,
@@ -2717,7 +2840,7 @@ watermelon
}
/// Adds a "help" sub-diagnostic with the given message.
fn help(mut self, message: impl IntoDiagnosticMessage) -> DiagnosticBuilder<'e> {
pub(super) fn help(mut self, message: impl IntoDiagnosticMessage) -> DiagnosticBuilder<'e> {
self.diag.help(message);
self
}
@@ -2877,10 +3000,10 @@ if call(foo
env.format(format);
let diagnostics = vec![
env.invalid_syntax("SyntaxError: Expected one or more symbol names after import")
env.invalid_syntax("Expected one or more symbol names after import")
.primary("syntax_errors.py", "1:14", "1:15", "")
.build(),
env.invalid_syntax("SyntaxError: Expected ')', found newline")
env.invalid_syntax("Expected ')', found newline")
.primary("syntax_errors.py", "3:11", "3:12", "")
.build(),
];
@@ -2888,7 +3011,8 @@ if call(foo
(env, diagnostics)
}
/// Create Ruff-style diagnostics for testing the various output formats for a notebook.
/// A Jupyter notebook for testing diagnostics.
///
/// The concatenated cells look like this:
///
@@ -2908,17 +3032,7 @@ if call(foo
/// The first diagnostic is on the unused `os` import with location cell 1, row 2, column 8
/// (`cell 1:2:8`). The second diagnostic is the unused `math` import at `cell 2:2:8`, and the
/// third diagnostic is an unfixable unused variable at `cell 3:4:5`.
#[allow(
dead_code,
reason = "This is currently only used for JSON but will be needed soon for other formats"
)]
pub(crate) fn create_notebook_diagnostics(
format: DiagnosticFormat,
) -> (TestEnvironment, Vec<Diagnostic>) {
let mut env = TestEnvironment::new();
env.add(
"notebook.ipynb",
r##"
pub(super) static NOTEBOOK: &str = r##"
{
"cells": [
{
@@ -2957,8 +3071,14 @@ if call(foo
"nbformat": 4,
"nbformat_minor": 5
}
"##,
);
"##;
/// Create Ruff-style diagnostics for testing the various output formats for a notebook.
pub(crate) fn create_notebook_diagnostics(
format: DiagnosticFormat,
) -> (TestEnvironment, Vec<Diagnostic>) {
let mut env = TestEnvironment::new();
env.add("notebook.ipynb", NOTEBOOK);
env.format(format);
let diagnostics = vec![

View File

@@ -50,10 +50,8 @@ impl AzureRenderer<'_> {
}
writeln!(
f,
"{code}]{body}",
code = diag
.secondary_code()
.map_or_else(String::new, |code| format!("code={code};")),
"code={code};]{body}",
code = diag.secondary_code_or_id(),
body = diag.body(),
)?;
}

View File

@@ -69,6 +69,12 @@ impl<'a> ConciseRenderer<'a> {
"{code} ",
code = fmt_styled(code, stylesheet.secondary_code)
)?;
} else {
write!(
f,
"{id}: ",
id = fmt_styled(diag.inner.id.as_str(), stylesheet.secondary_code)
)?;
}
if self.config.show_fix_status {
if let Some(fix) = diag.fix() {
@@ -156,8 +162,8 @@ mod tests {
env.show_fix_status(true);
env.fix_applicability(Applicability::DisplayOnly);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
syntax_errors.py:1:15: SyntaxError: Expected one or more symbol names after import
syntax_errors.py:3:12: SyntaxError: Expected ')', found newline
syntax_errors.py:1:15: invalid-syntax: Expected one or more symbol names after import
syntax_errors.py:3:12: invalid-syntax: Expected ')', found newline
");
}
@@ -165,8 +171,8 @@ mod tests {
fn syntax_errors() {
let (env, diagnostics) = create_syntax_error_diagnostics(DiagnosticFormat::Concise);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
syntax_errors.py:1:15: error[invalid-syntax] SyntaxError: Expected one or more symbol names after import
syntax_errors.py:3:12: error[invalid-syntax] SyntaxError: Expected ')', found newline
syntax_errors.py:1:15: error[invalid-syntax] Expected one or more symbol names after import
syntax_errors.py:3:12: error[invalid-syntax] Expected ')', found newline
");
}

View File

@@ -1,8 +1,311 @@
use std::borrow::Cow;
use std::num::NonZeroUsize;
use anstyle::Style;
use ruff_notebook::NotebookIndex;
use similar::{ChangeTag, TextDiff};
use ruff_annotate_snippets::Renderer as AnnotateRenderer;
use ruff_diagnostics::{Applicability, Fix};
use ruff_source_file::OneIndexed;
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
use crate::diagnostic::render::{FileResolver, Resolved};
use crate::diagnostic::stylesheet::{DiagnosticStylesheet, fmt_styled};
use crate::diagnostic::{Diagnostic, DiagnosticSource, DisplayDiagnosticConfig};
pub(super) struct FullRenderer<'a> {
resolver: &'a dyn FileResolver,
config: &'a DisplayDiagnosticConfig,
}
impl<'a> FullRenderer<'a> {
pub(super) fn new(resolver: &'a dyn FileResolver, config: &'a DisplayDiagnosticConfig) -> Self {
Self { resolver, config }
}
pub(super) fn render(
&self,
f: &mut std::fmt::Formatter,
diagnostics: &[Diagnostic],
) -> std::fmt::Result {
let stylesheet = if self.config.color {
DiagnosticStylesheet::styled()
} else {
DiagnosticStylesheet::plain()
};
let mut renderer = if self.config.color {
AnnotateRenderer::styled()
} else {
AnnotateRenderer::plain()
}
.cut_indicator("");
renderer = renderer
.error(stylesheet.error)
.warning(stylesheet.warning)
.info(stylesheet.info)
.note(stylesheet.note)
.help(stylesheet.help)
.line_no(stylesheet.line_no)
.emphasis(stylesheet.emphasis)
.none(stylesheet.none);
for diag in diagnostics {
let resolved = Resolved::new(self.resolver, diag, self.config);
let renderable = resolved.to_renderable(self.config.context);
for diag in renderable.diagnostics.iter() {
writeln!(f, "{}", renderer.render(diag.to_annotate()))?;
}
writeln!(f)?;
if self.config.show_fix_diff {
if let Some(diff) = Diff::from_diagnostic(diag, &stylesheet, self.resolver) {
writeln!(f, "{diff}")?;
}
}
}
Ok(())
}
}
/// Renders a diff that shows the code fixes.
///
/// The implementation isn't fully fleshed out and is only used by tests. Before using it in production, consider:
/// * Improve layout
/// * Replace tabs with spaces for a consistent experience across terminals
/// * Replace zero-width whitespaces
/// * Print a simpler diff if only a single line has changed
/// * Compute the diff from the `Edit` because diff calculation is expensive.
struct Diff<'a> {
fix: &'a Fix,
diagnostic_source: DiagnosticSource,
notebook_index: Option<NotebookIndex>,
stylesheet: &'a DiagnosticStylesheet,
}
impl<'a> Diff<'a> {
fn from_diagnostic(
diagnostic: &'a Diagnostic,
stylesheet: &'a DiagnosticStylesheet,
resolver: &'a dyn FileResolver,
) -> Option<Diff<'a>> {
let file = &diagnostic.primary_span_ref()?.file;
Some(Diff {
fix: diagnostic.fix()?,
diagnostic_source: file.diagnostic_source(resolver),
notebook_index: resolver.notebook_index(file),
stylesheet,
})
}
}
impl std::fmt::Display for Diff<'_> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let source_code = self.diagnostic_source.as_source_code();
let source_text = source_code.text();
// Partition the source code into end offsets for each cell. If `self.notebook_index` is
// `None`, indicating a regular script file, all the lines will be in one "cell" under the
// `None` key.
let cells = if let Some(notebook_index) = &self.notebook_index {
let mut last_cell = OneIndexed::MIN;
let mut cells: Vec<(Option<OneIndexed>, TextSize)> = Vec::new();
for (row, cell) in notebook_index.iter() {
if cell != last_cell {
let offset = source_code.line_start(row);
cells.push((Some(last_cell), offset));
last_cell = cell;
}
}
cells.push((Some(last_cell), source_text.text_len()));
cells
} else {
vec![(None, source_text.text_len())]
};
let message = match self.fix.applicability() {
// TODO(zanieb): Adjust this messaging once it's user-facing
Applicability::Safe => "Safe fix",
Applicability::Unsafe => "Unsafe fix",
Applicability::DisplayOnly => "Display-only fix",
};
// TODO(brent) `stylesheet.separator` is cyan rather than blue, as we had before. I think
// we're getting rid of this soon anyway, so I didn't think it was worth adding another
// style to the stylesheet temporarily. The color doesn't appear at all in the snapshot
// tests, which is the only place these are currently used.
writeln!(f, " {}", fmt_styled(message, self.stylesheet.separator))?;
let mut last_end = TextSize::ZERO;
for (cell, offset) in cells {
let range = TextRange::new(last_end, offset);
last_end = offset;
let input = source_code.slice(range);
let mut output = String::with_capacity(input.len());
let mut last_end = range.start();
let mut applied = 0;
for edit in self.fix.edits() {
if range.contains_range(edit.range()) {
output.push_str(source_code.slice(TextRange::new(last_end, edit.start())));
output.push_str(edit.content().unwrap_or_default());
last_end = edit.end();
applied += 1;
}
}
// No edits were applied, so there's no need to diff.
if applied == 0 {
continue;
}
output.push_str(&source_text[usize::from(last_end)..usize::from(range.end())]);
let diff = TextDiff::from_lines(input, &output);
let (largest_old, largest_new) = diff
.ops()
.last()
.map(|op| (op.old_range().start, op.new_range().start))
.unwrap_or_default();
let digit_with = OneIndexed::from_zero_indexed(largest_new.max(largest_old)).digits();
if let Some(cell) = cell {
// Room for 2 digits, 2 x 1 space before each digit, 1 space, and 1 `|`. This
// centers the three colons on the pipe.
writeln!(f, "{:>1$} cell {cell}", ":::", 2 * digit_with.get() + 4)?;
}
for (idx, group) in diff.grouped_ops(3).iter().enumerate() {
if idx > 0 {
writeln!(f, "{:-^1$}", "-", 80)?;
}
for op in group {
for change in diff.iter_inline_changes(op) {
let sign = match change.tag() {
ChangeTag::Delete => "-",
ChangeTag::Insert => "+",
ChangeTag::Equal => " ",
};
let line_style = LineStyle::from(change.tag(), self.stylesheet);
let old_index = change.old_index().map(OneIndexed::from_zero_indexed);
let new_index = change.new_index().map(OneIndexed::from_zero_indexed);
write!(
f,
"{} {} |{}",
Line {
index: old_index,
width: digit_with,
},
Line {
index: new_index,
width: digit_with,
},
fmt_styled(line_style.apply_to(sign), self.stylesheet.emphasis),
)?;
for (emphasized, value) in change.iter_strings_lossy() {
let value = show_nonprinting(&value);
if emphasized {
write!(
f,
"{}",
fmt_styled(
line_style.apply_to(&value),
self.stylesheet.underline
)
)?;
} else {
write!(f, "{}", line_style.apply_to(&value))?;
}
}
if change.missing_newline() {
writeln!(f)?;
}
}
}
}
}
Ok(())
}
}
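The cell-partitioning step at the top of the `Display` impl can be restated with plain types: walk the `(row, cell)` pairs, and each time the cell changes, record where the previous cell ends, with a final entry covering the tail of the file. The sketch below is editorial and uses rows in place of byte offsets.

```
// Simplified restatement of the per-cell partitioning; rows stand in for byte offsets.
fn main() {
    // (row, cell) pairs as a notebook index would yield them.
    let index = [(1, 1), (2, 1), (3, 2), (4, 2), (5, 3)];
    let total_rows = 5;

    let mut last_cell = 1;
    let mut cells: Vec<(usize, usize)> = Vec::new();
    for &(row, cell) in &index {
        if cell != last_cell {
            // The previous cell ends where the new cell's first row starts.
            cells.push((last_cell, row));
            last_cell = cell;
        }
    }
    cells.push((last_cell, total_rows + 1));

    assert_eq!(cells, vec![(1, 3), (2, 5), (3, 6)]);
}
```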
struct LineStyle {
style: Style,
}
impl LineStyle {
fn apply_to(&self, input: &str) -> impl std::fmt::Display {
fmt_styled(input, self.style)
}
fn from(value: ChangeTag, stylesheet: &DiagnosticStylesheet) -> LineStyle {
match value {
ChangeTag::Equal => LineStyle {
style: stylesheet.none,
},
ChangeTag::Delete => LineStyle {
style: stylesheet.deletion,
},
ChangeTag::Insert => LineStyle {
style: stylesheet.insertion,
},
}
}
}
struct Line {
index: Option<OneIndexed>,
width: NonZeroUsize,
}
impl std::fmt::Display for Line {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
match self.index {
None => {
for _ in 0..self.width.get() {
f.write_str(" ")?;
}
Ok(())
}
Some(idx) => write!(f, "{:<width$}", idx, width = self.width.get()),
}
}
}
fn show_nonprinting(s: &str) -> Cow<'_, str> {
if s.find(['\x07', '\x08', '\x1b', '\x7f']).is_some() {
Cow::Owned(
s.replace('\x07', "")
.replace('\x08', "")
.replace('\x1b', "")
.replace('\x7f', ""),
)
} else {
Cow::Borrowed(s)
}
}
#[cfg(test)]
mod tests {
use ruff_diagnostics::{Applicability, Fix};
use ruff_text_size::{TextLen, TextRange, TextSize};
use crate::diagnostic::{
DiagnosticFormat, Severity,
render::tests::{TestEnvironment, create_diagnostics, create_syntax_error_diagnostics},
Annotation, DiagnosticFormat, Severity,
render::tests::{
NOTEBOOK, TestEnvironment, create_diagnostics, create_notebook_diagnostics,
create_syntax_error_diagnostics,
},
};
#[test]
@@ -42,7 +345,7 @@ mod tests {
fn syntax_errors() {
let (env, diagnostics) = create_syntax_error_diagnostics(DiagnosticFormat::Full);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
error[invalid-syntax]: SyntaxError: Expected one or more symbol names after import
error[invalid-syntax]: Expected one or more symbol names after import
--> syntax_errors.py:1:15
|
1 | from os import
@@ -51,7 +354,71 @@ mod tests {
3 | if call(foo
|
error[invalid-syntax]: SyntaxError: Expected ')', found newline
error[invalid-syntax]: Expected ')', found newline
--> syntax_errors.py:3:12
|
1 | from os import
2 |
3 | if call(foo
| ^
4 | def bar():
5 | pass
|
");
}
#[test]
fn hide_severity_output() {
let (mut env, diagnostics) = create_diagnostics(DiagnosticFormat::Full);
env.hide_severity(true);
env.fix_applicability(Applicability::DisplayOnly);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r#"
F401 [*] `os` imported but unused
--> fib.py:1:8
|
1 | import os
| ^^
|
help: Remove unused import: `os`
F841 [*] Local variable `x` is assigned to but never used
--> fib.py:6:5
|
4 | def fibonacci(n):
5 | """Compute the nth number in the Fibonacci sequence."""
6 | x = 1
| ^
7 | if n == 0:
8 | return 0
|
help: Remove assignment to unused variable `x`
F821 Undefined name `a`
--> undef.py:1:4
|
1 | if a == 1: pass
| ^
|
"#);
}
#[test]
fn hide_severity_syntax_errors() {
let (mut env, diagnostics) = create_syntax_error_diagnostics(DiagnosticFormat::Full);
env.hide_severity(true);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
invalid-syntax: Expected one or more symbol names after import
--> syntax_errors.py:1:15
|
1 | from os import
| ^
2 |
3 | if call(foo
|
invalid-syntax: Expected ')', found newline
--> syntax_errors.py:3:12
|
1 | from os import
@@ -116,7 +483,7 @@ print()
/// For example, without the fix, we get diagnostics like this:
///
/// ```
/// error[invalid-character-sub]: Invalid unescaped character SUB, use "\x1A" instead
/// error[invalid-character-sub]: Invalid unescaped character SUB, use "\x1a" instead
/// --> example.py:1:25
/// |
/// 1 | nested_fstrings = f'␈{f'{f'␛'}'}'
@@ -136,13 +503,13 @@ print()
.builder(
"invalid-character-sub",
Severity::Error,
r#"Invalid unescaped character SUB, use "\x1A" instead"#,
r#"Invalid unescaped character SUB, use "\x1a" instead"#,
)
.primary("example.py", "1:24", "1:24", "")
.build();
insta::assert_snapshot!(env.render(&diagnostic), @r#"
error[invalid-character-sub]: Invalid unescaped character SUB, use "\x1A" instead
error[invalid-character-sub]: Invalid unescaped character SUB, use "\x1a" instead
--> example.py:1:25
|
1 | nested_fstrings = f'␈{f'{f'␛'}'}'
@@ -161,13 +528,13 @@ print()
.builder(
"invalid-character-sub",
Severity::Error,
r#"Invalid unescaped character SUB, use "\x1A" instead"#,
r#"Invalid unescaped character SUB, use "\x1a" instead"#,
)
.primary("example.py", "1:1", "1:1", "")
.build();
insta::assert_snapshot!(env.render(&diagnostic), @r#"
error[invalid-character-sub]: Invalid unescaped character SUB, use "\x1A" instead
error[invalid-character-sub]: Invalid unescaped character SUB, use "\x1a" instead
--> example.py:1:2
|
1 | ␈
@@ -177,4 +544,361 @@ print()
Ok(())
}
/// Ensure that the header column matches the column in the user's input, even if we've replaced
/// tabs with spaces for rendering purposes.
#[test]
fn tab_replacement() {
let mut env = TestEnvironment::new();
env.add("example.py", "def foo():\n\treturn 1");
env.format(DiagnosticFormat::Full);
let diagnostic = env.err().primary("example.py", "2:1", "2:9", "").build();
insta::assert_snapshot!(env.render(&diagnostic), @r"
error[test-diagnostic]: main diagnostic message
--> example.py:2:2
|
1 | def foo():
2 | return 1
| ^^^^^^^^
|
");
}
/// For file-level diagnostics, we expect to see the header line with the diagnostic information
/// and the `-->` line with the file information but no lines of source code.
#[test]
fn file_level() {
let mut env = TestEnvironment::new();
env.add("example.py", "");
env.format(DiagnosticFormat::Full);
let mut diagnostic = env.err().build();
let span = env.path("example.py").with_range(TextRange::default());
let mut annotation = Annotation::primary(span);
annotation.set_file_level(true);
diagnostic.annotate(annotation);
insta::assert_snapshot!(env.render(&diagnostic), @r"
error[test-diagnostic]: main diagnostic message
--> example.py:1:1
");
}
/// Check that ranges in notebooks are remapped relative to the cells.
#[test]
fn notebook_output() {
let (env, diagnostics) = create_notebook_diagnostics(DiagnosticFormat::Full);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
error[unused-import][*]: `os` imported but unused
--> notebook.ipynb:cell 1:2:8
|
1 | # cell 1
2 | import os
| ^^
|
help: Remove unused import: `os`
error[unused-import][*]: `math` imported but unused
--> notebook.ipynb:cell 2:2:8
|
1 | # cell 2
2 | import math
| ^^^^
3 |
4 | print('hello world')
|
help: Remove unused import: `math`
error[unused-variable]: Local variable `x` is assigned to but never used
--> notebook.ipynb:cell 3:4:5
|
2 | def foo():
3 | print()
4 | x = 1
| ^
|
help: Remove assignment to unused variable `x`
");
}
/// Check notebook handling for multiple annotations in a single diagnostic that span cells.
#[test]
fn notebook_output_multiple_annotations() {
let mut env = TestEnvironment::new();
env.add("notebook.ipynb", NOTEBOOK);
let diagnostics = vec![
// adjacent context windows
env.builder("unused-import", Severity::Error, "`os` imported but unused")
.primary("notebook.ipynb", "2:7", "2:9", "")
.secondary("notebook.ipynb", "4:7", "4:11", "second cell")
.help("Remove unused import: `os`")
.build(),
// non-adjacent context windows
env.builder("unused-import", Severity::Error, "`os` imported but unused")
.primary("notebook.ipynb", "2:7", "2:9", "")
.secondary("notebook.ipynb", "10:4", "10:5", "second cell")
.help("Remove unused import: `os`")
.build(),
// adjacent context windows in the same cell
env.err()
.primary("notebook.ipynb", "4:7", "4:11", "second cell")
.secondary("notebook.ipynb", "6:0", "6:5", "print statement")
.help("Remove `print` statement")
.build(),
];
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
error[unused-import]: `os` imported but unused
--> notebook.ipynb:cell 1:2:8
|
1 | # cell 1
2 | import os
| ^^
|
::: notebook.ipynb:cell 2:2:8
|
1 | # cell 2
2 | import math
| ---- second cell
3 |
4 | print('hello world')
|
help: Remove unused import: `os`
error[unused-import]: `os` imported but unused
--> notebook.ipynb:cell 1:2:8
|
1 | # cell 1
2 | import os
| ^^
|
::: notebook.ipynb:cell 3:4:5
|
2 | def foo():
3 | print()
4 | x = 1
| - second cell
|
help: Remove unused import: `os`
error[test-diagnostic]: main diagnostic message
--> notebook.ipynb:cell 2:2:8
|
1 | # cell 2
2 | import math
| ^^^^ second cell
3 |
4 | print('hello world')
| ----- print statement
|
help: Remove `print` statement
");
}
/// Test that we remap notebook cell line numbers in the diff as well as the main diagnostic.
#[test]
fn notebook_output_with_diff() {
let (mut env, diagnostics) = create_notebook_diagnostics(DiagnosticFormat::Full);
env.show_fix_diff(true);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
error[unused-import][*]: `os` imported but unused
--> notebook.ipynb:cell 1:2:8
|
1 | # cell 1
2 | import os
| ^^
|
help: Remove unused import: `os`
Safe fix
::: cell 1
1 1 | # cell 1
2 |-import os
error[unused-import][*]: `math` imported but unused
--> notebook.ipynb:cell 2:2:8
|
1 | # cell 2
2 | import math
| ^^^^
3 |
4 | print('hello world')
|
help: Remove unused import: `math`
Safe fix
::: cell 2
1 1 | # cell 2
2 |-import math
3 2 |
4 3 | print('hello world')
error[unused-variable]: Local variable `x` is assigned to but never used
--> notebook.ipynb:cell 3:4:5
|
2 | def foo():
3 | print()
4 | x = 1
| ^
|
help: Remove assignment to unused variable `x`
Unsafe fix
::: cell 3
1 1 | # cell 3
2 2 | def foo():
3 3 | print()
4 |- x = 1
5 4 |
");
}
#[test]
fn notebook_output_with_diff_spanning_cells() {
let (mut env, mut diagnostics) = create_notebook_diagnostics(DiagnosticFormat::Full);
env.show_fix_diff(true);
// Move all of the edits from the later diagnostics to the first diagnostic to simulate a
// single diagnostic with edits in different cells.
let mut diagnostic = diagnostics.swap_remove(0);
let fix = diagnostic.fix_mut().unwrap();
let mut edits = fix.edits().to_vec();
for diag in diagnostics {
edits.extend_from_slice(diag.fix().unwrap().edits());
}
*fix = Fix::unsafe_edits(edits.remove(0), edits);
insta::assert_snapshot!(env.render(&diagnostic), @r"
error[unused-import]: `os` imported but unused
--> notebook.ipynb:cell 1:2:8
|
1 | # cell 1
2 | import os
| ^^
|
help: Remove unused import: `os`
Unsafe fix
::: cell 1
1 1 | # cell 1
2 |-import os
::: cell 2
1 1 | # cell 2
2 |-import math
3 2 |
4 3 | print('hello world')
::: cell 3
1 1 | # cell 3
2 2 | def foo():
3 3 | print()
4 |- x = 1
5 4 |
");
}
/// Carriage return (`\r`) is a valid line-ending in Python, so we should normalize this to a
/// line feed (`\n`) for rendering. Otherwise we report a single long line for this case.
#[test]
fn normalize_carriage_return() {
let mut env = TestEnvironment::new();
env.add(
"example.py",
"# Keep parenthesis around preserved CR\rint(-\r 1)\rint(+\r 1)",
);
env.format(DiagnosticFormat::Full);
let mut diagnostic = env.err().build();
let span = env
.path("example.py")
.with_range(TextRange::at(TextSize::new(39), TextSize::new(0)));
let annotation = Annotation::primary(span);
diagnostic.annotate(annotation);
insta::assert_snapshot!(env.render(&diagnostic), @r"
error[test-diagnostic]: main diagnostic message
--> example.py:2:1
|
1 | # Keep parenthesis around preserved CR
2 | int(-
| ^
3 | 1)
4 | int(+
|
");
}
/// Without stripping the BOM, we report an error in column 2, unlike Ruff.
#[test]
fn strip_bom() {
let mut env = TestEnvironment::new();
env.add("example.py", "\u{feff}import foo");
env.format(DiagnosticFormat::Full);
let mut diagnostic = env.err().build();
let span = env
.path("example.py")
.with_range(TextRange::at(TextSize::new(3), TextSize::new(0)));
let annotation = Annotation::primary(span);
diagnostic.annotate(annotation);
insta::assert_snapshot!(env.render(&diagnostic), @r"
error[test-diagnostic]: main diagnostic message
--> example.py:1:1
|
1 | import foo
| ^
|
");
}
#[test]
fn bom_with_default_range() {
let mut env = TestEnvironment::new();
env.add("example.py", "\u{feff}import foo");
env.format(DiagnosticFormat::Full);
let mut diagnostic = env.err().build();
let span = env.path("example.py").with_range(TextRange::default());
let annotation = Annotation::primary(span);
diagnostic.annotate(annotation);
insta::assert_snapshot!(env.render(&diagnostic), @r"
error[test-diagnostic]: main diagnostic message
--> example.py:1:1
|
1 | import foo
| ^
|
");
}
/// We previously rendered this correctly, but the header was falling back to 1:1 for ranges
/// pointing to the final newline in a file. Like Ruff, we now use the offset of the first
/// character in the nonexistent final line in the header.
#[test]
fn end_of_file() {
let mut env = TestEnvironment::new();
let contents = "unexpected eof\n";
env.add("example.py", contents);
env.format(DiagnosticFormat::Full);
let mut diagnostic = env.err().build();
let span = env
.path("example.py")
.with_range(TextRange::at(contents.text_len(), TextSize::new(0)));
let annotation = Annotation::primary(span);
diagnostic.annotate(annotation);
insta::assert_snapshot!(env.render(&diagnostic), @r"
error[test-diagnostic]: main diagnostic message
--> example.py:2:1
|
1 | unexpected eof
| ^
|
");
}
}

View File

@@ -0,0 +1,205 @@
use std::{
collections::HashSet,
hash::{DefaultHasher, Hash, Hasher},
path::Path,
};
use ruff_source_file::LineColumn;
use serde::{Serialize, Serializer, ser::SerializeSeq};
use crate::diagnostic::{Diagnostic, Severity};
use super::FileResolver;
pub(super) struct GitlabRenderer<'a> {
resolver: &'a dyn FileResolver,
}
impl<'a> GitlabRenderer<'a> {
pub(super) fn new(resolver: &'a dyn FileResolver) -> Self {
Self { resolver }
}
}
impl GitlabRenderer<'_> {
pub(super) fn render(
&self,
f: &mut std::fmt::Formatter,
diagnostics: &[Diagnostic],
) -> std::fmt::Result {
write!(
f,
"{}",
serde_json::to_string_pretty(&SerializedMessages {
diagnostics,
resolver: self.resolver,
#[expect(
clippy::disallowed_methods,
reason = "We don't have access to a `System` here, \
and this is only intended for use by GitLab CI, \
which runs on a real `System`."
)]
project_dir: std::env::var("CI_PROJECT_DIR").ok().as_deref(),
})
.unwrap()
)
}
}
struct SerializedMessages<'a> {
diagnostics: &'a [Diagnostic],
resolver: &'a dyn FileResolver,
project_dir: Option<&'a str>,
}
impl Serialize for SerializedMessages<'_> {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let mut s = serializer.serialize_seq(Some(self.diagnostics.len()))?;
let mut fingerprints = HashSet::<u64>::with_capacity(self.diagnostics.len());
for diagnostic in self.diagnostics {
let location = diagnostic
.primary_span()
.map(|span| {
let file = span.file();
let positions = if self.resolver.is_notebook(file) {
// We can't give a reasonable location for the structured formats,
// so we show one that's clearly a fallback
Default::default()
} else {
let diagnostic_source = file.diagnostic_source(self.resolver);
let source_code = diagnostic_source.as_source_code();
span.range()
.map(|range| Positions {
begin: source_code.line_column(range.start()),
end: source_code.line_column(range.end()),
})
.unwrap_or_default()
};
let path = self.project_dir.as_ref().map_or_else(
|| file.relative_path(self.resolver).display().to_string(),
|project_dir| relativize_path_to(file.path(self.resolver), project_dir),
);
Location { path, positions }
})
.unwrap_or_default();
let mut message_fingerprint = fingerprint(diagnostic, &location.path, 0);
// Make sure that we do not get a fingerprint that is already in use
// by adding in the previously generated one.
while fingerprints.contains(&message_fingerprint) {
message_fingerprint = fingerprint(diagnostic, &location.path, message_fingerprint);
}
fingerprints.insert(message_fingerprint);
let description = diagnostic.body();
let check_name = diagnostic.secondary_code_or_id();
let severity = match diagnostic.severity() {
Severity::Info => "info",
Severity::Warning => "minor",
Severity::Error => "major",
// Another option here is `blocker`
Severity::Fatal => "critical",
};
let value = Message {
check_name,
// GitLab doesn't display the separate `check_name` field in a Code Quality report,
// so prepend it to the description too.
description: format!("{check_name}: {description}"),
severity,
fingerprint: format!("{:x}", message_fingerprint),
location,
};
s.serialize_element(&value)?;
}
s.end()
}
}
#[derive(Serialize)]
struct Message<'a> {
check_name: &'a str,
description: String,
severity: &'static str,
fingerprint: String,
location: Location,
}
/// The place in the source code where the issue was discovered.
///
/// According to the CodeClimate report format [specification] linked from the GitLab [docs], this
/// field is required, so we fall back on a default `path` and position if the diagnostic doesn't
/// have a primary span.
///
/// [specification]: https://github.com/codeclimate/platform/blob/master/spec/analyzers/SPEC.md#data-types
/// [docs]: https://docs.gitlab.com/ci/testing/code_quality/#code-quality-report-format
#[derive(Default, Serialize)]
struct Location {
path: String,
positions: Positions,
}
#[derive(Default, Serialize)]
struct Positions {
begin: LineColumn,
end: LineColumn,
}
/// Generate a unique fingerprint to identify a violation.
fn fingerprint(diagnostic: &Diagnostic, project_path: &str, salt: u64) -> u64 {
let mut hasher = DefaultHasher::new();
salt.hash(&mut hasher);
diagnostic.name().hash(&mut hasher);
project_path.hash(&mut hasher);
hasher.finish()
}
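The `salt` parameter exists so that two otherwise identical violations in the same file still receive distinct fingerprints: on a collision, the previously generated fingerprint is fed back in as the salt. A self-contained sketch of that retry loop, using only the standard library (the rule name and path are illustrative):

```
// Self-contained sketch of the salted-fingerprint retry loop.
use std::collections::HashSet;
use std::hash::{DefaultHasher, Hash, Hasher};

fn fingerprint(name: &str, path: &str, salt: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    salt.hash(&mut hasher);
    name.hash(&mut hasher);
    path.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let mut seen = HashSet::new();
    // Two identical violations reported against the same file.
    for _ in 0..2 {
        let mut fp = fingerprint("unused-import", "src/app.py", 0);
        // Re-salt with the colliding fingerprint until a unique value is found.
        while seen.contains(&fp) {
            fp = fingerprint("unused-import", "src/app.py", fp);
        }
        seen.insert(fp);
    }
    assert_eq!(seen.len(), 2);
}
```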
/// Convert an absolute path to be relative to the specified project root.
fn relativize_path_to<P: AsRef<Path>, R: AsRef<Path>>(path: P, project_root: R) -> String {
format!(
"{}",
pathdiff::diff_paths(&path, project_root)
.expect("Could not diff paths")
.display()
)
}
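`pathdiff::diff_paths` returns `None` when no relative path can be computed, hence the `expect` above. For the usual GitLab CI checkout layout it behaves as sketched below (the paths are illustrative, and the snippet assumes the `pathdiff` crate already used in this file):

```
// Illustrative behaviour of `pathdiff::diff_paths` for a typical CI checkout.
use std::path::Path;

fn main() {
    let rel = pathdiff::diff_paths(
        Path::new("/builds/group/project/src/app.py"), // absolute path to the file
        Path::new("/builds/group/project"),            // e.g. $CI_PROJECT_DIR
    );
    assert_eq!(rel.as_deref(), Some(Path::new("src/app.py")));
}
```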
#[cfg(test)]
mod tests {
use crate::diagnostic::{
DiagnosticFormat,
render::tests::{create_diagnostics, create_syntax_error_diagnostics},
};
const FINGERPRINT_FILTERS: [(&str, &str); 1] = [(
r#""fingerprint": "[a-z0-9]+","#,
r#""fingerprint": "<redacted>","#,
)];
#[test]
fn output() {
let (env, diagnostics) = create_diagnostics(DiagnosticFormat::Gitlab);
insta::with_settings!({filters => FINGERPRINT_FILTERS}, {
insta::assert_snapshot!(env.render_diagnostics(&diagnostics));
});
}
#[test]
fn syntax_errors() {
let (env, diagnostics) = create_syntax_error_diagnostics(DiagnosticFormat::Gitlab);
insta::with_settings!({filters => FINGERPRINT_FILTERS}, {
insta::assert_snapshot!(env.render_diagnostics(&diagnostics));
});
}
}

View File

@@ -6,7 +6,7 @@ use ruff_notebook::NotebookIndex;
use ruff_source_file::{LineColumn, OneIndexed};
use ruff_text_size::Ranged;
use crate::diagnostic::{Diagnostic, DiagnosticSource, DisplayDiagnosticConfig, SecondaryCode};
use crate::diagnostic::{Diagnostic, DiagnosticSource, DisplayDiagnosticConfig};
use super::FileResolver;
@@ -99,7 +99,7 @@ pub(super) fn diagnostic_to_json<'a>(
// In preview, the locations and filename can be optional.
if config.preview {
JsonDiagnostic {
code: diagnostic.secondary_code(),
code: diagnostic.secondary_code_or_id(),
url: diagnostic.to_ruff_url(),
message: diagnostic.body(),
fix,
@@ -111,7 +111,7 @@ pub(super) fn diagnostic_to_json<'a>(
}
} else {
JsonDiagnostic {
code: diagnostic.secondary_code(),
code: diagnostic.secondary_code_or_id(),
url: diagnostic.to_ruff_url(),
message: diagnostic.body(),
fix,
@@ -221,7 +221,7 @@ impl Serialize for ExpandedEdits<'_> {
#[derive(Serialize)]
pub(crate) struct JsonDiagnostic<'a> {
cell: Option<OneIndexed>,
code: Option<&'a SecondaryCode>,
code: &'a str,
end_location: Option<JsonLocation>,
filename: Option<&'a str>,
fix: Option<JsonFix<'a>>,
@@ -302,7 +302,7 @@ mod tests {
[
{
"cell": null,
"code": null,
"code": "test-diagnostic",
"end_location": {
"column": 1,
"row": 1
@@ -336,7 +336,7 @@ mod tests {
[
{
"cell": null,
"code": null,
"code": "test-diagnostic",
"end_location": null,
"filename": null,
"fix": null,

View File

@@ -2,5 +2,5 @@
source: crates/ruff_db/src/diagnostic/render/azure.rs
expression: env.render_diagnostics(&diagnostics)
---
##vso[task.logissue type=error;sourcepath=syntax_errors.py;linenumber=1;columnnumber=15;]SyntaxError: Expected one or more symbol names after import
##vso[task.logissue type=error;sourcepath=syntax_errors.py;linenumber=3;columnnumber=12;]SyntaxError: Expected ')', found newline
##vso[task.logissue type=error;sourcepath=syntax_errors.py;linenumber=1;columnnumber=15;code=invalid-syntax;]Expected one or more symbol names after import
##vso[task.logissue type=error;sourcepath=syntax_errors.py;linenumber=3;columnnumber=12;code=invalid-syntax;]Expected ')', found newline

View File

@@ -0,0 +1,63 @@
---
source: crates/ruff_db/src/diagnostic/render/gitlab.rs
expression: env.render_diagnostics(&diagnostics)
---
[
{
"check_name": "F401",
"description": "F401: `os` imported but unused",
"severity": "major",
"fingerprint": "<redacted>",
"location": {
"path": "fib.py",
"positions": {
"begin": {
"line": 1,
"column": 8
},
"end": {
"line": 1,
"column": 10
}
}
}
},
{
"check_name": "F841",
"description": "F841: Local variable `x` is assigned to but never used",
"severity": "major",
"fingerprint": "<redacted>",
"location": {
"path": "fib.py",
"positions": {
"begin": {
"line": 6,
"column": 5
},
"end": {
"line": 6,
"column": 6
}
}
}
},
{
"check_name": "F821",
"description": "F821: Undefined name `a`",
"severity": "major",
"fingerprint": "<redacted>",
"location": {
"path": "undef.py",
"positions": {
"begin": {
"line": 1,
"column": 4
},
"end": {
"line": 1,
"column": 5
}
}
}
}
]

View File

@@ -0,0 +1,44 @@
---
source: crates/ruff_db/src/diagnostic/render/gitlab.rs
expression: env.render_diagnostics(&diagnostics)
---
[
{
"check_name": "invalid-syntax",
"description": "invalid-syntax: Expected one or more symbol names after import",
"severity": "major",
"fingerprint": "<redacted>",
"location": {
"path": "syntax_errors.py",
"positions": {
"begin": {
"line": 1,
"column": 15
},
"end": {
"line": 2,
"column": 1
}
}
}
},
{
"check_name": "invalid-syntax",
"description": "invalid-syntax: Expected ')', found newline",
"severity": "major",
"fingerprint": "<redacted>",
"location": {
"path": "syntax_errors.py",
"positions": {
"begin": {
"line": 3,
"column": 12
},
"end": {
"line": 4,
"column": 1
}
}
}
}
]

View File

@@ -5,7 +5,7 @@ expression: env.render_diagnostics(&diagnostics)
[
{
"cell": null,
"code": null,
"code": "invalid-syntax",
"end_location": {
"column": 1,
"row": 2
@@ -16,13 +16,13 @@ expression: env.render_diagnostics(&diagnostics)
"column": 15,
"row": 1
},
"message": "SyntaxError: Expected one or more symbol names after import",
"message": "Expected one or more symbol names after import",
"noqa_row": null,
"url": null
},
{
"cell": null,
"code": null,
"code": "invalid-syntax",
"end_location": {
"column": 1,
"row": 4
@@ -33,7 +33,7 @@ expression: env.render_diagnostics(&diagnostics)
"column": 12,
"row": 3
},
"message": "SyntaxError: Expected ')', found newline",
"message": "Expected ')', found newline",
"noqa_row": null,
"url": null
}

View File

@@ -2,5 +2,5 @@
source: crates/ruff_db/src/diagnostic/render/json_lines.rs
expression: env.render_diagnostics(&diagnostics)
---
{"cell":null,"code":null,"end_location":{"column":1,"row":2},"filename":"syntax_errors.py","fix":null,"location":{"column":15,"row":1},"message":"SyntaxError: Expected one or more symbol names after import","noqa_row":null,"url":null}
{"cell":null,"code":null,"end_location":{"column":1,"row":4},"filename":"syntax_errors.py","fix":null,"location":{"column":12,"row":3},"message":"SyntaxError: Expected ')', found newline","noqa_row":null,"url":null}
{"cell":null,"code":"invalid-syntax","end_location":{"column":1,"row":2},"filename":"syntax_errors.py","fix":null,"location":{"column":15,"row":1},"message":"Expected one or more symbol names after import","noqa_row":null,"url":null}
{"cell":null,"code":"invalid-syntax","end_location":{"column":1,"row":4},"filename":"syntax_errors.py","fix":null,"location":{"column":12,"row":3},"message":"Expected ')', found newline","noqa_row":null,"url":null}

View File

@@ -6,10 +6,10 @@ expression: env.render_diagnostics(&diagnostics)
<testsuites name="ruff" tests="2" failures="2" errors="0">
<testsuite name="syntax_errors.py" tests="2" disabled="0" errors="0" failures="2" package="org.ruff">
<testcase name="org.ruff.invalid-syntax" classname="syntax_errors" line="1" column="15">
<failure message="SyntaxError: Expected one or more symbol names after import">line 1, col 15, SyntaxError: Expected one or more symbol names after import</failure>
<failure message="Expected one or more symbol names after import">line 1, col 15, Expected one or more symbol names after import</failure>
</testcase>
<testcase name="org.ruff.invalid-syntax" classname="syntax_errors" line="3" column="12">
<failure message="SyntaxError: Expected &apos;)&apos;, found newline">line 3, col 12, SyntaxError: Expected &apos;)&apos;, found newline</failure>
<failure message="Expected &apos;)&apos;, found newline">line 3, col 12, Expected &apos;)&apos;, found newline</failure>
</testcase>
</testsuite>
</testsuites>

View File

@@ -2,5 +2,5 @@
source: crates/ruff_db/src/diagnostic/render/pylint.rs
expression: env.render_diagnostics(&diagnostics)
---
syntax_errors.py:1: [invalid-syntax] SyntaxError: Expected one or more symbol names after import
syntax_errors.py:3: [invalid-syntax] SyntaxError: Expected ')', found newline
syntax_errors.py:1: [invalid-syntax] Expected one or more symbol names after import
syntax_errors.py:3: [invalid-syntax] Expected ')', found newline

View File

@@ -21,7 +21,7 @@ expression: env.render_diagnostics(&diagnostics)
}
}
},
"message": "SyntaxError: Expected one or more symbol names after import"
"message": "Expected one or more symbol names after import"
},
{
"code": {
@@ -40,7 +40,7 @@ expression: env.render_diagnostics(&diagnostics)
}
}
},
"message": "SyntaxError: Expected ')', found newline"
"message": "Expected ')', found newline"
}
],
"severity": "WARNING",

View File

@@ -40,9 +40,12 @@ pub struct DiagnosticStylesheet {
pub(crate) help: Style,
pub(crate) line_no: Style,
pub(crate) emphasis: Style,
pub(crate) underline: Style,
pub(crate) none: Style,
pub(crate) separator: Style,
pub(crate) secondary_code: Style,
pub(crate) insertion: Style,
pub(crate) deletion: Style,
}
impl Default for DiagnosticStylesheet {
@@ -63,9 +66,12 @@ impl DiagnosticStylesheet {
help: AnsiColor::BrightCyan.on_default().effects(Effects::BOLD),
line_no: bright_blue.effects(Effects::BOLD),
emphasis: Style::new().effects(Effects::BOLD),
underline: Style::new().effects(Effects::UNDERLINE),
none: Style::new(),
separator: AnsiColor::Cyan.on_default(),
secondary_code: AnsiColor::Red.on_default().effects(Effects::BOLD),
insertion: AnsiColor::Green.on_default(),
deletion: AnsiColor::Red.on_default(),
}
}
@@ -78,9 +84,12 @@ impl DiagnosticStylesheet {
help: Style::new(),
line_no: Style::new(),
emphasis: Style::new(),
underline: Style::new(),
none: Style::new(),
separator: Style::new(),
secondary_code: Style::new(),
insertion: Style::new(),
deletion: Style::new(),
}
}
}

View File
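As an aside, the three new stylesheet entries (underline, insertion, deletion) use the same `anstyle` constructors shown in the hunk above. A small sketch of applying them follows; the printing code itself is hypothetical, only the style construction is taken from the diff.

```rust
// Builds the new styles as in the stylesheet above and wraps sample output in
// them; illustrative usage only.
use anstyle::{AnsiColor, Effects, Style};

fn main() {
    let underline = Style::new().effects(Effects::UNDERLINE);
    let insertion = AnsiColor::Green.on_default();
    let deletion = AnsiColor::Red.on_default();

    println!("{}+ inserted line{}", insertion.render(), insertion.render_reset());
    println!("{}- deleted line{}", deletion.render(), deletion.render_reset());
    println!("{}underlined span{}", underline.render(), underline.render_reset());
}
```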

@@ -9,7 +9,7 @@ use crate::system::file_time_now;
/// * The last modification time of the file.
/// * The hash of the file's content.
/// * The revision as it comes from an external system, for example the LSP.
#[derive(Copy, Clone, Debug, Eq, PartialEq, Default)]
#[derive(Copy, Clone, Debug, Eq, PartialEq, Default, get_size2::GetSize)]
pub struct FileRevision(u128);
impl FileRevision {

View File

@@ -87,11 +87,12 @@ impl Files {
.system_by_path
.entry(absolute.clone())
.or_insert_with(|| {
tracing::trace!("Adding file '{path}'");
let metadata = db.system().path_metadata(path);
tracing::trace!("Adding file '{absolute}'");
let durability = self
.root(db, path)
.root(db, &absolute)
.map_or(Durability::default(), |root| root.durability(db));
let builder = File::builder(FilePath::System(absolute))
@@ -289,7 +290,7 @@ impl std::panic::RefUnwindSafe for Files {}
/// # Ordering
/// Ordering is based on the file's salsa-assigned id and not on its values.
/// The id may change between runs.
#[salsa::input]
#[salsa::input(heap_size=ruff_memory_usage::heap_size)]
#[derive(PartialOrd, Ord)]
pub struct File {
/// The path of the file (immutable).
@@ -521,7 +522,7 @@ impl VirtualFile {
// The types in here need to be public because they're salsa ingredients but we
// don't want them to be publicly accessible. That's why we put them into a private module.
mod private {
#[derive(Copy, Clone, Debug, Eq, PartialEq, Default)]
#[derive(Copy, Clone, Debug, Eq, PartialEq, Default, get_size2::GetSize)]
pub enum FileStatus {
/// The file exists.
#[default]

View File

@@ -16,7 +16,7 @@ use crate::system::{SystemPath, SystemPathBuf};
/// The main usage of file roots is to determine a file's durability. But it can also be used
/// to make a salsa query dependent on whether a file in a root has changed without writing any
/// manual invalidation logic.
#[salsa::input(debug)]
#[salsa::input(debug, heap_size=ruff_memory_usage::heap_size)]
pub struct FileRoot {
/// The path of a root is guaranteed to never change.
#[returns(deref)]
@@ -37,7 +37,7 @@ impl FileRoot {
}
}
#[derive(Copy, Clone, Debug, Eq, PartialEq)]
#[derive(Copy, Clone, Debug, Eq, PartialEq, get_size2::GetSize)]
pub enum FileRootKind {
/// The root of a project.
Project,

View File

@@ -11,7 +11,7 @@ use std::fmt::{Display, Formatter};
/// * a file stored on the [host system](crate::system::System).
/// * a virtual file stored on the [host system](crate::system::System).
/// * a vendored file stored in the [vendored file system](crate::vendored::VendoredFileSystem).
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
#[derive(Clone, Debug, Eq, PartialEq, Hash, get_size2::GetSize)]
pub enum FilePath {
/// Path to a file on the [host system](crate::system::System).
System(SystemPathBuf),

View File

@@ -1,3 +1,8 @@
#![warn(
clippy::disallowed_methods,
reason = "Prefer System trait methods over std methods"
)]
use crate::files::Files;
use crate::system::System;
use crate::vendored::VendoredFileSystem;
@@ -65,6 +70,10 @@ pub trait Db: salsa::Database {
/// to process work in parallel. For example, to index a directory or checking the files of a project.
/// ty can still spawn more threads for other tasks, e.g. to wait for a Ctrl+C signal or
/// watching the files for changes.
#[expect(
clippy::disallowed_methods,
reason = "We don't have access to System here, but this is also only used by the CLI and the server which always run on a real system."
)]
pub fn max_parallelism() -> NonZeroUsize {
std::env::var(EnvVars::TY_MAX_PARALLELISM)
.or_else(|_| std::env::var(EnvVars::RAYON_NUM_THREADS))

View File
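The `clippy::disallowed_methods` warning added above is typically driven by a `clippy.toml` entry, with `#[expect]` used at the one call site that is allowed to bypass it. The sketch below assumes such a configuration; the crate's actual `clippy.toml` is not shown in this diff.

```rust
// Assumed clippy.toml entry (not shown in the diff):
//
//     disallowed-methods = [
//         { path = "std::env::var", reason = "Prefer System trait methods over std methods" },
//     ]
//
#![warn(clippy::disallowed_methods)]

// An intentionally exempted call site, mirroring the pattern in max_parallelism.
#[expect(
    clippy::disallowed_methods,
    reason = "no System is available here; this code only runs on a real host"
)]
fn read_env(name: &str) -> Option<String> {
    std::env::var(name).ok()
}

fn main() {
    println!("{:?}", read_env("TY_MAX_PARALLELISM"));
}
```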

@@ -21,7 +21,7 @@ use crate::source::source_text;
/// reflected in the changed AST offsets.
/// The other reason is that Ruff's AST doesn't implement `Eq` which Salsa requires
/// for determining if a query result is unchanged.
#[salsa::tracked(returns(ref), no_eq, heap_size=get_size2::GetSize::get_heap_size)]
#[salsa::tracked(returns(ref), no_eq, heap_size=ruff_memory_usage::heap_size)]
pub fn parsed_module(db: &dyn Db, file: File) -> ParsedModule {
let _span = tracing::trace_span!("parsed_module", ?file).entered();
@@ -92,14 +92,14 @@ impl ParsedModule {
self.inner.store(None);
}
/// Returns a pointer for this [`ParsedModule`].
/// Returns the pointer address of this [`ParsedModule`].
///
/// The pointer uniquely identifies the module within the current Salsa revision,
/// regardless of whether particular [`ParsedModuleRef`] instances are garbage collected.
pub fn as_ptr(&self) -> *const () {
pub fn addr(&self) -> usize {
// Note that the outer `Arc` in `inner` is stable across garbage collection, while the inner
// `Arc` within the `ArcSwap` may change.
Arc::as_ptr(&self.inner).cast()
Arc::as_ptr(&self.inner).addr()
}
}
@@ -202,9 +202,13 @@ mod indexed {
/// Returns the node at the given index.
pub fn get_by_index<'ast>(&'ast self, index: NodeIndex) -> AnyRootNodeRef<'ast> {
let index = index
.as_u32()
.expect("attempted to access uninitialized `NodeIndex`");
// Note that this method restores the correct lifetime: the nodes are valid for as
// long as the reference to `IndexedModule` is alive.
self.index[index.as_usize()]
self.index[index as usize]
}
}
@@ -220,7 +224,7 @@ mod indexed {
T: HasNodeIndex + std::fmt::Debug,
AnyRootNodeRef<'a>: From<&'a T>,
{
node.node_index().set(self.index);
node.node_index().set(NodeIndex::from(self.index));
self.nodes.push(AnyRootNodeRef::from(node));
self.index += 1;
}

View File
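The `heap_size=ruff_memory_usage::heap_size` arguments above plausibly point at a small generic shim over the `GetSize` trait, roughly like the sketch below. This is an assumption about `ruff_memory_usage`, not its actual source.

```rust
// Hypothetical shim: a free function Salsa can call for memory reporting,
// deferring to get_size2::GetSize. The real ruff_memory_usage crate may differ.
pub fn heap_size<T: get_size2::GetSize>(value: &T) -> usize {
    value.get_heap_size()
}

fn main() {
    let module_source = String::from("def fib(n): ...");
    println!("heap bytes: {}", heap_size(&module_source));
}
```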

@@ -9,7 +9,7 @@ use crate::Db;
use crate::files::{File, FilePath};
/// Reads the source text of a python text file (must be valid UTF8) or notebook.
#[salsa::tracked(heap_size=get_size2::GetSize::get_heap_size)]
#[salsa::tracked(heap_size=ruff_memory_usage::heap_size)]
pub fn source_text(db: &dyn Db, file: File) -> SourceText {
let path = file.path(db);
let _span = tracing::trace_span!("source_text", file = %path).entered();
@@ -69,21 +69,21 @@ impl SourceText {
pub fn as_str(&self) -> &str {
match &self.inner.kind {
SourceTextKind::Text(source) => source,
SourceTextKind::Notebook(notebook) => notebook.source_code(),
SourceTextKind::Notebook { notebook } => notebook.source_code(),
}
}
/// Returns the underlying notebook if this is a notebook file.
pub fn as_notebook(&self) -> Option<&Notebook> {
match &self.inner.kind {
SourceTextKind::Notebook(notebook) => Some(notebook),
SourceTextKind::Notebook { notebook } => Some(notebook),
SourceTextKind::Text(_) => None,
}
}
/// Returns `true` if this is a notebook source file.
pub fn is_notebook(&self) -> bool {
matches!(&self.inner.kind, SourceTextKind::Notebook(_))
matches!(&self.inner.kind, SourceTextKind::Notebook { .. })
}
/// Returns `true` if there was an error when reading the content of the file.
@@ -108,7 +108,7 @@ impl std::fmt::Debug for SourceText {
SourceTextKind::Text(text) => {
dbg.field(text);
}
SourceTextKind::Notebook(notebook) => {
SourceTextKind::Notebook { notebook } => {
dbg.field(notebook);
}
}
@@ -123,23 +123,15 @@ struct SourceTextInner {
read_error: Option<SourceTextError>,
}
#[derive(Eq, PartialEq)]
#[derive(Eq, PartialEq, get_size2::GetSize)]
enum SourceTextKind {
Text(String),
Notebook(Box<Notebook>),
}
impl get_size2::GetSize for SourceTextKind {
fn get_heap_size(&self) -> usize {
match self {
SourceTextKind::Text(text) => text.get_heap_size(),
// TODO: The `get-size` derive does not support ignoring enum variants.
//
// Jupyter notebooks are not very relevant for memory profiling, and contain
// arbitrary JSON values that do not implement the `GetSize` trait.
SourceTextKind::Notebook(_) => 0,
}
}
Notebook {
// Jupyter notebooks are not very relevant for memory profiling, and contain
// arbitrary JSON values that do not implement the `GetSize` trait.
#[get_size(ignore)]
notebook: Box<Notebook>,
},
}
impl From<String> for SourceTextKind {
@@ -150,7 +142,9 @@ impl From<String> for SourceTextKind {
impl From<Notebook> for SourceTextKind {
fn from(notebook: Notebook) -> Self {
SourceTextKind::Notebook(Box::new(notebook))
SourceTextKind::Notebook {
notebook: Box::new(notebook),
}
}
}
@@ -163,7 +157,7 @@ pub enum SourceTextError {
}
/// Computes the [`LineIndex`] for `file`.
#[salsa::tracked(heap_size=get_size2::GetSize::get_heap_size)]
#[salsa::tracked(heap_size=ruff_memory_usage::heap_size)]
pub fn line_index(db: &dyn Db, file: File) -> LineIndex {
let _span = tracing::trace_span!("line_index", ?file).entered();

View File
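For reference, a self-contained sketch of the derive-plus-`#[get_size(ignore)]` pattern that replaces the manual `GetSize` impl above; `NotebookPayload` stands in for the real `Notebook` type, and the sketch assumes the `get-size2` crate with its derive feature enabled.

```rust
// The ignored field contributes nothing to the reported heap size, which is
// how the Notebook variant is treated in the hunk above.
use get_size2::GetSize;

struct NotebookPayload(Vec<u8>); // does not implement GetSize

#[derive(get_size2::GetSize)]
enum SourceKind {
    Text(String),
    Notebook {
        #[get_size(ignore)]
        notebook: Box<NotebookPayload>,
    },
}

fn main() {
    let text = SourceKind::Text("print('hello')".to_string());
    let notebook = SourceKind::Notebook {
        notebook: Box::new(NotebookPayload(vec![0; 1024])),
    };
    println!("text heap size: {} bytes", text.get_heap_size());
    // Reports only non-ignored heap data, not the 1024 ignored bytes.
    println!("notebook heap size: {} bytes", notebook.get_heap_size());
}
```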

@@ -46,7 +46,7 @@ pub type Result<T> = std::io::Result<T>;
/// * File watching isn't supported.
///
/// Abstracting the system also enables tests to use a more efficient in-memory file system.
pub trait System: Debug {
pub trait System: Debug + Sync + Send {
/// Reads the metadata of the file or directory at `path`.
///
/// This function will traverse symbolic links to query information about the destination file.
@@ -197,6 +197,8 @@ pub trait System: Debug {
fn as_any(&self) -> &dyn std::any::Any;
fn as_any_mut(&mut self) -> &mut dyn std::any::Any;
fn dyn_clone(&self) -> Box<dyn System>;
}
#[derive(Debug, Default, Copy, Clone, Eq, PartialEq)]

View File
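A minimal sketch of the object-safe cloning pattern behind the new `dyn_clone` method: `Clone` cannot be a supertrait of a dyn-compatible trait, so each implementation boxes a clone of itself. Trait and type names below are stand-ins for the real `System` implementations.

```rust
use std::fmt::Debug;

// Mirrors the shape of the System trait after this change: Sync + Send bounds
// plus an object-safe clone hook.
trait System: Debug + Send + Sync {
    fn dyn_clone(&self) -> Box<dyn System>;
}

#[derive(Debug, Clone)]
struct OsSystem {
    cwd: String,
}

impl System for OsSystem {
    fn dyn_clone(&self) -> Box<dyn System> {
        Box::new(self.clone())
    }
}

fn main() {
    let system: Box<dyn System> = Box::new(OsSystem { cwd: "/".to_string() });
    let copy = system.dyn_clone();
    println!("{copy:?}");
}
```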

@@ -1,3 +1,5 @@
#![allow(clippy::disallowed_methods)]
use super::walk_directory::{
self, DirectoryWalker, WalkDirectoryBuilder, WalkDirectoryConfiguration,
WalkDirectoryVisitorBuilder, WalkState,
@@ -255,6 +257,10 @@ impl System for OsSystem {
fn env_var(&self, name: &str) -> std::result::Result<String, std::env::VarError> {
std::env::var(name)
}
fn dyn_clone(&self) -> Box<dyn System> {
Box::new(self.clone())
}
}
impl OsSystem {

View File

@@ -236,7 +236,7 @@ impl SystemPath {
///
/// [`CurDir`]: camino::Utf8Component::CurDir
#[inline]
pub fn components(&self) -> camino::Utf8Components {
pub fn components(&self) -> camino::Utf8Components<'_> {
self.0.components()
}
@@ -762,7 +762,7 @@ impl SystemVirtualPath {
}
/// An owned, virtual path on [`System`](`super::System`) (akin to [`String`]).
#[derive(Eq, PartialEq, Clone, Hash, PartialOrd, Ord)]
#[derive(Eq, PartialEq, Clone, Hash, PartialOrd, Ord, get_size2::GetSize)]
pub struct SystemVirtualPathBuf(String);
impl SystemVirtualPathBuf {

View File

@@ -146,6 +146,10 @@ impl System for TestSystem {
fn case_sensitivity(&self) -> CaseSensitivity {
self.system().case_sensitivity()
}
fn dyn_clone(&self) -> Box<dyn System> {
Box::new(self.clone())
}
}
impl Default for TestSystem {
@@ -394,6 +398,13 @@ impl System for InMemorySystem {
fn case_sensitivity(&self) -> CaseSensitivity {
CaseSensitivity::CaseSensitive
}
fn dyn_clone(&self) -> Box<dyn System> {
Box::new(Self {
user_config_directory: Mutex::new(self.user_config_directory.lock().unwrap().clone()),
memory_fs: self.memory_fs.clone(),
})
}
}
impl WritableSystem for InMemorySystem {

View File

@@ -195,7 +195,7 @@ impl VendoredFileSystem {
///
/// ## Panics:
/// If the current thread already holds the lock.
fn lock_archive(&self) -> LockedZipArchive {
fn lock_archive(&self) -> LockedZipArchive<'_> {
self.inner.lock().unwrap()
}
}
@@ -360,7 +360,7 @@ impl VendoredZipArchive {
Ok(Self(ZipArchive::new(io::Cursor::new(data))?))
}
fn lookup_path(&mut self, path: &NormalizedVendoredPath) -> Result<ZipFile> {
fn lookup_path(&mut self, path: &NormalizedVendoredPath) -> Result<ZipFile<'_>> {
Ok(self.0.by_name(path.as_str())?)
}

View File

@@ -37,7 +37,7 @@ impl VendoredPath {
self.0.as_std_path()
}
pub fn components(&self) -> Utf8Components {
pub fn components(&self) -> Utf8Components<'_> {
self.0.components()
}

View File
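The `<'_>` additions in these hunks (Utf8Components, SourceCode, Printer, and the formatter builders) all address the `elided_lifetimes_in_paths` lint. A minimal before/after sketch with stand-in types:

```rust
#![warn(elided_lifetimes_in_paths)]

struct Components<'a> {
    rest: &'a str,
}

struct VendoredPath(String);

impl VendoredPath {
    // Before: `fn components(&self) -> Components` (the lint warns because the
    // borrow is hidden). After: the anonymous lifetime spells it out.
    fn components(&self) -> Components<'_> {
        Components { rest: &self.0 }
    }
}

fn main() {
    let path = VendoredPath("a/b/c".to_string());
    println!("{}", path.components().rest);
}
```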

@@ -13,6 +13,7 @@ license = { workspace = true }
[dependencies]
ty = { workspace = true }
ty_project = { workspace = true, features = ["schemars"] }
ty_python_semantic = { workspace = true }
ty_static = { workspace = true }
ruff = { workspace = true }
ruff_formatter = { workspace = true }

View File

@@ -348,7 +348,7 @@ fn format_dev_multi_project(
debug!(parent: None, "Starting {}", project_path.display());
match format_dev_project(
&[project_path.clone()],
std::slice::from_ref(&project_path),
args.stability_check,
args.write,
args.preview,
@@ -628,7 +628,7 @@ struct CheckRepoResult {
}
impl CheckRepoResult {
fn display(&self, format: Format) -> DisplayCheckRepoResult {
fn display(&self, format: Format) -> DisplayCheckRepoResult<'_> {
DisplayCheckRepoResult {
result: self,
format,
@@ -665,7 +665,7 @@ struct Diagnostic {
}
impl Diagnostic {
fn display(&self, format: Format) -> DisplayDiagnostic {
fn display(&self, format: Format) -> DisplayDiagnostic<'_> {
DisplayDiagnostic {
diagnostic: self,
format,

View File

@@ -52,7 +52,7 @@ pub(crate) fn main(args: &Args) -> Result<()> {
}
fn generate_markdown() -> String {
let registry = &*ty_project::DEFAULT_LINT_REGISTRY;
let registry = ty_python_semantic::default_lint_registry();
let mut output = String::new();

View File

@@ -14,8 +14,11 @@ license = { workspace = true }
doctest = false
[dependencies]
ruff_text_size = { workspace = true }
ruff_text_size = { workspace = true, features = ["get-size"] }
get-size2 = { workspace = true }
is-macro = { workspace = true }
serde = { workspace = true, optional = true, features = [] }
[features]
serde = ["dep:serde", "ruff_text_size/serde"]

View File

@@ -43,7 +43,7 @@ pub enum IsolationLevel {
}
/// A collection of [`Edit`] elements to be applied to a source file.
#[derive(Debug, PartialEq, Eq, Clone, get_size2::GetSize)]
#[derive(Debug, PartialEq, Eq, Clone, Hash, get_size2::GetSize)]
#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
pub struct Fix {
/// The [`Edit`] elements to be applied, sorted by [`Edit::start`] in ascending order.

View File

@@ -562,7 +562,7 @@ struct RemoveSoftLinebreaksSnapshot {
pub trait BufferExtensions: Buffer + Sized {
/// Returns a new buffer that calls the passed inspector for every element that gets written to the output
#[must_use]
fn inspect<F>(&mut self, inspector: F) -> Inspect<Self::Context, F>
fn inspect<F>(&mut self, inspector: F) -> Inspect<'_, Self::Context, F>
where
F: FnMut(&FormatElement),
{
@@ -607,7 +607,7 @@ pub trait BufferExtensions: Buffer + Sized {
/// # }
/// ```
#[must_use]
fn start_recording(&mut self) -> Recording<Self> {
fn start_recording(&mut self) -> Recording<'_, Self> {
Recording::new(self)
}

View File

@@ -340,7 +340,7 @@ impl<Context> Format<Context> for SourcePosition {
/// Creates a text from a dynamic string.
///
/// This is done by allocating a new string internally.
pub fn text(text: &str) -> Text {
pub fn text(text: &str) -> Text<'_> {
debug_assert_no_newlines(text);
Text { text }
@@ -459,7 +459,10 @@ fn debug_assert_no_newlines(text: &str) {
/// # }
/// ```
#[inline]
pub fn line_suffix<Content, Context>(inner: &Content, reserved_width: u32) -> LineSuffix<Context>
pub fn line_suffix<Content, Context>(
inner: &Content,
reserved_width: u32,
) -> LineSuffix<'_, Context>
where
Content: Format<Context>,
{
@@ -597,7 +600,10 @@ impl<Context> Format<Context> for LineSuffixBoundary {
/// Use `Memoized.inspect(f)?.has_label(LabelId::of::<SomeLabelId>()` if you need to know if some content breaks that should
/// only be written later.
#[inline]
pub fn labelled<Content, Context>(label_id: LabelId, content: &Content) -> FormatLabelled<Context>
pub fn labelled<Content, Context>(
label_id: LabelId,
content: &Content,
) -> FormatLabelled<'_, Context>
where
Content: Format<Context>,
{
@@ -700,7 +706,7 @@ impl<Context> Format<Context> for Space {
/// # }
/// ```
#[inline]
pub fn indent<Content, Context>(content: &Content) -> Indent<Context>
pub fn indent<Content, Context>(content: &Content) -> Indent<'_, Context>
where
Content: Format<Context>,
{
@@ -771,7 +777,7 @@ impl<Context> std::fmt::Debug for Indent<'_, Context> {
/// # }
/// ```
#[inline]
pub fn dedent<Content, Context>(content: &Content) -> Dedent<Context>
pub fn dedent<Content, Context>(content: &Content) -> Dedent<'_, Context>
where
Content: Format<Context>,
{
@@ -846,7 +852,7 @@ impl<Context> std::fmt::Debug for Dedent<'_, Context> {
///
/// This resembles the behaviour of Prettier's `align(Number.NEGATIVE_INFINITY, content)` IR element.
#[inline]
pub fn dedent_to_root<Content, Context>(content: &Content) -> Dedent<Context>
pub fn dedent_to_root<Content, Context>(content: &Content) -> Dedent<'_, Context>
where
Content: Format<Context>,
{
@@ -960,7 +966,7 @@ where
///
/// - tab indentation: Printer indents the expression with two tabs because the `align` increases the indentation level.
/// - space indentation: Printer indents the expression by 4 spaces (one indentation level) **and** 2 spaces for the align.
pub fn align<Content, Context>(count: u8, content: &Content) -> Align<Context>
pub fn align<Content, Context>(count: u8, content: &Content) -> Align<'_, Context>
where
Content: Format<Context>,
{
@@ -1030,7 +1036,7 @@ impl<Context> std::fmt::Debug for Align<'_, Context> {
/// # }
/// ```
#[inline]
pub fn block_indent<Context>(content: &impl Format<Context>) -> BlockIndent<Context> {
pub fn block_indent<Context>(content: &impl Format<Context>) -> BlockIndent<'_, Context> {
BlockIndent {
content: Argument::new(content),
mode: IndentMode::Block,
@@ -1101,7 +1107,7 @@ pub fn block_indent<Context>(content: &impl Format<Context>) -> BlockIndent<Cont
/// # }
/// ```
#[inline]
pub fn soft_block_indent<Context>(content: &impl Format<Context>) -> BlockIndent<Context> {
pub fn soft_block_indent<Context>(content: &impl Format<Context>) -> BlockIndent<'_, Context> {
BlockIndent {
content: Argument::new(content),
mode: IndentMode::Soft,
@@ -1175,7 +1181,9 @@ pub fn soft_block_indent<Context>(content: &impl Format<Context>) -> BlockIndent
/// # }
/// ```
#[inline]
pub fn soft_line_indent_or_space<Context>(content: &impl Format<Context>) -> BlockIndent<Context> {
pub fn soft_line_indent_or_space<Context>(
content: &impl Format<Context>,
) -> BlockIndent<'_, Context> {
BlockIndent {
content: Argument::new(content),
mode: IndentMode::SoftLineOrSpace,
@@ -1308,7 +1316,9 @@ impl<Context> std::fmt::Debug for BlockIndent<'_, Context> {
/// # Ok(())
/// # }
/// ```
pub fn soft_space_or_block_indent<Context>(content: &impl Format<Context>) -> BlockIndent<Context> {
pub fn soft_space_or_block_indent<Context>(
content: &impl Format<Context>,
) -> BlockIndent<'_, Context> {
BlockIndent {
content: Argument::new(content),
mode: IndentMode::SoftSpace,
@@ -1388,7 +1398,7 @@ pub fn soft_space_or_block_indent<Context>(content: &impl Format<Context>) -> Bl
/// # }
/// ```
#[inline]
pub fn group<Context>(content: &impl Format<Context>) -> Group<Context> {
pub fn group<Context>(content: &impl Format<Context>) -> Group<'_, Context> {
Group {
content: Argument::new(content),
id: None,
@@ -1551,7 +1561,7 @@ impl<Context> std::fmt::Debug for Group<'_, Context> {
#[inline]
pub fn best_fit_parenthesize<Context>(
content: &impl Format<Context>,
) -> BestFitParenthesize<Context> {
) -> BestFitParenthesize<'_, Context> {
BestFitParenthesize {
content: Argument::new(content),
group_id: None,
@@ -1691,7 +1701,7 @@ impl<Context> std::fmt::Debug for BestFitParenthesize<'_, Context> {
pub fn conditional_group<Content, Context>(
content: &Content,
condition: Condition,
) -> ConditionalGroup<Context>
) -> ConditionalGroup<'_, Context>
where
Content: Format<Context>,
{
@@ -1852,7 +1862,7 @@ impl<Context> Format<Context> for ExpandParent {
/// # }
/// ```
#[inline]
pub fn if_group_breaks<Content, Context>(content: &Content) -> IfGroupBreaks<Context>
pub fn if_group_breaks<Content, Context>(content: &Content) -> IfGroupBreaks<'_, Context>
where
Content: Format<Context>,
{
@@ -1933,7 +1943,7 @@ where
/// # }
/// ```
#[inline]
pub fn if_group_fits_on_line<Content, Context>(flat_content: &Content) -> IfGroupBreaks<Context>
pub fn if_group_fits_on_line<Content, Context>(flat_content: &Content) -> IfGroupBreaks<'_, Context>
where
Content: Format<Context>,
{
@@ -2122,7 +2132,7 @@ impl<Context> std::fmt::Debug for IfGroupBreaks<'_, Context> {
pub fn indent_if_group_breaks<Content, Context>(
content: &Content,
group_id: GroupId,
) -> IndentIfGroupBreaks<Context>
) -> IndentIfGroupBreaks<'_, Context>
where
Content: Format<Context>,
{
@@ -2205,7 +2215,7 @@ impl<Context> std::fmt::Debug for IndentIfGroupBreaks<'_, Context> {
/// # Ok(())
/// # }
/// ```
pub fn fits_expanded<Content, Context>(content: &Content) -> FitsExpanded<Context>
pub fn fits_expanded<Content, Context>(content: &Content) -> FitsExpanded<'_, Context>
where
Content: Format<Context>,
{

View File

@@ -197,7 +197,7 @@ pub const LINE_TERMINATORS: [char; 3] = ['\r', LINE_SEPARATOR, PARAGRAPH_SEPARAT
/// Replace the line terminators matching the provided list with "\n"
/// since it's the only line break type supported by the printer
pub fn normalize_newlines<const N: usize>(text: &str, terminators: [char; N]) -> Cow<str> {
pub fn normalize_newlines<const N: usize>(text: &str, terminators: [char; N]) -> Cow<'_, str> {
let mut result = String::new();
let mut last_end = 0;

View File

@@ -222,7 +222,7 @@ impl FormatContext for IrFormatContext<'_> {
&IrFormatOptions
}
fn source_code(&self) -> SourceCode {
fn source_code(&self) -> SourceCode<'_> {
self.source_code
}
}

View File

@@ -193,7 +193,7 @@ pub trait FormatContext {
fn options(&self) -> &Self::Options;
/// Returns the source code from the document that gets formatted.
fn source_code(&self) -> SourceCode;
fn source_code(&self) -> SourceCode<'_>;
}
/// Options customizing how the source code should be formatted.
@@ -239,7 +239,7 @@ impl FormatContext for SimpleFormatContext {
&self.options
}
fn source_code(&self) -> SourceCode {
fn source_code(&self) -> SourceCode<'_> {
SourceCode::new(&self.source_code)
}
}
@@ -326,7 +326,7 @@ where
printer.print_with_indent(&self.document, indent)
}
fn create_printer(&self) -> Printer {
fn create_printer(&self) -> Printer<'_> {
let source_code = self.context.source_code();
let print_options = self.context.options().as_print_options();

View File

@@ -69,7 +69,7 @@ impl<'a> Resolver<'a> {
}
/// Resolves a module name to a module.
fn resolve_module(&self, module_name: &ModuleName) -> Option<&'a FilePath> {
pub(crate) fn resolve_module(&self, module_name: &ModuleName) -> Option<&'a FilePath> {
let module = resolve_module(self.db, module_name)?;
Some(module.file(self.db)?.path(self.db))
}

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.12.5"
version = "0.12.11"
publish = false
authors = { workspace = true }
edition = { workspace = true }
@@ -13,7 +13,6 @@ license = { workspace = true }
[lib]
[dependencies]
ruff_annotate_snippets = { workspace = true }
ruff_cache = { workspace = true }
ruff_db = { workspace = true, features = ["junit", "serde"] }
ruff_diagnostics = { workspace = true, features = ["serde"] }
@@ -52,7 +51,6 @@ path-absolutize = { workspace = true, features = [
"once_cell_cache",
"use_unix_paths_on_wasm",
] }
pathdiff = { workspace = true }
pep440_rs = { workspace = true }
pyproject-toml = { workspace = true }
regex = { workspace = true }

View File

@@ -13,6 +13,7 @@ from airflow.api_connexion.security import requires_access
from airflow.contrib.aws_athena_hook import AWSAthenaHook
from airflow.datasets import DatasetAliasEvent
from airflow.operators.subdag import SubDagOperator
from airflow.secrets.cache import SecretCache
from airflow.secrets.local_filesystem import LocalFilesystemBackend
from airflow.triggers.external_task import TaskStateTrigger
from airflow.utils import dates
@@ -56,6 +57,9 @@ SubDagOperator()
# get_connection
LocalFilesystemBackend()
# airflow.secrets.cache
SecretCache()
# airflow.triggers.external_task
TaskStateTrigger()

View File

@@ -34,7 +34,7 @@ task_group()
setup()
from airflow.decorators import teardown
from airflow.io.path import ObjectStoragePath
from airflow.io.storage import attach
from airflow.io.store import attach
from airflow.models import DAG as DAGFromModel
from airflow.models import (
Connection,
@@ -74,3 +74,36 @@ DatasetOrTimeSchedule()
# airflow.utils.dag_parsing_context
get_parsing_context()
from airflow.decorators.base import (
DecoratedMappedOperator,
DecoratedOperator,
TaskDecorator,
get_unique_task_id,
task_decorator_factory,
)
# airflow.decorators.base
DecoratedMappedOperator()
DecoratedOperator()
TaskDecorator()
get_unique_task_id()
task_decorator_factory()
from airflow.models import Param
# airflow.models
Param()
from airflow.sensors.base import (
BaseSensorOperator,
PokeReturnValue,
poke_mode_only,
)
# airflow.sensors.base
BaseSensorOperator()
PokeReturnValue()
poke_mode_only()

View File

@@ -9,7 +9,6 @@ from airflow.operators.empty import EmptyOperator
from airflow.operators.latest_only import LatestOnlyOperator
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
from airflow.operators.weekday import BranchDayOfWeekOperator
from airflow.sensors.date_time import DateTimeSensor
FSHook()
PackageIndexHook()
@@ -22,7 +21,6 @@ EmptyOperator()
LatestOnlyOperator()
BranchDayOfWeekOperator()
DateTimeSensor()
from airflow.operators.python import (
BranchPythonOperator,
@@ -30,16 +28,23 @@ from airflow.operators.python import (
PythonVirtualenvOperator,
ShortCircuitOperator,
)
from airflow.sensors.bash import BashSensor
from airflow.sensors.date_time import DateTimeSensor
BranchPythonOperator()
PythonOperator()
PythonVirtualenvOperator()
ShortCircuitOperator()
BashSensor()
DateTimeSensor()
from airflow.sensors.date_time import DateTimeSensorAsync
from airflow.sensors.external_task import (
ExternalTaskMarker,
ExternalTaskSensor,
)
from airflow.sensors.time_sensor import (
TimeSensor,
TimeSensorAsync,
)
from airflow.sensors.filesystem import FileSensor
from airflow.sensors.python import PythonSensor
BranchPythonOperator()
PythonOperator()
@@ -49,6 +54,13 @@ DateTimeSensorAsync()
ExternalTaskMarker()
ExternalTaskSensor()
FileSensor()
PythonSensor()
from airflow.sensors.time_sensor import (
TimeSensor,
TimeSensorAsync,
)
TimeSensor()
TimeSensorAsync()

View File

@@ -89,3 +89,14 @@ print(1)
# ///
#
# Foobar
# Regression tests for https://github.com/astral-sh/ruff/issues/19713
# mypy: ignore-errors
# pyright: ignore-errors
# pyrefly: ignore-errors
# ty: ignore[unresolved-import]
# pyrefly: ignore[unused-import]
print(1)

View File

@@ -0,0 +1,75 @@
from typing import Optional
import httpx
def foo():
client = httpx.Client()
client.close() # Ok
client.delete() # Ok
client.get() # Ok
client.head() # Ok
client.options() # Ok
client.patch() # Ok
client.post() # Ok
client.put() # Ok
client.request() # Ok
client.send() # Ok
client.stream() # Ok
client.anything() # Ok
client.build_request() # Ok
client.is_closed # Ok
async def foo():
client = httpx.Client()
client.close() # ASYNC212
client.delete() # ASYNC212
client.get() # ASYNC212
client.head() # ASYNC212
client.options() # ASYNC212
client.patch() # ASYNC212
client.post() # ASYNC212
client.put() # ASYNC212
client.request() # ASYNC212
client.send() # ASYNC212
client.stream() # ASYNC212
client.anything() # Ok
client.build_request() # Ok
client.is_closed # Ok
async def foo(client: httpx.Client):
client.request() # ASYNC212
client.anything() # Ok
async def foo(client: httpx.Client | None):
client.request() # ASYNC212
client.anything() # Ok
async def foo(client: Optional[httpx.Client]):
client.request() # ASYNC212
client.anything() # Ok
async def foo():
client: httpx.Client = ...
client.request() # ASYNC212
client.anything() # Ok
global_client = httpx.Client()
async def foo():
global_client.request() # ASYNC212
global_client.anything() # Ok
async def foo():
async with httpx.AsyncClient() as client:
await client.get() # Ok

View File

@@ -154,6 +154,11 @@ try:
except Exception as e:
raise ValueError from e
try:
...
except Exception as e:
raise e from ValueError("hello")
try:
pass
@@ -162,3 +167,92 @@ except Exception:
exception("An error occurred")
else:
exception("An error occurred")
# Test tuple exceptions
try:
pass
except (Exception,):
pass
try:
pass
except (Exception, ValueError):
pass
try:
pass
except (ValueError, Exception):
pass
try:
pass
except (ValueError, Exception) as e:
print(e)
try:
pass
except (BaseException, TypeError):
pass
try:
pass
except (TypeError, BaseException):
pass
try:
pass
except (Exception, BaseException):
pass
try:
pass
except (BaseException, Exception):
pass
# Test nested tuples
try:
pass
except ((Exception, ValueError), TypeError):
pass
try:
pass
except (ValueError, (BaseException, TypeError)):
pass
# Test valid tuple exceptions (should not trigger)
try:
pass
except (ValueError, TypeError):
pass
try:
pass
except (OSError, FileNotFoundError):
pass
try:
pass
except (OSError, FileNotFoundError) as e:
print(e)
try:
pass
except (Exception, ValueError):
critical("...", exc_info=True)
try:
pass
except (Exception, ValueError):
raise
try:
pass
except (Exception, ValueError) as e:
raise e
# `from None` cause
try:
pass
except BaseException as e:
raise e from None

View File

@@ -0,0 +1,43 @@
class C: a = None
{C.a: None for C.a in "abc"}
print(C.a)
x = [None]
{x[0]: None for x[0] in "abc"}
print(x)
class C(list):
def __getitem__(self, index, /):
item = super().__getitem__(index)
if isinstance(index, slice): item = tuple(item)
return item
x = C()
{x[:0]: None for x[:0] in "abc"}
print(x)
class C:
a = None
def func():
{(C.a,): None for (C.a,) in "abc"} # OK
def func():
obj = type('obj', (), {'attr': 1})()
{(obj.attr,): None for (obj.attr,) in "abc"} # OK
def func():
lst = [1, 2, 3]
{(lst[0],): None for (lst[0],) in "abc"} # OK
def func():
lst = [1, 2, 3, 4, 5]
{(lst[1:3],): None for (lst[1:3],) in "abc"} # OK
# C420: side-effecting assignment targets
# These should NOT trigger C420 because they have side-effecting assignment targets
# See https://github.com/astral-sh/ruff/issues/19511

View File

@@ -88,3 +88,25 @@ def f_multi_line_string2():
example="example"
)
)
def raise_typing_cast_exception():
import typing
raise typing.cast("Exception", None)
def f_typing_cast_excluded():
from typing import cast
raise cast(RuntimeError, "This should not trigger EM101")
def f_typing_cast_excluded_import():
import typing
raise typing.cast(RuntimeError, "This should not trigger EM101")
def f_typing_cast_excluded_aliased():
from typing import cast as my_cast
raise my_cast(RuntimeError, "This should not trigger EM101")

View File

@@ -17,3 +17,50 @@ info(f"{__name__}")
# Don't trigger for t-strings
info(t"{name}")
info(t"{__name__}")
count = 5
total = 9
directory_path = "/home/hamir/ruff/crates/ruff_linter/resources/test/"
logging.info(f"{count} out of {total} files in {directory_path} checked")
x = 99
fmt = "08d"
logger.info(f"{x:{'08d'}}")
logger.info(f"{x:>10} {x:{fmt}}")
logging.info(f"")
logging.info(f"This message doesn't have any variables.")
obj = {"key": "value"}
logging.info(f"Object: {obj!r}")
items_count = 3
logging.warning(f"Items: {items_count:d}")
data = {"status": "active"}
logging.info(f"Processing {len(data)} items")
logging.info(f"Status: {data.get('status', 'unknown').upper()}")
result = 123
logging.info(f"Calculated result: {result + 100}")
temperature = 123
logging.info(f"Temperature: {temperature:.1f}°C")
class FilePath:
def __init__(self, name: str):
self.name = name
logging.info(f"No changes made to {file_path.name}.")
user = "tron"
balance = 123.45
logging.error(f"Error {404}: User {user} has insufficient balance ${balance:.2f}")
import logging
x = 1
logging.error(f"{x} -> %s", x)

Some files were not shown because too many files have changed in this diff.