Compare commits

...

124 Commits

Author SHA1 Message Date
Dylan
f51a228f04 Bump 0.12.8 (#19813) 2025-08-07 13:52:16 -05:00
Andrew Gallant
d5e1b7983e [ty] Fix static assertion size check (#19814)
A `Segment` has a `Box` in it, which has a platform dependent size.
Restrict the check to only 64-bit targets.
2025-08-07 13:38:16 -05:00
Micha Reiser
7dfde3b929 Update Rust toolchain to 1.89 (#19807) 2025-08-07 18:21:50 +02:00
Dhruv Manilawala
b22586fa0e [ty] Add ty.inlayHints.variableTypes server option (#19780)
## Summary

This PR adds a new `ty.inlayHints.variableTypes` server setting to
configure ty to include / exclude inlay hints at variable position.

Currently, we only support inlay hints at this position so this option
basically translates to enabling / disabling inlay hints for now :)
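
As a small illustration (not from the PR), a variable-type inlay hint annotates an
unannotated assignment with its inferred type:

```py
def get_port() -> int:
    return 8080

# With `ty.inlayHints.variableTypes` enabled, an editor would render the
# inferred type inline, roughly as: `port: int = get_port()`
port = get_port()
```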

The VS Code extension PR is
https://github.com/astral-sh/ty-vscode/pull/112.

closes: astral-sh/ty#472

## Test Plan

Add E2E tests.
2025-08-07 19:16:51 +05:30
Alex Waygood
c401a6d86e [ty] Add failing tests for tuple subclasses (#19803) 2025-08-07 13:11:15 +00:00
Dhruv Manilawala
7b6abfb030 [ty] Add ty.experimental.rename server setting (#19800)
## Summary

This PR is a follow-up from https://github.com/astral-sh/ruff/pull/19551
and adds a new `ty.experimental.rename` setting to conditionally
register for the rename capability. The complementary PR in ty VS Code
extension is https://github.com/astral-sh/ty-vscode/pull/111.

This is done using dynamic registration after the settings have been
resolved. The experimental group is part of the global settings because
they're applied for all workspaces that are managed by the client.

## Test Plan

Add E2E tests.

In VS Code, with the following setting:
```json
{
	"ty.experimental.rename": "true",
	"python.languageServer": "None"
}
```

I get the relevant log entry:
```
2025-08-07 16:05:40.598709000 DEBUG client_response{id=3 method="client/registerCapability"}: Registered rename capability
```

And, I'm able to rename a symbol. Once I set it to `false`, then I can
see this log entry:

```
2025-08-07 16:08:39.027876000 DEBUG Rename capability is disabled in the client settings
```

And, I don't see the "Rename Symbol" option in the VS Code dropdown.


https://github.com/user-attachments/assets/501659df-ba96-4252-bf51-6f22acb4920b
2025-08-07 12:54:58 +00:00
UnboundVariable
b005cdb7ff [ty] Implemented support for "rename" language server feature (#19551)
This PR adds support for the "rename" language server feature. It builds
upon existing functionality used for "go to references".

The "rename" feature involves two language server requests. The first is
a "prepare rename" request that determines whether renaming should be
possible for the identifier at the current offset. The second is a
"rename" request that returns a list of file ranges where the rename
should be applied.

Care must be taken when attempting to rename symbols that span files,
especially if the symbols are defined in files that are not part of the
project. We don't want to modify code in the user's Python environment
or in the vendored stub files.

I found a few bugs in the "go to references" feature when implementing
"rename", and those bug fixes are included in this PR.

---------

Co-authored-by: UnboundVariable <unbound@gmail.com>
2025-08-07 15:58:18 +05:30
Micha Reiser
b96aa4605b [ty] Reduce size of member table (#19572) 2025-08-07 11:16:04 +02:00
Dhruv Manilawala
cc97579c3b [ty] Move server capabilities creation (#19798) 2025-08-07 04:28:08 +00:00
Matthew Mckee
ef1802b94f [ty] Repurpose FunctionType.into_bound_method_type to return BoundMethodType (#19793)
## Summary

As per our naming scheme (at least for callable types) this should
return a `BoundMethodType`, or be renamed, but it makes more sense to
change the return type.

I also ensure `ClassType.into_callable` returns a `Type::Callable` in
the changed branch.

Ideally we would return a `CallableType` from these `into_callable`
functions (and rename them to `into_callable_type`), but because of unions we
cannot do this.
2025-08-06 15:24:59 -07:00
David Peter
98df62db79 [ty] Validate writes to TypedDict keys (#19782)
## Summary

Validates writes to `TypedDict` keys, for example:

```py
class Person(TypedDict):
    name: str
    age: int | None


def f(person: Person):
    person["naem"] = "Alice"  # error: [invalid-key]

    person["age"] = "42"  # error: [invalid-assignment]
```

The new specialized `invalid-assignment` diagnostic looks like this:

<img width="1160" height="279" alt="image"
src="https://github.com/user-attachments/assets/51259455-3501-4829-a84e-df26ff90bd89"
/>

## Ecosystem analysis

As far as I can tell, all true positives!

There are some extremely long diagnostic messages. We should truncate
our display of overload sets somehow.

## Test Plan

New Markdown tests
2025-08-06 15:19:13 -07:00
Matthew Mckee
65b39f2ca9 [ty] Add support for using the test command emitted when a mdtest fails (#19794)
## Summary

When seeing a failed test like 

```bash
is_subtype_of.md - Subtype relation - Callable - Class literals - Classes with `__new_… (1e9782853227c019)

  crates/ty_python_semantic/resources/mdtest/type_properties/is_subtype_of.md:1810 unexpected error: [unresolved-reference] "Name `Aa` used when not defined"

To rerun this specific test, set the environment variable: MDTEST_TEST_FILTER='is_subtype_of.md - Subtype relation - Callable - Class literals - Classes with `__new_… (1e9782853227c019)'
MDTEST_TEST_FILTER='is_subtype_of.md - Subtype relation - Callable - Class literals - Classes with `__new_… (1e9782853227c019)' cargo test -p ty_python_semantic --test mdtest -- mdtest__type_properties_is_subtype_of
```

running the following now works

```bash
MDTEST_TEST_FILTER='is_subtype_of.md - Subtype relation - Callable - Class literals - Classes with `__new_… (1e9782853227c019)' cargo test -p ty_python_semantic --test mdtest -- mdtest__type_properties_is_subtype_of
```


## Test Plan

Do we have tests for the test runner? :)
2025-08-06 15:02:10 -07:00
Douglas Creager
585ce12ace [ty] typing.Self is bound by the method, not the class (#19784)
This fixes our logic for binding a legacy typevar with its binding
context. (To recap, a legacy typevar starts out "unbound" when it is
first created, and each time it's used in a generic class or function,
we "bind" it with the corresponding `Definition`.)

We treat `typing.Self` the same as a legacy typevar, and so we apply
this binding logic to it too. Before, we were using the enclosing class
as its binding context. But that's not correct — it's the method where
`typing.Self` is used that binds the typevar. (Each invocation of the
method will find a new specialization of `Self` based on the specific
instance type containing the invoked method.)
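
As a small illustration of the behavior described above (a sketch, not a test
case from the PR):

```py
from typing import Self

class Node:
    def clone(self) -> Self:
        # `Self` is bound by `clone`, not by the `Node` class: each call
        # specializes it to the type of the instance the method is called on.
        return type(self)()

class Leaf(Node): ...

reveal_type(Leaf().clone())  # expected: `Leaf`, not `Node`
```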

This required plumbing through some additional state to the
`in_type_expression` method.

This also revealed that we weren't handling `Self`-typed instance
attributes correctly (but were coincidentally not getting the expected
false positive diagnostics).
2025-08-06 17:26:17 -04:00
Ibraheem Ahmed
21ac16db85 [ty] Avoid overcounting shared memory usage (#19773)
## Summary

Use a global tracker to avoid double counting `Arc` instances.
2025-08-06 15:32:02 -04:00
Dan Parizher
745742e414 [pylint] Mark PLC0207 fixes as unsafe when *args unpacking is present (#19679)
## Summary

Fixes #19660
2025-08-06 14:19:49 -04:00
Dhruv Manilawala
ec5660d786 [ty] Avoid warning for old settings schema too aggressively (#19787)
## Summary

This PR avoids warning users too aggressively: it checks the structure
of the initialization and workspace options and skips the warning if they
conform to the old schema.

## Test Plan



https://github.com/user-attachments/assets/9ade9dc4-90cb-4fd4-abd0-4bc4177df3db
2025-08-06 16:16:59 +00:00
David Peter
b96929ee19 [ty] Disallow typing.TypedDict in type expressions (#19777)
## Summary

Disallow `typing.TypedDict` in type expressions.

Related reference: https://github.com/python/mypy/issues/11030
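
For illustration, the kind of annotation that is now rejected (a sketch; the
exact diagnostic wording is not shown here):

```py
from typing import TypedDict

def f(x: TypedDict) -> None:  # rejected: `TypedDict` itself is not valid in a type expression
    ...
```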

## Test Plan

New Markdown tests, checked ecosystem and conformance test impact.
2025-08-06 15:58:35 +02:00
Dhruv Manilawala
fa711fa40f [ty] Warn users if server received unknown options (#19779)
## Summary

This PR updates the client settings handling to recognize unknown
options provided by the user and show a warning popup along with a
warning log message.

## Test Plan

Add E2E tests.
2025-08-06 13:11:13 +00:00
Dhruv Manilawala
1f29a04e9a [ty] Support LSP client settings (#19614)
## Summary

This PR implements support for providing LSP client settings.

The complementary PR in the ty VS Code extension:
astral-sh/ty-vscode#106.

Notes for the previous iteration of this PR is in
https://github.com/astral-sh/ruff/pull/19614#issuecomment-3136477864
(click on "Details").

Specifically, this PR splits the client settings into 3 distinct groups.
Keep in mind that these groups are not visible to the user, they're
merely an implementation detail. The groups are:
1. `GlobalOptions` - these are the options that are global to the
language server and will be the same for all the workspaces that are
handled by the server
2. `WorkspaceOptions` - these are the options that are specific to a
workspace and will be applied only when running any logic for that
workspace
3. `InitializationOptions` - these are the options that can be specified
during initialization

The initialization options are a superset that contains both the global
and workspace options flattened into a 1-dimensional structure. This
means that the user can specify any and all fields present in
`GlobalOptions` and `WorkspaceOptions` in the initialization options in
addition to the fields that are _specific_ to initialization options.

From the current set of available settings, the following are only available
during initialization because they are required at that time, are static
during the runtime of the server, and changing their values requires a
restart to take effect:
- `logLevel`
- `logFile`

And the following are available under `GlobalOptions`:
- `diagnosticMode`

And the following under `WorkspaceOptions`:
- `disableLanguageServices`
- `pythonExtension` (Python environment information that is populated by
the ty VS Code extension)

### `workspace/configuration`

This request allows the server to ask the client for the configuration of a
specific workspace. But this is only supported by clients that have the
`workspace.configuration` client capability set to `true`. What should we do
for clients that don't support pulling configurations?

In that case, the settings need to be provided in the initialization options,
and updating the values of those settings can only be done by restarting the
server. With the way this is implemented, this means that if the client does
not support pulling workspace configurations, there's no way to specify
settings specific to a workspace. Earlier, this would've been possible by
providing an array of client options with an additional field specifying
which workspace the options belong to, but that adds complexity, and clients
that do not support `workspace/configuration` usually do not support multiple
workspaces either.

Now, for clients that do support this, the server initiates the request to
get the configuration for all workspaces at startup. Once the server receives
these options, it resolves them for each workspace as follows (see the sketch
below):
1. Combine the client options sent during initialization with the options
specific to the workspace, creating the final client options for that
workspace
2. Create the global options by combining the global options from (1) across
all workspaces, which in turn also combines the global options sent during
initialization
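
A rough sketch of that resolution order in Python pseudocode (the server
itself is written in Rust; the names and the exact override precedence here
are illustrative):

```py
def resolve_workspace_options(init_options: dict, workspace_options: dict) -> dict:
    # (1) options pulled via `workspace/configuration` for a specific workspace
    # are layered on top of the options sent during initialization
    return {**init_options, **workspace_options}

def resolve_global_options(init_options: dict, all_workspace_options: list[dict]) -> dict:
    # (2) global options are combined across all workspaces, which also folds
    # in the global options sent during initialization
    combined = dict(init_options)
    for options in all_workspace_options:
        combined.update(options)
    return combined
```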

The global options are resolved into the global settings and are
available on the `Session` which is initialized with the default global
settings. The workspace options are resolved into the workspace settings
and are available on the respective `Workspace`.

The `SessionSnapshot` contains the global settings while the document
snapshot contains the workspace settings. We could add the global
settings to the document snapshot but that's currently not needed.

### Document diagnostic dynamic registration

Currently, the document diagnostic server capability is created based on
the `diagnosticMode` sent during initialization. But that doesn't give us the
complete picture. This means the server needs to defer registering the
document diagnostic capability until the settings have been resolved.

This is done using dynamic registration for clients that support it. For
clients that do not support dynamic registration for document diagnostic
capability, the server advertises itself as always supporting workspace
diagnostics and work done progress token.

This dynamic registration now allows us to change the server capability
for workspace diagnostics based on the resolved `diagnosticMode` value.
In the future, once `workspace/didChangeConfiguration` is supported, we
can avoid the server restart when users have changed any client
settings.

## Test Plan

Add integration tests and recorded videos on the user experience in
various editors:

### VS Code

For VS Code users, the settings experience is unchanged because the
extension defines its own interface for how the user can specify the
server settings. This means everything is under the `ty.*` namespace as
usual.


https://github.com/user-attachments/assets/c2e5ba5c-7617-406e-a09d-e397ce9c3b93

### Zed

For Zed, the settings experience has changed. Users can specify settings
during initialization:

```json
{
  "lsp": {
    "ty": {
      "initialization_options": {
        "logLevel": "debug",
        "logFile": "~/.cache/ty.log",
        "diagnosticMode": "workspace",
        "disableLanguageServices": true
      }
    },
  }
}
```

Or, can specify the options under the `settings` key:

```json
{
  "lsp": {
    "ty": {
      "settings": {
        "ty": {
          "diagnosticMode": "openFilesOnly",
          "disableLanguageServices": true
        }
      },
      "initialization_options": {
        "logLevel": "debug",
        "logFile": "~/.cache/ty.log"
      }
    },
  }
}
```

The `logLevel` and `logFile` settings still need to go under the
initialization options because they're required by the server during
initialization.

We can remove the nesting of the settings under the "ty" namespace by
updating the return type of
db9ea0cdfd/src/tychecker.rs (L45-L49)
to be wrapped inside `ty` directly so that users can avoid doing the
double nesting.

There's one issue here: if `diagnosticMode` is specified in both the
initialization options and the settings key, the resolution is a bit
different - if either of them is set to `workspace`, it wins, which means
that in the following configuration, the diagnostic mode is `workspace`:

```json
{
  "lsp": {
    "ty": {
      "settings": {
        "ty": {
          "diagnosticMode": "openFilesOnly"
        }
      },
      "initialization_options": {
        "diagnosticMode": "workspace"
      }
    },
  }
}
```

This behavior is mainly a result of combining global options from
various workspace configuration results. Users should not be able to
provide global options in multiple workspaces, but that restriction
cannot be enforced on the server side. The ty VS Code extension restricts
these global settings to the user settings (not workspace settings), but
we do not control extensions in other editors.


https://github.com/user-attachments/assets/8e2d6c09-18e6-49e5-ab78-6cf942fe1255

### Neovim

Same as in Zed.

### Other

For other editors that do not support `workspace/configuration`, users
need to provide the server settings during initialization.
2025-08-06 18:37:21 +05:30
Alex Waygood
529d81daca [ty] Improve subscript narrowing for "safe mutable classes" (#19781)
## Summary

This PR improves the `is_safe_mutable_class` function in `infer.rs` in
several ways:
- It uses `KnownClass::to_instance()` for all "safe mutable classes".
Previously, we were using `SpecialFormType::instance_fallback()` for
some variants -- I'm not totally sure why. Switching to
`KnownClass::to_instance()` for all "safe mutable classes" fixes a
number of TODOs in the `assignment.md` mdtest suite
- Rather than eagerly calling `.to_instance(db)` on all "safe mutable
classes" every time `is_safe_mutable_class` is called, we now only call
it lazily on each element, allowing us to short-circuit more
effectively.
- I removed the entry entirely for `TypedDict` from the list of "safe
mutable classes", as it's not correct.
`SpecialFormType::TypedDict.instance_fallback(db)` just returns an
instance type representing "any instance of `typing._SpecialForm`",
which I don't think was the intent of this code. No tests fail as a
result of removing this entry, as we already check separately whether an
object is an inhabitant of a `TypedDict` type (and consider that object
safe-mutable if so!).

## Test Plan

mdtests updated
2025-08-06 12:26:25 +01:00
David Peter
4887bdf205 [ty] Infer types for key-based access on TypedDicts (#19763)
## Summary

This PR adds type inference for key-based access on `TypedDict`s and a
new diagnostic for invalid subscript accesses:

```py
class Person(TypedDict):
    name: str
    age: int | None

alice = Person(name="Alice", age=25)

reveal_type(alice["name"])  # revealed: str
reveal_type(alice["age"])  # revealed: int | None

alice["naem"]  # Unknown key "naem" - did you mean "name"?
```

## Test Plan

Updated Markdown tests
2025-08-06 09:36:33 +02:00
Dan Parizher
e917d309f1 [flake8_import_conventions] Avoid false positives for NFKC-normalized __debug__ import aliases in ICN001 (#19411)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-08-06 06:42:51 +00:00
Matthew Mckee
18ad2848e3 Display generic function signature properly (#19544)
## Summary

Resolves https://github.com/astral-sh/ty/issues/817

## Test Plan

Update mdtest

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-08-05 16:35:08 -07:00
Brent Westbrook
5bfffe1aa7 [ty] Remap Jupyter notebook cell indices in ruff_db (#19698)
## Summary

This PR remaps ranges in Jupyter notebooks from simple `row:column`
indices in the concatenated source code to `cell:row:col` to match
Ruff's output. This is probably not a change that will land upstream in
`annotate-snippets`, but I didn't see a good way around it.

The remapping logic is taken nearly verbatim from here:


cd6bf1457d/crates/ruff_linter/src/message/text.rs (L212-L222)


## Test Plan

New `full` rendering test for a notebook

I was mainly focused on Ruff, but in local tests this also works for ty:

```
error[invalid-assignment]: Object of type `Literal[1]` is not assignable to `str`
 --> Untitled.ipynb:cell 1:3:1
  |
1 | import math
2 |
3 | x: str = 1
  | ^
  |
info: rule `invalid-assignment` is enabled by default

error[invalid-assignment]: Object of type `Literal[1]` is not assignable to `str`
 --> Untitled.ipynb:cell 2:3:1
  |
1 | import math
2 |
3 | x: str = 1
  | ^
  |
info: rule `invalid-assignment` is enabled by default
```

This isn't a duplicate diagnostic, just an unimaginative example:

```py
# cell 1
import math

x: str = 1
# cell 2
import math

x: str = 1
```
2025-08-05 14:10:35 -04:00
Brent Westbrook
b324ae1be3 Hide empty snippets for full-file diagnostics (#19653)
Summary
--

This is the other commit I wanted to spin off from #19415, currently
stacked on #19644.

This PR suppresses blank snippets for empty ranges at the very beginning
of a file, and for empty ranges in non-existent files. Ruff includes
empty ranges for IO errors, for example.


f4e93b6335/crates/ruff_linter/src/message/text.rs (L100-L110)

The diagnostics now look like this (new snapshot test):

```
error[test-diagnostic]: main diagnostic message
--> example.py:1:1                             
```

Instead of [^*]

```
error[test-diagnostic]: main diagnostic message
--> example.py:1:1
 |
 |
```

Test Plan
--

A new `ruff_db` test showing the expected output format

[^*]: This doesn't correspond precisely to the example in the PR because
of some details of the diagnostic builder helper methods in `ruff_db`,
but you can see another example in the current version of the summary in
#19415.
2025-08-05 11:20:31 -04:00
Brent Westbrook
2db4e5dbea Use fixed hash width for ty_server diagnostics (#19766)
Summary
--

Fixes a snapshot test failure I saw in #19653 locally and in Windows CI
by
padding the hex ID to 16 digits to match the regex in
`filter_result_id`.


78e5fe0a51/crates/ty_server/tests/e2e/pull_diagnostics.rs (L380-L384)

Test Plan
--

I applied this to the branch from #19653 locally and saw that the tests
now
pass. I couldn't reproduce this failure directly on `main` or this
branch,
though.
2025-08-05 10:55:17 -04:00
Alex Waygood
4090297a11 [ty] Fix more false positives related to Generic or Protocol being subscripted with a ParamSpec or TypeVarTuple (#19764) 2025-08-05 15:45:56 +01:00
Simon Lamon
934fd37d2b [ty] Diagnostics for async context managers (#19704)
## Summary

Implements diagnostics for async context managers. Fixes
https://github.com/astral-sh/ty/issues/918.
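
For example, roughly the kind of misuse these diagnostics are meant to catch
(a sketch, not a test case from the PR):

```py
class SyncOnly:
    # only the synchronous context manager protocol is implemented
    def __enter__(self): ...
    def __exit__(self, *args): ...

async def main() -> None:
    async with SyncOnly():  # `async with` requires `__aenter__`/`__aexit__`
        ...
```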

## Test Plan

Mdtests have been added.
2025-08-05 07:41:37 -07:00
Brent Westbrook
78e5fe0a51 Allow hiding the diagnostic severity in ruff_db (#19644)
## Summary

This PR is a spin-off from https://github.com/astral-sh/ruff/pull/19415.
It enables replacing the severity and lint name in a ty-style
diagnostic:

```
error[unused-import]: `os` imported but unused
```

with the noqa code and optional fix availability icon for a Ruff
diagnostic:

```
F401 [*] `os` imported but unused
F821 Undefined name `a`
```

or nothing at all for a Ruff syntax error:

```
SyntaxError: Expected one or more symbol names after import
```

Ruff adds the `SyntaxError` prefix to these messages manually.

Initially (d912458), I just passed a `hide_severity` flag through a
bunch of calls to get it into `annotate-snippets`, but after looking at
it again today, I think reusing the `None` severity/level gave a nicer
result. As I note in a lengthy code comment, I think all of this code
should be temporary and reverted when Ruff gets real severities, so
hopefully it's okay if it feels a little hacky.

I think the main visible downside of this approach is that we can't
style the asterisk in the fix availability icon in cyan, as in Ruff's
current output. It's part of the message in this PR and any styling gets
overwritten in `annotate-snippets`.

<img width="400" height="342" alt="image"
src="https://github.com/user-attachments/assets/57542ec9-a81c-4a01-91c7-bd6d7ec99f99"
/>

Hmm, I guess reusing `Level::None` also means the `F401` isn't red
anymore. Maybe my initial approach was better after all. In any case,
the rest of the PR should be basically the same; it just depends on how we
want to toggle the severity.

## Test Plan

New `ruff_db` tests. These snapshots should be compared to the two tests
just above them (`hide_severity_output` vs `output` and
`hide_severity_syntax_errors` against `syntax_errors`).
2025-08-05 09:56:18 -04:00
David Peter
94947cbf65 [ty] Fix merge base calculation for typing-conformance workflow (#19761)
## Summary

Use `$GITHUB_SHA` (the merged state of `feature` + `main` branch)
instead of `{{ github.event.pull_request.head.sha }}` (just the latest
`feature` commit) for building the "new" version of `ty` in the typing
conformance workflow.

## Test Plan

None.
2025-08-05 14:32:47 +02:00
Alex Waygood
7dccb6a98c [ty] Improve effectiveness of KnownClass fast paths in instance.rs (#19762) 2025-08-05 13:26:14 +01:00
David Peter
948f3f856c [ty] Fix attribute access on TypedDicts (#19758)
## Summary

This PR fixes a few inaccuracies in attribute access on `TypedDict`s. It
also changes the return type of `type(person)` to `type[dict[str,
object]]` if `person: Person` is an inhabitant of a `TypedDict`
`Person`. We still use `type[Person]` as the *meta type* of Person,
however (see reasoning
[here](https://github.com/astral-sh/ruff/pull/19733#discussion_r2253297926)).
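
Concretely, using the example from the summary:

```py
from typing import TypedDict

class Person(TypedDict):
    name: str

def f(person: Person) -> None:
    reveal_type(type(person))  # revealed: type[dict[str, object]]
```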

## Test Plan

Updated Markdown tests.
2025-08-05 13:59:10 +02:00
Alex Waygood
3af0b31de3 [ty] Speedup the known_class_doesnt_fallback_to_unknown_unexpectedly_on_low_python_version test (#19760) 2025-08-05 11:55:11 +00:00
David Peter
7df7be5c7d [ty] Keep track of type qualifiers in stub declarations without right-hand side (#19756)
## Summary

closes https://github.com/astral-sh/ty/issues/937
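
A sketch of the kind of declaration the title refers to (assumed from the
title; the linked issue has the actual reproduction):

```pyi
# module.pyi
from typing import Final

MAX_RETRIES: Final[int]  # a qualifier (`Final`) on a declaration with no right-hand side
```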

## Test Plan

Regression test
2025-08-05 12:07:05 +02:00
Dhruv Manilawala
2d2841e20d [ty] Fix typing repository commit output in CI (#19754)
## Summary

This PR fixes the issue mentioned in
https://github.com/astral-sh/ruff/pull/19736#issuecomment-3151903662
~~but I can't test it without merging it on `main` because GitHub
Actions still picks up the old version of the workflow file.~~ and is
tested by manually triggering the workflow; refer to the comment on this
PR
(https://github.com/astral-sh/ruff/pull/19754#issuecomment-3153894179)
which has the commit hash.
2025-08-05 14:53:55 +05:30
David Peter
14fbc2b167 [ty] New Type variant for TypedDict (#19733)
## Summary

This PR adds a new `Type::TypedDict` variant. Before this PR, we treated
`TypedDict`-based types as dynamic Todo-types, and I originally planned
to make this change a no-op. And we do in fact still treat that new
variant similar to a dynamic type when it comes to type properties such
as assignability and subtyping. But then I somehow tricked myself into
implementing some of the things correctly, so here we are. The two main
behavioral changes are: (1) we now also detect generic `TypedDict`s,
which removes a few false positives in the ecosystem, and (2) we now
support *attribute* access (not key-based indexing!) on these types,
i.e. we infer proper types for something like
`MyTypedDict.__required_keys__`. Nothing exciting yet, but gets the
infrastructure into place.
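
For example (a sketch of the attribute access described above; the exact
inferred type isn't shown in the PR):

```py
from typing import TypedDict

class MyTypedDict(TypedDict):
    name: str
    age: int | None

# Attribute access on the class (not key-based indexing on an instance)
# now gets a proper type; at runtime this is a frozenset of key names.
reveal_type(MyTypedDict.__required_keys__)
```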

Note that with this PR, the type of (the type) `MyTypedDict` itself is
still represented as a `Type::ClassLiteral` or `Type::GenericAlias` (in
case `MyTypedDict` is generic). Only inhabitants of `MyTypedDict`
(instances of `dict` at runtime) are represented by `Type::TypedDict`.
We may want to revisit this decision in the future, if this turns out to
be too error-prone. Right now, we need to use `.is_typed_dict(db)` in
all the right places to distinguish between actual (generic) classes and
`TypedDict`s. But so far, it seemed unnecessary to add additional `Type`
variants for these as well.

part of https://github.com/astral-sh/ty/issues/154

## Ecosystem impact

The new diagnostics on `cloud-init` look like true positives to me.

## Test Plan

Updated and new Markdown tests
2025-08-05 11:19:49 +02:00
Shunsuke Shibayama
351121c5c5 [ty] fix incorrect lazy scope narrowing (#19744)
## Summary

This is a follow-up to #19321.

Narrowing constraints introduced in a class scope were not applied even
when they could be applied in lazy nested scopes. This PR fixes that so
they are now applied.
Conversely, there were cases where narrowing constraints were being
applied in places where they should not be; that is also fixed.

## Test Plan

Some TODOs in `narrow/conditionals/nested.md` now work correctly.
2025-08-04 20:32:08 -07:00
Shunsuke Shibayama
64bcc8db2f [ty] fix lookup order of class variables before they are defined (#19743)
## Summary

This is a follow-up to #19321.

If we try to access a class variable before it is defined, the variable
is looked up in the global scope, rather than in any enclosing scopes.

Closes https://github.com/astral-sh/ty/issues/875.
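
A small illustration of that lookup order:

```py
x = "global"

def outer():
    x = "enclosing"

    class C:
        # `x` is assigned later in this class body, so this read skips the
        # enclosing function scope and resolves to the global `x` ("global").
        print(x)
        x = "class"
```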

## Test Plan

New tests in `narrow/conditionals/nested.md`.
2025-08-04 20:21:28 -07:00
Roman Kitaev
b0f01ba514 [flake8-blind-except] Change BLE001 to correctly parse exception tuples (#19747)
## Summary

This PR enhances the `BLE001` rule to correctly detect blind exception
handling in tuple exceptions. Previously, the rule only checked single
exception types, but Python allows catching multiple exceptions using
tuples like `except (Exception, ValueError):`.

## Test Plan

It fails the following (whereas the main branch does not):

```bash
cargo run -p ruff -- check somefile.py --no-cache --select=BLE001
```

```python
# somefile.py

try:
    1/0
except (ValueError, Exception) as e:
    print(e)
```

```
somefile.py:3:21: BLE001 Do not catch blind exception: `Exception`
  |
1 | try:
2 |     1/0
3 | except (ValueError, Exception) as e:
  |                     ^^^^^^^^^ BLE001
4 |     print(e)
  |

Found 1 error.
```
2025-08-04 21:12:45 +00:00
Alex Waygood
3a9341f7be [ty] Remove false positives when subscripting Generic or Protocol with a ParamSpec or TypeVarTuple (#19749) 2025-08-04 21:42:46 +01:00
David Peter
739c94f95a [ty] Support as-patterns in reachability analysis (#19728)
## Summary

Support `as` patterns in reachability analysis:

```py
from typing import assert_never


def f(subject: str | int):
    match subject:
        case int() as x:
            pass
        case str():
            pass
        case _:
            assert_never(subject)  # would previously emit an error
```

Note that we still don't support inferring correct types for the bound
name (`x`).

Closes https://github.com/astral-sh/ty/issues/928

## Test Plan

New Markdown tests
2025-08-04 20:13:50 +02:00
Alex Waygood
af8587eabf [ty] Link directly to typing conformance test suite when commenting the diff (#19736) 2025-08-04 15:51:42 +01:00
Alex Waygood
41207ec901 [ty] Infer type[tuple[int, str]] as the meta-type of tuple[int, str] (#19741) 2025-08-04 13:10:47 +00:00
Alex Waygood
bc6e8b58ce [ty] Return Option<TupleType> from infer_tuple_type_expression (#19735)
## Summary

This PR reduces the virality of some of the `Todo` types in
`infer_tuple_type_expression`. Rather than inferring `Todo`, we instead
infer `tuple[Todo, ...]`. This reflects the fact that whatever the
contents of the slice in a `tuple[]` type expression, we would always
infer some kind of tuple type as the result of the type expression. Any
tuple type should be assignable to `tuple[Todo, ...]`, so this shouldn't
introduce any new false positives; this can be seen in the ecosystem
report.

As a result of the change, we are now able to enforce in the signature
of `Type::infer_tuple_type_expression` that it returns an
`Option<TupleType<'db>>`, which is more strongly typed and expresses
clearly the invariant that a tuple type expression should always be
inferred as a `tuple` type. To enable this, it was necessary to refactor
several `TupleType` constructors in `tuple.rs` so that they return
`Option<TupleType>` rather than `Type`; this means that callers of these
constructor functions are now free to either propagate the
`Option<TupleType<'db>>` or convert it to a `Type<'db>`.

## Test Plan

Mdtests updated.
2025-08-04 13:48:19 +01:00
Micha Reiser
e4d6b54a16 [ty] Fix failing test on windows (#19742) 2025-08-04 14:39:36 +02:00
Micha Reiser
17ee2a28ba [ty] Fix workspace diagnostics being recomputed (#19689) 2025-08-04 13:49:38 +02:00
Leandro Braga
de77b29798 [ty] clear the terminal screen in watch mode (#19712)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-08-04 13:45:37 +02:00
Micha Reiser
f473f6b6e5 [ty] Implement long-polling for workspace diagnostics (#19670) 2025-08-04 10:26:38 +00:00
Alex Waygood
736c4ab05a Remove myself as a codeowner from some ty crates (#19738) 2025-08-04 10:23:13 +00:00
Micha Reiser
8289432252 [ty] Always refresh diagnostics after a watched files change (#19697) 2025-08-04 12:19:18 +02:00
Micha Reiser
808c94d509 [ty] Implement streaming for workspace diagnostics (#19657) 2025-08-04 09:34:29 +00:00
Micha Reiser
b95d22c08e Don't flag pyrefly pragmas as unused code (ERA001) (#19731) 2025-08-04 10:15:37 +02:00
Micha Reiser
f3e66dd503 Revert "Update NPM Development dependencies" (#19730) 2025-08-04 07:33:58 +00:00
Micha Reiser
6516db7835 [ty] Add progress bar to watch (#19729) 2025-08-04 09:31:13 +02:00
renovate[bot]
03c873765e Update NPM Development dependencies (#19723)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-08-04 07:03:55 +00:00
renovate[bot]
c90707875e Update react monorepo to v19.1.1 (#19720)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 06:27:11 +00:00
renovate[bot]
8e20e589f1 Update dependency react-resizable-panels to v3.0.4 (#19717)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 08:21:56 +02:00
renovate[bot]
113e32b956 Update docker/metadata-action action to v5.8.0 (#19722)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 08:21:37 +02:00
renovate[bot]
fdc18eefc3 Update Swatinem/rust-cache action to v2.8.0 (#19725)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [Swatinem/rust-cache](https://redirect.github.com/Swatinem/rust-cache) | action | minor | `v2.7.8` -> `v2.8.0` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

---

### Release Notes

<details>
<summary>Swatinem/rust-cache (Swatinem/rust-cache)</summary>

###
[`v2.8.0`](https://redirect.github.com/Swatinem/rust-cache/releases/tag/v2.8.0)

[Compare
Source](https://redirect.github.com/Swatinem/rust-cache/compare/v2.7.8...v2.8.0)

#### What's Changed

- Add cache-workspace-crates feature by
[@&#8203;jbransen](https://redirect.github.com/jbransen) in
[https://github.com/Swatinem/rust-cache/pull/246](https://redirect.github.com/Swatinem/rust-cache/pull/246)
- Feat: support warpbuild cache provider by
[@&#8203;stegaBOB](https://redirect.github.com/stegaBOB) in
[https://github.com/Swatinem/rust-cache/pull/247](https://redirect.github.com/Swatinem/rust-cache/pull/247)

#### New Contributors

- [@&#8203;jbransen](https://redirect.github.com/jbransen) made their
first contribution in
[https://github.com/Swatinem/rust-cache/pull/246](https://redirect.github.com/Swatinem/rust-cache/pull/246)
- [@&#8203;stegaBOB](https://redirect.github.com/stegaBOB) made their
first contribution in
[https://github.com/Swatinem/rust-cache/pull/247](https://redirect.github.com/Swatinem/rust-cache/pull/247)

**Full Changelog**:
https://github.com/Swatinem/rust-cache/compare/v2.7.8...v2.8.0

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 08:38:05 +05:30
renovate[bot]
5f40651ae7 Update Rust crate notify to v8.2.0 (#19724)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [notify](https://redirect.github.com/notify-rs/notify) | workspace.dependencies | minor | `8.1.0` -> `8.2.0` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

---

### Release Notes

<details>
<summary>notify-rs/notify (notify)</summary>

###
[`v8.2.0`](https://redirect.github.com/notify-rs/notify/blob/HEAD/CHANGELOG.md#notify-820-2025-08-03)

[Compare
Source](https://redirect.github.com/notify-rs/notify/compare/notify-8.1.0...notify-8.2.0)

- FEATURE: notify user if inotify's `max_user_watches` has been reached
[#&#8203;698]
- FIX: `INotifyWatcher` ignore events with unknown watch descriptors
(instead of `EventMask::Q_OVERFLOW`) [#&#8203;700]

[#&#8203;698]: https://redirect.github.com/notify-rs/notify/pull/698

[#&#8203;700]: https://redirect.github.com/notify-rs/notify/pull/700

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 08:37:47 +05:30
renovate[bot]
ea031a3b39 Update taiki-e/install-action action to v2.57.6 (#19726)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [taiki-e/install-action](https://redirect.github.com/taiki-e/install-action) | action | minor | `v2.56.19` -> `v2.57.6` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

---

### Release Notes

<details>
<summary>taiki-e/install-action (taiki-e/install-action)</summary>

###
[`v2.57.6`](https://redirect.github.com/taiki-e/install-action/blob/HEAD/CHANGELOG.md#100---2021-12-30)

[Compare
Source](https://redirect.github.com/taiki-e/install-action/compare/v2.57.5...v2.57.6)

Initial release

https://redirect.github.com/taiki-e/install-action/compare/v2.44.34...v2.44.35

[2.44.34]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.33...v2.44.34

[2.44.33]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.32...v2.44.33

[2.44.32]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.31...v2.44.32

[2.44.31]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.30...v2.44.31

[2.44.30]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.29...v2.44.30

[2.44.29]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.28...v2.44.29

[2.44.28]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.27...v2.44.28

[2.44.27]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.26...v2.44.27

[2.44.26]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.25...v2.44.26

[2.44.25]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.24...v2.44.25

[2.44.24]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.23...v2.44.24

[2.44.23]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.22...v2.44.23

[2.44.22]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.21...v2.44.22

[2.44.21]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.20...v2.44.21

[2.44.20]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.19...v2.44.20

[2.44.19]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.18...v2.44.19

[2.44.18]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.17...v2.44.18

[2.44.17]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.16...v2.44.17

[2.44.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.15...v2.44.16

[2.44.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.14...v2.44.15

[2.44.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.13...v2.44.14

[2.44.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.12...v2.44.13

[2.44.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.11...v2.44.12

[2.44.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.10...v2.44.11

[2.44.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.9...v2.44.10

[2.44.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.8...v2.44.9

[2.44.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.7...v2.44.8

[2.44.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.6...v2.44.7

[2.44.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.5...v2.44.6

[2.44.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.4...v2.44.5

[2.44.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.3...v2.44.4

[2.44.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.2...v2.44.3

[2.44.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.1...v2.44.2

[2.44.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.44.0...v2.44.1

[2.44.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.43.7...v2.44.0

[2.43.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.43.6...v2.43.7

[2.43.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.43.5...v2.43.6

[2.43.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.43.4...v2.43.5

[2.43.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.43.3...v2.43.4

[2.43.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.43.2...v2.43.3

[2.43.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.43.1...v2.43.2

[2.43.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.43.0...v2.43.1

[2.43.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.42...v2.43.0

[2.42.42]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.41...v2.42.42

[2.42.41]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.40...v2.42.41

[2.42.40]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.39...v2.42.40

[2.42.39]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.38...v2.42.39

[2.42.38]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.37...v2.42.38

[2.42.37]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.36...v2.42.37

[2.42.36]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.35...v2.42.36

[2.42.35]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.34...v2.42.35

[2.42.34]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.33...v2.42.34

[2.42.33]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.32...v2.42.33

[2.42.32]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.31...v2.42.32

[2.42.31]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.30...v2.42.31

[2.42.30]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.29...v2.42.30

[2.42.29]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.28...v2.42.29

[2.42.28]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.27...v2.42.28

[2.42.27]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.26...v2.42.27

[2.42.26]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.25...v2.42.26

[2.42.25]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.24...v2.42.25

[2.42.24]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.23...v2.42.24

[2.42.23]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.22...v2.42.23

[2.42.22]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.21...v2.42.22

[2.42.21]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.20...v2.42.21

[2.42.20]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.19...v2.42.20

[2.42.19]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.18...v2.42.19

[2.42.18]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.17...v2.42.18

[2.42.17]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.16...v2.42.17

[2.42.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.15...v2.42.16

[2.42.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.14...v2.42.15

[2.42.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.13...v2.42.14

[2.42.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.12...v2.42.13

[2.42.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.11...v2.42.12

[2.42.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.10...v2.42.11

[2.42.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.9...v2.42.10

[2.42.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.8...v2.42.9

[2.42.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.7...v2.42.8

[2.42.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.6...v2.42.7

[2.42.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.5...v2.42.6

[2.42.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.4...v2.42.5

[2.42.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.3...v2.42.4

[2.42.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.2...v2.42.3

[2.42.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.1...v2.42.2

[2.42.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.42.0...v2.42.1

[2.42.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.18...v2.42.0

[2.41.18]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.17...v2.41.18

[2.41.17]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.16...v2.41.17

[2.41.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.15...v2.41.16

[2.41.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.14...v2.41.15

[2.41.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.13...v2.41.14

[2.41.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.12...v2.41.13

[2.41.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.11...v2.41.12

[2.41.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.10...v2.41.11

[2.41.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.9...v2.41.10

[2.41.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.8...v2.41.9

[2.41.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.7...v2.41.8

[2.41.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.6...v2.41.7

[2.41.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.5...v2.41.6

[2.41.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.4...v2.41.5

[2.41.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.3...v2.41.4

[2.41.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.2...v2.41.3

[2.41.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.1...v2.41.2

[2.41.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.41.0...v2.41.1

[2.41.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.40.2...v2.41.0

[2.40.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.40.1...v2.40.2

[2.40.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.40.0...v2.40.1

[2.40.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.39.2...v2.40.0

[2.39.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.39.1...v2.39.2

[2.39.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.39.0...v2.39.1

[2.39.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.38.7...v2.39.0

[2.38.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.38.6...v2.38.7

[2.38.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.38.5...v2.38.6

[2.38.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.38.4...v2.38.5

[2.38.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.38.3...v2.38.4

[2.38.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.38.2...v2.38.3

[2.38.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.38.1...v2.38.2

[2.38.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.38.0...v2.38.1

[2.38.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.37.0...v2.38.0

[2.37.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.36.0...v2.37.0

[2.36.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.35.0...v2.36.0

[2.35.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.34.3...v2.35.0

[2.34.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.34.2...v2.34.3

[2.34.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.34.1...v2.34.2

[2.34.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.34.0...v2.34.1

[2.34.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.36...v2.34.0

[2.33.36]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.35...v2.33.36

[2.33.35]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.34...v2.33.35

[2.33.34]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.33...v2.33.34

[2.33.33]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.32...v2.33.33

[2.33.32]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.31...v2.33.32

[2.33.31]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.30...v2.33.31

[2.33.30]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.29...v2.33.30

[2.33.29]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.28...v2.33.29

[2.33.28]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.27...v2.33.28

[2.33.27]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.26...v2.33.27

[2.33.26]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.25...v2.33.26

[2.33.25]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.24...v2.33.25

[2.33.24]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.23...v2.33.24

[2.33.23]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.22...v2.33.23

[2.33.22]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.21...v2.33.22

[2.33.21]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.20...v2.33.21

[2.33.20]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.19...v2.33.20

[2.33.19]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.18...v2.33.19

[2.33.18]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.17...v2.33.18

[2.33.17]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.16...v2.33.17

[2.33.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.15...v2.33.16

[2.33.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.14...v2.33.15

[2.33.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.13...v2.33.14

[2.33.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.12...v2.33.13

[2.33.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.11...v2.33.12

[2.33.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.10...v2.33.11

[2.33.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.9...v2.33.10

[2.33.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.8...v2.33.9

[2.33.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.7...v2.33.8

[2.33.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.6...v2.33.7

[2.33.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.5...v2.33.6

[2.33.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.4...v2.33.5

[2.33.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.3...v2.33.4

[2.33.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.2...v2.33.3

[2.33.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.1...v2.33.2

[2.33.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.33.0...v2.33.1

[2.33.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.20...v2.33.0

[2.32.20]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.19...v2.32.20

[2.32.19]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.18...v2.32.19

[2.32.18]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.17...v2.32.18

[2.32.17]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.16...v2.32.17

[2.32.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.15...v2.32.16

[2.32.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.14...v2.32.15

[2.32.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.13...v2.32.14

[2.32.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.12...v2.32.13

[2.32.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.11...v2.32.12

[2.32.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.10...v2.32.11

[2.32.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.9...v2.32.10

[2.32.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.8...v2.32.9

[2.32.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.7...v2.32.8

[2.32.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.6...v2.32.7

[2.32.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.5...v2.32.6

[2.32.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.4...v2.32.5

[2.32.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.3...v2.32.4

[2.32.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.2...v2.32.3

[2.32.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.1...v2.32.2

[2.32.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.32.0...v2.32.1

[2.32.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.31.3...v2.32.0

[2.31.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.31.2...v2.31.3

[2.31.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.31.1...v2.31.2

[2.31.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.31.0...v2.31.1

[2.31.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.30.0...v2.31.0

[2.30.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.29.8...v2.30.0

[2.29.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.29.7...v2.29.8

[2.29.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.29.6...v2.29.7

[2.29.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.29.5...v2.29.6

[2.29.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.29.4...v2.29.5

[2.29.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.29.3...v2.29.4

[2.29.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.29.2...v2.29.3

[2.29.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.29.1...v2.29.2

[2.29.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.29.0...v2.29.1

[2.29.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.16...v2.29.0

[2.28.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.15...v2.28.16

[2.28.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.14...v2.28.15

[2.28.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.13...v2.28.14

[2.28.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.12...v2.28.13

[2.28.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.11...v2.28.12

[2.28.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.10...v2.28.11

[2.28.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.9...v2.28.10

[2.28.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.8...v2.28.9

[2.28.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.7...v2.28.8

[2.28.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.6...v2.28.7

[2.28.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.5...v2.28.6

[2.28.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.4...v2.28.5

[2.28.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.3...v2.28.4

[2.28.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.2...v2.28.3

[2.28.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.1...v2.28.2

[2.28.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.28.0...v2.28.1

[2.28.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.15...v2.28.0

[2.27.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.14...v2.27.15

[2.27.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.13...v2.27.14

[2.27.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.12...v2.27.13

[2.27.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.11...v2.27.12

[2.27.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.10...v2.27.11

[2.27.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.9...v2.27.10

[2.27.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.8...v2.27.9

[2.27.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.7...v2.27.8

[2.27.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.6...v2.27.7

[2.27.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.5...v2.27.6

[2.27.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.4...v2.27.5

[2.27.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.3...v2.27.4

[2.27.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.2...v2.27.3

[2.27.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.1...v2.27.2

[2.27.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.27.0...v2.27.1

[2.27.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.20...v2.27.0

[2.26.20]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.19...v2.26.20

[2.26.19]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.18...v2.26.19

[2.26.18]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.17...v2.26.18

[2.26.17]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.16...v2.26.17

[2.26.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.15...v2.26.16

[2.26.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.14...v2.26.15

[2.26.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.13...v2.26.14

[2.26.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.12...v2.26.13

[2.26.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.11...v2.26.12

[2.26.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.10...v2.26.11

[2.26.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.9...v2.26.10

[2.26.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.8...v2.26.9

[2.26.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.7...v2.26.8

[2.26.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.6...v2.26.7

[2.26.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.5...v2.26.6

[2.26.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.4...v2.26.5

[2.26.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.3...v2.26.4

[2.26.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.2...v2.26.3

[2.26.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.1...v2.26.2

[2.26.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.26.0...v2.26.1

[2.26.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.11...v2.26.0

[2.25.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.10...v2.25.11

[2.25.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.9...v2.25.10

[2.25.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.8...v2.25.9

[2.25.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.7...v2.25.8

[2.25.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.6...v2.25.7

[2.25.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.5...v2.25.6

[2.25.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.4...v2.25.5

[2.25.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.3...v2.25.4

[2.25.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.2...v2.25.3

[2.25.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.1...v2.25.2

[2.25.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.25.0...v2.25.1

[2.25.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.24.4...v2.25.0

[2.24.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.24.3...v2.24.4

[2.24.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.24.2...v2.24.3

[2.24.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.24.1...v2.24.2

[2.24.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.24.0...v2.24.1

[2.24.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.9...v2.24.0

[2.23.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.8...v2.23.9

[2.23.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.7...v2.23.8

[2.23.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.6...v2.23.7

[2.23.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.5...v2.23.6

[2.23.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.4...v2.23.5

[2.23.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.3...v2.23.4

[2.23.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.2...v2.23.3

[2.23.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.1...v2.23.2

[2.23.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.23.0...v2.23.1

[2.23.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.10...v2.23.0

[2.22.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.9...v2.22.10

[2.22.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.8...v2.22.9

[2.22.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.7...v2.22.8

[2.22.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.6...v2.22.7

[2.22.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.5...v2.22.6

[2.22.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.4...v2.22.5

[2.22.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.3...v2.22.4

[2.22.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.2...v2.22.3

[2.22.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.1...v2.22.2

[2.22.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.22.0...v2.22.1

[2.22.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.27...v2.22.0

[2.21.27]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.26...v2.21.27

[2.21.26]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.25...v2.21.26

[2.21.25]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.24...v2.21.25

[2.21.24]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.23...v2.21.24

[2.21.23]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.22...v2.21.23

[2.21.22]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.21...v2.21.22

[2.21.21]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.20...v2.21.21

[2.21.20]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.19...v2.21.20

[2.21.19]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.18...v2.21.19

[2.21.18]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.17...v2.21.18

[2.21.17]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.16...v2.21.17

[2.21.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.15...v2.21.16

[2.21.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.14...v2.21.15

[2.21.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.13...v2.21.14

[2.21.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.12...v2.21.13

[2.21.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.11...v2.21.12

[2.21.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.10...v2.21.11

[2.21.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.9...v2.21.10

[2.21.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.8...v2.21.9

[2.21.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.7...v2.21.8

[2.21.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.6...v2.21.7

[2.21.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.5...v2.21.6

[2.21.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.4...v2.21.5

[2.21.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.3...v2.21.4

[2.21.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.2...v2.21.3

[2.21.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.1...v2.21.2

[2.21.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.21.0...v2.21.1

[2.21.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.17...v2.21.0

[2.20.17]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.16...v2.20.17

[2.20.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.15...v2.20.16

[2.20.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.14...v2.20.15

[2.20.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.13...v2.20.14

[2.20.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.12...v2.20.13

[2.20.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.11...v2.20.12

[2.20.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.10...v2.20.11

[2.20.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.9...v2.20.10

[2.20.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.8...v2.20.9

[2.20.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.7...v2.20.8

[2.20.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.6...v2.20.7

[2.20.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.5...v2.20.6

[2.20.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.4...v2.20.5

[2.20.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.3...v2.20.4

[2.20.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.2...v2.20.3

[2.20.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.1...v2.20.2

[2.20.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.20.0...v2.20.1

[2.20.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.19.4...v2.20.0

[2.19.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.19.3...v2.19.4

[2.19.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.19.2...v2.19.3

[2.19.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.19.1...v2.19.2

[2.19.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.19.0...v2.19.1

[2.19.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.17...v2.19.0

[2.18.17]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.16...v2.18.17

[2.18.16]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.15...v2.18.16

[2.18.15]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.14...v2.18.15

[2.18.14]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.13...v2.18.14

[2.18.13]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.12...v2.18.13

[2.18.12]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.11...v2.18.12

[2.18.11]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.10...v2.18.11

[2.18.10]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.9...v2.18.10

[2.18.9]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.8...v2.18.9

[2.18.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.7...v2.18.8

[2.18.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.6...v2.18.7

[2.18.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.5...v2.18.6

[2.18.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.4...v2.18.5

[2.18.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.3...v2.18.4

[2.18.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.2...v2.18.3

[2.18.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.1...v2.18.2

[2.18.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.18.0...v2.18.1

[2.18.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.17.8...v2.18.0

[2.17.8]:
https://redirect.github.com/taiki-e/install-action/compare/v2.17.7...v2.17.8

[2.17.7]:
https://redirect.github.com/taiki-e/install-action/compare/v2.17.6...v2.17.7

[2.17.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.17.5...v2.17.6

[2.17.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.17.4...v2.17.5

[2.17.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.17.3...v2.17.4

[2.17.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.17.2...v2.17.3

[2.17.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.17.1...v2.17.2

[2.17.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.17.0...v2.17.1

[2.17.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.16.5...v2.17.0

[2.16.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.16.4...v2.16.5

[2.16.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.16.3...v2.16.4

[2.16.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.16.2...v2.16.3

[2.16.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.16.1...v2.16.2

[2.16.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.16.0...v2.16.1

[2.16.0]:
https://redirect.github.com/taiki-e/install-action/compare/v2.15.6...v2.16.0

[2.15.6]:
https://redirect.github.com/taiki-e/install-action/compare/v2.15.5...v2.15.6

[2.15.5]:
https://redirect.github.com/taiki-e/install-action/compare/v2.15.4...v2.15.5

[2.15.4]:
https://redirect.github.com/taiki-e/install-action/compare/v2.15.3...v2.15.4

[2.15.3]:
https://redirect.github.com/taiki-e/install-action/compare/v2.15.2...v2.15.3

[2.15.2]:
https://redirect.github.com/taiki-e/install-action/compare/v2.15.1...v2.15.2

[2.15.1]:
https://redirect.github.com/taiki-e/install-action/compare/v2.15.0...v2.15.1

[2.15.0]: https://redirect.github.com/taiki-e/install-action/compar

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 08:37:14 +05:30
renovate[bot]
93b64daa4a Update pre-commit hook astral-sh/ruff-pre-commit to v0.12.7 (#19719)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
|
[astral-sh/ruff-pre-commit](https://redirect.github.com/astral-sh/ruff-pre-commit)
| repository | patch | `v0.12.5` -> `v0.12.7` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

Note: The `pre-commit` manager in Renovate is not supported by the
`pre-commit` maintainers or community. Please do not report any problems
there, instead [create a Discussion in the Renovate
repository](https://redirect.github.com/renovatebot/renovate/discussions/new)
if you have any questions.

---

### Release Notes

<details>
<summary>astral-sh/ruff-pre-commit (astral-sh/ruff-pre-commit)</summary>

###
[`v0.12.7`](https://redirect.github.com/astral-sh/ruff-pre-commit/releases/tag/v0.12.7)

[Compare
Source](https://redirect.github.com/astral-sh/ruff-pre-commit/compare/v0.12.6...v0.12.7)

See: https://github.com/astral-sh/ruff/releases/tag/0.12.7

###
[`v0.12.6`](https://redirect.github.com/astral-sh/ruff-pre-commit/releases/tag/v0.12.6)

[Compare
Source](https://redirect.github.com/astral-sh/ruff-pre-commit/compare/v0.12.5...v0.12.6)

See: https://github.com/astral-sh/ruff/releases/tag/0.12.7

Ruff's 0.12.6 release was yanked. See the linked release notes for more
information.

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 08:02:49 +05:30
renovate[bot]
74376375e4 Update dependency ruff to v0.12.7 (#19718)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| [ruff](https://docs.astral.sh/ruff)
([source](https://redirect.github.com/astral-sh/ruff),
[changelog](https://redirect.github.com/astral-sh/ruff/blob/main/CHANGELOG.md))
| `==0.12.5` -> `==0.12.7` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/ruff/0.12.7?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/ruff/0.12.5/0.12.7?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>astral-sh/ruff (ruff)</summary>

###
[`v0.12.7`](https://redirect.github.com/astral-sh/ruff/blob/HEAD/CHANGELOG.md#0127)

This is a follow-up release to 0.12.6. Because of an issue in the
package metadata, 0.12.6 failed to publish fully to PyPI and has been
yanked. Similarly, there is no GitHub release or Git tag for 0.12.6. The
contents of the 0.12.7 release are identical to 0.12.6, except for the
updated metadata.

###
[`v0.12.6`](https://redirect.github.com/astral-sh/ruff/blob/HEAD/CHANGELOG.md#0126)

##### Preview features

- \[`flake8-commas`] Add support for trailing comma checks in type
parameter lists (`COM812`, `COM819`)
([#&#8203;19390](https://redirect.github.com/astral-sh/ruff/pull/19390))
- \[`pylint`] Implement auto-fix for `missing-maxsplit-arg` (`PLC0207`)
([#&#8203;19387](https://redirect.github.com/astral-sh/ruff/pull/19387))
- \[`ruff`] Offer fixes for `RUF039` in more cases
([#&#8203;19065](https://redirect.github.com/astral-sh/ruff/pull/19065))

##### Bug fixes

- Support `.pyi` files in ruff analyze graph
([#&#8203;19611](https://redirect.github.com/astral-sh/ruff/pull/19611))
- \[`flake8-pyi`] Preserve inline comment in ellipsis removal (`PYI013`)
([#&#8203;19399](https://redirect.github.com/astral-sh/ruff/pull/19399))
- \[`perflint`] Ignore rule if target is `global` or `nonlocal`
(`PERF401`)
([#&#8203;19539](https://redirect.github.com/astral-sh/ruff/pull/19539))
- \[`pyupgrade`] Fix `UP030` to avoid modifying double curly braces in
format strings
([#&#8203;19378](https://redirect.github.com/astral-sh/ruff/pull/19378))
- \[`refurb`] Ignore decorated functions for `FURB118`
([#&#8203;19339](https://redirect.github.com/astral-sh/ruff/pull/19339))
- \[`refurb`] Mark `int` and `bool` cases for `Decimal.from_float` as
safe fixes (`FURB164`)
([#&#8203;19468](https://redirect.github.com/astral-sh/ruff/pull/19468))
- \[`ruff`] Fix `RUF033` for named default expressions
([#&#8203;19115](https://redirect.github.com/astral-sh/ruff/pull/19115))

##### Rule changes

- \[`flake8-blind-except`] Change `BLE001` to permit
`logging.critical(..., exc_info=True)`
([#&#8203;19520](https://redirect.github.com/astral-sh/ruff/pull/19520))

##### Performance

- Add support for specifying minimum dots in detected string imports
([#&#8203;19538](https://redirect.github.com/astral-sh/ruff/pull/19538))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 08:01:53 +05:30
renovate[bot]
30f52d8cf5 Update cargo-bins/cargo-binstall action to v1.14.3 (#19716)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
|
[cargo-bins/cargo-binstall](https://redirect.github.com/cargo-bins/cargo-binstall)
| action | patch | `v1.14.2` -> `v1.14.3` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>cargo-bins/cargo-binstall (cargo-bins/cargo-binstall)</summary>

###
[`v1.14.3`](https://redirect.github.com/cargo-bins/cargo-binstall/releases/tag/v1.14.3)

[Compare
Source](https://redirect.github.com/cargo-bins/cargo-binstall/compare/v1.14.2...v1.14.3)

*Binstall is a tool to fetch and install Rust-based executables as
binaries. It aims to be a drop-in replacement for `cargo install` in
most cases. Install it today with `cargo install cargo-binstall`, from
the binaries below, or if you already have it, upgrade with `cargo
binstall cargo-binstall`.*

##### In this release:

- Fix race condition in target detections
([#&#8203;2238](https://redirect.github.com/cargo-bins/cargo-binstall/issues/2238))

##### Other changes:

- Upgrade dependencies

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 08:01:21 +05:30
renovate[bot]
77bc32b9b9 Update Rust crate serde_json to v1.0.142 (#19721)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [serde_json](https://redirect.github.com/serde-rs/json) |
workspace.dependencies | patch | `1.0.141` -> `1.0.142` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>serde-rs/json (serde_json)</summary>

###
[`v1.0.142`](https://redirect.github.com/serde-rs/json/releases/tag/v1.0.142)

[Compare
Source](https://redirect.github.com/serde-rs/json/compare/v1.0.141...v1.0.142)

- impl Default for \&Value
([#&#8203;1265](https://redirect.github.com/serde-rs/json/issues/1265),
thanks [@&#8203;aatifsyed](https://redirect.github.com/aatifsyed))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 08:00:58 +05:30
Dan Parizher
1a368b0bf9 [flake8-simplify] Fix raw string handling in SIM905 for embedded quotes (#19591)
## Summary

When splitting triple-quoted raw strings, one has to take care before attempting to wrap each item in single quotes.

Fixes #19577
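
For illustration, a minimal sketch of the kind of case this is about (my own example, not taken from the PR or its tests):

```py
# SIM905 suggests replacing `.split()` on a static string with a list literal.
# With a raw, triple-quoted string that embeds quotes, the replacement has to
# pick a quote style per item instead of forcing single quotes everywhere:
r"""it's "fine" here""".split()

# A correct replacement mixes quote styles:
["it's", '"fine"', "here"]
```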

---------

Co-authored-by: dylwil3 <dylwil3@gmail.com>
2025-08-03 11:31:28 -05:00
Jérémy Scanvic
134435415e Change 'associative' to 'commutative' in docs describing union (#19706)
Thanks for the great tool!

I noticed a small typo [in the
docs](https://docs.astral.sh/ruff/rules/none-not-at-end-of-union/): it's
[commutativity](https://en.wikipedia.org/wiki/Commutative_property) that
makes the order not matter in type unions, not
[associativity](https://en.wikipedia.org/wiki/Associative_property),
which is something different.

I've made the change in this PR.
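
For illustration (my own snippet, not part of the docs change): commutativity is what lets the operands of a union be reordered without changing the type.

```py
from typing import Optional, Union

# These three annotations describe the same type; only the spelling differs,
# which is why the rule about where `None` goes is purely stylistic.
x: Union[int, None]
y: Union[None, int]
z: Optional[int]
```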
2025-08-03 16:30:56 +00:00
cristian64
bc6e105c18 Include column numbers in GitLab output format. (#19708) 2025-08-03 12:37:01 +00:00
Micha Reiser
6bd413df6c [ty] Update salsa (#19710) 2025-08-03 09:18:10 +00:00
Nathaniel Roman
85bd961fd3 [ty] resolve file symlinks in src walk (#19674)
Co-authored-by: Nathaniel Roman <nroman@openai.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-08-01 22:52:04 +02:00
Douglas Creager
d37911685f [ty] Correctly instantiate generic class that inherits __init__ from generic base class (#19693)
This is subtle, and the root cause became more apparent with #19604,
since we now have many more cases of superclasses and subclasses using
different typevars. The issue is easiest to see in the following:

```py
class C[T]:
    def __init__(self, t: T) -> None: ...

class D[U](C[U]):
    pass

reveal_type(C(1))  # revealed: C[int]
reveal_type(D(1))  # should be: D[int]
```

When instantiating a generic class, the `__init__` method inherits the
generic context of that class. This lets our call binding machinery
infer a specialization for that context.

Prior to this PR, the instantiation of `C` worked just fine. Its
`__init__` method would inherit the `[T]` generic context, and we would
infer `{T = int}` as the specialization based on the argument
parameters.

It didn't work for `D`. The issue is that the `__init__` method was
inheriting the generic context of the class where `__init__` was defined
(here, `C` and `[T]`). At the call site, we would then infer `{T = int}`
as the specialization — but that wouldn't help us specialize `D[U]`,
since `D` does not have `T` in its generic context!

Instead, the `__init__` method should inherit the generic context of the
class that we are performing the lookup on (here, `D` and `[U]`). That
lets us correctly infer `{U = int}` as the specialization, which we can
successfully apply to `D[U]`.

(Note that `__init__` refers to `C`'s typevars in its signature, but
that's okay; our member lookup logic already applies the `T = U`
specialization when returning a member of `C` while performing a lookup
on `D`, transforming its signature from `(Self, T) -> None` to `(Self,
U) -> None`.)

Closes https://github.com/astral-sh/ty/issues/588
2025-08-01 15:29:18 -04:00
GiGaGon
580577e667 [refurb] Make example error out-of-the-box (FURB157) (#19695)
## Summary

Part of #18972

This PR makes [verbose-decimal-constructor
(FURB157)](https://docs.astral.sh/ruff/rules/verbose-decimal-constructor/#verbose-decimal-constructor-furb157)'s
example error out-of-the-box.

[Old example](https://play.ruff.rs/0930015c-ad45-4490-800e-66ed057bfe34)
```py
Decimal("0")
Decimal(float("Infinity"))
```

[New example](https://play.ruff.rs/516e5992-322f-4203-afe7-46d8cad53368)
```py
from decimal import Decimal

Decimal("0")
Decimal(float("Infinity"))
```

Imports were also added to the "Use Instead" section.
2025-08-01 12:55:48 -05:00
Dan Parizher
dce25da19a [flake8-errmsg] Exclude typing.cast from EM101 (#19656)
## Summary

Fixes #19596
2025-08-01 13:37:44 -04:00
Douglas Creager
06cd249a9b [ty] Track different uses of legacy typevars, including context when rendering typevars (#19604)
This PR introduces a few related changes:

- We now keep track of each time a legacy typevar is bound in a
different generic context (e.g. class, function), and internally create
a new `TypeVarInstance` for each usage. This means the rest of the code
can now assume that salsa-equivalent `TypeVarInstance`s refer to the
same typevar, even taking into account that legacy typevars can be used
more than once.

- We also go ahead and track the binding context of PEP 695 typevars.
That's _much_ easier to track since we have the binding context right
there during type inference.

- With that in place, we can now include the name of the binding context
when rendering typevars (e.g. `T@f` instead of `T`)
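
A minimal sketch of what the last point means in practice (the `T@f` spelling is taken from the example above; the rest is an assumed illustration, not code from the PR):

```py
from typing import TypeVar

T = TypeVar("T")  # one legacy typevar object...

def f(x: T) -> T:  # ...bound here in `f`'s generic context
    return x

def g(x: T) -> T:  # ...and bound again, independently, in `g`'s generic context
    return x

# With this change, the two uses are tracked as distinct bindings and can be
# rendered with their binding context, e.g. `T@f` and `T@g`, instead of a bare `T`.
```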
2025-08-01 12:20:32 -04:00
David Peter
48d5bd13fa [ty] Initial test suite for TypedDict (#19686)
## Summary

Adds an initial set of tests based on the highest-priority items in
https://github.com/astral-sh/ty/issues/154. This is certainly not yet
exhaustive (required/non-required, `total`, and other things are
missing), but will be useful to measure progress on this feature.
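
As a rough illustration of the kind of code these tests exercise (my own snippet, not an excerpt from the test suite):

```py
from typing import TypedDict

class Person(TypedDict):
    name: str
    age: int

alice: Person = {"name": "Alice", "age": 30}   # ok
bob: Person = {"name": "Bob"}                  # missing "age": should be flagged
carol: Person = {"name": "Carol", "age": "?"}  # wrong value type: should be flagged
```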

## Test Plan

Checked intended behavior against runtime and other type checkers.
2025-08-01 16:56:02 +02:00
Alex Waygood
e7e7b7bf21 [ty] Improve debuggability of protocol types (#19662) 2025-08-01 15:16:13 +01:00
Alex Waygood
57e2e8664f [ty] Simplify lifetime requirements for PySlice trait (#19687) 2025-08-01 15:13:47 +01:00
Alex Waygood
18aae21b9a [ty] Improve isinstance() truthiness analysis for generic types (#19668) 2025-08-01 14:44:22 +01:00
GiGaGon
d8151f0239 [refurb] Make example error out-of-the-box (FURB164) (#19673)
## Summary

Part of #18972

This PR makes [unnecessary-from-float
(FURB164)](https://docs.astral.sh/ruff/rules/unnecessary-from-float/#unnecessary-from-float-furb164)'s
example error out-of-the-box.

[Old example](https://play.ruff.rs/807ef72f-9671-408d-87ab-8b8bad65b33f)
```py
Decimal.from_float(4.2)
Decimal.from_float(float("inf"))
Fraction.from_float(4.2)
Fraction.from_decimal(Decimal("4.2"))
```

[New example](https://play.ruff.rs/303680d1-8a68-4b6c-a5fd-d79c56eb0f88)
```py
from decimal import Decimal
from fractions import Fraction

Decimal.from_float(4.2)
Decimal.from_float(float("inf"))
Fraction.from_float(4.2)
Fraction.from_decimal(Decimal("4.2"))
```

The "Use instead" section also had imports added, and one of the fixed
examples was slightly wrong and needed modification.

## Test Plan

N/A, no functionality/tests affected
2025-08-01 07:49:58 -05:00
Hunter Hogan
2ee56735e2 Fix link: unused_import.rs (#19648) 2025-08-01 08:47:52 +00:00
David Peter
ade6a4262a [ty] Remove Specialization::display (full) (#19682)
## Summary

This seems to be unused
2025-08-01 10:44:11 +02:00
David Peter
d43e6fb9c6 [ty] Remove KnownModule::is_enum (#19681)
## Summary

Changes the visibility of `KnownModule` and removes an unneeded
function.
2025-08-01 10:31:12 +02:00
Matthew Mckee
b30d97e5e0 [ty] Support __setitem__ and improve __getitem__ related diagnostics (#19578)
## Summary

Adds validation to subscript assignment expressions.

```py
class Foo: ...

class Bar:
    __setattr__ = None

class Baz:
    def __setitem__(self, index: str, value: int) -> None:
        pass

# We now emit a diagnostic on these statements
Foo()[1] = 2
Bar()[1] = 2
Baz()[1] = 2

```

Also improves error messages on invalid `__getitem__` expressions
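
For example (an assumed illustration, not an excerpt from the PR):

```py
class NoGetItem: ...

# Subscripting a type that does not define `__getitem__` should now produce a
# clearer diagnostic than before:
NoGetItem()[0]
```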

## Test Plan

Update mdtests and add more to `subscript/instance.md`

---------

Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
Co-authored-by: David Peter <mail@david-peter.de>
2025-08-01 09:23:27 +02:00
github-actions[bot]
5c5d50d57a [ty] Sync vendored typeshed stubs (#19676)
Close and reopen this PR to trigger CI

---------

Co-authored-by: typeshedbot <>
2025-08-01 08:43:42 +02:00
Dan Parizher
b3a26a50ad [flake8-use-pathlib] Expand PTH201 to check all PurePath subclasses (#19440)
## Summary

Fixes #19437
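
As an illustration (assumed from the PR title rather than quoted from the rule docs), the current-directory check now applies to any `PurePath` subclass, not just `Path`:

```py
from pathlib import Path, PurePosixPath

Path(".")           # PTH201: prefer `Path()`
PurePosixPath(".")  # with this change, also flagged: prefer `PurePosixPath()`
```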
2025-07-31 22:18:07 -04:00
GiGaGon
6a2d358d7a [refurb] Make example error out-of-the-box (FURB180) (#19672)

## Summary


Part of #18972

This PR makes [meta-class-abc-meta
(FURB180)](https://docs.astral.sh/ruff/rules/meta-class-abc-meta/#meta-class-abc-meta-furb180)'s
example error out-of-the-box.

[Old example](https://play.ruff.rs/6beca1be-45cd-4e5a-aafa-6a0584c10d64)
```py
class C(metaclass=ABCMeta):
    pass
```

[New example](https://play.ruff.rs/bbad34da-bf07-44e6-9f34-53337e8f57d4)
```py
import abc


class C(metaclass=abc.ABCMeta):
    pass
```

The "Use instead" section was also modified similarly.
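
A sketch of the corresponding "Use instead" form, assuming it mirrors the updated example above:

```py
import abc


class C(abc.ABC):
    pass
```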

## Test Plan


N/A, no functionality/tests affected
2025-07-31 17:14:15 -04:00
Dan Parizher
b07def07c9 [pyupgrade] Prevent infinite loop with I002 (UP010, UP035) (#19413)
## Summary

Fixes #18729 and fixes #16802

## Test Plan

Manually verified via CLI that Ruff no longer enters an infinite loop by
running:
```sh
echo 1 | ruff --isolated check - --select I002,UP010 --fix
```
with `required-imports = ["from __future__ import generator_stop"]` set
in the config, confirming “All checks passed!” and no snapshots were
generated.

---------

Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-07-31 15:17:27 -04:00
Alex Waygood
2ab1502e51 [ty] Improve the Display for generic type[] types (#19667) 2025-07-31 19:45:01 +01:00
Alex Waygood
a3f28baab4 [ty] Refactor TypeInferenceBuilder::infer_subscript_expression_types (#19658) 2025-07-31 13:38:43 +00:00
Brent Westbrook
a71513bae1 Fix tests on 32-bit architectures (#19652)
Summary
--

Fixes #19640. I'm not sure these are the exact fixes we really want, but I
reproduced the issue in a 32-bit Docker container and tracked down the
causes, so I figured I'd open a PR.

As I commented on the issue, the `goto_references` test depends on the
iteration order of the files in an `FxHashSet` in `Indexed`. In this case,
we can just sort the output in test code.

Similarly, the tuple case depended on the order of overloads inserted in an
`FxHashMap`. `FxIndexMap` seemed like a convenient drop-in replacement, but
I don't know if that will have other detrimental effects. I did have to
change the assertion for the tuple test, but I think it should now be
stable across architectures.

Test Plan
--

Running the tests in the aforementioned Docker container
2025-07-31 08:52:19 -04:00
Alex Waygood
d2d4b115e3 [ty] Move pandas-stubs to bad.txt (#19659) 2025-07-31 12:33:24 +01:00
Alex Waygood
27b03a9d7b [ty] Remove special casing for string-literal-in-tuple __contains__ (#19642) 2025-07-31 11:28:03 +01:00
Harshil
32c454bb56 Update pre-commit's ruff id (#19654) 2025-07-31 07:17:04 +02:00
Ibraheem Ahmed
f6b7418def Update salsa (#19449)
## Summary

Pulls in a bunch of salsa micro-optimizations.
2025-07-30 15:31:46 -04:00
Ibraheem Ahmed
8f8c39c435 Simplify get_size2 usage (#19643)
## Summary

These were added in the 0.5.0 release.
2025-07-30 15:31:37 -04:00
Matthew Mckee
4739bc8d14 [ty] Fix incorrect diagnostic when calling __setitem__ (#19645)
## Summary

Resolves https://github.com/astral-sh/ty/issues/862 by not emitting a
diagnostic.

## Test Plan

Add test to show we don't emit the diagnostic
2025-07-30 20:34:52 +02:00
Alex Waygood
7b4103bcb6 [ty] Remove special casing for tuple addition (#19636) 2025-07-30 16:25:42 +00:00
Jim Hoekstra
38049aae12 fix missing-required-import introducing syntax error after docstring ending with backslash (#19505)
Issue: https://github.com/astral-sh/ruff/issues/19498

## Summary


[missing-required-import](https://docs.astral.sh/ruff/rules/missing-required-import/)
inserts the missing import on the line immediately following the last
line of the docstring. However, if the docstring is immediately followed
by a continuation token (i.e., a backslash), this leads to a syntax
error, because Python then interprets the docstring and the inserted import
as being on the same line.

The proposed solution in this PR is to check if the first token after a
file docstring is a continuation character, and if so, to advance an
additional line before inserting the missing import.

## Test Plan

Added a unit test, and the following example was verified manually:

Given this simple test Python file:

```python
"Hello, World!"\

print(__doc__)
```

and this ruff linting configuration in the `pyproject.toml` file:

```toml
[tool.ruff.lint]
select = ["I"]

[tool.ruff.lint.isort]
required-imports = ["import sys"]
```

Without the changes in this PR, the ruff linter would try to insert the
missing import in line 2, resulting in a syntax error, and report the
following:

`error: Fix introduced a syntax error. Reverting all changes.`

With the changes in this PR, ruff correctly advances one more line
before adding the missing import, resulting in the following output:

```python
"Hello, World!"\

import sys

print(__doc__)
```

---------

Co-authored-by: Jim Hoekstra <jim.hoekstra@pacmed.nl>
2025-07-30 12:12:46 -04:00
Alex Waygood
ec3d5ebda2 [ty] Upcast heterogeneous and mixed tuples to homogeneous tuples where it's necessary to solve a TypeVar (#19635)
## Summary

This PR improves our generics solver such that we are able to solve the
`TypeVar` in this snippet to `int | str` (the union of the elements in
the heterogeneous tuple) by upcasting the heterogeneous tuple to its
pure-homogeneous-tuple supertype:

```py
def f[T](x: tuple[T, ...]) -> T:
    return x[0]

def g(x: tuple[int, str]):
    reveal_type(f(x))
```
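
As a further illustration (my own example, not one of the PR's mdtests), the same upcasting should generalize to wider heterogeneous tuples:

```py
def h(x: tuple[int, str, bytes]):
    reveal_type(f(x))  # expected to solve `T` as `int | str | bytes`
```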

## Test Plan

Mdtests. Some TODOs remain in the mdtest regarding solving `TypeVar`s
for mixed tuples, but I think this PR on its own is a significant step
forward for our generics solver when it comes to tuple types.

---------

Co-authored-by: Douglas Creager <dcreager@dcreager.net>
2025-07-30 17:12:21 +01:00
Micha Reiser
d797592f70 [ty] Fix server panic in workspace diagnostics request handler when typing (#19631) 2025-07-30 16:40:42 +01:00
David Peter
eb02aa5676 [ty] Async for loops and async iterables (#19634)
## Summary

Add support for `async for` loops and async iterables.

part of https://github.com/astral-sh/ty/issues/151

## Ecosystem impact

```diff
- boostedblob/listing.py:445:54: warning[unused-ignore-comment] Unused blanket `type: ignore` directive
```

This is correct. We now find a true positive in the `# type: ignore`'d
code.

All of the other ecosystem hits are of the type

```diff
trio (https://github.com/python-trio/trio)
+ src/trio/_core/_tests/test_guest_mode.py:532:24: error[not-iterable] Object of type `MemorySendChannel[int] | MemoryReceiveChannel[int]` may not be iterable
```

The message is correct, because only `MemoryReceiveChannel` has an
`__aiter__` method, but `MemorySendChannel` does not. What's not correct
is our inferred type here. It should be `MemoryReceiveChannel[int]`, not
the union of the two. This is due to missing unpacking support for tuple
subclasses, which @AlexWaygood is working on. I don't think this should
block merging this PR, because those wrong types are already there,
without this PR.
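
For readers unfamiliar with the protocol, a minimal, self-contained example of an async iterable (illustrative only; the PR's tests are Markdown-based):

```py
import asyncio


class Countdown:
    def __init__(self, n: int) -> None:
        self.n = n

    def __aiter__(self) -> "Countdown":
        return self

    async def __anext__(self) -> int:
        if self.n == 0:
            raise StopAsyncIteration
        self.n -= 1
        return self.n


async def main() -> None:
    # `async for` requires `__aiter__`/`__anext__` on the iterable
    async for i in Countdown(3):
        print(i)


asyncio.run(main())
```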

## Test Plan

New Markdown tests and snapshot tests for diagnostics.
2025-07-30 17:40:24 +02:00
Dan Parizher
e593761232 [ruff] Parenthesize generator expressions in f-strings (RUF010) (#19434)
## Summary

Fixes #19433

---------

Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
2025-07-30 15:02:31 +00:00
Brent Westbrook
8979271ea8 Always expand tabs to four spaces in diagnostics (#19618)
## Summary

I was a bit stuck on some snapshot differences I was seeing in #19415,
but @BurntSushi pointed out that `annotate-snippets` already normalizes
tabs on its own, which was very helpful! Instead of applying this change
directly to the other branch, I wanted to try applying it in
`ruff_linter` first. This should very slightly reduce the number of
changes in #19415 proper.

It looks like `annotate-snippets` always expands a tab to four spaces,
whereas I think we were aligning to tab stops:

```diff
  6 | spam(ham[1], { eggs: 2})
  7 | #: E201:1:6
- 8 | spam(   ham[1], {eggs: 2})
-   |      ^^^ E201
+ 8 | spam(    ham[1], {eggs: 2})
+   |      ^^^^ E201
```

```diff
61 | #: E203:2:15 E702:2:16
 62 | if x == 4:
-63 |     print(x, y) ; x, y = y, x
-   |                ^ E203
+63 |     print(x, y)    ; x, y = y, x
+   |                ^^^^ E203
```

```diff
 E27.py:15:6: E271 [*] Multiple spaces after keyword
    |
-13 | True        and False
+13 | True        and    False
 14 | #: E271
 15 | a and  b
    |      ^^ E271
```

I don't think this is too bad and has the major benefit of allowing us
to pass the non-tab-expanded range to `annotate-snippets` in #19415,
where it's also displayed in the header. Ruff doesn't have this problem
currently because it uses its own concise diagnostic output as the
header for full diagnostics, where the pre-expansion range is used
directly.
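
In Python terms, the difference between the two behaviors described above looks roughly like this (an analogy only, not the Rust implementation):

```py
s = "a\tb"
print(s.replace("\t", "    "))  # fixed four-space expansion: "a    b"
print(s.expandtabs(4))          # tab-stop alignment: "a   b" (pads to the next multiple of 4)
```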

## Test Plan

Existing tests with a few snapshot updates
2025-07-30 11:00:36 -04:00
Andrew Gallant
d1a286226c [ty] Update module resolution diagram to account for typeshed VERSIONS file
This does unfortunately add a fair bit of complexity to the flow
diagram.

Ref https://github.com/astral-sh/ruff/pull/19620#issuecomment-3133684294
2025-07-30 10:34:58 -04:00
Micha Reiser
1ba32684da Fix copy and line separator colors in dark mode (#19630) 2025-07-30 15:08:31 +01:00
Micha Reiser
70d4b271da [ty] Remove AssertUnwindSafe requirement from ProgressReporter (#19637) 2025-07-30 12:46:44 +00:00
Alex Waygood
feaedb1812 [ty] Synthesize precise __getitem__ overloads for tuple subclasses (#19493) 2025-07-30 11:25:44 +00:00
Micha Reiser
6237ecb4db [ty] Add progress reporting to workspace diagnostics (#19616) 2025-07-30 10:27:34 +00:00
Micha Reiser
2a5ace6e55 [ty] Implement diagnostic caching (#19605) 2025-07-30 11:04:34 +01:00
David Peter
4ecf1d205a [ty] Support async/await, async with and yield from (#19595)
## Summary

- Add support for the return types of `async` functions
- Add type inference for `await` expressions
- Add support for `async with` / async context managers
- Add support for `yield from` expressions

This PR is generally lacking proper error handling in some cases (e.g.
illegal `__await__` attributes). I'm planning to work on this in a
follow-up.

part of https://github.com/astral-sh/ty/issues/151

closes https://github.com/astral-sh/ty/issues/736
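
A minimal illustration of the constructs covered (my own sketch using standard `__aenter__`/`__aexit__` semantics, not taken from the PR's tests):

```py
import asyncio


class Resource:
    async def __aenter__(self) -> "Resource":
        return self

    async def __aexit__(self, *exc: object) -> None:
        return None


def gen():
    yield from range(3)  # `yield from` delegation


async def main() -> str:
    async with Resource():      # async context manager
        await asyncio.sleep(0)  # `await` expression
    return "done"


print(asyncio.run(main()), list(gen()))
```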

## Ecosystem

There are a lot of true positives on `prefect` which look similar to:
```diff
prefect (https://github.com/PrefectHQ/prefect)
+ src/integrations/prefect-aws/tests/workers/test_ecs_worker.py:406:12: error[unresolved-attribute] Type `str` has no attribute `status_code`
```

This is due to a wrong return type annotation
[here](e926b8c4c1/src/integrations/prefect-aws/tests/workers/test_ecs_worker.py (L355-L391)).

```diff
mitmproxy (https://github.com/mitmproxy/mitmproxy)
+ test/mitmproxy/addons/test_clientplayback.py:18:1: error[invalid-argument-type] Argument to function `asynccontextmanager` is incorrect: Expected `(...) -> AsyncIterator[Unknown]`, found `def tcp_server(handle_conn, **server_args) -> Unknown | tuple[str, int]`
```


[This](a4d794c59a/test/mitmproxy/addons/test_clientplayback.py (L18-L19))
is a true positive. That function should return
`AsyncIterator[Address]`, not `Address`.

I looked through almost all of the other new diagnostics and they all
look like known problems or true positives.

## Typing conformance

The typing conformance diff looks good.

## Test Plan

New Markdown tests
2025-07-30 11:51:21 +02:00
Brent Westbrook
c5ac998892 Bump 0.12.7 (#19627)
## Test Plan

- [x] Download the [sdist
artifact](https://github.com/astral-sh/ruff/actions/runs/16608501774/artifacts/3643617012)
and check that the LICENSE is present
2025-07-29 18:18:42 -04:00
Brent Westbrook
04a8f64cd7 Revert license and license-files changes in pyproject.toml (#19624)
Summary
--

This partially reverts commit 13634ff433
after issues in the release today.

Test Plan
--

```shell
uv build --sdist
tar -tzf dist/ruff-0.12.6.tar.gz | grep ruff-0.12.6/LICENSE
```

which finds the license now.
2025-07-29 17:27:55 -04:00
Brent Westbrook
6e00adf308 Bump 0.12.6 (#19622) 2025-07-29 16:31:01 -04:00
Brent Westbrook
864196b988 Add Checker::context method, deduplicate Unicode checks (#19609)
Summary
--

This PR adds a `Checker::context` method that returns the underlying
`LintContext` to unify `Candidate::into_diagnostic` and
`Candidate::report_diagnostic` in our ambiguous Unicode character
checks. This avoids some duplication and also avoids collecting a `Vec`
of `Candidate`s only to iterate over it later.

Test Plan
--

Existing tests
2025-07-29 16:07:55 -04:00
Thomas Mattone
ae26fa020c [flake8-pyi] Preserve inline comment in ellipsis removal (PYI013) (#19399)
## Summary

Fixes #19385.

Based on [unnecessary-placeholder
(PIE790)](https://docs.astral.sh/ruff/rules/unnecessary-placeholder/)
behavior, [ellipsis-in-non-empty-class-body
(PYI013)](https://docs.astral.sh/ruff/rules/ellipsis-in-non-empty-class-body/)
now safely preserves the inline comment when removing the ellipsis.

## Test Plan

A new test class was added:

```python
class NonEmptyChildWithInlineComment:
    value: int
    ... # preserve me
```

with the following expected fix:

```python
class NonEmptyChildWithInlineComment:
    value: int
    # preserve me
```
2025-07-29 15:06:04 -04:00
Andrew Gallant
88a679945c [ty] Add flow diagram for import resolution
The diagram is written in the Dot language, which can
be converted to SVG (or any other image) by GraphViz.

I thought it was a good idea to write this down in
preparation for adding routines that list modules.
Code reuse is likely to be difficult and I wanted to
be sure I understood how it worked.
2025-07-29 14:49:20 -04:00
Andrew Gallant
941be52358 [ty] Add comments to some core resolver functions
Some of the contracts were a little tricky to discover from just the
parameter types, so I added some docs (and fixed what I believe was one
typo).
2025-07-29 14:49:20 -04:00
Andrew Gallant
13624ce17f [ty] Add missing ticks and use consistent quoting
This irked me while I was reading the code, so I just tried to fix what
I could see.
2025-07-29 14:49:20 -04:00
Andrew Gallant
edb2f8e997 [ty] Reflow some long lines
I mostly just did this because the long string literals were annoying
me. And these can make rustfmt give up on formatting.

I also re-flowed some long comment lines while I was here.
2025-07-29 14:49:20 -04:00
Andrew Gallant
5e6ad849ff [ty] Unexport helper function
I'm not sure if this used to be used elsewhere, but it no longer is.
And it looks like an internal-only helper function, so just un-export
it.

And note that `ModuleNameIngredient` is also un-exported, so this
function isn't really usable outside of its defining module anyway.
2025-07-29 14:49:20 -04:00
Andrew Gallant
865a9b3424 [ty] Remove offset from CompletionTargetTokens::Unknown
At some point, the surrounding code was refactored so that the
cursor offset was always passed around, so storing it here is
no longer necessary.
2025-07-29 14:49:20 -04:00
Dan Parizher
d449c541cb [pyupgrade] Fix UP030 to avoid modifying double curly braces in format strings (#19378)
## Summary

Fixes #19348
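
For context, a sketch of the behavior this guards, based on my understanding of `UP030` rather than the issue itself: escaped braces are literal text and must not be rewritten.

```py
a, b = "x", "y"

"{0} and {1}".format(a, b)  # UP030: rewrite to "{} and {}".format(a, b)
"{{0}} {0}".format(a)       # the escaped "{{0}}" is literal text and must stay untouched
```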

---------

Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-07-29 18:35:54 +00:00
हिमांशु
f7c6a6b2d0 [ty] fix a typo (#19621)
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2025-07-29 17:53:44 +00:00
justin
656273bf3d [ty] synthesize __replace__ for dataclasses (>=3.13) (#19545)
## Summary
https://github.com/astral-sh/ty/issues/111

Adds support for the new `copy.replace` and `__replace__` protocol
[added in 3.13](https://docs.python.org/3/whatsnew/3.13.html#copy)

- docs: https://docs.python.org/3/library/copy.html#object.__replace__
- some discussion on pyright/mypy implementations:
https://discuss.python.org/t/dataclass-transform-and-replace/69067
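
A small sketch of the protocol being synthesized (my own example; `copy.replace` is new in Python 3.13):

```py
import copy
from dataclasses import dataclass


@dataclass
class Point:
    x: int
    y: int


p = Point(1, 2)
q = copy.replace(p, y=3)  # calls the synthesized `Point.__replace__`
print(q)                  # Point(x=1, y=3)
```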



### Burndown
- [x] add tests
- [x] implement `__replace__`
- [ ]
[collections.namedtuple()](https://docs.python.org/3/library/collections.html#collections.namedtuple)
- [x]
[dataclasses.dataclass](https://docs.python.org/3/library/dataclasses.html#dataclasses.dataclass)

## Test Plan
new mdtests

---------

Co-authored-by: David Peter <mail@david-peter.de>
2025-07-29 17:32:01 +02:00
449 changed files with 18893 additions and 4601 deletions

6
.github/CODEOWNERS vendored
View File

@@ -19,6 +19,10 @@
# ty
/crates/ty* @carljm @MichaReiser @AlexWaygood @sharkdp @dcreager
/crates/ruff_db/ @carljm @MichaReiser @AlexWaygood @sharkdp @dcreager
/crates/ruff_db/ @carljm @MichaReiser @sharkdp @dcreager
/crates/ty_project/ @carljm @MichaReiser @sharkdp @dcreager
/crates/ty_server/ @carljm @MichaReiser @sharkdp @dcreager
/crates/ty/ @carljm @MichaReiser @sharkdp @dcreager
/crates/ty_wasm/ @carljm @MichaReiser @sharkdp @dcreager
/scripts/ty_benchmark/ @carljm @MichaReiser @AlexWaygood @sharkdp @dcreager
/crates/ty_python_semantic @carljm @AlexWaygood @sharkdp @dcreager

View File

@@ -63,7 +63,7 @@ jobs:
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
uses: docker/metadata-action@c1e51972afc2121e065aed6d45c65596fe445f3f # v5.8.0
with:
images: ${{ env.RUFF_BASE_IMG }}
# Defining this makes sure the org.opencontainers.image.version OCI label becomes the actual release version and not the branch name
@@ -123,7 +123,7 @@ jobs:
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
uses: docker/metadata-action@c1e51972afc2121e065aed6d45c65596fe445f3f # v5.8.0
with:
images: ${{ env.RUFF_BASE_IMG }}
# Order is on purpose such that the label org.opencontainers.image.version has the first pattern with the full version
@@ -219,7 +219,7 @@ jobs:
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
uses: docker/metadata-action@c1e51972afc2121e065aed6d45c65596fe445f3f # v5.8.0
# ghcr.io prefers index level annotations
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: index
@@ -266,7 +266,7 @@ jobs:
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
uses: docker/metadata-action@c1e51972afc2121e065aed6d45c65596fe445f3f # v5.8.0
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: index
with:

View File

@@ -240,11 +240,11 @@ jobs:
- name: "Install mold"
uses: rui314/setup-mold@702b1908b5edf30d71a8d1666b724e0f0c6fa035 # v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@c99cc51b309eee71a866715cfa08c922f11cf898 # v2.56.19
uses: taiki-e/install-action@6064345e6658255e90e9500fdf9a06ab77e6909c # v2.57.6
with:
tool: cargo-nextest
- name: "Install cargo insta"
uses: taiki-e/install-action@c99cc51b309eee71a866715cfa08c922f11cf898 # v2.56.19
uses: taiki-e/install-action@6064345e6658255e90e9500fdf9a06ab77e6909c # v2.57.6
with:
tool: cargo-insta
- name: ty mdtests (GitHub annotations)
@@ -298,11 +298,11 @@ jobs:
- name: "Install mold"
uses: rui314/setup-mold@702b1908b5edf30d71a8d1666b724e0f0c6fa035 # v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@c99cc51b309eee71a866715cfa08c922f11cf898 # v2.56.19
uses: taiki-e/install-action@6064345e6658255e90e9500fdf9a06ab77e6909c # v2.57.6
with:
tool: cargo-nextest
- name: "Install cargo insta"
uses: taiki-e/install-action@c99cc51b309eee71a866715cfa08c922f11cf898 # v2.56.19
uses: taiki-e/install-action@6064345e6658255e90e9500fdf9a06ab77e6909c # v2.57.6
with:
tool: cargo-insta
- name: "Run tests"
@@ -325,7 +325,7 @@ jobs:
- name: "Install Rust toolchain"
run: rustup show
- name: "Install cargo nextest"
uses: taiki-e/install-action@c99cc51b309eee71a866715cfa08c922f11cf898 # v2.56.19
uses: taiki-e/install-action@6064345e6658255e90e9500fdf9a06ab77e6909c # v2.57.6
with:
tool: cargo-nextest
- name: "Run tests"
@@ -429,7 +429,7 @@ jobs:
- name: "Install Rust toolchain"
run: rustup show
- name: "Install cargo-binstall"
uses: cargo-bins/cargo-binstall@808dcb1b503398677d089d3216c51ac7cc11e7ab # v1.14.2
uses: cargo-bins/cargo-binstall@dd6a0ac24caa1243d18df0f770b941e990e8facc # v1.14.3
with:
tool: cargo-fuzz@0.11.2
- name: "Install cargo-fuzz"
@@ -682,7 +682,7 @@ jobs:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- uses: cargo-bins/cargo-binstall@808dcb1b503398677d089d3216c51ac7cc11e7ab # v1.14.2
- uses: cargo-bins/cargo-binstall@dd6a0ac24caa1243d18df0f770b941e990e8facc # v1.14.3
- run: cargo binstall --no-confirm cargo-shear
- run: cargo shear
@@ -903,7 +903,7 @@ jobs:
run: rustup show
- name: "Install codspeed"
uses: taiki-e/install-action@c99cc51b309eee71a866715cfa08c922f11cf898 # v2.56.19
uses: taiki-e/install-action@6064345e6658255e90e9500fdf9a06ab77e6909c # v2.57.6
with:
tool: cargo-codspeed
@@ -936,7 +936,7 @@ jobs:
run: rustup show
- name: "Install codspeed"
uses: taiki-e/install-action@c99cc51b309eee71a866715cfa08c922f11cf898 # v2.56.19
uses: taiki-e/install-action@6064345e6658255e90e9500fdf9a06ab77e6909c # v2.57.6
with:
tool: cargo-codspeed

View File

@@ -83,7 +83,7 @@ jobs:
- name: Install the latest version of uv
uses: astral-sh/setup-uv@e92bafb6253dcd438e0484186d7669ea7a8ca1cc # v6.4.3
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
with:
workspaces: "ruff"

View File

@@ -24,6 +24,7 @@ env:
CARGO_TERM_COLOR: always
RUSTUP_MAX_RETRIES: 10
RUST_BACKTRACE: 1
CONFORMANCE_SUITE_COMMIT: d4f39b27a4a47aac8b6d4019e1b0b5b3156fabdc
jobs:
typing_conformance:
@@ -40,13 +41,10 @@ jobs:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
repository: python/typing
ref: d4f39b27a4a47aac8b6d4019e1b0b5b3156fabdc
ref: ${{ env.CONFORMANCE_SUITE_COMMIT }}
path: typing
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@e92bafb6253dcd438e0484186d7669ea7a8ca1cc # v6.4.3
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
with:
workspaces: "ruff"
@@ -64,14 +62,13 @@ jobs:
cd ruff
echo "new commit"
git checkout -b new_commit "${{ github.event.pull_request.head.sha }}"
git rev-list --format=%s --max-count=1 new_commit
git rev-list --format=%s --max-count=1 "$GITHUB_SHA"
cargo build --release --bin ty
mv target/release/ty ty-new
echo "old commit (merge base)"
MERGE_BASE="$(git merge-base "$GITHUB_SHA" "origin/$GITHUB_BASE_REF")"
git checkout -b old_commit "$MERGE_BASE"
echo "old commit (merge base)"
git rev-list --format=%s --max-count=1 old_commit
cargo build --release --bin ty
mv target/release/ty ty-old
@@ -95,6 +92,7 @@ jobs:
fi
echo ${{ github.event.number }} > pr-number
echo "${CONFORMANCE_SUITE_COMMIT}" > conformance-suite-commit
- name: Upload diff
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
@@ -107,3 +105,9 @@ jobs:
with:
name: pr-number
path: pr-number
- name: Upload conformance suite commit
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: conformance-suite-commit
path: conformance-suite-commit

View File

@@ -32,6 +32,14 @@ jobs:
echo "pr-number=$(<pr-number)" >> "$GITHUB_OUTPUT"
fi
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
name: Download typing conformance suite commit
with:
name: conformance-suite-commit
run_id: ${{ github.event.workflow_run.id || github.event.inputs.workflow_run_id }}
if_no_artifact_found: ignore
allow_forks: true
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
name: "Download typing_conformance results"
id: download-typing_conformance_diff
@@ -61,7 +69,14 @@ jobs:
# subsequent runs
echo '<!-- generated-comment typing_conformance_diagnostics_diff -->' >> comment.txt
echo '## Diagnostic diff on typing conformance tests' >> comment.txt
if [[ -f conformance-suite-commit ]]
then
echo "## Diagnostic diff on [typing conformance tests](https://github.com/python/typing/tree/$(<conformance-suite-commit)/conformance)" >> comment.txt
else
echo "conformance-suite-commit file not found"
echo "## Diagnostic diff on typing conformance tests" >> comment.txt
fi
if [ -s "pr/typing_conformance_diagnostics_diff/typing_conformance_diagnostics.diff" ]; then
echo '<details>' >> comment.txt
echo '<summary>Changes were detected when running ty on typing conformance tests</summary>' >> comment.txt

View File

@@ -81,7 +81,7 @@ repos:
pass_filenames: false # This makes it a lot faster
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.12.5
rev: v0.12.7
hooks:
- id: ruff-format
- id: ruff-check

View File

@@ -1,5 +1,69 @@
# Changelog
## 0.12.8
### Preview features
- \[`flake8-use-pathlib`\] Expand `PTH201` to check all `PurePath` subclasses ([#19440](https://github.com/astral-sh/ruff/pull/19440))
### Bug fixes
- \[`flake8-blind-except`\] Change `BLE001` to correctly parse exception tuples ([#19747](https://github.com/astral-sh/ruff/pull/19747))
- \[`flake8-errmsg`\] Exclude `typing.cast` from `EM101` ([#19656](https://github.com/astral-sh/ruff/pull/19656))
- \[`flake8-simplify`\] Fix raw string handling in `SIM905` for embedded quotes ([#19591](https://github.com/astral-sh/ruff/pull/19591))
- \[`flake8-import-conventions`\] Avoid false positives for NFKC-normalized `__debug__` import aliases in `ICN001` ([#19411](https://github.com/astral-sh/ruff/pull/19411))
- \[`isort`\] Fix syntax error after docstring ending with backslash (`I002`) ([#19505](https://github.com/astral-sh/ruff/pull/19505))
- \[`pylint`\] Mark `PLC0207` fixes as unsafe when `*args` unpacking is present ([#19679](https://github.com/astral-sh/ruff/pull/19679))
- \[`pyupgrade`\] Prevent infinite loop with `I002` (`UP010`, `UP035`) ([#19413](https://github.com/astral-sh/ruff/pull/19413))
- \[`ruff`\] Parenthesize generator expressions in f-strings (`RUF010`) ([#19434](https://github.com/astral-sh/ruff/pull/19434))
### Rule changes
- \[`eradicate`\] Don't flag `pyrefly` pragmas as unused code (`ERA001`) ([#19731](https://github.com/astral-sh/ruff/pull/19731))
### Documentation
- Replace "associative" with "commutative" in docs for `RUF036` ([#19706](https://github.com/astral-sh/ruff/pull/19706))
- Fix copy and line separator colors in dark mode ([#19630](https://github.com/astral-sh/ruff/pull/19630))
- Fix link to `typing` documentation ([#19648](https://github.com/astral-sh/ruff/pull/19648))
- \[`refurb`\] Make more examples error out-of-the-box ([#19695](https://github.com/astral-sh/ruff/pull/19695),[#19673](https://github.com/astral-sh/ruff/pull/19673),[#19672](https://github.com/astral-sh/ruff/pull/19672))
### Other changes
- Include column numbers in GitLab output format ([#19708](https://github.com/astral-sh/ruff/pull/19708))
- Always expand tabs to four spaces in diagnostics ([#19618](https://github.com/astral-sh/ruff/pull/19618))
- Update pre-commit's `ruff` id ([#19654](https://github.com/astral-sh/ruff/pull/19654))
## 0.12.7
This is a follow-up release to 0.12.6. Because of an issue in the package metadata, 0.12.6 failed to publish fully to PyPI and has been yanked. Similarly, there is no GitHub release or Git tag for 0.12.6. The contents of the 0.12.7 release are identical to 0.12.6, except for the updated metadata.
## 0.12.6
### Preview features
- \[`flake8-commas`\] Add support for trailing comma checks in type parameter lists (`COM812`, `COM819`) ([#19390](https://github.com/astral-sh/ruff/pull/19390))
- \[`pylint`\] Implement auto-fix for `missing-maxsplit-arg` (`PLC0207`) ([#19387](https://github.com/astral-sh/ruff/pull/19387))
- \[`ruff`\] Offer fixes for `RUF039` in more cases ([#19065](https://github.com/astral-sh/ruff/pull/19065))
### Bug fixes
- Support `.pyi` files in ruff analyze graph ([#19611](https://github.com/astral-sh/ruff/pull/19611))
- \[`flake8-pyi`\] Preserve inline comment in ellipsis removal (`PYI013`) ([#19399](https://github.com/astral-sh/ruff/pull/19399))
- \[`perflint`\] Ignore rule if target is `global` or `nonlocal` (`PERF401`) ([#19539](https://github.com/astral-sh/ruff/pull/19539))
- \[`pyupgrade`\] Fix `UP030` to avoid modifying double curly braces in format strings ([#19378](https://github.com/astral-sh/ruff/pull/19378))
- \[`refurb`\] Ignore decorated functions for `FURB118` ([#19339](https://github.com/astral-sh/ruff/pull/19339))
- \[`refurb`\] Mark `int` and `bool` cases for `Decimal.from_float` as safe fixes (`FURB164`) ([#19468](https://github.com/astral-sh/ruff/pull/19468))
- \[`ruff`\] Fix `RUF033` for named default expressions ([#19115](https://github.com/astral-sh/ruff/pull/19115))
### Rule changes
- \[`flake8-blind-except`\] Change `BLE001` to permit `logging.critical(..., exc_info=True)` ([#19520](https://github.com/astral-sh/ruff/pull/19520))
### Performance
- Add support for specifying minimum dots in detected string imports ([#19538](https://github.com/astral-sh/ruff/pull/19538))
## 0.12.5
### Preview features

405
Cargo.lock generated

File diff suppressed because it is too large.

View File

@@ -23,6 +23,7 @@ ruff_graph = { path = "crates/ruff_graph" }
ruff_index = { path = "crates/ruff_index" }
ruff_linter = { path = "crates/ruff_linter" }
ruff_macros = { path = "crates/ruff_macros" }
ruff_memory_usage = { path = "crates/ruff_memory_usage" }
ruff_notebook = { path = "crates/ruff_notebook" }
ruff_options_metadata = { path = "crates/ruff_options_metadata" }
ruff_python_ast = { path = "crates/ruff_python_ast" }
@@ -40,6 +41,7 @@ ruff_text_size = { path = "crates/ruff_text_size" }
ruff_workspace = { path = "crates/ruff_workspace" }
ty = { path = "crates/ty" }
ty_combine = { path = "crates/ty_combine" }
ty_ide = { path = "crates/ty_ide" }
ty_project = { path = "crates/ty_project", default-features = false }
ty_python_semantic = { path = "crates/ty_python_semantic" }
@@ -83,7 +85,7 @@ etcetera = { version = "0.10.0" }
fern = { version = "0.7.0" }
filetime = { version = "0.2.23" }
getrandom = { version = "0.3.1" }
get-size2 = { version = "0.6.0", features = [
get-size2 = { version = "0.6.2", features = [
"derive",
"smallvec",
"hashbrown",
@@ -141,7 +143,12 @@ regex-automata = { version = "0.4.9" }
rustc-hash = { version = "2.0.0" }
rustc-stable-hash = { version = "0.1.2" }
# When updating salsa, make sure to also update the revision in `fuzz/Cargo.toml`
salsa = { git = "https://github.com/salsa-rs/salsa", rev = "dba66f1a37acca014c2402f231ed5b361bd7d8fe" }
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "b121ee46c4483ba74c19e933a3522bd548eb7343", default-features = false, features = [
"compact_str",
"macros",
"salsa_unstable",
"inventory",
] }
schemars = { version = "0.8.16" }
seahash = { version = "4.1.0" }
serde = { version = "1.0.197", features = ["derive"] }

View File

@@ -148,8 +148,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
# For a specific version.
curl -LsSf https://astral.sh/ruff/0.12.5/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.12.5/install.ps1 | iex"
curl -LsSf https://astral.sh/ruff/0.12.8/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.12.8/install.ps1 | iex"
```
You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
@@ -182,7 +182,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.12.5
rev: v0.12.8
hooks:
# Run the linter.
- id: ruff-check

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.12.5"
version = "0.12.8"
publish = true
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -798,7 +798,7 @@ fn stdin_parse_error() {
success: false
exit_code: 1
----- stdout -----
-:1:16: SyntaxError: Expected one or more symbol names after import
-:1:16: invalid-syntax: Expected one or more symbol names after import
|
1 | from foo import
| ^
@@ -818,14 +818,14 @@ fn stdin_multiple_parse_error() {
success: false
exit_code: 1
----- stdout -----
-:1:16: SyntaxError: Expected one or more symbol names after import
-:1:16: invalid-syntax: Expected one or more symbol names after import
|
1 | from foo import
| ^
2 | bar =
|
-:2:6: SyntaxError: Expected an expression
-:2:6: invalid-syntax: Expected an expression
|
1 | from foo import
2 | bar =
@@ -847,7 +847,7 @@ fn parse_error_not_included() {
success: false
exit_code: 1
----- stdout -----
-:1:6: SyntaxError: Expected an expression
-:1:6: invalid-syntax: Expected an expression
|
1 | foo =
| ^

View File

@@ -4996,6 +4996,37 @@ fn flake8_import_convention_invalid_aliases_config_module_name() -> Result<()> {
Ok(())
}
#[test]
fn flake8_import_convention_nfkc_normalization() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(
&ruff_toml,
r#"
[lint.flake8-import-conventions.aliases]
"test.module" = "_𝘥𝘦𝘣𝘶𝘨"
"#,
)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_toml)
, @r"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: Invalid alias for module 'test.module': alias normalizes to '__debug__', which is not allowed.
");});
Ok(())
}
#[test]
fn flake8_import_convention_unused_aliased_import() {
assert_cmd_snapshot!(
@@ -5389,7 +5420,7 @@ fn walrus_before_py38() {
success: false
exit_code: 1
----- stdout -----
test.py:1:2: SyntaxError: Cannot use named assignment expression (`:=`) on Python 3.7 (syntax was added in Python 3.8)
test.py:1:2: invalid-syntax: Cannot use named assignment expression (`:=`) on Python 3.7 (syntax was added in Python 3.8)
Found 1 error.
----- stderr -----
@@ -5435,15 +5466,15 @@ match 2:
print("it's one")
"#
),
@r###"
@r"
success: false
exit_code: 1
----- stdout -----
test.py:2:1: SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
test.py:2:1: invalid-syntax: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
Found 1 error.
----- stderr -----
"###
"
);
// syntax error on 3.9 with preview
@@ -5464,7 +5495,7 @@ match 2:
success: false
exit_code: 1
----- stdout -----
test.py:2:1: SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
test.py:2:1: invalid-syntax: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
Found 1 error.
----- stderr -----
@@ -5492,7 +5523,7 @@ fn cache_syntax_errors() -> Result<()> {
success: false
exit_code: 1
----- stdout -----
main.py:1:1: SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
main.py:1:1: invalid-syntax: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
----- stderr -----
"
@@ -5505,7 +5536,7 @@ fn cache_syntax_errors() -> Result<()> {
success: false
exit_code: 1
----- stdout -----
main.py:1:1: SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
main.py:1:1: invalid-syntax: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
----- stderr -----
"
@@ -5618,7 +5649,7 @@ fn semantic_syntax_errors() -> Result<()> {
success: false
exit_code: 1
----- stdout -----
main.py:1:3: SyntaxError: assignment expression cannot rebind comprehension variable
main.py:1:3: invalid-syntax: assignment expression cannot rebind comprehension variable
main.py:1:20: F821 Undefined name `foo`
----- stderr -----
@@ -5632,7 +5663,7 @@ fn semantic_syntax_errors() -> Result<()> {
success: false
exit_code: 1
----- stdout -----
main.py:1:3: SyntaxError: assignment expression cannot rebind comprehension variable
main.py:1:3: invalid-syntax: assignment expression cannot rebind comprehension variable
main.py:1:20: F821 Undefined name `foo`
----- stderr -----
@@ -5651,7 +5682,7 @@ fn semantic_syntax_errors() -> Result<()> {
success: false
exit_code: 1
----- stdout -----
-:1:3: SyntaxError: assignment expression cannot rebind comprehension variable
-:1:3: invalid-syntax: assignment expression cannot rebind comprehension variable
Found 1 error.
----- stderr -----

View File

@@ -95,6 +95,6 @@ is stricter, which could affect the suggested fix. See [this FAQ section](https:
## References
- [Python documentation: `import`](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement)
- [Python documentation: `importlib.util.find_spec`](https://docs.python.org/3/library/importlib.html#importlib.util.find_spec)
- [Typing documentation: interface conventions](https://typing.python.org/en/latest/source/libraries.html#library-interface-public-and-private-symbols)
- [Typing documentation: interface conventions](https://typing.python.org/en/latest/spec/distributing.html#library-interface-public-and-private-symbols)
----- stderr -----

View File

@@ -18,6 +18,6 @@ exit_code: 1
----- stdout -----
##vso[task.logissue type=error;sourcepath=[TMP]/input.py;linenumber=1;columnnumber=8;code=F401;]`os` imported but unused
##vso[task.logissue type=error;sourcepath=[TMP]/input.py;linenumber=2;columnnumber=5;code=F821;]Undefined name `y`
##vso[task.logissue type=error;sourcepath=[TMP]/input.py;linenumber=3;columnnumber=1;]SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
##vso[task.logissue type=error;sourcepath=[TMP]/input.py;linenumber=3;columnnumber=1;code=invalid-syntax;]Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
----- stderr -----

View File

@@ -18,7 +18,7 @@ exit_code: 1
----- stdout -----
input.py:1:8: F401 [*] `os` imported but unused
input.py:2:5: F821 Undefined name `y`
input.py:3:1: SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
input.py:3:1: invalid-syntax: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
Found 3 errors.
[*] 1 fixable with the `--fix` option.

View File

@@ -34,7 +34,7 @@ input.py:2:5: F821 Undefined name `y`
4 | case _: ...
|
input.py:3:1: SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
input.py:3:1: invalid-syntax: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
|
1 | import os # F401
2 | x = y # F821

View File

@@ -18,6 +18,6 @@ exit_code: 1
----- stdout -----
::error title=Ruff (F401),file=[TMP]/input.py,line=1,col=8,endLine=1,endColumn=10::input.py:1:8: F401 `os` imported but unused
::error title=Ruff (F821),file=[TMP]/input.py,line=2,col=5,endLine=2,endColumn=6::input.py:2:5: F821 Undefined name `y`
::error title=Ruff,file=[TMP]/input.py,line=3,col=1,endLine=3,endColumn=6::input.py:3:1: SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
::error title=Ruff (invalid-syntax),file=[TMP]/input.py,line=3,col=1,endLine=3,endColumn=6::input.py:3:1: invalid-syntax: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
----- stderr -----

View File

@@ -22,11 +22,17 @@ exit_code: 1
"description": "`os` imported but unused",
"fingerprint": "4dbad37161e65c72",
"location": {
"lines": {
"begin": 1,
"end": 1
},
"path": "input.py"
"path": "input.py",
"positions": {
"begin": {
"column": 8,
"line": 1
},
"end": {
"column": 10,
"line": 1
}
}
},
"severity": "major"
},
@@ -35,11 +41,17 @@ exit_code: 1
"description": "Undefined name `y`",
"fingerprint": "7af59862a085230",
"location": {
"lines": {
"begin": 2,
"end": 2
},
"path": "input.py"
"path": "input.py",
"positions": {
"begin": {
"column": 5,
"line": 2
},
"end": {
"column": 6,
"line": 2
}
}
},
"severity": "major"
},
@@ -48,11 +60,17 @@ exit_code: 1
"description": "Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)",
"fingerprint": "e558cec859bb66e8",
"location": {
"lines": {
"begin": 3,
"end": 3
},
"path": "input.py"
"path": "input.py",
"positions": {
"begin": {
"column": 1,
"line": 3
},
"end": {
"column": 6,
"line": 3
}
}
},
"severity": "major"
}

View File

@@ -19,7 +19,7 @@ exit_code: 1
input.py:
1:8 F401 [*] `os` imported but unused
2:5 F821 Undefined name `y`
3:1 SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
3:1 invalid-syntax: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
Found 3 errors.
[*] 1 fixable with the `--fix` option.

View File

@@ -18,6 +18,6 @@ exit_code: 1
----- stdout -----
{"cell":null,"code":"F401","end_location":{"column":10,"row":1},"filename":"[TMP]/input.py","fix":{"applicability":"safe","edits":[{"content":"","end_location":{"column":1,"row":2},"location":{"column":1,"row":1}}],"message":"Remove unused import: `os`"},"location":{"column":8,"row":1},"message":"`os` imported but unused","noqa_row":1,"url":"https://docs.astral.sh/ruff/rules/unused-import"}
{"cell":null,"code":"F821","end_location":{"column":6,"row":2},"filename":"[TMP]/input.py","fix":null,"location":{"column":5,"row":2},"message":"Undefined name `y`","noqa_row":2,"url":"https://docs.astral.sh/ruff/rules/undefined-name"}
{"cell":null,"code":null,"end_location":{"column":6,"row":3},"filename":"[TMP]/input.py","fix":null,"location":{"column":1,"row":3},"message":"SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)","noqa_row":null,"url":null}
{"cell":null,"code":"invalid-syntax","end_location":{"column":6,"row":3},"filename":"[TMP]/input.py","fix":null,"location":{"column":1,"row":3},"message":"Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)","noqa_row":null,"url":null}
----- stderr -----

View File

@@ -69,7 +69,7 @@ exit_code: 1
},
{
"cell": null,
"code": null,
"code": "invalid-syntax",
"end_location": {
"column": 6,
"row": 3
@@ -80,7 +80,7 @@ exit_code: 1
"column": 1,
"row": 3
},
"message": "SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)",
"message": "Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)",
"noqa_row": null,
"url": null
}

View File

@@ -26,7 +26,7 @@ exit_code: 1
<failure message="Undefined name `y`">line 2, col 5, Undefined name `y`</failure>
</testcase>
<testcase name="org.ruff.invalid-syntax" classname="[TMP]/input" line="3" column="1">
<failure message="SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)">line 3, col 1, SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)</failure>
<failure message="Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)">line 3, col 1, Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)</failure>
</testcase>
</testsuite>
</testsuites>

View File

@@ -18,6 +18,6 @@ exit_code: 1
----- stdout -----
input.py:1: [F401] `os` imported but unused
input.py:2: [F821] Undefined name `y`
input.py:3: [invalid-syntax] SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
input.py:3: [invalid-syntax] Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)
----- stderr -----

View File

@@ -90,7 +90,7 @@ exit_code: 1
}
}
},
"message": "SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)"
"message": "Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)"
}
],
"severity": "WARNING",

View File

@@ -83,9 +83,9 @@ exit_code: 1
}
],
"message": {
"text": "SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)"
"text": "Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)"
},
"ruleId": null
"ruleId": "invalid-syntax"
}
],
"tool": {
@@ -95,7 +95,7 @@ exit_code: 1
"rules": [
{
"fullDescription": {
"text": "## What it does\nChecks for unused imports.\n\n## Why is this bad?\nUnused imports add a performance overhead at runtime, and risk creating\nimport cycles. They also increase the cognitive load of reading the code.\n\nIf an import statement is used to check for the availability or existence\nof a module, consider using `importlib.util.find_spec` instead.\n\nIf an import statement is used to re-export a symbol as part of a module's\npublic interface, consider using a \"redundant\" import alias, which\ninstructs Ruff (and other tools) to respect the re-export, and avoid\nmarking it as unused, as in:\n\n```python\nfrom module import member as member\n```\n\nAlternatively, you can use `__all__` to declare a symbol as part of the module's\ninterface, as in:\n\n```python\n# __init__.py\nimport some_module\n\n__all__ = [\"some_module\"]\n```\n\n## Fix safety\n\nFixes to remove unused imports are safe, except in `__init__.py` files.\n\nApplying fixes to `__init__.py` files is currently in preview. The fix offered depends on the\ntype of the unused import. Ruff will suggest a safe fix to export first-party imports with\neither a redundant alias or, if already present in the file, an `__all__` entry. If multiple\n`__all__` declarations are present, Ruff will not offer a fix. Ruff will suggest an unsafe fix\nto remove third-party and standard library imports -- the fix is unsafe because the module's\ninterface changes.\n\n## Example\n\n```python\nimport numpy as np # unused import\n\n\ndef area(radius):\n return 3.14 * radius**2\n```\n\nUse instead:\n\n```python\ndef area(radius):\n return 3.14 * radius**2\n```\n\nTo check the availability of a module, use `importlib.util.find_spec`:\n\n```python\nfrom importlib.util import find_spec\n\nif find_spec(\"numpy\") is not None:\n print(\"numpy is installed\")\nelse:\n print(\"numpy is not installed\")\n```\n\n## Preview\nWhen [preview](https://docs.astral.sh/ruff/preview/) is enabled,\nthe criterion for determining whether an import is first-party\nis stricter, which could affect the suggested fix. See [this FAQ section](https://docs.astral.sh/ruff/faq/#how-does-ruff-determine-which-of-my-imports-are-first-party-third-party-etc) for more details.\n\n## Options\n- `lint.ignore-init-module-imports`\n- `lint.pyflakes.allowed-unused-imports`\n\n## References\n- [Python documentation: `import`](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement)\n- [Python documentation: `importlib.util.find_spec`](https://docs.python.org/3/library/importlib.html#importlib.util.find_spec)\n- [Typing documentation: interface conventions](https://typing.python.org/en/latest/source/libraries.html#library-interface-public-and-private-symbols)\n"
"text": "## What it does\nChecks for unused imports.\n\n## Why is this bad?\nUnused imports add a performance overhead at runtime, and risk creating\nimport cycles. They also increase the cognitive load of reading the code.\n\nIf an import statement is used to check for the availability or existence\nof a module, consider using `importlib.util.find_spec` instead.\n\nIf an import statement is used to re-export a symbol as part of a module's\npublic interface, consider using a \"redundant\" import alias, which\ninstructs Ruff (and other tools) to respect the re-export, and avoid\nmarking it as unused, as in:\n\n```python\nfrom module import member as member\n```\n\nAlternatively, you can use `__all__` to declare a symbol as part of the module's\ninterface, as in:\n\n```python\n# __init__.py\nimport some_module\n\n__all__ = [\"some_module\"]\n```\n\n## Fix safety\n\nFixes to remove unused imports are safe, except in `__init__.py` files.\n\nApplying fixes to `__init__.py` files is currently in preview. The fix offered depends on the\ntype of the unused import. Ruff will suggest a safe fix to export first-party imports with\neither a redundant alias or, if already present in the file, an `__all__` entry. If multiple\n`__all__` declarations are present, Ruff will not offer a fix. Ruff will suggest an unsafe fix\nto remove third-party and standard library imports -- the fix is unsafe because the module's\ninterface changes.\n\n## Example\n\n```python\nimport numpy as np # unused import\n\n\ndef area(radius):\n return 3.14 * radius**2\n```\n\nUse instead:\n\n```python\ndef area(radius):\n return 3.14 * radius**2\n```\n\nTo check the availability of a module, use `importlib.util.find_spec`:\n\n```python\nfrom importlib.util import find_spec\n\nif find_spec(\"numpy\") is not None:\n print(\"numpy is installed\")\nelse:\n print(\"numpy is not installed\")\n```\n\n## Preview\nWhen [preview](https://docs.astral.sh/ruff/preview/) is enabled,\nthe criterion for determining whether an import is first-party\nis stricter, which could affect the suggested fix. See [this FAQ section](https://docs.astral.sh/ruff/faq/#how-does-ruff-determine-which-of-my-imports-are-first-party-third-party-etc) for more details.\n\n## Options\n- `lint.ignore-init-module-imports`\n- `lint.pyflakes.allowed-unused-imports`\n\n## References\n- [Python documentation: `import`](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement)\n- [Python documentation: `importlib.util.find_spec`](https://docs.python.org/3/library/importlib.html#importlib.util.find_spec)\n- [Typing documentation: interface conventions](https://typing.python.org/en/latest/spec/distributing.html#library-interface-public-and-private-symbols)\n"
},
"help": {
"text": "`{name}` imported but unused; consider using `importlib.util.find_spec` to test for availability"

View File

@@ -1,3 +1,5 @@
#![expect(clippy::needless_doctest_main)]
//! A library for formatting of text or programming code snippets.
//!
//! It's primary purpose is to build an ASCII-graphical representation of the snippet

View File

@@ -193,9 +193,14 @@ impl DisplaySet<'_> {
stylesheet: &Stylesheet,
buffer: &mut StyledBuffer,
) -> fmt::Result {
let hide_severity = annotation.annotation_type.is_none();
let color = get_annotation_style(&annotation.annotation_type, stylesheet);
let formatted_len = if let Some(id) = &annotation.id {
2 + id.len() + annotation_type_len(&annotation.annotation_type)
if hide_severity {
id.len()
} else {
2 + id.len() + annotation_type_len(&annotation.annotation_type)
}
} else {
annotation_type_len(&annotation.annotation_type)
};
@@ -209,18 +214,66 @@ impl DisplaySet<'_> {
if formatted_len == 0 {
self.format_label(line_offset, &annotation.label, stylesheet, buffer)
} else {
let id = match &annotation.id {
Some(id) => format!("[{id}]"),
None => String::new(),
};
buffer.append(
line_offset,
&format!("{}{}", annotation_type_str(&annotation.annotation_type), id),
*color,
);
// TODO(brent) All of this complicated checking of `hide_severity` should be reverted
// once we have real severities in Ruff. This code is trying to account for two
// different cases:
//
// - main diagnostic message
// - subdiagnostic message
//
// In the first case, signaled by `hide_severity = true`, we want to print the ID (the
// noqa code for a ruff lint diagnostic, e.g. `F401`, or `invalid-syntax` for a syntax
// error) without brackets. Instead, for subdiagnostics, we actually want to print the
// severity (usually `help`) regardless of the `hide_severity` setting. This is signaled
// by an ID of `None`.
//
// With real severities these should be reported more like in ty:
//
// ```
// error[F401]: `math` imported but unused
// error[invalid-syntax]: Cannot use `match` statement on Python 3.9...
// ```
//
// instead of the current versions intended to mimic the old Ruff output format:
//
// ```
// F401 `math` imported but unused
// invalid-syntax: Cannot use `match` statement on Python 3.9...
// ```
//
// Note that the `invalid-syntax` colon is added manually in `ruff_db`, not here. We
// could eventually add a colon to Ruff lint diagnostics (`F401:`) and then make the
// colon below unconditional again.
//
// This also applies to the hard-coded `stylesheet.error()` styling of the
// hidden-severity `id`. This should just be `*color` again later, but for now we don't
// want an unformatted `id`, which is what `get_annotation_style` returns for
// `DisplayAnnotationType::None`.
let annotation_type = annotation_type_str(&annotation.annotation_type);
if let Some(id) = annotation.id {
if hide_severity {
buffer.append(line_offset, &format!("{id} "), *stylesheet.error());
} else {
buffer.append(line_offset, &format!("{annotation_type}[{id}]"), *color);
}
} else {
buffer.append(line_offset, annotation_type, *color);
}
if annotation.is_fixable {
buffer.append(line_offset, "[", stylesheet.none);
buffer.append(line_offset, "*", stylesheet.help);
buffer.append(line_offset, "]", stylesheet.none);
// In the hide-severity case, we need a space instead of the colon and space below.
if hide_severity {
buffer.append(line_offset, " ", stylesheet.none);
}
}
if !is_annotation_empty(annotation) {
buffer.append(line_offset, ": ", stylesheet.none);
if annotation.id.is_none() || !hide_severity {
buffer.append(line_offset, ": ", stylesheet.none);
}
self.format_label(line_offset, &annotation.label, stylesheet, buffer)?;
}
Ok(())
@@ -249,11 +302,15 @@ impl DisplaySet<'_> {
let lineno_color = stylesheet.line_no();
buffer.puts(line_offset, lineno_width, header_sigil, *lineno_color);
buffer.puts(line_offset, lineno_width + 4, path, stylesheet.none);
if let Some((col, row)) = pos {
buffer.append(line_offset, ":", stylesheet.none);
buffer.append(line_offset, col.to_string().as_str(), stylesheet.none);
if let Some(Position { row, col, cell }) = pos {
if let Some(cell) = cell {
buffer.append(line_offset, ":", stylesheet.none);
buffer.append(line_offset, &format!("cell {cell}"), stylesheet.none);
}
buffer.append(line_offset, ":", stylesheet.none);
buffer.append(line_offset, row.to_string().as_str(), stylesheet.none);
buffer.append(line_offset, ":", stylesheet.none);
buffer.append(line_offset, col.to_string().as_str(), stylesheet.none);
}
Ok(())
}
@@ -768,6 +825,7 @@ pub(crate) struct Annotation<'a> {
pub(crate) annotation_type: DisplayAnnotationType,
pub(crate) id: Option<&'a str>,
pub(crate) label: Vec<DisplayTextFragment<'a>>,
pub(crate) is_fixable: bool,
}
/// A single line used in `DisplayList`.
@@ -833,6 +891,13 @@ impl DisplaySourceAnnotation<'_> {
}
}
#[derive(Debug, PartialEq)]
pub(crate) struct Position {
row: usize,
col: usize,
cell: Option<usize>,
}
/// Raw line - a line which does not have the `lineno` part and is not considered
/// a part of the snippet.
#[derive(Debug, PartialEq)]
@@ -841,7 +906,7 @@ pub(crate) enum DisplayRawLine<'a> {
/// slice in the project structure.
Origin {
path: &'a str,
pos: Option<(usize, usize)>,
pos: Option<Position>,
header_type: DisplayHeaderType,
},
@@ -920,6 +985,13 @@ pub(crate) enum DisplayAnnotationType {
Help,
}
impl DisplayAnnotationType {
#[inline]
const fn is_none(&self) -> bool {
matches!(self, Self::None)
}
}
impl From<snippet::Level> for DisplayAnnotationType {
fn from(at: snippet::Level) -> Self {
match at {
@@ -1015,11 +1087,12 @@ fn format_message<'m>(
title,
footer,
snippets,
is_fixable,
} = message;
let mut sets = vec![];
let body = if !snippets.is_empty() || primary {
vec![format_title(level, id, title)]
vec![format_title(level, id, title, is_fixable)]
} else {
format_footer(level, id, title)
};
@@ -1060,12 +1133,18 @@ fn format_message<'m>(
sets
}
fn format_title<'a>(level: crate::Level, id: Option<&'a str>, label: &'a str) -> DisplayLine<'a> {
fn format_title<'a>(
level: crate::Level,
id: Option<&'a str>,
label: &'a str,
is_fixable: bool,
) -> DisplayLine<'a> {
DisplayLine::Raw(DisplayRawLine::Annotation {
annotation: Annotation {
annotation_type: DisplayAnnotationType::from(level),
id,
label: format_label(Some(label), Some(DisplayTextStyle::Emphasis)),
is_fixable,
},
source_aligned: false,
continuation: false,
@@ -1084,6 +1163,7 @@ fn format_footer<'a>(
annotation_type: DisplayAnnotationType::from(level),
id,
label: format_label(Some(line), None),
is_fixable: false,
},
source_aligned: true,
continuation: i != 0,
@@ -1118,6 +1198,23 @@ fn format_snippet<'m>(
let main_range = snippet.annotations.first().map(|x| x.range.start);
let origin = snippet.origin;
let need_empty_header = origin.is_some() || is_first;
let is_file_level = snippet.annotations.iter().any(|ann| ann.is_file_level);
if is_file_level {
assert!(
snippet.source.is_empty(),
"Non-empty file-level snippet that won't be rendered: {:?}",
snippet.source
);
let header = format_header(origin, main_range, &[], is_first, snippet.cell_index);
return DisplaySet {
display_lines: header.map_or_else(Vec::new, |header| vec![header]),
margin: Margin::new(0, 0, 0, 0, term_width, 0),
};
}
let cell_index = snippet.cell_index;
let mut body = format_body(
snippet,
need_empty_header,
@@ -1126,7 +1223,13 @@ fn format_snippet<'m>(
anonymized_line_numbers,
cut_indicator,
);
let header = format_header(origin, main_range, &body.display_lines, is_first);
let header = format_header(
origin,
main_range,
&body.display_lines,
is_first,
cell_index,
);
if let Some(header) = header {
body.display_lines.insert(0, header);
@@ -1146,6 +1249,7 @@ fn format_header<'a>(
main_range: Option<usize>,
body: &[DisplayLine<'_>],
is_first: bool,
cell_index: Option<usize>,
) -> Option<DisplayLine<'a>> {
let display_header = if is_first {
DisplayHeaderType::Initial
@@ -1182,7 +1286,11 @@ fn format_header<'a>(
return Some(DisplayLine::Raw(DisplayRawLine::Origin {
path,
pos: Some((line_offset, col)),
pos: Some(Position {
row: line_offset,
col,
cell: cell_index,
}),
header_type: display_header,
}));
}
@@ -1472,6 +1580,7 @@ fn format_body<'m>(
annotation_type,
id: None,
label: format_label(annotation.label, None),
is_fixable: false,
},
range,
annotation_type: DisplayAnnotationType::from(annotation.level),
@@ -1511,6 +1620,7 @@ fn format_body<'m>(
annotation_type,
id: None,
label: vec![],
is_fixable: false,
},
range,
annotation_type: DisplayAnnotationType::from(annotation.level),
@@ -1580,6 +1690,7 @@ fn format_body<'m>(
annotation_type,
id: None,
label: format_label(annotation.label, None),
is_fixable: false,
},
range,
annotation_type: DisplayAnnotationType::from(annotation.level),

View File

@@ -22,6 +22,7 @@ pub struct Message<'a> {
pub(crate) title: &'a str,
pub(crate) snippets: Vec<Snippet<'a>>,
pub(crate) footer: Vec<Message<'a>>,
pub(crate) is_fixable: bool,
}
impl<'a> Message<'a> {
@@ -49,6 +50,15 @@ impl<'a> Message<'a> {
self.footer.extend(footer);
self
}
/// Whether or not the diagnostic for this message is fixable.
///
/// This is rendered as a `[*]` indicator after the `id` in an annotation header, if the
/// annotation also has `Level::None`.
pub fn is_fixable(mut self, yes: bool) -> Self {
self.is_fixable = yes;
self
}
}
/// Structure containing the slice of text to be annotated and
@@ -65,6 +75,10 @@ pub struct Snippet<'a> {
pub(crate) annotations: Vec<Annotation<'a>>,
pub(crate) fold: bool,
/// The optional cell index in a Jupyter notebook, used for reporting source locations along
/// with the ranges on `annotations`.
pub(crate) cell_index: Option<usize>,
}
impl<'a> Snippet<'a> {
@@ -75,6 +89,7 @@ impl<'a> Snippet<'a> {
source,
annotations: vec![],
fold: false,
cell_index: None,
}
}
@@ -103,6 +118,12 @@ impl<'a> Snippet<'a> {
self.fold = fold;
self
}
/// Attach a Jupyter notebook cell index.
pub fn cell_index(mut self, index: Option<usize>) -> Self {
self.cell_index = index;
self
}
}
/// An annotation for a [`Snippet`].
@@ -114,6 +135,7 @@ pub struct Annotation<'a> {
pub(crate) range: Range<usize>,
pub(crate) label: Option<&'a str>,
pub(crate) level: Level,
pub(crate) is_file_level: bool,
}
impl<'a> Annotation<'a> {
@@ -121,6 +143,11 @@ impl<'a> Annotation<'a> {
self.label = Some(label);
self
}
pub fn is_file_level(mut self, yes: bool) -> Self {
self.is_file_level = yes;
self
}
}
/// Types of annotations.
@@ -145,6 +172,7 @@ impl Level {
title,
snippets: vec![],
footer: vec![],
is_fixable: false,
}
}
@@ -154,6 +182,7 @@ impl Level {
range: span,
label: None,
level: self,
is_file_level: false,
}
}
}
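As a minimal usage sketch (made-up values, assuming the crate's existing `title`, `id`, `span`, `source`, and `annotation` builders), the new hooks compose like this:
let message = Level::None
    .title("`os` imported but unused")
    .id("F401")
    .is_fixable(true) // rendered as `[*]` after the id when the level is `None`
    .snippet(
        Snippet::source("# cell 1\nimport os\n")
            .line_start(1)
            .cell_index(Some(1)) // origin becomes `notebook.ipynb:cell 1:2:8`
            .annotation(Level::Error.span(16..18).label("unused import")),
    );
`Annotation::is_file_level` hangs off the value returned by `Level::span` in the same builder style.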

View File

@@ -351,6 +351,41 @@ fn benchmark_many_tuple_assignments(criterion: &mut Criterion) {
});
}
fn benchmark_tuple_implicit_instance_attributes(criterion: &mut Criterion) {
setup_rayon();
criterion.bench_function("ty_micro[many_tuple_assignments]", |b| {
b.iter_batched_ref(
|| {
// This is a regression benchmark for a case that used to hang:
// https://github.com/astral-sh/ty/issues/765
setup_micro_case(
r#"
from typing import Any
class A:
    foo: tuple[Any, ...]
class B(A):
    def __init__(self, parent: "C", x: tuple[Any]):
        self.foo = parent.foo + x
class C(A):
    def __init__(self, parent: B, x: tuple[Any]):
        self.foo = parent.foo + x
"#,
)
},
|case| {
let Case { db, .. } = case;
let result = db.check();
assert_eq!(result.len(), 0);
},
BatchSize::SmallInput,
);
});
}
fn benchmark_complex_constrained_attributes_1(criterion: &mut Criterion) {
setup_rayon();
@@ -630,6 +665,7 @@ criterion_group!(
micro,
benchmark_many_string_assignments,
benchmark_many_tuple_assignments,
benchmark_tuple_implicit_instance_attributes,
benchmark_complex_constrained_attributes_1,
benchmark_complex_constrained_attributes_2,
benchmark_many_enum_members,

View File

@@ -14,6 +14,7 @@ license = { workspace = true }
ruff_annotate_snippets = { workspace = true }
ruff_cache = { workspace = true, optional = true }
ruff_diagnostics = { workspace = true }
ruff_memory_usage = { workspace = true }
ruff_notebook = { workspace = true }
ruff_python_ast = { workspace = true, features = ["get-size"] }
ruff_python_parser = { workspace = true }
@@ -42,7 +43,6 @@ serde_json = { workspace = true, optional = true }
thiserror = { workspace = true }
tracing = { workspace = true }
tracing-subscriber = { workspace = true, optional = true }
unicode-width = { workspace = true }
zip = { workspace = true }
[target.'cfg(target_arch="wasm32")'.dependencies]

View File

@@ -21,7 +21,7 @@ mod stylesheet;
/// characteristics in the inputs given to the tool. Typically, but not always,
/// a characteristic is a deficiency. An example of a characteristic that is
/// _not_ a deficiency is the `reveal_type` diagnostic for our type checker.
#[derive(Debug, Clone, Eq, PartialEq, get_size2::GetSize)]
#[derive(Debug, Clone, Eq, PartialEq, Hash, get_size2::GetSize)]
pub struct Diagnostic {
/// The actual diagnostic.
///
@@ -212,7 +212,7 @@ impl Diagnostic {
/// The type returned implements the `std::fmt::Display` trait. In most
/// cases, just converting it to a string (or printing it) will do what
/// you want.
pub fn concise_message(&self) -> ConciseMessage {
pub fn concise_message(&self) -> ConciseMessage<'_> {
let main = self.inner.message.as_str();
let annotation = self
.primary_annotation()
@@ -366,6 +366,16 @@ impl Diagnostic {
self.inner.secondary_code.as_ref()
}
/// Returns the secondary code for the diagnostic if it exists, or the lint name otherwise.
///
/// This is a common pattern for Ruff diagnostics, which want to use the noqa code in general,
/// but fall back on the `invalid-syntax` identifier for syntax errors, which don't have
/// secondary codes.
pub fn secondary_code_or_id(&self) -> &str {
self.secondary_code()
.map_or_else(|| self.inner.id.as_str(), SecondaryCode::as_str)
}
/// Set the secondary code for this diagnostic.
pub fn set_secondary_code(&mut self, code: SecondaryCode) {
Arc::make_mut(&mut self.inner).secondary_code = Some(code);
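A small sketch of the fallback just described; `unused_import` and `syntax_error` are hypothetical `Diagnostic` values, not ones constructed in this diff:
// A lint diagnostic carries a secondary (noqa) code, so that code wins.
assert_eq!(unused_import.secondary_code_or_id(), "F401");
// A syntax error has no secondary code and falls back to its lint id.
assert_eq!(syntax_error.secondary_code_or_id(), "invalid-syntax");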
@@ -479,7 +489,7 @@ impl Diagnostic {
}
}
#[derive(Debug, Clone, Eq, PartialEq, get_size2::GetSize)]
#[derive(Debug, Clone, Eq, PartialEq, Hash, get_size2::GetSize)]
struct DiagnosticInner {
id: DiagnosticId,
severity: Severity,
@@ -555,7 +565,7 @@ impl Eq for RenderingSortKey<'_> {}
/// Currently, the order in which sub-diagnostics are rendered relative to one
/// another (for a single parent diagnostic) is the order in which they were
/// attached to the diagnostic.
#[derive(Debug, Clone, Eq, PartialEq, get_size2::GetSize)]
#[derive(Debug, Clone, Eq, PartialEq, Hash, get_size2::GetSize)]
pub struct SubDiagnostic {
/// Like with `Diagnostic`, we box the `SubDiagnostic` to make it
/// pointer-sized.
@@ -644,7 +654,7 @@ impl SubDiagnostic {
/// The type returned implements the `std::fmt::Display` trait. In most
/// cases, just converting it to a string (or printing it) will do what
/// you want.
pub fn concise_message(&self) -> ConciseMessage {
pub fn concise_message(&self) -> ConciseMessage<'_> {
let main = self.inner.message.as_str();
let annotation = self
.primary_annotation()
@@ -659,7 +669,7 @@ impl SubDiagnostic {
}
}
#[derive(Debug, Clone, Eq, PartialEq, get_size2::GetSize)]
#[derive(Debug, Clone, Eq, PartialEq, Hash, get_size2::GetSize)]
struct SubDiagnosticInner {
severity: SubDiagnosticSeverity,
message: DiagnosticMessage,
@@ -687,7 +697,7 @@ struct SubDiagnosticInner {
///
/// Messages attached to annotations should also be as brief and specific as
/// possible. Long messages could negatively impact the quality of rendering.
#[derive(Debug, Clone, Eq, PartialEq, get_size2::GetSize)]
#[derive(Debug, Clone, Eq, PartialEq, Hash, get_size2::GetSize)]
pub struct Annotation {
/// The span of this annotation, corresponding to some subsequence of the
/// user's input that we want to highlight.
@@ -702,6 +712,11 @@ pub struct Annotation {
is_primary: bool,
/// The diagnostic tags associated with this annotation.
tags: Vec<DiagnosticTag>,
/// Whether this annotation is a file-level or full-file annotation.
///
/// When set, rendering will only include the file's name and (optional) range. Everything else
/// is omitted, including any file snippet or message.
is_file_level: bool,
}
impl Annotation {
@@ -720,6 +735,7 @@ impl Annotation {
message: None,
is_primary: true,
tags: Vec::new(),
is_file_level: false,
}
}
@@ -736,6 +752,7 @@ impl Annotation {
message: None,
is_primary: false,
tags: Vec::new(),
is_file_level: false,
}
}
@@ -801,13 +818,28 @@ impl Annotation {
pub fn push_tag(&mut self, tag: DiagnosticTag) {
self.tags.push(tag);
}
/// Set whether or not this annotation is file-level.
///
/// File-level annotations are only rendered with their file name and range, if available. This
/// is intended for backwards compatibility with Ruff diagnostics, which historically used
/// `TextRange::default` to indicate a file-level diagnostic. In the new diagnostic model, a
/// [`Span`] with a range of `None` should be used instead, as mentioned in the `Span`
/// documentation.
///
/// TODO(brent) update this usage in Ruff and remove `is_file_level` entirely. See
/// <https://github.com/astral-sh/ruff/issues/19688>, especially my first comment, for more
/// details.
pub fn set_file_level(&mut self, yes: bool) {
self.is_file_level = yes;
}
}
/// Tags that can be associated with an annotation.
///
/// These tags are used to provide additional information about the annotation
/// and are passed through to the language server protocol.
#[derive(Debug, Clone, Eq, PartialEq, get_size2::GetSize)]
#[derive(Debug, Clone, Eq, PartialEq, Hash, get_size2::GetSize)]
pub enum DiagnosticTag {
/// Unused or unnecessary code. Used for unused parameters, unreachable code, etc.
Unnecessary,
@@ -1016,7 +1048,7 @@ impl std::fmt::Display for DiagnosticId {
///
/// This enum presents a unified interface to these two types for the sake of creating [`Span`]s and
/// emitting diagnostics from both ty and ruff.
#[derive(Debug, Clone, PartialEq, Eq, get_size2::GetSize)]
#[derive(Debug, Clone, PartialEq, Eq, Hash, get_size2::GetSize)]
pub enum UnifiedFile {
Ty(File),
Ruff(SourceFile),
@@ -1067,7 +1099,7 @@ enum DiagnosticSource {
impl DiagnosticSource {
/// Returns this input as a `SourceCode` for convenient querying.
fn as_source_code(&self) -> SourceCode {
fn as_source_code(&self) -> SourceCode<'_, '_> {
match self {
DiagnosticSource::Ty(input) => SourceCode::new(input.text.as_str(), &input.line_index),
DiagnosticSource::Ruff(source) => SourceCode::new(source.source_text(), source.index()),
@@ -1080,7 +1112,7 @@ impl DiagnosticSource {
/// It consists of a `File` and an optional range into that file. When the
/// range isn't present, it semantically implies that the diagnostic refers to
/// the entire file. For example, when the file should be executable but isn't.
#[derive(Debug, Clone, PartialEq, Eq, get_size2::GetSize)]
#[derive(Debug, Clone, PartialEq, Eq, Hash, get_size2::GetSize)]
pub struct Span {
file: UnifiedFile,
range: Option<TextRange>,
@@ -1158,7 +1190,7 @@ impl From<crate::files::FileRange> for Span {
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Ord, PartialOrd, get_size2::GetSize)]
#[derive(Debug, Clone, Copy, PartialEq, Eq, Ord, PartialOrd, Hash, get_size2::GetSize)]
pub enum Severity {
Info,
Warning,
@@ -1193,7 +1225,7 @@ impl Severity {
/// This type only exists to add an additional `Help` severity that isn't present in `Severity` or
/// used for main diagnostics. If we want to add `Severity::Help` in the future, this type could be
/// deleted and the two combined again.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Ord, PartialOrd, get_size2::GetSize)]
#[derive(Debug, Clone, Copy, PartialEq, Eq, Ord, PartialOrd, Hash, get_size2::GetSize)]
pub enum SubDiagnosticSeverity {
Help,
Info,
@@ -1428,7 +1460,7 @@ impl std::fmt::Display for ConciseMessage<'_> {
/// In most cases, callers shouldn't need to use this. Instead, there is
/// a blanket trait implementation for `IntoDiagnosticMessage` for
/// anything that implements `std::fmt::Display`.
#[derive(Clone, Debug, Eq, PartialEq, get_size2::GetSize)]
#[derive(Clone, Debug, Eq, PartialEq, Hash, get_size2::GetSize)]
pub struct DiagnosticMessage(Box<str>);
impl DiagnosticMessage {

View File

@@ -135,7 +135,7 @@ impl std::fmt::Display for DisplayDiagnostics<'_> {
.none(stylesheet.none);
for diag in self.diagnostics {
let resolved = Resolved::new(self.resolver, diag);
let resolved = Resolved::new(self.resolver, diag, self.config);
let renderable = resolved.to_renderable(self.config.context);
for diag in renderable.diagnostics.iter() {
writeln!(f, "{}", renderer.render(diag.to_annotate()))?;
@@ -191,9 +191,13 @@ struct Resolved<'a> {
impl<'a> Resolved<'a> {
/// Creates a new resolved set of diagnostics.
fn new(resolver: &'a dyn FileResolver, diag: &'a Diagnostic) -> Resolved<'a> {
fn new(
resolver: &'a dyn FileResolver,
diag: &'a Diagnostic,
config: &DisplayDiagnosticConfig,
) -> Resolved<'a> {
let mut diagnostics = vec![];
diagnostics.push(ResolvedDiagnostic::from_diagnostic(resolver, diag));
diagnostics.push(ResolvedDiagnostic::from_diagnostic(resolver, config, diag));
for sub in &diag.inner.subs {
diagnostics.push(ResolvedDiagnostic::from_sub_diagnostic(resolver, sub));
}
@@ -223,12 +227,14 @@ struct ResolvedDiagnostic<'a> {
id: Option<String>,
message: String,
annotations: Vec<ResolvedAnnotation<'a>>,
is_fixable: bool,
}
impl<'a> ResolvedDiagnostic<'a> {
/// Resolve a single diagnostic.
fn from_diagnostic(
resolver: &'a dyn FileResolver,
config: &DisplayDiagnosticConfig,
diag: &'a Diagnostic,
) -> ResolvedDiagnostic<'a> {
let annotations: Vec<_> = diag
@@ -238,16 +244,38 @@ impl<'a> ResolvedDiagnostic<'a> {
.filter_map(|ann| {
let path = ann.span.file.path(resolver);
let diagnostic_source = ann.span.file.diagnostic_source(resolver);
ResolvedAnnotation::new(path, &diagnostic_source, ann)
ResolvedAnnotation::new(path, &diagnostic_source, ann, resolver)
})
.collect();
let id = Some(diag.inner.id.to_string());
let message = diag.inner.message.as_str().to_string();
let id = if config.hide_severity {
// Either the rule code alone (e.g. `F401`), or the lint id with a colon (e.g.
// `invalid-syntax:`). When Ruff gets real severities, we should put the colon back in
// `DisplaySet::format_annotation` for both cases, but this is a small hack to improve
// the formatting of syntax errors for now. This should also be kept consistent with the
// concise formatting.
Some(diag.secondary_code().map_or_else(
|| format!("{id}:", id = diag.inner.id),
|code| code.to_string(),
))
} else {
Some(diag.inner.id.to_string())
};
let level = if config.hide_severity {
AnnotateLevel::None
} else {
diag.inner.severity.to_annotate()
};
ResolvedDiagnostic {
level: diag.inner.severity.to_annotate(),
level,
id,
message,
message: diag.inner.message.as_str().to_string(),
annotations,
is_fixable: diag
.fix()
.is_some_and(|fix| fix.applies(config.fix_applicability)),
}
}
@@ -263,7 +291,7 @@ impl<'a> ResolvedDiagnostic<'a> {
.filter_map(|ann| {
let path = ann.span.file.path(resolver);
let diagnostic_source = ann.span.file.diagnostic_source(resolver);
ResolvedAnnotation::new(path, &diagnostic_source, ann)
ResolvedAnnotation::new(path, &diagnostic_source, ann, resolver)
})
.collect();
ResolvedDiagnostic {
@@ -271,6 +299,7 @@ impl<'a> ResolvedDiagnostic<'a> {
id: None,
message: diag.inner.message.as_str().to_string(),
annotations,
is_fixable: false,
}
}
@@ -301,20 +330,49 @@ impl<'a> ResolvedDiagnostic<'a> {
&prev.diagnostic_source.as_source_code(),
context,
prev.line_end,
prev.notebook_index.as_ref(),
)
.get();
let this_context_begins = context_before(
&ann.diagnostic_source.as_source_code(),
context,
ann.line_start,
ann.notebook_index.as_ref(),
)
.get();
// For notebooks, check whether the end of the
// previous annotation and the start of the current
// annotation are in different cells.
let prev_cell_index = prev.notebook_index.as_ref().map(|notebook_index| {
let prev_end = prev
.diagnostic_source
.as_source_code()
.line_column(prev.range.end());
notebook_index.cell(prev_end.line).unwrap_or_default().get()
});
let this_cell_index = ann.notebook_index.as_ref().map(|notebook_index| {
let this_start = ann
.diagnostic_source
.as_source_code()
.line_column(ann.range.start());
notebook_index
.cell(this_start.line)
.unwrap_or_default()
.get()
});
let in_different_cells = prev_cell_index != this_cell_index;
// The boundary case here is when `prev_context_ends`
// is exactly one less than `this_context_begins`. In
// that case, the context windows are adjacent and we
// should fall through below to add this annotation to
// the existing snippet.
if this_context_begins.saturating_sub(prev_context_ends) > 1 {
//
// For notebooks, also check that the context windows
// are in the same cell. Windows from different cells
// should never be considered adjacent.
if in_different_cells || this_context_begins.saturating_sub(prev_context_ends) > 1 {
snippet_by_path
.entry(path)
.or_default()
@@ -338,6 +396,7 @@ impl<'a> ResolvedDiagnostic<'a> {
id: self.id.as_deref(),
message: &self.message,
snippets_by_input,
is_fixable: self.is_fixable,
}
}
}
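To make the adjacency rule above concrete, a worked example with hypothetical line numbers and a two-line context window:
// The previous annotation ends on line 8, so its context window ends on line 10;
// the next annotation starts on line 13, so its window begins on line 11. The
// windows touch, and both annotations stay in one snippet.
assert!(!(11_usize.saturating_sub(10) > 1));
// Starting one line later (window begins on line 12) leaves a one-line gap and
// opens a new snippet. For notebooks, `in_different_cells` forces the split even
// when the windows would otherwise be adjacent.
assert!(12_usize.saturating_sub(10) > 1);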
@@ -357,6 +416,8 @@ struct ResolvedAnnotation<'a> {
line_end: OneIndexed,
message: Option<&'a str>,
is_primary: bool,
is_file_level: bool,
notebook_index: Option<NotebookIndex>,
}
impl<'a> ResolvedAnnotation<'a> {
@@ -369,6 +430,7 @@ impl<'a> ResolvedAnnotation<'a> {
path: &'a str,
diagnostic_source: &DiagnosticSource,
ann: &'a Annotation,
resolver: &'a dyn FileResolver,
) -> Option<ResolvedAnnotation<'a>> {
let source = diagnostic_source.as_source_code();
let (range, line_start, line_end) = match (ann.span.range(), ann.message.is_some()) {
@@ -402,6 +464,8 @@ impl<'a> ResolvedAnnotation<'a> {
line_end,
message: ann.get_message(),
is_primary: ann.is_primary,
is_file_level: ann.is_file_level,
notebook_index: resolver.notebook_index(&ann.span.file),
})
}
}
@@ -436,6 +500,10 @@ struct RenderableDiagnostic<'r> {
/// should be from the same file, and none of the snippets inside of a
/// collection should overlap with one another or be directly adjacent.
snippets_by_input: Vec<RenderableSnippets<'r>>,
/// Whether or not the diagnostic is fixable.
///
/// This is rendered as a `[*]` indicator after the diagnostic ID.
is_fixable: bool,
}
impl RenderableDiagnostic<'_> {
@@ -448,7 +516,7 @@ impl RenderableDiagnostic<'_> {
.iter()
.map(|snippet| snippet.to_annotate(path))
});
let mut message = self.level.title(self.message);
let mut message = self.level.title(self.message).is_fixable(self.is_fixable);
if let Some(id) = self.id {
message = message.id(id);
}
@@ -530,17 +598,27 @@ struct RenderableSnippet<'r> {
/// Whether this snippet contains at least one primary
/// annotation.
has_primary: bool,
/// The cell index in a Jupyter notebook, if this snippet refers to a notebook.
///
/// This is used for rendering annotations with offsets like `cell 1:2:3` instead of simple row
/// and column numbers.
cell_index: Option<usize>,
}
impl<'r> RenderableSnippet<'r> {
/// Creates a new snippet with one or more annotations that is ready to be
/// renderer.
/// rendered.
///
/// The first line of the snippet is the smallest line number on which one
/// of the annotations begins, minus the context window size. The last line
/// is the largest line number on which one of the annotations ends, plus
/// the context window size.
///
/// For Jupyter notebooks, the context window may also be truncated at cell
/// boundaries. If multiple annotations are present, and they point to
/// different cells, these will have already been split into separate
/// snippets by `ResolvedDiagnostic::to_renderable`.
///
/// Callers should guarantee that the `input` on every `ResolvedAnnotation`
/// given is identical.
///
@@ -557,19 +635,19 @@ impl<'r> RenderableSnippet<'r> {
"creating a renderable snippet requires a non-zero number of annotations",
);
let diagnostic_source = &anns[0].diagnostic_source;
let notebook_index = anns[0].notebook_index.as_ref();
let source = diagnostic_source.as_source_code();
let has_primary = anns.iter().any(|ann| ann.is_primary);
let line_start = context_before(
&source,
context,
anns.iter().map(|ann| ann.line_start).min().unwrap(),
);
let line_end = context_after(
&source,
context,
anns.iter().map(|ann| ann.line_end).max().unwrap(),
);
let content_start_index = anns.iter().map(|ann| ann.line_start).min().unwrap();
let line_start = context_before(&source, context, content_start_index, notebook_index);
let start = source.line_column(anns[0].range.start());
let cell_index = notebook_index
.map(|notebook_index| notebook_index.cell(start.line).unwrap_or_default().get());
let content_end_index = anns.iter().map(|ann| ann.line_end).max().unwrap();
let line_end = context_after(&source, context, content_end_index, notebook_index);
let snippet_start = source.line_start(line_start);
let snippet_end = source.line_end(line_end);
@@ -585,14 +663,20 @@ impl<'r> RenderableSnippet<'r> {
let EscapedSourceCode {
text: snippet,
annotations,
} = replace_whitespace_and_unprintable(snippet, annotations)
.fix_up_empty_spans_after_line_terminator();
} = replace_unprintable(snippet, annotations).fix_up_empty_spans_after_line_terminator();
let line_start = notebook_index.map_or(line_start, |notebook_index| {
notebook_index
.cell_row(line_start)
.unwrap_or(OneIndexed::MIN)
});
RenderableSnippet {
snippet,
line_start,
annotations,
has_primary,
cell_index,
}
}
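A sketch of the cell-relative renumbering above, assuming `notebook_index` was built from the NOTEBOOK fixture used in the tests further down (cell 1 holds concatenated lines 1-2, cell 2 holds lines 3-6):
// `import math` sits on concatenated line 4, which is row 2 of cell 2, so the
// rendered snippet starts its line numbers at 2 rather than 4.
let line = OneIndexed::from_zero_indexed(3); // concatenated line 4
assert_eq!(notebook_index.cell(line).map(|cell| cell.get()), Some(2));
assert_eq!(notebook_index.cell_row(line).map(|row| row.get()), Some(2));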
@@ -606,6 +690,7 @@ impl<'r> RenderableSnippet<'r> {
.iter()
.map(RenderableAnnotation::to_annotate),
)
.cell_index(self.cell_index)
}
}
@@ -620,6 +705,8 @@ struct RenderableAnnotation<'r> {
message: Option<&'r str>,
/// Whether this annotation is considered "primary" or not.
is_primary: bool,
/// Whether this annotation applies to an entire file, rather than a snippet within it.
is_file_level: bool,
}
impl<'r> RenderableAnnotation<'r> {
@@ -637,6 +724,7 @@ impl<'r> RenderableAnnotation<'r> {
range,
message: ann.message,
is_primary: ann.is_primary,
is_file_level: ann.is_file_level,
}
}
@@ -662,7 +750,7 @@ impl<'r> RenderableAnnotation<'r> {
if let Some(message) = self.message {
ann = ann.label(message);
}
ann
ann.is_file_level(self.is_file_level)
}
}
@@ -789,7 +877,15 @@ pub struct Input {
///
/// The line number returned is guaranteed to be less than
/// or equal to `start`.
fn context_before(source: &SourceCode<'_, '_>, len: usize, start: OneIndexed) -> OneIndexed {
///
/// In Jupyter notebooks, lines outside the cell containing
/// `start` will be omitted.
fn context_before(
source: &SourceCode<'_, '_>,
len: usize,
start: OneIndexed,
notebook_index: Option<&NotebookIndex>,
) -> OneIndexed {
let mut line = start.saturating_sub(len);
// Trim leading empty lines.
while line < start {
@@ -798,6 +894,17 @@ fn context_before(source: &SourceCode<'_, '_>, len: usize, start: OneIndexed) ->
}
line = line.saturating_add(1);
}
if let Some(index) = notebook_index {
let content_start_cell = index.cell(start).unwrap_or(OneIndexed::MIN);
while line < start {
if index.cell(line).unwrap_or(OneIndexed::MIN) == content_start_cell {
break;
}
line = line.saturating_add(1);
}
}
line
}
@@ -807,7 +914,15 @@ fn context_before(source: &SourceCode<'_, '_>, len: usize, start: OneIndexed) ->
/// The line number returned is guaranteed to be greater
/// than or equal to `start` and no greater than the
/// number of lines in `source`.
fn context_after(source: &SourceCode<'_, '_>, len: usize, start: OneIndexed) -> OneIndexed {
///
/// In Jupyter notebooks, lines outside the cell containing
/// `start` will be omitted.
fn context_after(
source: &SourceCode<'_, '_>,
len: usize,
start: OneIndexed,
notebook_index: Option<&NotebookIndex>,
) -> OneIndexed {
let max_lines = OneIndexed::from_zero_indexed(source.line_count());
let mut line = start.saturating_add(len).min(max_lines);
// Trim trailing empty lines.
@@ -817,6 +932,17 @@ fn context_after(source: &SourceCode<'_, '_>, len: usize, start: OneIndexed) ->
}
line = line.saturating_sub(1);
}
if let Some(index) = notebook_index {
let content_end_cell = index.cell(start).unwrap_or(OneIndexed::MIN);
while line > start {
if index.cell(line).unwrap_or(OneIndexed::MIN) == content_end_cell {
break;
}
line = line.saturating_sub(1);
}
}
line
}
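A matching sketch for the cell clamping in `context_before`, where `source` and `notebook_index` are hypothetical and cell 2 begins on concatenated line 3:
// Even with a two-line context window, the window never reaches back into cell 1
// when the annotation starts on the first line of cell 2.
let start = OneIndexed::from_zero_indexed(2); // concatenated line 3
let begin = context_before(&source, 2, start, Some(&notebook_index));
assert_eq!(begin, start);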
@@ -828,13 +954,18 @@ fn relativize_path<'p>(cwd: &SystemPath, path: &'p str) -> &'p str {
path
}
/// Given some source code and annotation ranges, this routine replaces tabs
/// with ASCII whitespace, and unprintable characters with printable
/// representations of them.
/// Given some source code and annotation ranges, this routine replaces
/// unprintable characters with printable representations of them.
///
/// The source code and annotations returned are updated to reflect changes made
/// to the source code (if any).
fn replace_whitespace_and_unprintable<'r>(
///
/// We don't need to normalize whitespace, such as converting tabs to spaces,
/// because `annotate-snippets` handles that internally. Similarly, it's safe to
/// modify the annotation ranges by inserting 3-byte Unicode replacements
/// because `annotate-snippets` will account for their actual width when
/// rendering and displaying the column to the user.
fn replace_unprintable<'r>(
source: &'r str,
mut annotations: Vec<RenderableAnnotation<'r>>,
) -> EscapedSourceCode<'r> {
@@ -866,48 +997,17 @@ fn replace_whitespace_and_unprintable<'r>(
}
};
const TAB_SIZE: usize = 4;
let mut width = 0;
let mut column = 0;
let mut last_end = 0;
let mut result = String::new();
for (index, c) in source.char_indices() {
let old_width = width;
match c {
'\n' | '\r' => {
width = 0;
column = 0;
}
'\t' => {
let tab_offset = TAB_SIZE - (column % TAB_SIZE);
width += tab_offset;
column += tab_offset;
if let Some(printable) = unprintable_replacement(c) {
result.push_str(&source[last_end..index]);
let tab_width =
u32::try_from(width - old_width).expect("small width because of tab size");
result.push_str(&source[last_end..index]);
let len = printable.text_len().to_u32();
update_ranges(result.text_len().to_usize(), len);
update_ranges(result.text_len().to_usize(), tab_width);
for _ in 0..tab_width {
result.push(' ');
}
last_end = index + 1;
}
_ => {
width += unicode_width::UnicodeWidthChar::width(c).unwrap_or(0);
column += 1;
if let Some(printable) = unprintable_replacement(c) {
result.push_str(&source[last_end..index]);
let len = printable.text_len().to_u32();
update_ranges(result.text_len().to_usize(), len);
result.push(printable);
last_end = index + 1;
}
}
result.push(printable);
last_end = index + 1;
}
}
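A short illustration of the range bookkeeping that remains after this simplification; the concrete replacement character is an assumption, since the `unprintable_replacement` table is not shown in this hunk:
// Swapping a 1-byte control character for a 3-byte symbol grows the text by two
// bytes, and `update_ranges` shifts any annotation range past that point by the
// same amount; annotate-snippets later accounts for the symbol's display width,
// so the column reported to the user still matches their input.
let original = "a\x08b";
let replaced = "a\u{2408}b"; // '␈' as a stand-in replacement
assert_eq!(replaced.len() - original.len(), 2);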
@@ -2544,7 +2644,12 @@ watermelon
/// of the corresponding line minus one. (The "minus one" is because
/// otherwise, the span will end where the next line begins, and this
/// confuses `ruff_annotate_snippets` as of 2025-03-13.)
fn span(&self, path: &str, line_offset_start: &str, line_offset_end: &str) -> Span {
pub(super) fn span(
&self,
path: &str,
line_offset_start: &str,
line_offset_end: &str,
) -> Span {
let span = self.path(path);
let file = span.expect_ty_file();
@@ -2567,7 +2672,7 @@ watermelon
}
/// Like `span`, but only attaches a file path.
fn path(&self, path: &str) -> Span {
pub(super) fn path(&self, path: &str) -> Span {
let file = system_path_to_file(&self.db, path).unwrap();
Span::from(file)
}
@@ -2681,7 +2786,7 @@ watermelon
///
/// See the docs on `TestEnvironment::span` for the meaning of
/// `path`, `line_offset_start` and `line_offset_end`.
fn secondary(
pub(super) fn secondary(
mut self,
path: &str,
line_offset_start: &str,
@@ -2717,7 +2822,7 @@ watermelon
}
/// Adds a "help" sub-diagnostic with the given message.
fn help(mut self, message: impl IntoDiagnosticMessage) -> DiagnosticBuilder<'e> {
pub(super) fn help(mut self, message: impl IntoDiagnosticMessage) -> DiagnosticBuilder<'e> {
self.diag.help(message);
self
}
@@ -2877,10 +2982,10 @@ if call(foo
env.format(format);
let diagnostics = vec![
env.invalid_syntax("SyntaxError: Expected one or more symbol names after import")
env.invalid_syntax("Expected one or more symbol names after import")
.primary("syntax_errors.py", "1:14", "1:15", "")
.build(),
env.invalid_syntax("SyntaxError: Expected ')', found newline")
env.invalid_syntax("Expected ')', found newline")
.primary("syntax_errors.py", "3:11", "3:12", "")
.build(),
];
@@ -2888,7 +2993,8 @@ if call(foo
(env, diagnostics)
}
/// Create Ruff-style diagnostics for testing the various output formats for a notebook.
/// A Jupyter notebook for testing diagnostics.
///
/// The concatenated cells look like this:
///
@@ -2908,17 +3014,7 @@ if call(foo
/// The first diagnostic is on the unused `os` import with location cell 1, row 2, column 8
/// (`cell 1:2:8`). The second diagnostic is the unused `math` import at `cell 2:2:8`, and the
/// third diagnostic is an unfixable unused variable at `cell 3:4:5`.
#[allow(
dead_code,
reason = "This is currently only used for JSON but will be needed soon for other formats"
)]
pub(crate) fn create_notebook_diagnostics(
format: DiagnosticFormat,
) -> (TestEnvironment, Vec<Diagnostic>) {
let mut env = TestEnvironment::new();
env.add(
"notebook.ipynb",
r##"
pub(super) static NOTEBOOK: &str = r##"
{
"cells": [
{
@@ -2957,8 +3053,14 @@ if call(foo
"nbformat": 4,
"nbformat_minor": 5
}
"##,
);
"##;
/// Create Ruff-style diagnostics for testing the various output formats for a notebook.
pub(crate) fn create_notebook_diagnostics(
format: DiagnosticFormat,
) -> (TestEnvironment, Vec<Diagnostic>) {
let mut env = TestEnvironment::new();
env.add("notebook.ipynb", NOTEBOOK);
env.format(format);
let diagnostics = vec![

View File

@@ -50,10 +50,8 @@ impl AzureRenderer<'_> {
}
writeln!(
f,
"{code}]{body}",
code = diag
.secondary_code()
.map_or_else(String::new, |code| format!("code={code};")),
"code={code};]{body}",
code = diag.secondary_code_or_id(),
body = diag.body(),
)?;
}

View File

@@ -69,6 +69,12 @@ impl<'a> ConciseRenderer<'a> {
"{code} ",
code = fmt_styled(code, stylesheet.secondary_code)
)?;
} else {
write!(
f,
"{id}: ",
id = fmt_styled(diag.inner.id.as_str(), stylesheet.secondary_code)
)?;
}
if self.config.show_fix_status {
if let Some(fix) = diag.fix() {
@@ -156,8 +162,8 @@ mod tests {
env.show_fix_status(true);
env.fix_applicability(Applicability::DisplayOnly);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
syntax_errors.py:1:15: SyntaxError: Expected one or more symbol names after import
syntax_errors.py:3:12: SyntaxError: Expected ')', found newline
syntax_errors.py:1:15: invalid-syntax: Expected one or more symbol names after import
syntax_errors.py:3:12: invalid-syntax: Expected ')', found newline
");
}
@@ -165,8 +171,8 @@ mod tests {
fn syntax_errors() {
let (env, diagnostics) = create_syntax_error_diagnostics(DiagnosticFormat::Concise);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
syntax_errors.py:1:15: error[invalid-syntax] SyntaxError: Expected one or more symbol names after import
syntax_errors.py:3:12: error[invalid-syntax] SyntaxError: Expected ')', found newline
syntax_errors.py:1:15: error[invalid-syntax] Expected one or more symbol names after import
syntax_errors.py:3:12: error[invalid-syntax] Expected ')', found newline
");
}

View File

@@ -1,8 +1,14 @@
#[cfg(test)]
mod tests {
use ruff_diagnostics::Applicability;
use ruff_text_size::TextRange;
use crate::diagnostic::{
DiagnosticFormat, Severity,
render::tests::{TestEnvironment, create_diagnostics, create_syntax_error_diagnostics},
Annotation, DiagnosticFormat, Severity,
render::tests::{
NOTEBOOK, TestEnvironment, create_diagnostics, create_notebook_diagnostics,
create_syntax_error_diagnostics,
},
};
#[test]
@@ -42,7 +48,7 @@ mod tests {
fn syntax_errors() {
let (env, diagnostics) = create_syntax_error_diagnostics(DiagnosticFormat::Full);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
error[invalid-syntax]: SyntaxError: Expected one or more symbol names after import
error[invalid-syntax]: Expected one or more symbol names after import
--> syntax_errors.py:1:15
|
1 | from os import
@@ -51,7 +57,71 @@ mod tests {
3 | if call(foo
|
error[invalid-syntax]: SyntaxError: Expected ')', found newline
error[invalid-syntax]: Expected ')', found newline
--> syntax_errors.py:3:12
|
1 | from os import
2 |
3 | if call(foo
| ^
4 | def bar():
5 | pass
|
");
}
#[test]
fn hide_severity_output() {
let (mut env, diagnostics) = create_diagnostics(DiagnosticFormat::Full);
env.hide_severity(true);
env.fix_applicability(Applicability::DisplayOnly);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r#"
F401 [*] `os` imported but unused
--> fib.py:1:8
|
1 | import os
| ^^
|
help: Remove unused import: `os`
F841 [*] Local variable `x` is assigned to but never used
--> fib.py:6:5
|
4 | def fibonacci(n):
5 | """Compute the nth number in the Fibonacci sequence."""
6 | x = 1
| ^
7 | if n == 0:
8 | return 0
|
help: Remove assignment to unused variable `x`
F821 Undefined name `a`
--> undef.py:1:4
|
1 | if a == 1: pass
| ^
|
"#);
}
#[test]
fn hide_severity_syntax_errors() {
let (mut env, diagnostics) = create_syntax_error_diagnostics(DiagnosticFormat::Full);
env.hide_severity(true);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
invalid-syntax: Expected one or more symbol names after import
--> syntax_errors.py:1:15
|
1 | from os import
| ^
2 |
3 | if call(foo
|
invalid-syntax: Expected ')', found newline
--> syntax_errors.py:3:12
|
1 | from os import
@@ -177,4 +247,157 @@ print()
Ok(())
}
/// Ensure that the header column matches the column in the user's input, even if we've replaced
/// tabs with spaces for rendering purposes.
#[test]
fn tab_replacement() {
let mut env = TestEnvironment::new();
env.add("example.py", "def foo():\n\treturn 1");
env.format(DiagnosticFormat::Full);
let diagnostic = env.err().primary("example.py", "2:1", "2:9", "").build();
insta::assert_snapshot!(env.render(&diagnostic), @r"
error[test-diagnostic]: main diagnostic message
--> example.py:2:2
|
1 | def foo():
2 | return 1
| ^^^^^^^^
|
");
}
/// For file-level diagnostics, we expect to see the header line with the diagnostic information
/// and the `-->` line with the file information but no lines of source code.
#[test]
fn file_level() {
let mut env = TestEnvironment::new();
env.add("example.py", "");
env.format(DiagnosticFormat::Full);
let mut diagnostic = env.err().build();
let span = env.path("example.py").with_range(TextRange::default());
let mut annotation = Annotation::primary(span);
annotation.set_file_level(true);
diagnostic.annotate(annotation);
insta::assert_snapshot!(env.render(&diagnostic), @r"
error[test-diagnostic]: main diagnostic message
--> example.py:1:1
");
}
/// Check that ranges in notebooks are remapped relative to the cells.
#[test]
fn notebook_output() {
let (env, diagnostics) = create_notebook_diagnostics(DiagnosticFormat::Full);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
error[unused-import][*]: `os` imported but unused
--> notebook.ipynb:cell 1:2:8
|
1 | # cell 1
2 | import os
| ^^
|
help: Remove unused import: `os`
error[unused-import][*]: `math` imported but unused
--> notebook.ipynb:cell 2:2:8
|
1 | # cell 2
2 | import math
| ^^^^
3 |
4 | print('hello world')
|
help: Remove unused import: `math`
error[unused-variable]: Local variable `x` is assigned to but never used
--> notebook.ipynb:cell 3:4:5
|
2 | def foo():
3 | print()
4 | x = 1
| ^
|
help: Remove assignment to unused variable `x`
");
}
/// Check notebook handling for multiple annotations in a single diagnostic that span cells.
#[test]
fn notebook_output_multiple_annotations() {
let mut env = TestEnvironment::new();
env.add("notebook.ipynb", NOTEBOOK);
let diagnostics = vec![
// adjacent context windows
env.builder("unused-import", Severity::Error, "`os` imported but unused")
.primary("notebook.ipynb", "2:7", "2:9", "")
.secondary("notebook.ipynb", "4:7", "4:11", "second cell")
.help("Remove unused import: `os`")
.build(),
// non-adjacent context windows
env.builder("unused-import", Severity::Error, "`os` imported but unused")
.primary("notebook.ipynb", "2:7", "2:9", "")
.secondary("notebook.ipynb", "10:4", "10:5", "second cell")
.help("Remove unused import: `os`")
.build(),
// adjacent context windows in the same cell
env.err()
.primary("notebook.ipynb", "4:7", "4:11", "second cell")
.secondary("notebook.ipynb", "6:0", "6:5", "print statement")
.help("Remove `print` statement")
.build(),
];
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
error[unused-import]: `os` imported but unused
--> notebook.ipynb:cell 1:2:8
|
1 | # cell 1
2 | import os
| ^^
|
::: notebook.ipynb:cell 2:2:8
|
1 | # cell 2
2 | import math
| ---- second cell
3 |
4 | print('hello world')
|
help: Remove unused import: `os`
error[unused-import]: `os` imported but unused
--> notebook.ipynb:cell 1:2:8
|
1 | # cell 1
2 | import os
| ^^
|
::: notebook.ipynb:cell 3:4:5
|
2 | def foo():
3 | print()
4 | x = 1
| - second cell
|
help: Remove unused import: `os`
error[test-diagnostic]: main diagnostic message
--> notebook.ipynb:cell 2:2:8
|
1 | # cell 2
2 | import math
| ^^^^ second cell
3 |
4 | print('hello world')
| ----- print statement
|
help: Remove `print` statement
");
}
}

View File

@@ -6,7 +6,7 @@ use ruff_notebook::NotebookIndex;
use ruff_source_file::{LineColumn, OneIndexed};
use ruff_text_size::Ranged;
use crate::diagnostic::{Diagnostic, DiagnosticSource, DisplayDiagnosticConfig, SecondaryCode};
use crate::diagnostic::{Diagnostic, DiagnosticSource, DisplayDiagnosticConfig};
use super::FileResolver;
@@ -99,7 +99,7 @@ pub(super) fn diagnostic_to_json<'a>(
// In preview, the locations and filename can be optional.
if config.preview {
JsonDiagnostic {
code: diagnostic.secondary_code(),
code: diagnostic.secondary_code_or_id(),
url: diagnostic.to_ruff_url(),
message: diagnostic.body(),
fix,
@@ -111,7 +111,7 @@ pub(super) fn diagnostic_to_json<'a>(
}
} else {
JsonDiagnostic {
code: diagnostic.secondary_code(),
code: diagnostic.secondary_code_or_id(),
url: diagnostic.to_ruff_url(),
message: diagnostic.body(),
fix,
@@ -221,7 +221,7 @@ impl Serialize for ExpandedEdits<'_> {
#[derive(Serialize)]
pub(crate) struct JsonDiagnostic<'a> {
cell: Option<OneIndexed>,
code: Option<&'a SecondaryCode>,
code: &'a str,
end_location: Option<JsonLocation>,
filename: Option<&'a str>,
fix: Option<JsonFix<'a>>,
@@ -302,7 +302,7 @@ mod tests {
[
{
"cell": null,
"code": null,
"code": "test-diagnostic",
"end_location": {
"column": 1,
"row": 1
@@ -336,7 +336,7 @@ mod tests {
[
{
"cell": null,
"code": null,
"code": "test-diagnostic",
"end_location": null,
"filename": null,
"fix": null,

View File

@@ -2,5 +2,5 @@
source: crates/ruff_db/src/diagnostic/render/azure.rs
expression: env.render_diagnostics(&diagnostics)
---
##vso[task.logissue type=error;sourcepath=syntax_errors.py;linenumber=1;columnnumber=15;]SyntaxError: Expected one or more symbol names after import
##vso[task.logissue type=error;sourcepath=syntax_errors.py;linenumber=3;columnnumber=12;]SyntaxError: Expected ')', found newline
##vso[task.logissue type=error;sourcepath=syntax_errors.py;linenumber=1;columnnumber=15;code=invalid-syntax;]Expected one or more symbol names after import
##vso[task.logissue type=error;sourcepath=syntax_errors.py;linenumber=3;columnnumber=12;code=invalid-syntax;]Expected ')', found newline

View File

@@ -5,7 +5,7 @@ expression: env.render_diagnostics(&diagnostics)
[
{
"cell": null,
"code": null,
"code": "invalid-syntax",
"end_location": {
"column": 1,
"row": 2
@@ -16,13 +16,13 @@ expression: env.render_diagnostics(&diagnostics)
"column": 15,
"row": 1
},
"message": "SyntaxError: Expected one or more symbol names after import",
"message": "Expected one or more symbol names after import",
"noqa_row": null,
"url": null
},
{
"cell": null,
"code": null,
"code": "invalid-syntax",
"end_location": {
"column": 1,
"row": 4
@@ -33,7 +33,7 @@ expression: env.render_diagnostics(&diagnostics)
"column": 12,
"row": 3
},
"message": "SyntaxError: Expected ')', found newline",
"message": "Expected ')', found newline",
"noqa_row": null,
"url": null
}

View File

@@ -2,5 +2,5 @@
source: crates/ruff_db/src/diagnostic/render/json_lines.rs
expression: env.render_diagnostics(&diagnostics)
---
{"cell":null,"code":null,"end_location":{"column":1,"row":2},"filename":"syntax_errors.py","fix":null,"location":{"column":15,"row":1},"message":"SyntaxError: Expected one or more symbol names after import","noqa_row":null,"url":null}
{"cell":null,"code":null,"end_location":{"column":1,"row":4},"filename":"syntax_errors.py","fix":null,"location":{"column":12,"row":3},"message":"SyntaxError: Expected ')', found newline","noqa_row":null,"url":null}
{"cell":null,"code":"invalid-syntax","end_location":{"column":1,"row":2},"filename":"syntax_errors.py","fix":null,"location":{"column":15,"row":1},"message":"Expected one or more symbol names after import","noqa_row":null,"url":null}
{"cell":null,"code":"invalid-syntax","end_location":{"column":1,"row":4},"filename":"syntax_errors.py","fix":null,"location":{"column":12,"row":3},"message":"Expected ')', found newline","noqa_row":null,"url":null}

View File

@@ -6,10 +6,10 @@ expression: env.render_diagnostics(&diagnostics)
<testsuites name="ruff" tests="2" failures="2" errors="0">
<testsuite name="syntax_errors.py" tests="2" disabled="0" errors="0" failures="2" package="org.ruff">
<testcase name="org.ruff.invalid-syntax" classname="syntax_errors" line="1" column="15">
<failure message="SyntaxError: Expected one or more symbol names after import">line 1, col 15, SyntaxError: Expected one or more symbol names after import</failure>
<failure message="Expected one or more symbol names after import">line 1, col 15, Expected one or more symbol names after import</failure>
</testcase>
<testcase name="org.ruff.invalid-syntax" classname="syntax_errors" line="3" column="12">
<failure message="SyntaxError: Expected &apos;)&apos;, found newline">line 3, col 12, SyntaxError: Expected &apos;)&apos;, found newline</failure>
<failure message="Expected &apos;)&apos;, found newline">line 3, col 12, Expected &apos;)&apos;, found newline</failure>
</testcase>
</testsuite>
</testsuites>

View File

@@ -2,5 +2,5 @@
source: crates/ruff_db/src/diagnostic/render/pylint.rs
expression: env.render_diagnostics(&diagnostics)
---
syntax_errors.py:1: [invalid-syntax] SyntaxError: Expected one or more symbol names after import
syntax_errors.py:3: [invalid-syntax] SyntaxError: Expected ')', found newline
syntax_errors.py:1: [invalid-syntax] Expected one or more symbol names after import
syntax_errors.py:3: [invalid-syntax] Expected ')', found newline

View File

@@ -21,7 +21,7 @@ expression: env.render_diagnostics(&diagnostics)
}
}
},
"message": "SyntaxError: Expected one or more symbol names after import"
"message": "Expected one or more symbol names after import"
},
{
"code": {
@@ -40,7 +40,7 @@ expression: env.render_diagnostics(&diagnostics)
}
}
},
"message": "SyntaxError: Expected ')', found newline"
"message": "Expected ')', found newline"
}
],
"severity": "WARNING",

View File

@@ -21,7 +21,7 @@ use crate::source::source_text;
/// reflected in the changed AST offsets.
/// The other reason is that Ruff's AST doesn't implement `Eq` which Salsa requires
/// for determining if a query result is unchanged.
#[salsa::tracked(returns(ref), no_eq, heap_size=get_size2::GetSize::get_heap_size)]
#[salsa::tracked(returns(ref), no_eq, heap_size=ruff_memory_usage::heap_size)]
pub fn parsed_module(db: &dyn Db, file: File) -> ParsedModule {
let _span = tracing::trace_span!("parsed_module", ?file).entered();

View File

@@ -9,7 +9,7 @@ use crate::Db;
use crate::files::{File, FilePath};
/// Reads the source text of a python text file (must be valid UTF8) or notebook.
#[salsa::tracked(heap_size=get_size2::GetSize::get_heap_size)]
#[salsa::tracked(heap_size=ruff_memory_usage::heap_size)]
pub fn source_text(db: &dyn Db, file: File) -> SourceText {
let path = file.path(db);
let _span = tracing::trace_span!("source_text", file = %path).entered();
@@ -69,21 +69,21 @@ impl SourceText {
pub fn as_str(&self) -> &str {
match &self.inner.kind {
SourceTextKind::Text(source) => source,
SourceTextKind::Notebook(notebook) => notebook.source_code(),
SourceTextKind::Notebook { notebook } => notebook.source_code(),
}
}
/// Returns the underlying notebook if this is a notebook file.
pub fn as_notebook(&self) -> Option<&Notebook> {
match &self.inner.kind {
SourceTextKind::Notebook(notebook) => Some(notebook),
SourceTextKind::Notebook { notebook } => Some(notebook),
SourceTextKind::Text(_) => None,
}
}
/// Returns `true` if this is a notebook source file.
pub fn is_notebook(&self) -> bool {
matches!(&self.inner.kind, SourceTextKind::Notebook(_))
matches!(&self.inner.kind, SourceTextKind::Notebook { .. })
}
/// Returns `true` if there was an error when reading the content of the file.
@@ -108,7 +108,7 @@ impl std::fmt::Debug for SourceText {
SourceTextKind::Text(text) => {
dbg.field(text);
}
SourceTextKind::Notebook(notebook) => {
SourceTextKind::Notebook { notebook } => {
dbg.field(notebook);
}
}
@@ -123,23 +123,15 @@ struct SourceTextInner {
read_error: Option<SourceTextError>,
}
#[derive(Eq, PartialEq)]
#[derive(Eq, PartialEq, get_size2::GetSize)]
enum SourceTextKind {
Text(String),
Notebook(Box<Notebook>),
}
impl get_size2::GetSize for SourceTextKind {
fn get_heap_size(&self) -> usize {
match self {
SourceTextKind::Text(text) => text.get_heap_size(),
// TODO: The `get-size` derive does not support ignoring enum variants.
//
// Jupyter notebooks are not very relevant for memory profiling, and contain
// arbitrary JSON values that do not implement the `GetSize` trait.
SourceTextKind::Notebook(_) => 0,
}
}
Notebook {
// Jupyter notebooks are not very relevant for memory profiling, and contain
// arbitrary JSON values that do not implement the `GetSize` trait.
#[get_size(ignore)]
notebook: Box<Notebook>,
},
}
impl From<String> for SourceTextKind {
@@ -150,7 +142,9 @@ impl From<String> for SourceTextKind {
impl From<Notebook> for SourceTextKind {
fn from(notebook: Notebook) -> Self {
SourceTextKind::Notebook(Box::new(notebook))
SourceTextKind::Notebook {
notebook: Box::new(notebook),
}
}
}
@@ -163,7 +157,7 @@ pub enum SourceTextError {
}
/// Computes the [`LineIndex`] for `file`.
#[salsa::tracked(heap_size=get_size2::GetSize::get_heap_size)]
#[salsa::tracked(heap_size=ruff_memory_usage::heap_size)]
pub fn line_index(db: &dyn Db, file: File) -> LineIndex {
let _span = tracing::trace_span!("line_index", ?file).entered();

View File

@@ -236,7 +236,7 @@ impl SystemPath {
///
/// [`CurDir`]: camino::Utf8Component::CurDir
#[inline]
pub fn components(&self) -> camino::Utf8Components {
pub fn components(&self) -> camino::Utf8Components<'_> {
self.0.components()
}

View File

@@ -195,7 +195,7 @@ impl VendoredFileSystem {
///
/// ## Panics:
/// If the current thread already holds the lock.
fn lock_archive(&self) -> LockedZipArchive {
fn lock_archive(&self) -> LockedZipArchive<'_> {
self.inner.lock().unwrap()
}
}
@@ -360,7 +360,7 @@ impl VendoredZipArchive {
Ok(Self(ZipArchive::new(io::Cursor::new(data))?))
}
fn lookup_path(&mut self, path: &NormalizedVendoredPath) -> Result<ZipFile> {
fn lookup_path(&mut self, path: &NormalizedVendoredPath) -> Result<ZipFile<'_>> {
Ok(self.0.by_name(path.as_str())?)
}

View File

@@ -37,7 +37,7 @@ impl VendoredPath {
self.0.as_std_path()
}
pub fn components(&self) -> Utf8Components {
pub fn components(&self) -> Utf8Components<'_> {
self.0.components()
}

View File

@@ -348,7 +348,7 @@ fn format_dev_multi_project(
debug!(parent: None, "Starting {}", project_path.display());
match format_dev_project(
&[project_path.clone()],
std::slice::from_ref(&project_path),
args.stability_check,
args.write,
args.preview,
@@ -628,7 +628,7 @@ struct CheckRepoResult {
}
impl CheckRepoResult {
fn display(&self, format: Format) -> DisplayCheckRepoResult {
fn display(&self, format: Format) -> DisplayCheckRepoResult<'_> {
DisplayCheckRepoResult {
result: self,
format,
@@ -665,7 +665,7 @@ struct Diagnostic {
}
impl Diagnostic {
fn display(&self, format: Format) -> DisplayDiagnostic {
fn display(&self, format: Format) -> DisplayDiagnostic<'_> {
DisplayDiagnostic {
diagnostic: self,
format,

View File

@@ -43,7 +43,7 @@ pub enum IsolationLevel {
}
/// A collection of [`Edit`] elements to be applied to a source file.
#[derive(Debug, PartialEq, Eq, Clone, get_size2::GetSize)]
#[derive(Debug, PartialEq, Eq, Clone, Hash, get_size2::GetSize)]
#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
pub struct Fix {
/// The [`Edit`] elements to be applied, sorted by [`Edit::start`] in ascending order.

View File

@@ -562,7 +562,7 @@ struct RemoveSoftLinebreaksSnapshot {
pub trait BufferExtensions: Buffer + Sized {
/// Returns a new buffer that calls the passed inspector for every element that gets written to the output
#[must_use]
fn inspect<F>(&mut self, inspector: F) -> Inspect<Self::Context, F>
fn inspect<F>(&mut self, inspector: F) -> Inspect<'_, Self::Context, F>
where
F: FnMut(&FormatElement),
{
@@ -607,7 +607,7 @@ pub trait BufferExtensions: Buffer + Sized {
/// # }
/// ```
#[must_use]
fn start_recording(&mut self) -> Recording<Self> {
fn start_recording(&mut self) -> Recording<'_, Self> {
Recording::new(self)
}

View File

@@ -340,7 +340,7 @@ impl<Context> Format<Context> for SourcePosition {
/// Creates a text from a dynamic string.
///
/// This is done by allocating a new string internally.
pub fn text(text: &str) -> Text {
pub fn text(text: &str) -> Text<'_> {
debug_assert_no_newlines(text);
Text { text }
@@ -459,7 +459,10 @@ fn debug_assert_no_newlines(text: &str) {
/// # }
/// ```
#[inline]
pub fn line_suffix<Content, Context>(inner: &Content, reserved_width: u32) -> LineSuffix<Context>
pub fn line_suffix<Content, Context>(
inner: &Content,
reserved_width: u32,
) -> LineSuffix<'_, Context>
where
Content: Format<Context>,
{
@@ -597,7 +600,10 @@ impl<Context> Format<Context> for LineSuffixBoundary {
/// Use `Memoized.inspect(f)?.has_label(LabelId::of::<SomeLabelId>()` if you need to know if some content breaks that should
/// only be written later.
#[inline]
pub fn labelled<Content, Context>(label_id: LabelId, content: &Content) -> FormatLabelled<Context>
pub fn labelled<Content, Context>(
label_id: LabelId,
content: &Content,
) -> FormatLabelled<'_, Context>
where
Content: Format<Context>,
{
@@ -700,7 +706,7 @@ impl<Context> Format<Context> for Space {
/// # }
/// ```
#[inline]
pub fn indent<Content, Context>(content: &Content) -> Indent<Context>
pub fn indent<Content, Context>(content: &Content) -> Indent<'_, Context>
where
Content: Format<Context>,
{
@@ -771,7 +777,7 @@ impl<Context> std::fmt::Debug for Indent<'_, Context> {
/// # }
/// ```
#[inline]
pub fn dedent<Content, Context>(content: &Content) -> Dedent<Context>
pub fn dedent<Content, Context>(content: &Content) -> Dedent<'_, Context>
where
Content: Format<Context>,
{
@@ -846,7 +852,7 @@ impl<Context> std::fmt::Debug for Dedent<'_, Context> {
///
/// This resembles the behaviour of Prettier's `align(Number.NEGATIVE_INFINITY, content)` IR element.
#[inline]
pub fn dedent_to_root<Content, Context>(content: &Content) -> Dedent<Context>
pub fn dedent_to_root<Content, Context>(content: &Content) -> Dedent<'_, Context>
where
Content: Format<Context>,
{
@@ -960,7 +966,7 @@ where
///
/// - tab indentation: Printer indents the expression with two tabs because the `align` increases the indentation level.
/// - space indentation: Printer indents the expression by 4 spaces (one indentation level) **and** 2 spaces for the align.
pub fn align<Content, Context>(count: u8, content: &Content) -> Align<Context>
pub fn align<Content, Context>(count: u8, content: &Content) -> Align<'_, Context>
where
Content: Format<Context>,
{
@@ -1030,7 +1036,7 @@ impl<Context> std::fmt::Debug for Align<'_, Context> {
/// # }
/// ```
#[inline]
pub fn block_indent<Context>(content: &impl Format<Context>) -> BlockIndent<Context> {
pub fn block_indent<Context>(content: &impl Format<Context>) -> BlockIndent<'_, Context> {
BlockIndent {
content: Argument::new(content),
mode: IndentMode::Block,
@@ -1101,7 +1107,7 @@ pub fn block_indent<Context>(content: &impl Format<Context>) -> BlockIndent<Cont
/// # }
/// ```
#[inline]
pub fn soft_block_indent<Context>(content: &impl Format<Context>) -> BlockIndent<Context> {
pub fn soft_block_indent<Context>(content: &impl Format<Context>) -> BlockIndent<'_, Context> {
BlockIndent {
content: Argument::new(content),
mode: IndentMode::Soft,
@@ -1175,7 +1181,9 @@ pub fn soft_block_indent<Context>(content: &impl Format<Context>) -> BlockIndent
/// # }
/// ```
#[inline]
pub fn soft_line_indent_or_space<Context>(content: &impl Format<Context>) -> BlockIndent<Context> {
pub fn soft_line_indent_or_space<Context>(
content: &impl Format<Context>,
) -> BlockIndent<'_, Context> {
BlockIndent {
content: Argument::new(content),
mode: IndentMode::SoftLineOrSpace,
@@ -1308,7 +1316,9 @@ impl<Context> std::fmt::Debug for BlockIndent<'_, Context> {
/// # Ok(())
/// # }
/// ```
pub fn soft_space_or_block_indent<Context>(content: &impl Format<Context>) -> BlockIndent<Context> {
pub fn soft_space_or_block_indent<Context>(
content: &impl Format<Context>,
) -> BlockIndent<'_, Context> {
BlockIndent {
content: Argument::new(content),
mode: IndentMode::SoftSpace,
@@ -1388,7 +1398,7 @@ pub fn soft_space_or_block_indent<Context>(content: &impl Format<Context>) -> Bl
/// # }
/// ```
#[inline]
pub fn group<Context>(content: &impl Format<Context>) -> Group<Context> {
pub fn group<Context>(content: &impl Format<Context>) -> Group<'_, Context> {
Group {
content: Argument::new(content),
id: None,
@@ -1551,7 +1561,7 @@ impl<Context> std::fmt::Debug for Group<'_, Context> {
#[inline]
pub fn best_fit_parenthesize<Context>(
content: &impl Format<Context>,
) -> BestFitParenthesize<Context> {
) -> BestFitParenthesize<'_, Context> {
BestFitParenthesize {
content: Argument::new(content),
group_id: None,
@@ -1691,7 +1701,7 @@ impl<Context> std::fmt::Debug for BestFitParenthesize<'_, Context> {
pub fn conditional_group<Content, Context>(
content: &Content,
condition: Condition,
) -> ConditionalGroup<Context>
) -> ConditionalGroup<'_, Context>
where
Content: Format<Context>,
{
@@ -1852,7 +1862,7 @@ impl<Context> Format<Context> for ExpandParent {
/// # }
/// ```
#[inline]
pub fn if_group_breaks<Content, Context>(content: &Content) -> IfGroupBreaks<Context>
pub fn if_group_breaks<Content, Context>(content: &Content) -> IfGroupBreaks<'_, Context>
where
Content: Format<Context>,
{
@@ -1933,7 +1943,7 @@ where
/// # }
/// ```
#[inline]
pub fn if_group_fits_on_line<Content, Context>(flat_content: &Content) -> IfGroupBreaks<Context>
pub fn if_group_fits_on_line<Content, Context>(flat_content: &Content) -> IfGroupBreaks<'_, Context>
where
Content: Format<Context>,
{
@@ -2122,7 +2132,7 @@ impl<Context> std::fmt::Debug for IfGroupBreaks<'_, Context> {
pub fn indent_if_group_breaks<Content, Context>(
content: &Content,
group_id: GroupId,
) -> IndentIfGroupBreaks<Context>
) -> IndentIfGroupBreaks<'_, Context>
where
Content: Format<Context>,
{
@@ -2205,7 +2215,7 @@ impl<Context> std::fmt::Debug for IndentIfGroupBreaks<'_, Context> {
/// # Ok(())
/// # }
/// ```
pub fn fits_expanded<Content, Context>(content: &Content) -> FitsExpanded<Context>
pub fn fits_expanded<Content, Context>(content: &Content) -> FitsExpanded<'_, Context>
where
Content: Format<Context>,
{

View File

@@ -197,7 +197,7 @@ pub const LINE_TERMINATORS: [char; 3] = ['\r', LINE_SEPARATOR, PARAGRAPH_SEPARAT
/// Replace the line terminators matching the provided list with "\n"
/// since it's the only line break type supported by the printer
pub fn normalize_newlines<const N: usize>(text: &str, terminators: [char; N]) -> Cow<str> {
pub fn normalize_newlines<const N: usize>(text: &str, terminators: [char; N]) -> Cow<'_, str> {
let mut result = String::new();
let mut last_end = 0;

View File

@@ -222,7 +222,7 @@ impl FormatContext for IrFormatContext<'_> {
&IrFormatOptions
}
fn source_code(&self) -> SourceCode {
fn source_code(&self) -> SourceCode<'_> {
self.source_code
}
}

View File

@@ -193,7 +193,7 @@ pub trait FormatContext {
fn options(&self) -> &Self::Options;
/// Returns the source code from the document that gets formatted.
fn source_code(&self) -> SourceCode;
fn source_code(&self) -> SourceCode<'_>;
}
/// Options customizing how the source code should be formatted.
@@ -239,7 +239,7 @@ impl FormatContext for SimpleFormatContext {
&self.options
}
fn source_code(&self) -> SourceCode {
fn source_code(&self) -> SourceCode<'_> {
SourceCode::new(&self.source_code)
}
}
@@ -326,7 +326,7 @@ where
printer.print_with_indent(&self.document, indent)
}
fn create_printer(&self) -> Printer {
fn create_printer(&self) -> Printer<'_> {
let source_code = self.context.source_code();
let print_options = self.context.options().as_print_options();

View File

@@ -69,7 +69,7 @@ impl<'a> Resolver<'a> {
}
/// Resolves a module name to a module.
fn resolve_module(&self, module_name: &ModuleName) -> Option<&'a FilePath> {
pub(crate) fn resolve_module(&self, module_name: &ModuleName) -> Option<&'a FilePath> {
let module = resolve_module(self.db, module_name)?;
Some(module.file(self.db)?.path(self.db))
}

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.12.5"
version = "0.12.8"
publish = false
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -89,3 +89,14 @@ print(1)
# ///
#
# Foobar
# Regression tests for https://github.com/astral-sh/ruff/issues/19713
# mypy: ignore-errors
# pyright: ignore-errors
# pyrefly: ignore-errors
# ty: ignore[unresolved-import]
# pyrefly: ignore[unused-import]
print(1)

View File

@@ -162,3 +162,86 @@ except Exception:
exception("An error occurred")
else:
exception("An error occurred")
# Test tuple exceptions
try:
pass
except (Exception,):
pass
try:
pass
except (Exception, ValueError):
pass
try:
pass
except (ValueError, Exception):
pass
try:
pass
except (ValueError, Exception) as e:
print(e)
try:
pass
except (BaseException, TypeError):
pass
try:
pass
except (TypeError, BaseException):
pass
try:
pass
except (Exception, BaseException):
pass
try:
pass
except (BaseException, Exception):
pass
# Test nested tuples
try:
pass
except ((Exception, ValueError), TypeError):
pass
try:
pass
except (ValueError, (BaseException, TypeError)):
pass
# Test valid tuple exceptions (should not trigger)
try:
pass
except (ValueError, TypeError):
pass
try:
pass
except (OSError, FileNotFoundError):
pass
try:
pass
except (OSError, FileNotFoundError) as e:
print(e)
try:
pass
except (Exception, ValueError):
critical("...", exc_info=True)
try:
pass
except (Exception, ValueError):
raise
try:
pass
except (Exception, ValueError) as e:
raise e

View File

@@ -88,3 +88,25 @@ def f_multi_line_string2():
example="example"
)
)
def raise_typing_cast_exception():
import typing
raise typing.cast("Exception", None)
def f_typing_cast_excluded():
from typing import cast
raise cast(RuntimeError, "This should not trigger EM101")
def f_typing_cast_excluded_import():
import typing
raise typing.cast(RuntimeError, "This should not trigger EM101")
def f_typing_cast_excluded_aliased():
from typing import cast as my_cast
raise my_cast(RuntimeError, "This should not trigger EM101")

View File

@@ -39,6 +39,11 @@ class NonEmptyWithInit:
pass
class NonEmptyChildWithInlineComment:
value: int
... # preserve me
class EmptyClass:
...

View File

@@ -38,6 +38,10 @@ class NonEmptyWithInit:
def __init__():
pass
class NonEmptyChildWithInlineComment:
value: int
... # preserve me
# Not violations
class EmptyClass: ...

View File

@@ -129,4 +129,35 @@ print(" x ".rsplit(maxsplit=0))
print(" x ".rsplit(maxsplit=0))
print(" x ".rsplit(sep=None, maxsplit=0))
print(" x ".rsplit(maxsplit=0))
print(" x ".rsplit(sep=None, maxsplit=0))
print(" x ".rsplit(sep=None, maxsplit=0))
# https://github.com/astral-sh/ruff/issues/19581 - embedded quotes in raw strings
r"""simple@example.com
very.common@example.com
FirstName.LastName@EasierReading.org
x@example.com
long.email-address-with-hyphens@and.subdomains.example.com
user.name+tag+sorting@example.com
name/surname@example.com
xample@s.example
" "@example.org
"john..doe"@example.org
mailhost!username@example.org
"very.(),:;<>[]\".VERY.\"very@\\ \"very\".unusual"@strange.example.com
user%example.com@example.org
user-@example.org
I❤CHOCOLATE@example.com
this\ still\"not\\allowed@example.com
stellyamburrr985@example.com
Abc.123@example.com
user+mailbox/department=shipping@example.com
!#$%&'*+-/=?^_`.{|}~@example.com
"Abc@def"@example.com
"Fred\ Bloggs"@example.com
"Joe.\\Blow"@example.com""".split("\n")
r"""first
'no need' to escape
"swap" quote style
"use' ugly triple quotes""".split("\n")

View File

@@ -1,4 +1,4 @@
from pathlib import Path, PurePath
from pathlib import Path, PurePath, PosixPath, PurePosixPath, WindowsPath, PureWindowsPath
from pathlib import Path as pth
@@ -68,3 +68,11 @@ Path(".", "folder")
PurePath(".", "folder")
Path()
from importlib.metadata import PackagePath
_ = PosixPath(".")
_ = PurePosixPath(".")
_ = WindowsPath(".")
_ = PureWindowsPath(".")
_ = PackagePath(".")

View File

@@ -0,0 +1,3 @@
"""Hello, world!"""\
x = 1; y = 2

View File

@@ -182,3 +182,13 @@ kwargs_with_maxsplit = {"maxsplit": 1}
"1,2,3".split(",", **kwargs_with_maxsplit)[0] # TODO: false positive
kwargs_with_maxsplit = {"sep": ",", "maxsplit": 1}
"1,2,3".split(**kwargs_with_maxsplit)[0] # TODO: false positive
## Test unpacked list literal args (starred expressions)
# Errors
"1,2,3".split(",", *[-1])[0]
## Test unpacked list variable args
# Errors
args_list = [-1]
"1,2,3".split(",", *args_list)[0]

View File

@@ -59,3 +59,7 @@ kwargs = {x: x for x in range(10)}
"{1}_{0}".format(1, 2, *args)
"{1}_{0}".format(1, 2)
r"\d{{1,2}} {0}".format(42)
"{{{0}}}".format(123)

View File

@@ -52,3 +52,7 @@ f"{repr(lambda: 1)}"
f"{repr(x := 2)}"
f"{str(object=3)}"
f"{str(x for x in [])}"
f"{str((x for x in []))}"

View File

@@ -315,7 +315,7 @@ impl<'a> Checker<'a> {
}
/// Create a [`Generator`] to generate source code based on the current AST state.
pub(crate) fn generator(&self) -> Generator {
pub(crate) fn generator(&self) -> Generator<'_> {
Generator::new(self.stylist.indentation(), self.stylist.line_ending())
}
@@ -590,6 +590,16 @@ impl<'a> Checker<'a> {
member,
})
}
/// Return the [`LintContext`] for the current analysis.
///
/// Note that you should always prefer calling methods like `settings`, `report_diagnostic`, or
/// `is_rule_enabled` directly on [`Checker`] when possible. This method exists only for the
/// rare cases where rules or helper functions need to be accessed by both a `Checker` and a
/// `LintContext` in different analysis phases.
pub(crate) const fn context(&self) -> &'a LintContext<'a> {
self.context
}
}
pub(crate) struct TypingImporter<'a, 'b> {

View File

@@ -8,14 +8,14 @@ use libcst_native::{
};
use ruff_python_codegen::Stylist;
pub(crate) fn match_module(module_text: &str) -> Result<Module> {
pub(crate) fn match_module(module_text: &str) -> Result<Module<'_>> {
match libcst_native::parse_module(module_text, None) {
Ok(module) => Ok(module),
Err(_) => bail!("Failed to extract CST from source"),
}
}
pub(crate) fn match_statement(statement_text: &str) -> Result<Statement> {
pub(crate) fn match_statement(statement_text: &str) -> Result<Statement<'_>> {
match libcst_native::parse_statement(statement_text) {
Ok(statement) => Ok(statement),
Err(_) => bail!("Failed to extract statement from source"),
@@ -220,7 +220,7 @@ pub(crate) fn match_if<'a, 'b>(statement: &'a mut Statement<'b>) -> Result<&'a m
///
/// If the expression is not guaranteed to be valid as a standalone expression (e.g., if it may
/// span multiple lines and/or require parentheses), use [`transform_expression`] instead.
pub(crate) fn match_expression(expression_text: &str) -> Result<Expression> {
pub(crate) fn match_expression(expression_text: &str) -> Result<Expression<'_>> {
match libcst_native::parse_expression(expression_text) {
Ok(expression) => Ok(expression),
Err(_) => bail!("Failed to extract expression from source"),

View File

@@ -13,7 +13,7 @@ use ruff_text_size::{Ranged, TextSize};
use crate::Locator;
/// Extract doc lines (standalone comments) from a token sequence.
pub(crate) fn doc_lines_from_tokens(tokens: &Tokens) -> DocLines {
pub(crate) fn doc_lines_from_tokens(tokens: &Tokens) -> DocLines<'_> {
DocLines::new(tokens)
}

View File

@@ -32,7 +32,7 @@ impl<'a> Docstring<'a> {
}
/// The contents of the docstring, excluding the opening and closing quotes.
pub(crate) fn body(&self) -> DocstringBody {
pub(crate) fn body(&self) -> DocstringBody<'_> {
DocstringBody { docstring: self }
}

View File

@@ -208,7 +208,7 @@ impl<'a> SectionContexts<'a> {
self.contexts.len()
}
pub(crate) fn iter(&self) -> SectionContextsIter {
pub(crate) fn iter(&self) -> SectionContextsIter<'_> {
SectionContextsIter {
docstring_body: self.docstring.body(),
inner: self.contexts.iter(),

View File

@@ -56,13 +56,19 @@ impl<'a> Insertion<'a> {
stylist: &Stylist,
) -> Insertion<'static> {
// Skip over any docstrings.
let mut location = if let Some(location) = match_docstring_end(body) {
let mut location = if let Some(mut location) = match_docstring_end(body) {
// If the first token after the docstring is a semicolon, insert after the semicolon as
// an inline statement.
if let Some(offset) = match_semicolon(locator.after(location)) {
return Insertion::inline(" ", location.add(offset).add(TextSize::of(';')), ";");
}
// If the first token after the docstring is a continuation character (i.e. "\"), advance
// an additional row to prevent inserting in the same logical line.
if match_continuation(locator.after(location)).is_some() {
location = locator.full_line_end(location);
}
// Otherwise, advance to the next row.
locator.full_line_end(location)
} else {
@@ -323,7 +329,7 @@ mod tests {
#[test]
fn start_of_file() -> Result<()> {
fn insert(contents: &str) -> Result<Insertion> {
fn insert(contents: &str) -> Result<Insertion<'_>> {
let parsed = parse_module(contents)?;
let locator = Locator::new(contents);
let stylist = Stylist::from_tokens(parsed.tokens(), locator.contents());
@@ -363,6 +369,16 @@ mod tests {
Insertion::own_line("", TextSize::from(20), "\n")
);
let contents = r#"
"""Hello, world!"""\
"#
.trim_start();
assert_eq!(
insert(contents)?,
Insertion::own_line("", TextSize::from(22), "\n")
);
let contents = r"
x = 1
"
@@ -434,7 +450,7 @@ x = 1
#[test]
fn start_of_block() {
fn insert(contents: &str, offset: TextSize) -> Insertion {
fn insert(contents: &str, offset: TextSize) -> Insertion<'_> {
let parsed = parse_module(contents).unwrap();
let locator = Locator::new(contents);
let stylist = Stylist::from_tokens(parsed.tokens(), locator.contents());

View File
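The `Insertion` change above handles a corner case: a module docstring followed by a backslash line continuation (as in the new `"""Hello, world!"""\` fixture) must not receive an import on the same logical line. A minimal standalone sketch of the idea using plain byte offsets and string scanning; the helper name and shape are illustrative, not ruff's `Locator` API.

```rust
/// Illustrative helper (not ruff's API): given the byte offset just past a
/// module docstring, pick an offset where an import can be inserted on its
/// own line, skipping an inline `;` and a trailing `\` line continuation.
fn insertion_offset(source: &str, docstring_end: usize) -> usize {
    let rest = &source[docstring_end..];
    let trimmed = rest.trim_start_matches(|c: char| c == ' ' || c == '\t');
    let leading_ws = rest.len() - trimmed.len();

    // A semicolon means we can insert inline, right after it.
    if trimmed.starts_with(';') {
        return docstring_end + leading_ws + 1;
    }

    // End of the current physical line (past the newline, if any).
    let line_end = rest
        .find('\n')
        .map_or(source.len(), |i| docstring_end + i + 1);

    // A continuation character keeps the logical line going, so advance one
    // more physical line before inserting.
    if trimmed.starts_with('\\') {
        let next = &source[line_end..];
        return next.find('\n').map_or(source.len(), |i| line_end + i + 1);
    }

    line_end
}

fn main() {
    let src = "\"\"\"Hello, world!\"\"\"\\\nx = 1; y = 2\n";
    // The continuation pushes the insertion point past `x = 1; y = 2`.
    assert_eq!(insertion_offset(src, 19), src.len());
}
```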

@@ -49,7 +49,7 @@ impl<'a> Locator<'a> {
self.index.get()
}
pub fn to_source_code(&self) -> SourceCode {
pub fn to_source_code(&self) -> SourceCode<'_, '_> {
SourceCode::new(self.contents, self.to_index())
}

View File

@@ -33,10 +33,8 @@ impl Emitter for GithubEmitter {
write!(
writer,
"::error title=Ruff{code},file={file},line={row},col={column},endLine={end_row},endColumn={end_column}::",
code = diagnostic
.secondary_code()
.map_or_else(String::new, |code| format!(" ({code})")),
"::error title=Ruff ({code}),file={file},line={row},col={column},endLine={end_row},endColumn={end_column}::",
code = diagnostic.secondary_code_or_id(),
file = filename,
row = source_location.line,
column = source_location.column,
@@ -54,6 +52,8 @@ impl Emitter for GithubEmitter {
if let Some(code) = diagnostic.secondary_code() {
write!(writer, " {code}")?;
} else {
write!(writer, " {id}:", id = diagnostic.id())?;
}
writeln!(writer, " {}", diagnostic.body())?;

View File
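The emitter changes above, and the snapshot churn further down where `SyntaxError:` becomes `invalid-syntax:`, share one idea: a diagnostic without a secondary rule code now prints its diagnostic id instead of a bare message or the old `SyntaxError:` prefix. A minimal sketch of the fallback with a toy type; the field layout is invented for illustration.

```rust
// Toy diagnostic with an invented field layout (ruff's real `Diagnostic`
// is richer); it only illustrates the fallback.
struct Diagnostic {
    id: &'static str,
    secondary_code: Option<&'static str>,
}

impl Diagnostic {
    // Emitters now always have an identifier to print: the rule's secondary
    // code when present, otherwise the diagnostic id (e.g. `invalid-syntax`).
    fn secondary_code_or_id(&self) -> &'static str {
        self.secondary_code.unwrap_or(self.id)
    }
}

fn main() {
    let lint = Diagnostic { id: "unused-import", secondary_code: Some("F401") };
    let syntax_error = Diagnostic { id: "invalid-syntax", secondary_code: None };
    assert_eq!(lint.secondary_code_or_id(), "F401");
    assert_eq!(syntax_error.secondary_code_or_id(), "invalid-syntax");
}
```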

@@ -61,22 +61,17 @@ impl Serialize for SerializedMessages<'_> {
let mut fingerprints = HashSet::<u64>::with_capacity(self.diagnostics.len());
for diagnostic in self.diagnostics {
let start_location = diagnostic.expect_ruff_start_location();
let end_location = diagnostic.expect_ruff_end_location();
let filename = diagnostic.expect_ruff_filename();
let lines = if self.context.is_notebook(&filename) {
let (start_location, end_location) = if self.context.is_notebook(&filename) {
// We can't give a reasonable location for the structured formats,
// so we show one that's clearly a fallback
json!({
"begin": 1,
"end": 1
})
Default::default()
} else {
json!({
"begin": start_location.line,
"end": end_location.line
})
(
diagnostic.expect_ruff_start_location(),
diagnostic.expect_ruff_end_location(),
)
};
let path = self.project_dir.as_ref().map_or_else(
@@ -111,8 +106,11 @@ impl Serialize for SerializedMessages<'_> {
"fingerprint": format!("{:x}", message_fingerprint),
"location": {
"path": path,
"lines": lines
}
"positions": {
"begin": start_location,
"end": end_location,
},
},
});
s.serialize_element(&value)?;

View File

@@ -33,8 +33,7 @@ mod text;
/// Creates a `Diagnostic` from a syntax error, with the format expected by Ruff.
///
/// This is almost identical to `ruff_db::diagnostic::create_syntax_error_diagnostic`, except the
/// `message` is stored as the primary diagnostic message instead of on the primary annotation, and
/// `SyntaxError: ` is prepended to the message.
/// `message` is stored as the primary diagnostic message instead of on the primary annotation.
///
/// TODO(brent) These should be unified at some point, but we keep them separate for now to avoid a
/// ton of snapshot changes while combining ruff's diagnostic type with `Diagnostic`.
@@ -43,11 +42,7 @@ pub fn create_syntax_error_diagnostic(
message: impl std::fmt::Display,
range: impl Ranged,
) -> Diagnostic {
let mut diag = Diagnostic::new(
DiagnosticId::InvalidSyntax,
Severity::Error,
format_args!("SyntaxError: {message}"),
);
let mut diag = Diagnostic::new(DiagnosticId::InvalidSyntax, Severity::Error, message);
let span = span.into().with_range(range.range());
diag.annotate(Annotation::primary(span));
diag
@@ -75,7 +70,15 @@ where
);
let span = Span::from(file).with_range(range);
let annotation = Annotation::primary(span);
let mut annotation = Annotation::primary(span);
// The `0..0` range is used to highlight file-level diagnostics.
//
// TODO(brent) We should instead set this flag on annotations for individual lint rules that
// actually need it, but we need to be able to cache the new diagnostic model first. See
// https://github.com/astral-sh/ruff/issues/19688.
if range == TextRange::default() {
annotation.set_file_level(true);
}
diagnostic.annotate(annotation);
if let Some(suggestion) = suggestion {
@@ -146,7 +149,7 @@ impl Deref for MessageWithLocation<'_> {
fn group_diagnostics_by_filename(
diagnostics: &[Diagnostic],
) -> BTreeMap<String, Vec<MessageWithLocation>> {
) -> BTreeMap<String, Vec<MessageWithLocation<'_>>> {
let mut grouped_messages = BTreeMap::default();
for diagnostic in diagnostics {
grouped_messages

View File

@@ -27,7 +27,10 @@ impl Emitter for SarifEmitter {
.map(SarifResult::from_message)
.collect::<Result<Vec<_>>>()?;
let unique_rules: HashSet<_> = results.iter().filter_map(|result| result.code).collect();
let unique_rules: HashSet<_> = results
.iter()
.filter_map(|result| result.code.as_secondary_code())
.collect();
let mut rules: Vec<SarifRule> = unique_rules.into_iter().map(SarifRule::from).collect();
rules.sort_by(|a, b| a.code.cmp(b.code));
@@ -109,9 +112,40 @@ impl Serialize for SarifRule<'_> {
}
}
#[derive(Debug)]
enum RuleCode<'a> {
SecondaryCode(&'a SecondaryCode),
LintId(&'a str),
}
impl RuleCode<'_> {
fn as_secondary_code(&self) -> Option<&SecondaryCode> {
match self {
RuleCode::SecondaryCode(code) => Some(code),
RuleCode::LintId(_) => None,
}
}
fn as_str(&self) -> &str {
match self {
RuleCode::SecondaryCode(code) => code.as_str(),
RuleCode::LintId(id) => id,
}
}
}
impl<'a> From<&'a Diagnostic> for RuleCode<'a> {
fn from(code: &'a Diagnostic) -> Self {
match code.secondary_code() {
Some(diagnostic) => Self::SecondaryCode(diagnostic),
None => Self::LintId(code.id().as_str()),
}
}
}
#[derive(Debug)]
struct SarifResult<'a> {
code: Option<&'a SecondaryCode>,
code: RuleCode<'a>,
level: String,
message: String,
uri: String,
@@ -128,7 +162,7 @@ impl<'a> SarifResult<'a> {
let end_location = message.expect_ruff_end_location();
let path = normalize_path(&*message.expect_ruff_filename());
Ok(Self {
code: message.secondary_code(),
code: RuleCode::from(message),
level: "error".to_string(),
message: message.body().to_string(),
uri: url::Url::from_file_path(&path)
@@ -148,7 +182,7 @@ impl<'a> SarifResult<'a> {
let end_location = message.expect_ruff_end_location();
let path = normalize_path(&*message.expect_ruff_filename());
Ok(Self {
code: message.secondary_code(),
code: RuleCode::from(message),
level: "error".to_string(),
message: message.body().to_string(),
uri: path.display().to_string(),
@@ -183,7 +217,7 @@ impl Serialize for SarifResult<'_> {
}
}
}],
"ruleId": self.code,
"ruleId": self.code.as_str(),
})
.serialize(serializer)
}

View File

@@ -1,7 +1,6 @@
---
source: crates/ruff_linter/src/message/github.rs
expression: content
snapshot_kind: text
---
::error title=Ruff,file=syntax_errors.py,line=1,col=15,endLine=2,endColumn=1::syntax_errors.py:1:15: SyntaxError: Expected one or more symbol names after import
::error title=Ruff,file=syntax_errors.py,line=3,col=12,endLine=4,endColumn=1::syntax_errors.py:3:12: SyntaxError: Expected ')', found newline
::error title=Ruff (invalid-syntax),file=syntax_errors.py,line=1,col=15,endLine=2,endColumn=1::syntax_errors.py:1:15: invalid-syntax: Expected one or more symbol names after import
::error title=Ruff (invalid-syntax),file=syntax_errors.py,line=3,col=12,endLine=4,endColumn=1::syntax_errors.py:3:12: invalid-syntax: Expected ')', found newline

View File

@@ -8,11 +8,17 @@ expression: redact_fingerprint(&content)
"description": "`os` imported but unused",
"fingerprint": "<redacted>",
"location": {
"lines": {
"begin": 1,
"end": 1
},
"path": "fib.py"
"path": "fib.py",
"positions": {
"begin": {
"column": 8,
"line": 1
},
"end": {
"column": 10,
"line": 1
}
}
},
"severity": "major"
},
@@ -21,11 +27,17 @@ expression: redact_fingerprint(&content)
"description": "Local variable `x` is assigned to but never used",
"fingerprint": "<redacted>",
"location": {
"lines": {
"begin": 6,
"end": 6
},
"path": "fib.py"
"path": "fib.py",
"positions": {
"begin": {
"column": 5,
"line": 6
},
"end": {
"column": 6,
"line": 6
}
}
},
"severity": "major"
},
@@ -34,11 +46,17 @@ expression: redact_fingerprint(&content)
"description": "Undefined name `a`",
"fingerprint": "<redacted>",
"location": {
"lines": {
"begin": 1,
"end": 1
},
"path": "undef.py"
"path": "undef.py",
"positions": {
"begin": {
"column": 4,
"line": 1
},
"end": {
"column": 5,
"line": 1
}
}
},
"severity": "major"
}

View File

@@ -8,11 +8,17 @@ expression: redact_fingerprint(&content)
"description": "Expected one or more symbol names after import",
"fingerprint": "<redacted>",
"location": {
"lines": {
"begin": 1,
"end": 2
},
"path": "syntax_errors.py"
"path": "syntax_errors.py",
"positions": {
"begin": {
"column": 15,
"line": 1
},
"end": {
"column": 1,
"line": 2
}
}
},
"severity": "major"
},
@@ -21,11 +27,17 @@ expression: redact_fingerprint(&content)
"description": "Expected ')', found newline",
"fingerprint": "<redacted>",
"location": {
"lines": {
"begin": 3,
"end": 4
},
"path": "syntax_errors.py"
"path": "syntax_errors.py",
"positions": {
"begin": {
"column": 12,
"line": 3
},
"end": {
"column": 1,
"line": 4
}
}
},
"severity": "major"
}

View File

@@ -1,8 +1,7 @@
---
source: crates/ruff_linter/src/message/grouped.rs
expression: content
snapshot_kind: text
---
syntax_errors.py:
1:15 SyntaxError: Expected one or more symbol names after import
3:12 SyntaxError: Expected ')', found newline
1:15 invalid-syntax: Expected one or more symbol names after import
3:12 invalid-syntax: Expected ')', found newline

View File

@@ -81,7 +81,7 @@ expression: value
"rules": [
{
"fullDescription": {
"text": "## What it does\nChecks for unused imports.\n\n## Why is this bad?\nUnused imports add a performance overhead at runtime, and risk creating\nimport cycles. They also increase the cognitive load of reading the code.\n\nIf an import statement is used to check for the availability or existence\nof a module, consider using `importlib.util.find_spec` instead.\n\nIf an import statement is used to re-export a symbol as part of a module's\npublic interface, consider using a \"redundant\" import alias, which\ninstructs Ruff (and other tools) to respect the re-export, and avoid\nmarking it as unused, as in:\n\n```python\nfrom module import member as member\n```\n\nAlternatively, you can use `__all__` to declare a symbol as part of the module's\ninterface, as in:\n\n```python\n# __init__.py\nimport some_module\n\n__all__ = [\"some_module\"]\n```\n\n## Fix safety\n\nFixes to remove unused imports are safe, except in `__init__.py` files.\n\nApplying fixes to `__init__.py` files is currently in preview. The fix offered depends on the\ntype of the unused import. Ruff will suggest a safe fix to export first-party imports with\neither a redundant alias or, if already present in the file, an `__all__` entry. If multiple\n`__all__` declarations are present, Ruff will not offer a fix. Ruff will suggest an unsafe fix\nto remove third-party and standard library imports -- the fix is unsafe because the module's\ninterface changes.\n\n## Example\n\n```python\nimport numpy as np # unused import\n\n\ndef area(radius):\n return 3.14 * radius**2\n```\n\nUse instead:\n\n```python\ndef area(radius):\n return 3.14 * radius**2\n```\n\nTo check the availability of a module, use `importlib.util.find_spec`:\n\n```python\nfrom importlib.util import find_spec\n\nif find_spec(\"numpy\") is not None:\n print(\"numpy is installed\")\nelse:\n print(\"numpy is not installed\")\n```\n\n## Preview\nWhen [preview](https://docs.astral.sh/ruff/preview/) is enabled,\nthe criterion for determining whether an import is first-party\nis stricter, which could affect the suggested fix. See [this FAQ section](https://docs.astral.sh/ruff/faq/#how-does-ruff-determine-which-of-my-imports-are-first-party-third-party-etc) for more details.\n\n## Options\n- `lint.ignore-init-module-imports`\n- `lint.pyflakes.allowed-unused-imports`\n\n## References\n- [Python documentation: `import`](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement)\n- [Python documentation: `importlib.util.find_spec`](https://docs.python.org/3/library/importlib.html#importlib.util.find_spec)\n- [Typing documentation: interface conventions](https://typing.python.org/en/latest/source/libraries.html#library-interface-public-and-private-symbols)\n"
"text": "## What it does\nChecks for unused imports.\n\n## Why is this bad?\nUnused imports add a performance overhead at runtime, and risk creating\nimport cycles. They also increase the cognitive load of reading the code.\n\nIf an import statement is used to check for the availability or existence\nof a module, consider using `importlib.util.find_spec` instead.\n\nIf an import statement is used to re-export a symbol as part of a module's\npublic interface, consider using a \"redundant\" import alias, which\ninstructs Ruff (and other tools) to respect the re-export, and avoid\nmarking it as unused, as in:\n\n```python\nfrom module import member as member\n```\n\nAlternatively, you can use `__all__` to declare a symbol as part of the module's\ninterface, as in:\n\n```python\n# __init__.py\nimport some_module\n\n__all__ = [\"some_module\"]\n```\n\n## Fix safety\n\nFixes to remove unused imports are safe, except in `__init__.py` files.\n\nApplying fixes to `__init__.py` files is currently in preview. The fix offered depends on the\ntype of the unused import. Ruff will suggest a safe fix to export first-party imports with\neither a redundant alias or, if already present in the file, an `__all__` entry. If multiple\n`__all__` declarations are present, Ruff will not offer a fix. Ruff will suggest an unsafe fix\nto remove third-party and standard library imports -- the fix is unsafe because the module's\ninterface changes.\n\n## Example\n\n```python\nimport numpy as np # unused import\n\n\ndef area(radius):\n return 3.14 * radius**2\n```\n\nUse instead:\n\n```python\ndef area(radius):\n return 3.14 * radius**2\n```\n\nTo check the availability of a module, use `importlib.util.find_spec`:\n\n```python\nfrom importlib.util import find_spec\n\nif find_spec(\"numpy\") is not None:\n print(\"numpy is installed\")\nelse:\n print(\"numpy is not installed\")\n```\n\n## Preview\nWhen [preview](https://docs.astral.sh/ruff/preview/) is enabled,\nthe criterion for determining whether an import is first-party\nis stricter, which could affect the suggested fix. See [this FAQ section](https://docs.astral.sh/ruff/faq/#how-does-ruff-determine-which-of-my-imports-are-first-party-third-party-etc) for more details.\n\n## Options\n- `lint.ignore-init-module-imports`\n- `lint.pyflakes.allowed-unused-imports`\n\n## References\n- [Python documentation: `import`](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement)\n- [Python documentation: `importlib.util.find_spec`](https://docs.python.org/3/library/importlib.html#importlib.util.find_spec)\n- [Typing documentation: interface conventions](https://typing.python.org/en/latest/spec/distributing.html#library-interface-public-and-private-symbols)\n"
},
"help": {
"text": "`{name}` imported but unused; consider using `importlib.util.find_spec` to test for availability"

View File

@@ -2,7 +2,7 @@
source: crates/ruff_linter/src/message/text.rs
expression: content
---
syntax_errors.py:1:15: SyntaxError: Expected one or more symbol names after import
syntax_errors.py:1:15: invalid-syntax: Expected one or more symbol names after import
|
1 | from os import
| ^
@@ -11,7 +11,7 @@ syntax_errors.py:1:15: SyntaxError: Expected one or more symbol names after impo
4 | def bar():
|
syntax_errors.py:3:12: SyntaxError: Expected ')', found newline
syntax_errors.py:3:12: invalid-syntax: Expected ')', found newline
|
1 | from os import
2 |

View File

@@ -13,7 +13,6 @@ use ruff_notebook::NotebookIndex;
use ruff_source_file::OneIndexed;
use ruff_text_size::{TextLen, TextRange, TextSize};
use crate::line_width::{IndentWidth, LineWidthBuilder};
use crate::message::diff::Diff;
use crate::message::{Emitter, EmitterContext};
use crate::settings::types::UnsafeFixes;
@@ -155,7 +154,12 @@ impl Display for RuleCodeAndBody<'_> {
body = self.message.body(),
)
} else {
f.write_str(self.message.body())
write!(
f,
"{code}: {body}",
code = self.message.id().as_str().red().bold(),
body = self.message.body(),
)
}
}
}
@@ -229,7 +233,7 @@ impl Display for MessageCodeFrame<'_> {
let start_offset = source_code.line_start(start_index);
let end_offset = source_code.line_end(end_index);
let source = replace_whitespace_and_unprintable(
let source = replace_unprintable(
source_code.slice(TextRange::new(start_offset, end_offset)),
self.message.expect_range() - start_offset,
)
@@ -272,16 +276,20 @@ impl Display for MessageCodeFrame<'_> {
}
/// Given some source code and an annotation range, this routine replaces
/// tabs with ASCII whitespace, and unprintable characters with printable
/// representations of them.
/// unprintable characters with printable representations of them.
///
/// The source code returned has an annotation that is updated to reflect
/// changes made to the source code (if any).
fn replace_whitespace_and_unprintable(source: &str, annotation_range: TextRange) -> SourceCode {
///
/// We don't need to normalize whitespace, such as converting tabs to spaces,
/// because `annotate-snippets` handles that internally. Similarly, it's safe to
/// modify the annotation ranges by inserting 3-byte Unicode replacements
/// because `annotate-snippets` will account for their actual width when
/// rendering and displaying the column to the user.
fn replace_unprintable(source: &str, annotation_range: TextRange) -> SourceCode<'_> {
let mut result = String::new();
let mut last_end = 0;
let mut range = annotation_range;
let mut line_width = LineWidthBuilder::new(IndentWidth::default());
// Updates the range given by the caller whenever a single byte (at
// `index` in `source`) is replaced with `len` bytes.
@@ -310,19 +318,7 @@ fn replace_whitespace_and_unprintable(source: &str, annotation_range: TextRange)
};
for (index, c) in source.char_indices() {
let old_width = line_width.get();
line_width = line_width.add_char(c);
if matches!(c, '\t') {
let tab_width = u32::try_from(line_width.get() - old_width)
.expect("small width because of tab size");
result.push_str(&source[last_end..index]);
for _ in 0..tab_width {
result.push(' ');
}
last_end = index + 1;
update_range(index, tab_width);
} else if let Some(printable) = unprintable_replacement(c) {
if let Some(printable) = unprintable_replacement(c) {
result.push_str(&source[last_end..index]);
result.push(printable);
last_end = index + 1;

View File
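Per the new doc comment above, tab expansion is left to `annotate-snippets`, so the renderer only substitutes unprintable characters and shifts the annotation range by the extra bytes each replacement introduces. A standalone sketch of that bookkeeping, with an invented (and much smaller) replacement table; it mirrors the control flow, not ruff's actual function.

```rust
use std::ops::Range;

/// Hypothetical replacement table (ruff's real table differs): map a few
/// unprintable characters to printable symbols, each 3 bytes in UTF-8.
fn unprintable_replacement(c: char) -> Option<char> {
    match c {
        '\x07' => Some('␇'),     // BEL
        '\x08' => Some('␈'),     // BS
        '\x1b' => Some('␛'),     // ESC
        '\u{200b}' => Some('␣'), // zero-width space, made visible
        _ => None,
    }
}

/// Minimal sketch: replace unprintable characters and shift an annotation
/// range by the extra bytes inserted before its start and end.
fn replace_unprintable(source: &str, mut annotation: Range<usize>) -> (String, Range<usize>) {
    let mut result = String::new();
    let mut last_end = 0;
    for (index, c) in source.char_indices() {
        if let Some(printable) = unprintable_replacement(c) {
            result.push_str(&source[last_end..index]);
            result.push(printable);
            last_end = index + c.len_utf8();
            let grow = printable.len_utf8() - c.len_utf8();
            if index < annotation.start {
                annotation.start += grow;
            }
            if index < annotation.end {
                annotation.end += grow;
            }
        }
    }
    result.push_str(&source[last_end..]);
    (result, annotation)
}

fn main() {
    // Annotate the `x` that follows a BEL character.
    let (text, range) = replace_unprintable("\x07x = 1", 1..2);
    assert_eq!(&text[range.clone()], "x");
    println!("{text} ({range:?})");
}
```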

@@ -99,7 +99,7 @@ pub(crate) struct Codes<'a> {
impl Codes<'_> {
/// Returns an iterator over the [`Code`]s in the `noqa` directive.
pub(crate) fn iter(&self) -> std::slice::Iter<Code> {
pub(crate) fn iter(&self) -> std::slice::Iter<'_, Code<'_>> {
self.codes.iter()
}
@@ -306,7 +306,7 @@ impl<'a> FileNoqaDirectives<'a> {
Self(lines)
}
pub(crate) fn lines(&self) -> &[FileNoqaDirectiveLine] {
pub(crate) fn lines(&self) -> &[FileNoqaDirectiveLine<'_>] {
&self.0
}
@@ -1106,7 +1106,10 @@ impl<'a> NoqaDirectives<'a> {
Self { inner: directives }
}
pub(crate) fn find_line_with_directive(&self, offset: TextSize) -> Option<&NoqaDirectiveLine> {
pub(crate) fn find_line_with_directive(
&self,
offset: TextSize,
) -> Option<&NoqaDirectiveLine<'_>> {
self.find_line_index(offset).map(|index| &self.inner[index])
}
@@ -1139,7 +1142,7 @@ impl<'a> NoqaDirectives<'a> {
.ok()
}
pub(crate) fn lines(&self) -> &[NoqaDirectiveLine] {
pub(crate) fn lines(&self) -> &[NoqaDirectiveLine<'_>] {
&self.inner
}

View File

@@ -21,6 +21,7 @@ static ALLOWLIST_REGEX: LazyLock<Regex> = LazyLock::new(|| {
(?:
# Case-sensitive
pyright
| pyrefly
| mypy:
| type:\s*ignore
| SPDX-License-Identifier:

View File

@@ -75,6 +75,22 @@ impl Violation for BlindExcept {
}
}
fn contains_blind_exception<'a>(
semantic: &'a SemanticModel,
expr: &'a Expr,
) -> Option<(&'a str, ruff_text_size::TextRange)> {
match expr {
Expr::Tuple(ast::ExprTuple { elts, .. }) => elts
.iter()
.find_map(|elt| contains_blind_exception(semantic, elt)),
_ => {
let builtin_exception_type = semantic.resolve_builtin_symbol(expr)?;
matches!(builtin_exception_type, "BaseException" | "Exception")
.then(|| (builtin_exception_type, expr.range()))
}
}
}
/// BLE001
pub(crate) fn blind_except(
checker: &Checker,
@@ -87,12 +103,9 @@ pub(crate) fn blind_except(
};
let semantic = checker.semantic();
let Some(builtin_exception_type) = semantic.resolve_builtin_symbol(type_) else {
let Some((builtin_exception_type, range)) = contains_blind_exception(semantic, type_) else {
return;
};
if !matches!(builtin_exception_type, "BaseException" | "Exception") {
return;
}
// If the exception is re-raised, don't flag an error.
let mut visitor = ReraiseVisitor::new(name);
@@ -110,9 +123,9 @@ pub(crate) fn blind_except(
checker.report_diagnostic(
BlindExcept {
name: builtin_exception_type.to_string(),
name: builtin_exception_type.into(),
},
type_.range(),
range,
);
}

View File
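The new `contains_blind_exception` helper above teaches BLE001 to look inside exception tuples, so `except (ValueError, Exception):` is now flagged at the offending element rather than skipped. A standalone sketch of the recursive walk over a toy expression type; ruff's real `Expr` and `SemanticModel` are far richer, and this only mirrors the control flow.

```rust
/// Toy expression type standing in for the parts of the AST this check
/// cares about (illustrative only).
enum Expr {
    Name(&'static str),
    Tuple(Vec<Expr>),
}

/// Minimal sketch of the recursive walk: return the first blind exception
/// name found in the handler expression, descending into (nested) tuples.
fn contains_blind_exception(expr: &Expr) -> Option<&'static str> {
    match expr {
        Expr::Tuple(elts) => elts.iter().find_map(contains_blind_exception),
        Expr::Name(name) if matches!(*name, "Exception" | "BaseException") => Some(*name),
        Expr::Name(_) => None,
    }
}

fn main() {
    // `except (ValueError, Exception):` -> flag `Exception`
    let handler = Expr::Tuple(vec![Expr::Name("ValueError"), Expr::Name("Exception")]);
    assert_eq!(contains_blind_exception(&handler), Some("Exception"));

    // `except (ValueError, TypeError):` -> no violation
    let ok = Expr::Tuple(vec![Expr::Name("ValueError"), Expr::Name("TypeError")]);
    assert_eq!(contains_blind_exception(&ok), None);

    // Nested tuples are searched too: `except ((Exception, ValueError), TypeError):`
    let nested = Expr::Tuple(vec![
        Expr::Tuple(vec![Expr::Name("Exception"), Expr::Name("ValueError")]),
        Expr::Name("TypeError"),
    ]);
    assert_eq!(contains_blind_exception(&nested), Some("Exception"));
}
```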

@@ -147,3 +147,93 @@ BLE.py:131:8: BLE001 Do not catch blind exception: `Exception`
| ^^^^^^^^^ BLE001
132 | critical("...", exc_info=None)
|
BLE.py:169:9: BLE001 Do not catch blind exception: `Exception`
|
167 | try:
168 | pass
169 | except (Exception,):
| ^^^^^^^^^ BLE001
170 | pass
|
BLE.py:174:9: BLE001 Do not catch blind exception: `Exception`
|
172 | try:
173 | pass
174 | except (Exception, ValueError):
| ^^^^^^^^^ BLE001
175 | pass
|
BLE.py:179:21: BLE001 Do not catch blind exception: `Exception`
|
177 | try:
178 | pass
179 | except (ValueError, Exception):
| ^^^^^^^^^ BLE001
180 | pass
|
BLE.py:184:21: BLE001 Do not catch blind exception: `Exception`
|
182 | try:
183 | pass
184 | except (ValueError, Exception) as e:
| ^^^^^^^^^ BLE001
185 | print(e)
|
BLE.py:189:9: BLE001 Do not catch blind exception: `BaseException`
|
187 | try:
188 | pass
189 | except (BaseException, TypeError):
| ^^^^^^^^^^^^^ BLE001
190 | pass
|
BLE.py:194:20: BLE001 Do not catch blind exception: `BaseException`
|
192 | try:
193 | pass
194 | except (TypeError, BaseException):
| ^^^^^^^^^^^^^ BLE001
195 | pass
|
BLE.py:199:9: BLE001 Do not catch blind exception: `Exception`
|
197 | try:
198 | pass
199 | except (Exception, BaseException):
| ^^^^^^^^^ BLE001
200 | pass
|
BLE.py:204:9: BLE001 Do not catch blind exception: `BaseException`
|
202 | try:
203 | pass
204 | except (BaseException, Exception):
| ^^^^^^^^^^^^^ BLE001
205 | pass
|
BLE.py:210:10: BLE001 Do not catch blind exception: `Exception`
|
208 | try:
209 | pass
210 | except ((Exception, ValueError), TypeError):
| ^^^^^^^^^ BLE001
211 | pass
|
BLE.py:215:22: BLE001 Do not catch blind exception: `BaseException`
|
213 | try:
214 | pass
215 | except (ValueError, (BaseException, TypeError)):
| ^^^^^^^^^^^^^ BLE001
216 | pass
|

View File

@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/flake8_commas/mod.rs
---
COM81_syntax_error.py:3:5: SyntaxError: Starred expression cannot be used here
COM81_syntax_error.py:3:5: invalid-syntax: Starred expression cannot be used here
|
1 | # Check for `flake8-commas` violation for a file containing syntax errors.
2 | (
@@ -10,7 +10,7 @@ COM81_syntax_error.py:3:5: SyntaxError: Starred expression cannot be used here
4 | )
|
COM81_syntax_error.py:6:9: SyntaxError: Type parameter list cannot be empty
COM81_syntax_error.py:6:9: invalid-syntax: Type parameter list cannot be empty
|
4 | )
5 |

View File

@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/flake8_commas/mod.rs
---
COM81_syntax_error.py:3:5: SyntaxError: Starred expression cannot be used here
COM81_syntax_error.py:3:5: invalid-syntax: Starred expression cannot be used here
|
1 | # Check for `flake8-commas` violation for a file containing syntax errors.
2 | (
@@ -10,7 +10,7 @@ COM81_syntax_error.py:3:5: SyntaxError: Starred expression cannot be used here
4 | )
|
COM81_syntax_error.py:6:9: SyntaxError: Type parameter list cannot be empty
COM81_syntax_error.py:6:9: invalid-syntax: Type parameter list cannot be empty
|
4 | )
5 |

View File

@@ -182,60 +182,27 @@ impl Violation for DotFormatInException {
/// EM101, EM102, EM103
pub(crate) fn string_in_exception(checker: &Checker, stmt: &Stmt, exc: &Expr) {
if let Expr::Call(ast::ExprCall {
let Expr::Call(ast::ExprCall {
func,
arguments: Arguments { args, .. },
..
}) = exc
{
if let Some(first) = args.first() {
match first {
// Check for string literals.
Expr::StringLiteral(ast::ExprStringLiteral { value: string, .. }) => {
if checker.is_rule_enabled(Rule::RawStringInException) {
if string.len() >= checker.settings().flake8_errmsg.max_string_length {
let mut diagnostic =
checker.report_diagnostic(RawStringInException, first.range());
if let Some(indentation) =
whitespace::indentation(checker.source(), stmt)
{
diagnostic.set_fix(generate_fix(
stmt,
first,
indentation,
checker.stylist(),
checker.locator(),
));
}
}
}
}
// Check for byte string literals.
Expr::BytesLiteral(ast::ExprBytesLiteral { value: bytes, .. }) => {
if checker.settings().rules.enabled(Rule::RawStringInException) {
if bytes.len() >= checker.settings().flake8_errmsg.max_string_length
&& is_raise_exception_byte_string_enabled(checker.settings())
{
let mut diagnostic =
checker.report_diagnostic(RawStringInException, first.range());
if let Some(indentation) =
whitespace::indentation(checker.source(), stmt)
{
diagnostic.set_fix(generate_fix(
stmt,
first,
indentation,
checker.stylist(),
checker.locator(),
));
}
}
}
}
// Check for f-strings.
Expr::FString(_) => {
if checker.is_rule_enabled(Rule::FStringInException) {
else {
return;
};
if checker.semantic().match_typing_expr(func, "cast") {
return;
}
if let Some(first) = args.first() {
match first {
// Check for string literals.
Expr::StringLiteral(ast::ExprStringLiteral { value: string, .. }) => {
if checker.is_rule_enabled(Rule::RawStringInException) {
if string.len() >= checker.settings().flake8_errmsg.max_string_length {
let mut diagnostic =
checker.report_diagnostic(FStringInException, first.range());
checker.report_diagnostic(RawStringInException, first.range());
if let Some(indentation) = whitespace::indentation(checker.source(), stmt) {
diagnostic.set_fix(generate_fix(
stmt,
@@ -247,32 +214,66 @@ pub(crate) fn string_in_exception(checker: &Checker, stmt: &Stmt, exc: &Expr) {
}
}
}
// Check for .format() calls.
Expr::Call(ast::ExprCall { func, .. }) => {
if checker.is_rule_enabled(Rule::DotFormatInException) {
if let Expr::Attribute(ast::ExprAttribute { value, attr, .. }) =
func.as_ref()
{
if attr == "format" && value.is_literal_expr() {
let mut diagnostic =
checker.report_diagnostic(DotFormatInException, first.range());
if let Some(indentation) =
whitespace::indentation(checker.source(), stmt)
{
diagnostic.set_fix(generate_fix(
stmt,
first,
indentation,
checker.stylist(),
checker.locator(),
));
}
}
// Check for byte string literals.
Expr::BytesLiteral(ast::ExprBytesLiteral { value: bytes, .. }) => {
if checker.settings().rules.enabled(Rule::RawStringInException) {
if bytes.len() >= checker.settings().flake8_errmsg.max_string_length
&& is_raise_exception_byte_string_enabled(checker.settings())
{
let mut diagnostic =
checker.report_diagnostic(RawStringInException, first.range());
if let Some(indentation) = whitespace::indentation(checker.source(), stmt) {
diagnostic.set_fix(generate_fix(
stmt,
first,
indentation,
checker.stylist(),
checker.locator(),
));
}
}
}
}
// Check for f-strings.
Expr::FString(_) => {
if checker.is_rule_enabled(Rule::FStringInException) {
let mut diagnostic =
checker.report_diagnostic(FStringInException, first.range());
if let Some(indentation) = whitespace::indentation(checker.source(), stmt) {
diagnostic.set_fix(generate_fix(
stmt,
first,
indentation,
checker.stylist(),
checker.locator(),
));
}
}
}
// Check for .format() calls.
Expr::Call(ast::ExprCall { func, .. }) => {
if checker.is_rule_enabled(Rule::DotFormatInException) {
if let Expr::Attribute(ast::ExprAttribute { value, attr, .. }) = func.as_ref() {
if attr == "format" && value.is_literal_expr() {
let mut diagnostic =
checker.report_diagnostic(DotFormatInException, first.range());
if let Some(indentation) =
whitespace::indentation(checker.source(), stmt)
{
diagnostic.set_fix(generate_fix(
stmt,
first,
indentation,
checker.stylist(),
checker.locator(),
));
}
}
}
}
_ => {}
}
_ => {}
}
}
}

View File

@@ -278,3 +278,6 @@ EM.py:84:9: EM103 [*] Exception must not use a `.format()` string directly, assi
91 |+ raise RuntimeError(
92 |+ msg
93 |+ )
91 94 |
92 95 |
93 96 | def raise_typing_cast_exception():

View File

@@ -343,3 +343,6 @@ EM.py:84:9: EM103 [*] Exception must not use a `.format()` string directly, assi
91 |+ raise RuntimeError(
92 |+ msg
93 |+ )
91 94 |
92 95 |
93 96 | def raise_typing_cast_exception():

View File

@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/flake8_implicit_str_concat/mod.rs
---
ISC_syntax_error.py:2:5: SyntaxError: missing closing quote in string literal
ISC_syntax_error.py:2:5: invalid-syntax: missing closing quote in string literal
|
1 | # The lexer doesn't emit a string token if it's unterminated
2 | "a" "b
@@ -10,7 +10,7 @@ ISC_syntax_error.py:2:5: SyntaxError: missing closing quote in string literal
4 | "a" """b
|
ISC_syntax_error.py:2:7: SyntaxError: Expected a statement
ISC_syntax_error.py:2:7: invalid-syntax: Expected a statement
|
1 | # The lexer doesn't emit a string token if it's unterminated
2 | "a" "b
@@ -31,7 +31,7 @@ ISC_syntax_error.py:3:1: ISC001 Implicitly concatenated string literals on one l
|
= help: Combine string literals
ISC_syntax_error.py:3:9: SyntaxError: missing closing quote in string literal
ISC_syntax_error.py:3:9: invalid-syntax: missing closing quote in string literal
|
1 | # The lexer doesn't emit a string token if it's unterminated
2 | "a" "b
@@ -41,7 +41,7 @@ ISC_syntax_error.py:3:9: SyntaxError: missing closing quote in string literal
5 | c""" "d
|
ISC_syntax_error.py:3:11: SyntaxError: Expected a statement
ISC_syntax_error.py:3:11: invalid-syntax: Expected a statement
|
1 | # The lexer doesn't emit a string token if it's unterminated
2 | "a" "b
@@ -63,7 +63,7 @@ ISC_syntax_error.py:4:1: ISC001 Implicitly concatenated string literals on one l
|
= help: Combine string literals
ISC_syntax_error.py:5:6: SyntaxError: missing closing quote in string literal
ISC_syntax_error.py:5:6: invalid-syntax: missing closing quote in string literal
|
3 | "a" "b" "c
4 | "a" """b
@@ -73,7 +73,7 @@ ISC_syntax_error.py:5:6: SyntaxError: missing closing quote in string literal
7 | # For f-strings, the `FStringRanges` won't contain the range for
|
ISC_syntax_error.py:5:8: SyntaxError: Expected a statement
ISC_syntax_error.py:5:8: invalid-syntax: Expected a statement
|
3 | "a" "b" "c
4 | "a" """b
@@ -84,7 +84,7 @@ ISC_syntax_error.py:5:8: SyntaxError: Expected a statement
8 | # unterminated f-strings.
|
ISC_syntax_error.py:9:8: SyntaxError: f-string: unterminated string
ISC_syntax_error.py:9:8: invalid-syntax: f-string: unterminated string
|
7 | # For f-strings, the `FStringRanges` won't contain the range for
8 | # unterminated f-strings.
@@ -94,7 +94,7 @@ ISC_syntax_error.py:9:8: SyntaxError: f-string: unterminated string
11 | f"a" f"""b
|
ISC_syntax_error.py:9:9: SyntaxError: Expected FStringEnd, found newline
ISC_syntax_error.py:9:9: invalid-syntax: Expected FStringEnd, found newline
|
7 | # For f-strings, the `FStringRanges` won't contain the range for
8 | # unterminated f-strings.
@@ -116,7 +116,7 @@ ISC_syntax_error.py:10:1: ISC001 Implicitly concatenated string literals on one
|
= help: Combine string literals
ISC_syntax_error.py:10:13: SyntaxError: f-string: unterminated string
ISC_syntax_error.py:10:13: invalid-syntax: f-string: unterminated string
|
8 | # unterminated f-strings.
9 | f"a" f"b
@@ -126,7 +126,7 @@ ISC_syntax_error.py:10:13: SyntaxError: f-string: unterminated string
12 | c""" f"d {e
|
ISC_syntax_error.py:10:14: SyntaxError: Expected FStringEnd, found newline
ISC_syntax_error.py:10:14: invalid-syntax: Expected FStringEnd, found newline
|
8 | # unterminated f-strings.
9 | f"a" f"b
@@ -148,7 +148,7 @@ ISC_syntax_error.py:11:1: ISC001 Implicitly concatenated string literals on one
|
= help: Combine string literals
ISC_syntax_error.py:16:5: SyntaxError: missing closing quote in string literal
ISC_syntax_error.py:16:5: invalid-syntax: missing closing quote in string literal
|
14 | (
15 | "a"
@@ -158,7 +158,7 @@ ISC_syntax_error.py:16:5: SyntaxError: missing closing quote in string literal
18 | "d"
|
ISC_syntax_error.py:26:9: SyntaxError: f-string: unterminated triple-quoted string
ISC_syntax_error.py:26:9: invalid-syntax: f-string: unterminated triple-quoted string
|
24 | (
25 | """abc"""
@@ -170,14 +170,14 @@ ISC_syntax_error.py:26:9: SyntaxError: f-string: unterminated triple-quoted stri
| |__^
|
ISC_syntax_error.py:30:1: SyntaxError: unexpected EOF while parsing
ISC_syntax_error.py:30:1: invalid-syntax: unexpected EOF while parsing
|
28 | "i" "j"
29 | )
| ^
|
ISC_syntax_error.py:30:1: SyntaxError: f-string: unterminated string
ISC_syntax_error.py:30:1: invalid-syntax: f-string: unterminated string
|
28 | "i" "j"
29 | )

View File

@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/flake8_implicit_str_concat/mod.rs
---
ISC_syntax_error.py:2:5: SyntaxError: missing closing quote in string literal
ISC_syntax_error.py:2:5: invalid-syntax: missing closing quote in string literal
|
1 | # The lexer doesn't emit a string token if it's unterminated
2 | "a" "b
@@ -10,7 +10,7 @@ ISC_syntax_error.py:2:5: SyntaxError: missing closing quote in string literal
4 | "a" """b
|
ISC_syntax_error.py:2:7: SyntaxError: Expected a statement
ISC_syntax_error.py:2:7: invalid-syntax: Expected a statement
|
1 | # The lexer doesn't emit a string token if it's unterminated
2 | "a" "b
@@ -20,7 +20,7 @@ ISC_syntax_error.py:2:7: SyntaxError: Expected a statement
5 | c""" "d
|
ISC_syntax_error.py:3:9: SyntaxError: missing closing quote in string literal
ISC_syntax_error.py:3:9: invalid-syntax: missing closing quote in string literal
|
1 | # The lexer doesn't emit a string token if it's unterminated
2 | "a" "b
@@ -30,7 +30,7 @@ ISC_syntax_error.py:3:9: SyntaxError: missing closing quote in string literal
5 | c""" "d
|
ISC_syntax_error.py:3:11: SyntaxError: Expected a statement
ISC_syntax_error.py:3:11: invalid-syntax: Expected a statement
|
1 | # The lexer doesn't emit a string token if it's unterminated
2 | "a" "b
@@ -40,7 +40,7 @@ ISC_syntax_error.py:3:11: SyntaxError: Expected a statement
5 | c""" "d
|
ISC_syntax_error.py:5:6: SyntaxError: missing closing quote in string literal
ISC_syntax_error.py:5:6: invalid-syntax: missing closing quote in string literal
|
3 | "a" "b" "c
4 | "a" """b
@@ -50,7 +50,7 @@ ISC_syntax_error.py:5:6: SyntaxError: missing closing quote in string literal
7 | # For f-strings, the `FStringRanges` won't contain the range for
|
ISC_syntax_error.py:5:8: SyntaxError: Expected a statement
ISC_syntax_error.py:5:8: invalid-syntax: Expected a statement
|
3 | "a" "b" "c
4 | "a" """b
@@ -61,7 +61,7 @@ ISC_syntax_error.py:5:8: SyntaxError: Expected a statement
8 | # unterminated f-strings.
|
ISC_syntax_error.py:9:8: SyntaxError: f-string: unterminated string
ISC_syntax_error.py:9:8: invalid-syntax: f-string: unterminated string
|
7 | # For f-strings, the `FStringRanges` won't contain the range for
8 | # unterminated f-strings.
@@ -71,7 +71,7 @@ ISC_syntax_error.py:9:8: SyntaxError: f-string: unterminated string
11 | f"a" f"""b
|
ISC_syntax_error.py:9:9: SyntaxError: Expected FStringEnd, found newline
ISC_syntax_error.py:9:9: invalid-syntax: Expected FStringEnd, found newline
|
7 | # For f-strings, the `FStringRanges` won't contain the range for
8 | # unterminated f-strings.
@@ -82,7 +82,7 @@ ISC_syntax_error.py:9:9: SyntaxError: Expected FStringEnd, found newline
12 | c""" f"d {e
|
ISC_syntax_error.py:10:13: SyntaxError: f-string: unterminated string
ISC_syntax_error.py:10:13: invalid-syntax: f-string: unterminated string
|
8 | # unterminated f-strings.
9 | f"a" f"b
@@ -92,7 +92,7 @@ ISC_syntax_error.py:10:13: SyntaxError: f-string: unterminated string
12 | c""" f"d {e
|
ISC_syntax_error.py:10:14: SyntaxError: Expected FStringEnd, found newline
ISC_syntax_error.py:10:14: invalid-syntax: Expected FStringEnd, found newline
|
8 | # unterminated f-strings.
9 | f"a" f"b
@@ -102,7 +102,7 @@ ISC_syntax_error.py:10:14: SyntaxError: Expected FStringEnd, found newline
12 | c""" f"d {e
|
ISC_syntax_error.py:16:5: SyntaxError: missing closing quote in string literal
ISC_syntax_error.py:16:5: invalid-syntax: missing closing quote in string literal
|
14 | (
15 | "a"
@@ -112,7 +112,7 @@ ISC_syntax_error.py:16:5: SyntaxError: missing closing quote in string literal
18 | "d"
|
ISC_syntax_error.py:26:9: SyntaxError: f-string: unterminated triple-quoted string
ISC_syntax_error.py:26:9: invalid-syntax: f-string: unterminated triple-quoted string
|
24 | (
25 | """abc"""
@@ -124,14 +124,14 @@ ISC_syntax_error.py:26:9: SyntaxError: f-string: unterminated triple-quoted stri
| |__^
|
ISC_syntax_error.py:30:1: SyntaxError: unexpected EOF while parsing
ISC_syntax_error.py:30:1: invalid-syntax: unexpected EOF while parsing
|
28 | "i" "j"
29 | )
| ^
|
ISC_syntax_error.py:30:1: SyntaxError: f-string: unterminated string
ISC_syntax_error.py:30:1: invalid-syntax: f-string: unterminated string
|
28 | "i" "j"
29 | )

View File

@@ -1,10 +1,11 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::whitespace::trailing_comment_start_offset;
use ruff_python_ast::{Stmt, StmtExpr};
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
use crate::fix;
use crate::{Fix, FixAvailability, Violation};
use crate::{Edit, Fix, FixAvailability, Violation};
/// ## What it does
/// Removes ellipses (`...`) in otherwise non-empty class bodies.
@@ -50,15 +51,21 @@ pub(crate) fn ellipsis_in_non_empty_class_body(checker: &Checker, body: &[Stmt])
}
for stmt in body {
let Stmt::Expr(StmtExpr { value, .. }) = &stmt else {
let Stmt::Expr(StmtExpr { value, .. }) = stmt else {
continue;
};
if value.is_ellipsis_literal_expr() {
let mut diagnostic =
checker.report_diagnostic(EllipsisInNonEmptyClassBody, stmt.range());
let edit =
fix::edits::delete_stmt(stmt, Some(stmt), checker.locator(), checker.indexer());
// Try to preserve trailing comment if it exists
let edit = if let Some(index) = trailing_comment_start_offset(stmt, checker.source()) {
Edit::range_deletion(stmt.range().add_end(index))
} else {
fix::edits::delete_stmt(stmt, Some(stmt), checker.locator(), checker.indexer())
};
diagnostic.set_fix(Fix::safe_edit(edit).isolate(Checker::isolation(
checker.semantic().current_statement_id(),
)));

View File
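The PYI013 fix above now checks for a trailing comment before deleting the `...` statement, so `... # preserve me` becomes `# preserve me` instead of disappearing entirely. A standalone sketch of the offset arithmetic; the helper below is illustrative only, while ruff uses `trailing_comment_start_offset` and `Edit::range_deletion` on real source ranges.

```rust
/// Illustrative helper: given a physical line that starts with the `...`
/// statement, compute how many bytes to delete so a trailing comment
/// survives the autofix.
fn deletion_len(line: &str, stmt: &str) -> usize {
    assert!(line.starts_with(stmt));
    let rest = &line[stmt.len()..];
    match rest.find('#') {
        // Delete the statement plus the whitespace before the comment.
        Some(offset) => stmt.len() + offset,
        // No trailing comment: remove the whole line.
        None => line.len(),
    }
}

fn main() {
    let line = "...  # preserve me";
    let kept = &line[deletion_len(line, "...")..];
    assert_eq!(kept, "# preserve me");
    assert_eq!(deletion_len("...", "..."), 3);
}
```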

@@ -145,3 +145,22 @@ PYI013.py:36:5: PYI013 [*] Non-empty class body must not contain `...`
37 36 |
38 37 | def __init__():
39 38 | pass
PYI013.py:44:5: PYI013 [*] Non-empty class body must not contain `...`
|
42 | class NonEmptyChildWithInlineComment:
43 | value: int
44 | ... # preserve me
| ^^^ PYI013
|
= help: Remove unnecessary `...`
Safe fix
41 41 |
42 42 | class NonEmptyChildWithInlineComment:
43 43 | value: int
44 |- ... # preserve me
44 |+ # preserve me
45 45 |
46 46 |
47 47 | class EmptyClass:

Some files were not shown because too many files have changed in this diff.