Compare commits

...

75 Commits

Author SHA1 Message Date
renovate[bot]
de9724565a Update Rust crate unicode_names2 to v2 2026-01-12 07:58:24 +00:00
renovate[bot]
f92a818fdd Update Rust crate insta to v1.46.0 (#22527)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 07:56:12 +00:00
renovate[bot]
39ec97df79 Update taiki-e/install-action action to v2.65.13 (#22522)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 08:54:28 +01:00
renovate[bot]
a21988d820 Update NPM Development dependencies (#22525)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 08:53:56 +01:00
renovate[bot]
eab41d5a4c Update dependency ruff to v0.14.11 (#22517)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 08:52:14 +01:00
renovate[bot]
52f4a529f7 Update CodSpeedHQ/action action to v4.5.2 (#22516)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 08:49:54 +01:00
renovate[bot]
8fd142f4ef Update Rust crate libc to v0.2.179 (#22520)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 08:48:43 +01:00
renovate[bot]
5dca6d22df Update Rust crate clap to v4.5.54 (#22518)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 08:48:30 +01:00
renovate[bot]
4c5846c6fe Update Rust crate imperative to v1.0.7 (#22519)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 08:48:06 +01:00
renovate[bot]
0289d1b163 Update Rust crate syn to v2.0.113 (#22521)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 08:47:25 +01:00
Charlie Marsh
09ff3e7056 Set priority for prek to enable concurrent execution (#22506)
## Summary

prek allows you to set priorities, and can run tasks of the same
priority concurrently (e.g., we can run Ruff's Python formatting and
`cargo fmt` at the same time). On my machine, this takes `uvx prek run
-a` from 19.4s to 5.0s (roughly a 4x speed-up).
2026-01-11 19:21:38 +00:00
Charlie Marsh
7bacca9b62 Use prek in documentation and CI (#22505)
## Summary

AFAIK, many of us on the team are using prek now. It seems appropriate
to modify the docs to formalize it.
2026-01-11 14:17:10 -05:00
Charlie Marsh
2c68057c4b [ty] Preserve argument signature in @total_ordering (#22496)
## Summary

Closes https://github.com/astral-sh/ty/issues/2435.
2026-01-10 14:35:58 -05:00
Charlie Marsh
8e29be9c1c Add let chains preference to development guidelines (#22497) 2026-01-10 12:29:07 -05:00
Dylan
880513a013 Respect fmt: skip for multiple statements on same logical line (#22119)
This PR adjusts the logic for skipping formatting so that a `fmt: skip`
can affect multiple statements if they lie on the same line.

Specifically, a `fmt: skip` comment will now suppress all the statements
in the suite in which it appears whose range intersects the line
containing the skip directive. For example:

```python
x=[
'1'
];x=2 # fmt: skip
```

remains unchanged after formatting.
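
For contrast, without the trailing directive the same input would (presumably, under default formatter settings) be normalized to roughly:

```python
x = ["1"]
x = 2
```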

(Note that compound statements are somewhat special and were handled in
a previous PR - see #20633).


Closes #17331 and #11430.

Simplest to review commit by commit - the key diffs of interest are the
commit introducing the core logic, and the diff between the snapshots
introduced in the last commit (compared to the second commit).

# Implementation

On `main` we format a suite of statements by iterating through them. If
we meet a statement with a leading or trailing (own-line) `fmt: off`
comment, then we suppress formatting until we meet a `fmt: on` comment.
Otherwise we format the statement using its own formatting rule.

How are `fmt: skip` comments handled then? They are handled internally
to the formatting of each statement. Specifically, calling `.fmt` on a
statement node will first check to see if there is a trailing,
end-of-line `fmt: skip` (or `fmt: off`/`yapf: off`), and if so then
write the node with suppressed formatting.

In this PR we move the responsibility for handling `fmt: skip` into the
formatting logic of the suite itself. This is done as follows:

- Before beginning to format the suite, we do a pass through the
statements and collect the data of ranges with skipped formatting. More
specifically, we create a map with key given by the _first_ skipped
statement in a block and value a pair consisting of the _last_ skipped
statement and the _range_ to write verbatim.
- We iterate as before, but if we meet a statement that is a key in the
map constructed above, we pause to write the associated range verbatim.
We then advance the iterator to the last statement in the block and
proceed as before.
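
A runnable toy sketch of this two-pass scheme (all names and data shapes here are invented for illustration; the real implementation lives in the Rust formatter):

```python
def collect_skipped_blocks(statements, skip_lines):
    """Pass 1: map the index of the first statement in each skipped block
    to (index of the last skipped statement, text to emit verbatim)."""
    blocks, i = {}, 0
    while i < len(statements):
        line = statements[i]["line"]
        if line in skip_lines:
            first = i
            while i + 1 < len(statements) and statements[i + 1]["line"] == line:
                i += 1
            blocks[first] = (i, ";".join(s["source"] for s in statements[first : i + 1]))
        i += 1
    return blocks

def format_suite(statements, skip_lines, format_statement):
    skipped = collect_skipped_blocks(statements, skip_lines)
    out, i = [], 0
    while i < len(statements):
        if i in skipped:
            last, verbatim = skipped[i]
            out.append(verbatim)  # write the whole block unformatted
            i = last + 1          # then resume after the skipped block
        else:
            out.append(format_statement(statements[i]))
            i += 1
    return "\n".join(out)

# Toy input modeling `x=1;x=2 # fmt: skip` followed by `y=3`:
stmts = [
    {"line": 1, "source": "x=1"},
    {"line": 1, "source": "x=2 # fmt: skip"},
    {"line": 2, "source": "y=3"},
]
print(format_suite(stmts, {1}, lambda s: s["source"].replace("=", " = ")))
# x=1;x=2 # fmt: skip
# y = 3
```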

## Addendum on range formatting

We also had to make some changes to range formatting in order to support
this new behavior. For example, we want to make sure that

```python
<RANGE_START>x=1<RANGE_END>;x=2 # fmt: skip
```

formats verbatim, rather than becoming 

```python
x = 1;x=2 # fmt: skip
```

Recall that range formatting proceeds in two steps:
1. Find the smallest enclosing node containing the range AND that has
enough info to format the range (so it may be larger than you think,
e.g. a docstring has enclosing node given by the suite, not the string
itself.)
2. Carve out the formatted range from the result of formatting that
enclosing node.

We had to modify (1), since the suite knows how to format skipped nodes,
but nodes may not "know" they are skipped. To do this we altered the
`visit_body` method of the `FindEnclosingNode` visitor: now we iterate
through the statements and check for skipped ranges intersecting the
format range. If we find them, we return without descending. The result
is to consider the statement containing the suite as the enclosing node
in this case.
2026-01-10 14:56:58 +00:00
Charlie Marsh
046c5a46d8 [ty] Support dataclass_transform as a function call (#22378)
## Summary

Instead of just as a decorator.
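
A minimal illustration of the two forms (a sketch assuming the standard `typing.dataclass_transform` semantics from Python 3.11+; function names are invented):

```python
from typing import dataclass_transform

# Decorator form (already supported):
@dataclass_transform()
def model_v1(cls):
    return cls

# Plain function-call form (now also supported):
def model_impl(cls):
    return cls

model_v2 = dataclass_transform()(model_impl)
```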

Closes https://github.com/astral-sh/ty/issues/2319.
2026-01-10 08:45:45 -05:00
Micha Reiser
cfed34334c Update mypy primer pin (#22490) 2026-01-10 14:00:00 +01:00
Charlie Marsh
11cc324449 [ty] Detect invalid @total_ordering applications in non-decorator contexts (#22486)
## Summary

E.g., `ValidOrderedClass = total_ordering(HasOrderingMethod)`.
2026-01-09 19:37:58 -05:00
Charlie Marsh
c88e1a0663 [ty] Avoid emitting Liskov repeated violations from grandparent to child (#22484)
## Summary

If parent violates LSP against grandparent, and child has the same
violation (but matches parent), we no longer flag the LSP violation on
child, since it can't be fixed without violating parent.

If parent violates LSP against grandparent, and child violates LSP
against both parent and grandparent, we emit two diagnostics (one for
each violation).

If parent violates LSP against grandparent, and child violates LSP
against parent (but not grandparent), we flag it.
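
An illustrative sketch of the first case (signatures invented; narrowing a parameter type violates LSP):

```python
class Grandparent:
    def f(self, x: int) -> None: ...

class Parent(Grandparent):
    # Violates LSP against Grandparent (parameter narrowed): flagged.
    def f(self, x: bool) -> None: ...

class Child(Parent):
    # Same violation, but matches Parent exactly: no longer flagged,
    # since it cannot be fixed without violating Parent.
    def f(self, x: bool) -> None: ...
```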

Closes https://github.com/astral-sh/ty/issues/2000.
2026-01-09 19:27:57 -05:00
William Woodruff
2c7ac17b1e Remove two old zizmor ignore comments (#22485) 2026-01-09 18:02:44 -05:00
Matthias Schoettle
a0f2cd0ded Add language: golang to actionlint pre-commit hook (#22483) 2026-01-09 19:30:00 +00:00
Alex Waygood
dc61104726 [ty] Derive Default in a few more places in place.rs (#22481) 2026-01-09 18:37:36 +00:00
Jason K Hall
f40c578ffb [ruff] document RUF100 trailing comment fix behavior (#22479)
## Summary
Ruff's `--fix` for `RUF100` can inadvertently remove trailing comments
(e.g., `pylint` or `mypy` suppressions) by interpreting them as
descriptions. This PR adds a "Conflict with other linters" section to
the rule documentation to clarify this behavior and provide the
double-hash (`# noqa # pylint`) workaround.
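
A sketch of the behavior being documented (rule codes chosen for illustration):

```python
# RUF100 flags the `noqa` below as unused; its fix may also delete the
# trailing pylint pragma, reading it as part of the noqa's description:
x = 1  # noqa: E501 pylint: disable=invalid-name

# Separating the two with an explicit second hash preserves the
# trailing suppression:
x = 1  # noqa: E501  # pylint: disable=invalid-name
```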

## Fixes
Fixes #20762
2026-01-09 09:01:09 -08:00
Leo Hayashi
baaf6966f6 [ruff] Split WASM build and publish into separate workflows (#12387) (#22476)
Co-authored-by: Hayashi Reo <reo@wantedly.com>
2026-01-09 17:41:13 +01:00
Zanie Blue
a6df4a3be7 Add llms.txt support for documentation (#22463)
Matching uv.

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-09 09:56:41 -06:00
Micha Reiser
1094009790 Fix configuration path in --show-settings (#22478) 2026-01-09 16:28:57 +01:00
Andrew Gallant
10eb3d52d5 [ty] Rank top-level module symbols above most other symbols
This makes it so, e.g., `os<CURSOR>` will suggest the top-level stdlib
`os` module even if there is an `os` symbol elsewhere in your project.

The way this is done is somewhat overwrought, but it's done to avoid
suggesting top-level modules over other symbols already in scope.

Fixes astral-sh/issues#1852
2026-01-09 10:11:37 -05:00
Andrew Gallant
91d24ebb92 [ty] Give exact completion matches very high ranking
Aside from ruling out "impossible" completions in the current context,
we give completions with an exact match the highest possible ranking.
2026-01-09 09:55:32 -05:00
Andrew Gallant
e68fba20a9 [ty] Remove type-var-typing-over-ast evaluation task
This is now strictly redundant with the `typing-gets-priority` task.
2026-01-09 09:55:32 -05:00
Andrew Gallant
64117c1146 [ty] Improve completion ranking based on origin of symbols
Part of this was already done, but it was half-assed. We now look at the
search path that a symbol came from and centralize a symbol's origin
classification.

The preference ordering here is maybe not the right one, but we can
iterate as users give us feedback. Note also that the preference
ordering based on the origin is pretty low in the relevance sorting.
This means that other, more specific criteria can and will override it.

This results in some nice improvements to our evaluation tasks.
2026-01-09 09:55:32 -05:00
Andrew Gallant
2196ef3a33 [ty] Add package priority completion evaluation tasks
This commit adds two new tests. One checks that a symbol in the current
project gets priority over a symbol in the standard library. Another
checks that a symbol in a third party dependency gets priority over a
symbol in the standard library. We don't get either of these right
today.

Note that these comparisons are done ceteris paribus. A symbol from the
standard library could still be ranked above a symbol elsewhere.
(Although I believe currently this is somewhat rare.)
2026-01-09 09:55:32 -05:00
Andrew Gallant
c36397031b [ty] Actually ignore hidden directories in completion evaluation
I apparently don't know how to use my own API. Previously,
we would skip, e.g., `.venv`, but still descend into it.

This was annoying in practice because I sometimes have an
environment in one of the truth task directories. The eval
command should ignore that entirely, but it ended up
choking on it without properly ignoring hidden files
and directories.
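
The same pitfall rendered in Python terms (an analogy, not the actual Rust walker API):

```python
import os

for root, dirs, files in os.walk("."):
    # Filtering hidden names when reading the listing merely skips them;
    # os.walk still descends into them unless `dirs` is pruned in place:
    dirs[:] = [d for d in dirs if not d.startswith(".")]
```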
2026-01-09 09:55:32 -05:00
Andrew Gallant
eef34958f9 [ty] More sensible sorting for results returned by all_symbols
Previously, we would only sort by name and file path (unless the symbol
refers to an entire module). This led to somewhat confusing ordering,
since the file path is absolute and can be anything.

Sorting by module name after symbol name gives a more predictable
ordering in common practice.

This is mostly just an improvement for debugging purposes, i.e., when
looking at the output of `all_symbols` directly. This mostly shouldn't
impact completion ordering since completions do their own ranking.
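
In effect, the sort key gains a module-name component (a Python sketch; field names invented):

```python
from collections import namedtuple

Symbol = namedtuple("Symbol", "name module_name file_path")

symbols = [
    Symbol("walk", "zzz_local", "/home/me/project/zzz_local.py"),
    Symbol("walk", "os", "/usr/lib/python3.13/os.py"),
]

# Ties between equally named symbols now resolve by module name rather
# than by the arbitrary absolute file path.
symbols.sort(key=lambda s: (s.name, s.module_name, s.file_path))
```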
2026-01-09 09:55:32 -05:00
Andrew Gallant
f1bd5f1941 [ty] Rank unimported symbols from typing higher
This addresses a number of minor annoyances where users really want
imports from `typing` most of the time.

For example:
https://github.com/astral-sh/ty/issues/1274#issuecomment-3345884227
2026-01-09 09:55:32 -05:00
Andrew Gallant
56862f8241 [ty] Refactor ranking
This moves the information we want to use to rank completions into
`Completion` itself. (This was the primary motivation for creating a
`CompletionBuilder` in the first place.)

The principal advantage here is that we now only need to compute the
relevance information for each completion exactly once. Previously, we
were computing it on every comparison, which might end up doing
redundant work for a not insignificant number of completions.

The relevance information is also specifically constructed from the
builder so that, in the future, we might choose to short-circuit
construction if we know we'll never send it back to the client (e.g.,
its ranking is worse than the lowest ranked completion). But we don't
implement that optimization yet.
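
Schematically, the change moves relevance out of the comparator and onto the completion itself (Python stand-ins for the Rust types):

```python
import functools

class Completion:
    def __init__(self, name):
        self.name = name
        self.relevance = None

def compute_relevance(completion):
    return len(completion.name)  # stand-in for the real computation

completions = [Completion("pathlib"), Completion("os"), Completion("re")]

# Before (schematic): relevance recomputed on every pairwise comparison.
completions.sort(
    key=functools.cmp_to_key(
        lambda a, b: compute_relevance(b) - compute_relevance(a)
    )
)

# After (schematic): relevance computed exactly once per completion and
# stored on it, then used as a plain sort key.
for c in completions:
    c.relevance = compute_relevance(c)
completions.sort(key=lambda c: c.relevance, reverse=True)
```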
2026-01-09 09:55:32 -05:00
Andrew Gallant
e9cc2f6f42 [ty] Refactor completion construction to use a builder
I want to be able to attach extra data to each `Completion`, but not
burden callers with the need to construct it. This commit helps get us
to that point by requiring callers to use a `CompletionBuilder` for
construction instead of a `Completion` itself.

I think this will also help in the future if it proves to be the case
that we can improve performance by delaying work until we actually build
a `Completion`, which might only happen if we know we won't throw it
out. But we aren't quite there yet.

This also lets us tighten things up a little bit and makes completion
construction less noisy. The downside is that callers no longer need to
consider "every" completion field.

There should not be any behavior changes here.
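
A schematic Python rendition of the shape of this API (the real types are Rust; fields invented):

```python
class Completion:
    def __init__(self, name, detail, relevance):
        self.name, self.detail, self.relevance = name, detail, relevance

class CompletionBuilder:
    def __init__(self, name):
        self._name = name
        self._detail = None

    def detail(self, text):
        self._detail = text
        return self

    def build(self):
        # Extra data (e.g., relevance) is attached here, so callers never
        # construct it themselves, and work can be deferred until build().
        return Completion(self._name, self._detail, relevance=len(self._name))

completion = CompletionBuilder("os").detail("module").build()
```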
2026-01-09 09:55:32 -05:00
Andrew Gallant
8dd56d4264 [ty] Rename internal completion Rank type to Relevance
I think "relevance" captures the meaning of this type more precisely.
2026-01-09 09:55:32 -05:00
Andrew Gallant
d4c1b0ccc7 [ty] Add evaluation tasks for a few cases
I went through https://github.com/astral-sh/ty/issues/1274 and tried to
extract what I could into eval tasks. Some of the suggestions from that
issue have already been done, but most haven't.

This captures the status quo.
2026-01-09 09:55:32 -05:00
Micha Reiser
e61657ff3c [ty] Enable unused-type-ignore-comment by default (#22474) 2026-01-09 10:58:43 +00:00
Micha Reiser
ba5dd5837c [ty] Pass slice to specialize (#22421) 2026-01-09 09:45:39 +01:00
Micha Reiser
c5f6a74da5 [ty] Fix goto definition for relative imports in third-party files (#22457) 2026-01-09 09:30:31 +01:00
Micha Reiser
b3cde98cd1 [ty] Update salsa (#22473) 2026-01-09 09:21:45 +01:00
Carl Meyer
f9f7a6901b Complete minor TODO in ty_python_semantic crate (#22468)
This TODO is very old -- we have long since recorded this definition.
Updating the test to actually assert the declaration requires a new
helper method for declarations, to complement the existing
`first_public_binding` helper.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-08 16:04:34 -08:00
Dylan
c920cf8cdb Bump 0.14.11 (#22462) 2026-01-08 12:51:47 -06:00
Micha Reiser
bb757b5a79 [ty] Don't show diagnostics for excluded files (#22455) 2026-01-08 18:27:28 +01:00
Charlie Marsh
1f49e8ef51 Include configured src directories when resolving graphs (#22451)
## Summary

This PR augments the detected source paths with the user-configured
`src` when computing roots for `ruff analyze graph`.
2026-01-08 15:19:15 +00:00
Douglas Creager
701f5134ab [ty] Only consider fully static pivots when deriving transitive constraints (#22444)
When working with constraint sets, we track transitive relationships
between the constraints in the set. For instance, in `S ≤ int ∧ int ≤
T`, we can infer that `S ≤ T`. However, we should only consider fully
static types when looking for a "pivot" for this kind of transitive
relationship. The same pattern does not hold for `S ≤ Any ∧ Any ≤ T`;
because the two `Any`s can materialize to different types, we cannot
infer that `S ≤ T`. (For instance, the first `Any` might materialize to
`int` and the second to `str`, giving `S ≤ int` and `str ≤ T`, which
imply nothing about `S` versus `T`.)

Fixes https://github.com/astral-sh/ty/issues/2371
2026-01-08 09:31:55 -05:00
Micha Reiser
eea9ad8352 Pin maturin version (#22454) 2026-01-08 12:39:51 +01:00
Alex Waygood
eeac2bd3ee [ty] Optimize union building for unions with many enum-literal members (#22363) 2026-01-08 10:50:04 +00:00
Jason K Hall
7319c37f4e docs: fix jupyter notebook discovery info for editors (#22447)
Resolves #21892

## Summary

This PR updates `docs/editors/features.md` to clarify that Jupyter
Notebooks are now included by default as of version 0.6.0.
2026-01-08 11:52:01 +05:30
Amethyst Reese
805503c19a [ruff] Improve fix title for RUF102 invalid rule code (#22100)
## Summary

Updates the fix title for RUF102 to either specify which rule code to
remove, or clarify
that the entire suppression comment should be removed.

## Test Plan

Updated test snapshots.
2026-01-07 17:23:18 -08:00
Charlie Marsh
68a2f6c57d [ty] Fix super() with TypeVar-annotated self and cls parameter (#22208)
## Summary

This PR fixes `super()` handling when the first parameter (`self` or
`cls`) is annotated with a TypeVar, like `Self`.

Previously, `super()` would incorrectly resolve TypeVars to their bounds
before creating the `BoundSuperType`. So if you had `self: Self` where
`Self` is bounded by `Parent`, we'd process `Parent` as a
`NominalInstance` and end up with `SuperOwnerKind::Instance(Parent)`.

As a result:

```python
class Parent:
    @classmethod
    def create(cls) -> Self:
        return cls()

class Child(Parent):
    @classmethod
    def create(cls) -> Self:
        return super().create()  # Error: Argument type `Self@create` does not satisfy upper bound `Parent`
```

We now track two additional variants on `SuperOwnerKind` for TypeVar
owners:

- `InstanceTypeVar`: for instance methods where self is a TypeVar (e.g.,
`self: Self`).
- `ClassTypeVar`: for classmethods where `cls` is a `TypeVar` wrapped in
`type[...]` (e.g., `cls: type[Self]`).

Closes https://github.com/astral-sh/ty/issues/2122.

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2026-01-07 19:56:09 -05:00
Alex Waygood
abaa735e1d [ty] Improve UnionBuilder performance by changing Type::is_subtype_of calls to Type::is_redundant_with (#22337) 2026-01-07 22:17:44 +00:00
Jelle Zijlstra
c02d164357 Check required-version before parsing rules (#22410)
Co-authored-by: Micha Reiser <micha@reiser.io>
2026-01-07 17:29:27 +00:00
Andrew Gallant
88aa3f82f0 [ty] Fix generally poor ranking in playground completions
We enabled [`CompletionList::isIncomplete`] in our LSP server a while back
in order to have more of a say in how we rank and filter completions.
When it isn't set, the client tends to ask for completions less
frequently and will instead do its own filtering.

But... we did not enable it for the playground. Which I guess didn't
result in anything noticeably bad until we started limiting completions
to 1,000 suggestions. This meant that if the _initial_ completion
response didn't include the ultimate desired answer, then it would never
show up in the results until the client requested completions again.
This in turn led to some very poor completions in some cases.

This all gets fixed by simply enabling `isIncomplete` for Monaco.
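
For reference, the flag lives on the completion response itself; sketched here as the JSON shape from the LSP spec linked below:

```python
# When isIncomplete is true, the client re-requests completions as the
# user types instead of filtering a stale, truncated list on its own.
response = {
    "isIncomplete": True,
    "items": [{"label": "os"}, {"label": "os.path"}],
}
```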

Fixes astral-sh/ty#2340

[`CompletionList::isIncomplete`](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#completionList)
2026-01-07 12:25:26 -05:00
Carl Meyer
30902497db [ty] Make signature return and parameter types non-optional (#22425)
## Summary

Fixes https://github.com/astral-sh/ty/issues/2363
Fixes https://github.com/astral-sh/ty/issues/2013

And several other bugs with the same root cause. And makes any similar
bugs impossible by construction.

Previously we distinguished "no annotation" (Rust `None`) from
"explicitly annotated with something of type `Unknown`" (which is not an
error, and results in the annotation being of Rust type
`Some(Type::DynamicType(Unknown))`), even though semantically these
should be treated the same.

This was a bit of a bug magnet, because it was easy to forget to make
this `None` -> `Unknown` translation everywhere we needed to. And in
fact we did fail to do it in the case of materializing a callable,
leading to a top-materialized callable still having (rust) `None` return
type, which should have instead materialized to `object`.

This also fixes several other bugs related to not handling un-annotated
return types correctly:
1. We previously considered the return type of an unannotated `async
def` to be `Unknown`, where it should be `CoroutineType[Any, Any,
Unknown]`.
2. We previously failed to infer a ParamSpec if the return type of the
callable we are inferring against was not annotated.
3. We previously wrongly returned `Unknown` from `some_dict.get("key",
None)` if the value type of `some_dict` included a callable type with
un-annotated return type.

We now make signature return types and annotated parameter types
required, and we eagerly insert `Unknown` if there's no annotation. Most
of the diff is just a bunch of mechanical code changes where we
construct these types, and simplifications where we use them.

One exception is type display: when a callable type has un-annotated
parameters, we want to display them as un-annotated, but if it has a
parameter explicitly annotated with something of `Unknown` type, we want
to display that parameter as `x: Unknown` (it would be confusing if it
looked like your annotation just disappeared entirely).

Fortunately, we already have a mechanism in place for handling this: the
`inferred_annotation` flag, which suppresses display of an annotation.
Previously we used it only for `self` and `cls` parameters with an
inferred annotated type -- but we now also set it for any un-annotated
parameter, for which we infer `Unknown` type.

We also need to normalize `inferred_annotation`, since it's display-only
and shouldn't impact type equivalence. (This is technically a
previously-existing bug, it just never came up when it only affected
self types -- now it comes up because we have tests asserting that `def
f(x)` and `def g(x: Unknown)` are equivalent.)

## Test Plan

Added mdtests.
2026-01-07 09:18:39 -08:00
Alex Waygood
3ad99fb1f4 [ty] Fix an mdtest title (#22439) 2026-01-07 16:34:56 +00:00
Micha Reiser
d0ff59cfe5 [ty] Use Pool from regex_automata to reuse the matches allocations (#22438) 2026-01-07 17:22:35 +01:00
Andrew Gallant
952193e0c6 [ty] Offer completions for T when a value has type Unknown | T
Fixes astral-sh/ty#2197
2026-01-07 10:15:36 -05:00
Alex Waygood
4cba2e8f91 [ty] Generalize len() narrowing somewhat (#22330) 2026-01-07 13:57:50 +00:00
Alex Waygood
1a7f53022a [ty] Link to Callable __name__ FAQ directly from unresolved-attribute diagnostic (#22437) 2026-01-07 13:22:53 +00:00
Micha Reiser
266a7bc4c5 [ty] Fix stack overflow due to too small stack size (#22433) 2026-01-07 13:55:23 +01:00
Micha Reiser
3b7a5e4de8 [ty] Allow including files with no extension (#22243) 2026-01-07 11:38:02 +01:00
Micha Reiser
93039d055d [ty] Add --add-ignore CLI option (#21696) 2026-01-07 11:17:05 +01:00
Jason K Hall
3b61da0da3 Allow Python 3.15 as valid target-version value in preview (#22419) 2026-01-07 09:38:36 +01:00
Alex Waygood
5933cc0101 [ty] Optimize and simplify some object-related code (#22366)
## Summary

I wondered if this might improve performance a little. It doesn't seem
to, but it's a net reduction in LOC and I think the changes make sense.
I think it's worth it anyway just in terms of simplifying the code.

## Test Plan

Our existing tests all pass and the primer report is clean (aside from
our usual flakes).
2026-01-07 08:35:26 +00:00
Dhruv Manilawala
2190fcebe0 [ty] Substitute ParamSpec in overloaded functions (#22416)
## Summary

fixes: https://github.com/astral-sh/ty/issues/2027

This PR fixes a bug where the type mapping for a `ParamSpec` was not
being applied in an overloaded function.

This PR also fixes https://github.com/astral-sh/ty/issues/2081 and
reveals new diagnostics that don't look related to the bug:

```py
from prefect import flow, task

@task
def task_get() -> int:
    """Task get integer."""
    return 42

@task
def task_add(x: int, y: int) -> int:
    """Task add two integers."""
    print(f"Adding {x} and {y}")
    return x + y

@flow
def my_flow():
    """My flow."""
    x = 23
    future_y = task_get.submit()

    # error: [no-matching-overload]
    task_add(future_y, future_y)
    # error: [no-matching-overload]
    task_add(x, future_y)
```

The reason is that the type of `future_y` is `PrefectFuture[int]` while
the type of `task_add` is `Task[(x: int, y: int), int]` which means that
the assignment between `int` and `PrefectFuture[int]` fails which
results in no overload matching. Pyright also raises the invalid
argument type error on all three usages of `future_y` in those two
calls.

## Test Plan

Add regression mdtest from the linked issue.
2026-01-07 13:30:34 +05:30
Douglas Creager
df9d6886d4 [ty] Remove redundant apply_specialization type mappings (#22422)
@dhruvmanila encountered this in #22416 — there are two different
`TypeMapping` variants for apply a specialization to a type. One
operates on a full `Specialization` instance, the other on a partially
constructed one. If we move this enum-ness "down a level" it reduces
some copy/paste in places where we are operating on a `TypeMapping`.
2026-01-07 13:10:26 +05:30
Aria Desires
5133fa4516 [ty] fix typo in CODEOWNERS (#22430) 2026-01-07 07:44:46 +01:00
Amethyst Reese
21c5cfe236 Consolidate diagnostics for matched disable/enable suppression comments (#22099)
## Summary

Combines diagnostics for matched suppression comments, so that ranges
and autofixes for both
the `#ruff:disable` and `#ruff:enable` comments will be reported as a
single diagnostic.

## Test Plan

Snapshot changes, added new snapshot for full output from preview mode
rather than just a diff.

Issue #3711
2026-01-06 18:42:51 -08:00
Carl Meyer
f97da18267 [ty] improve typevar solving from constraint sets (#22411)
## Summary

Fixes https://github.com/astral-sh/ty/issues/2292

When solving a bounded typevar, we preferred the upper bound over the
actual type seen in the call. This change fixes that.

## Test Plan

Added mdtest, existing tests pass.
2026-01-06 13:10:51 -08:00
Alex Waygood
bc191f59b9 Convert more ty snapshots to the new format (#22424) 2026-01-06 20:01:41 +00:00
Alex Waygood
00f86c39e0 Add Alex Waygood back as a ty_ide codeowner (#22423) 2026-01-06 19:24:13 +00:00
Alex Waygood
2ec29b7418 [ty] Optimize Type::negate() (#22402) 2026-01-06 19:17:59 +00:00
Jack O'Connor
ab1ac254d9 [ty] fix comparisons and arithmetic with NewTypes of float (#22105)
Fixes https://github.com/astral-sh/ty/issues/2077.
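
A minimal illustration of the constructs this covers (names invented):

```python
from typing import NewType

Meters = NewType("Meters", float)

m = Meters(1.5)
total = m + 2.0   # arithmetic with a NewType of float
closer = m < 3.0  # comparison with a NewType of float
```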
2026-01-06 09:32:22 -08:00
220 changed files with 8866 additions and 2939 deletions

@@ -5,4 +5,4 @@ rustup component add clippy rustfmt
cargo install cargo-insta
cargo fetch
pip install maturin pre-commit
pip install maturin prek

.github/CODEOWNERS

@@ -21,10 +21,10 @@
/crates/ty* @carljm @MichaReiser @AlexWaygood @sharkdp @dcreager
/crates/ruff_db/ @carljm @MichaReiser @sharkdp @dcreager
/crates/ty_project/ @carljm @MichaReiser @sharkdp @dcreager @Gankra
/crates/ty_ide/ @carljm @MichaReiser @sharkdp @dcreager @Gankra
/crates/ty_ide/ @carljm @MichaReiser @AlexWaygood @sharkdp @dcreager @Gankra
/crates/ty_server/ @carljm @MichaReiser @sharkdp @dcreager @Gankra
/crates/ty/ @carljm @MichaReiser @sharkdp @dcreager
/crates/ty_wasm/ @carljm @MichaReiser @sharkdp @dcreager @Gankra
/scripts/ty_benchmark/ @carljm @MichaReiser @AlexWaygood @sharkdp @dcreager
/crates/ty_python_semantic/ @carljm @AlexWaygood @sharkdp @dcreager
/crates/ty_module_resolver/ @carljm @MichaReiser @AlexWaygood Gankra
/crates/ty_module_resolver/ @carljm @MichaReiser @AlexWaygood @Gankra

@@ -1,4 +1,4 @@
# Configuration for the actionlint tool, which we run via pre-commit
# Configuration for the actionlint tool, which we run via prek
# to verify the correctness of the syntax in our GitHub Actions workflows.
self-hosted-runner:

@@ -5,5 +5,4 @@
[rules]
possibly-unresolved-reference = "warn"
possibly-missing-import = "warn"
unused-ignore-comment = "warn"
division-by-zero = "warn"

@@ -76,9 +76,9 @@
enabled: false,
},
{
groupName: "pre-commit dependencies",
groupName: "prek dependencies",
matchManagers: ["pre-commit"],
description: "Weekly update of pre-commit dependencies",
description: "Weekly update of prek dependencies",
},
{
groupName: "NPM Development dependencies",

@@ -51,6 +51,7 @@ jobs:
- name: "Build sdist"
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
maturin-version: v1.9.6
command: sdist
args: --out dist
- name: "Test sdist"
@@ -81,6 +82,7 @@ jobs:
- name: "Build wheels - x86_64"
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
maturin-version: v1.9.6
target: x86_64
args: --release --locked --out dist
- name: "Upload wheels"
@@ -123,6 +125,7 @@ jobs:
- name: "Build wheels - aarch64"
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
maturin-version: v1.9.6
target: aarch64
args: --release --locked --out dist
- name: "Test wheel - aarch64"
@@ -179,6 +182,7 @@ jobs:
- name: "Build wheels"
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
maturin-version: v1.9.6
target: ${{ matrix.platform.target }}
args: --release --locked --out dist
env:
@@ -232,6 +236,7 @@ jobs:
- name: "Build wheels"
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
maturin-version: v1.9.6
target: ${{ matrix.target }}
manylinux: auto
args: --release --locked --out dist
@@ -308,6 +313,7 @@ jobs:
- name: "Build wheels"
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
maturin-version: v1.9.6
target: ${{ matrix.platform.target }}
manylinux: auto
docker-options: ${{ matrix.platform.maturin_docker_options }}
@@ -374,6 +380,7 @@ jobs:
- name: "Build wheels"
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
maturin-version: v1.9.6
target: ${{ matrix.target }}
manylinux: musllinux_1_2
args: --release --locked --out dist
@@ -439,6 +446,7 @@ jobs:
- name: "Build wheels"
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
maturin-version: v1.9.6
target: ${{ matrix.platform.target }}
manylinux: musllinux_1_2
args: --release --locked --out dist

.github/workflows/build-wasm.yml

@@ -0,0 +1,58 @@
# Build ruff_wasm for npm.
#
# Assumed to run as a subworkflow of .github/workflows/release.yml; specifically, as a local
# artifacts job within `cargo-dist`.
name: "Build wasm"
on:
workflow_call:
inputs:
plan:
required: true
type: string
pull_request:
paths:
- .github/workflows/build-wasm.yml
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions: {}
env:
CARGO_INCREMENTAL: 0
CARGO_NET_RETRY: 10
CARGO_TERM_COLOR: always
RUSTUP_MAX_RETRIES: 10
jobs:
build:
if: ${{ !contains(github.event.pull_request.labels.*.name, 'no-build') }}
runs-on: ubuntu-latest
strategy:
matrix:
target: [web, bundler, nodejs]
fail-fast: false
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: "Install Rust toolchain"
run: rustup target add wasm32-unknown-unknown
- uses: jetli/wasm-pack-action@0d096b08b4e5a7de8c28de67e11e945404e9eefa # v0.4.0
with:
version: v0.13.1
- uses: jetli/wasm-bindgen-action@20b33e20595891ab1a0ed73145d8a21fc96e7c29 # v0.2.0
- name: "Run wasm-pack build"
run: wasm-pack build --target ${{ matrix.target }} crates/ruff_wasm
- name: "Rename generated package"
run: | # Replace the package name w/ jq
jq '.name="@astral-sh/ruff-wasm-${{ matrix.target }}"' crates/ruff_wasm/pkg/package.json > /tmp/package.json
mv /tmp/package.json crates/ruff_wasm/pkg
- run: cp LICENSE crates/ruff_wasm/pkg # wasm-pack does not put the LICENSE file in the pkg
- name: "Upload wasm artifact"
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: artifacts-wasm-${{ matrix.target }}
path: crates/ruff_wasm/pkg

@@ -281,11 +281,11 @@ jobs:
- name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@28a9d316db64b78a951f3f8587a5d08cc97ad8eb # v2.65.6
uses: taiki-e/install-action@0e76c5c569f13f7eb21e8e5b26fe710062b57b62 # v2.65.13
with:
tool: cargo-nextest
- name: "Install cargo insta"
uses: taiki-e/install-action@28a9d316db64b78a951f3f8587a5d08cc97ad8eb # v2.65.6
uses: taiki-e/install-action@0e76c5c569f13f7eb21e8e5b26fe710062b57b62 # v2.65.13
with:
tool: cargo-insta
- name: "Install uv"
@@ -343,7 +343,7 @@ jobs:
- name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@28a9d316db64b78a951f3f8587a5d08cc97ad8eb # v2.65.6
uses: taiki-e/install-action@0e76c5c569f13f7eb21e8e5b26fe710062b57b62 # v2.65.13
with:
tool: cargo-nextest
- name: "Install uv"
@@ -376,7 +376,7 @@ jobs:
- name: "Install Rust toolchain"
run: rustup show
- name: "Install cargo nextest"
uses: taiki-e/install-action@28a9d316db64b78a951f3f8587a5d08cc97ad8eb # v2.65.6
uses: taiki-e/install-action@0e76c5c569f13f7eb21e8e5b26fe710062b57b62 # v2.65.13
with:
tool: cargo-nextest
- name: "Install uv"
@@ -769,8 +769,8 @@ jobs:
- name: "Remove wheels from cache"
run: rm -rf target/wheels
pre-commit:
name: "pre-commit"
prek:
name: "prek"
runs-on: ${{ github.repository == 'astral-sh/ruff' && 'depot-ubuntu-22.04-16' || 'ubuntu-latest' }}
timeout-minutes: 10
steps:
@@ -784,17 +784,17 @@ jobs:
- uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
with:
node-version: 24
- name: "Cache pre-commit"
- name: "Cache prek"
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
with:
path: ~/.cache/pre-commit
key: pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
- name: "Run pre-commit"
path: ~/.cache/prek
key: prek-${{ hashFiles('.pre-commit-config.yaml') }}
- name: "Run prek"
run: |
echo '```console' > "$GITHUB_STEP_SUMMARY"
# Enable color output for pre-commit and remove it for the summary
# Use --hook-stage=manual to enable slower pre-commit hooks that are skipped by default
SKIP=cargo-fmt uvx --python="${PYTHON_VERSION}" pre-commit run --all-files --show-diff-on-failure --color=always --hook-stage=manual | \
# Enable color output for prek and remove it for the summary
# Use --hook-stage=manual to enable slower hooks that are skipped by default
SKIP=cargo-fmt uvx prek run --all-files --show-diff-on-failure --color always --hook-stage manual | \
tee >(sed -E 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})*)?[mGK]//g' >> "$GITHUB_STEP_SUMMARY") >&1
exit_code="${PIPESTATUS[0]}"
echo '```' >> "$GITHUB_STEP_SUMMARY"
@@ -972,7 +972,7 @@ jobs:
run: rustup show
- name: "Install codspeed"
uses: taiki-e/install-action@28a9d316db64b78a951f3f8587a5d08cc97ad8eb # v2.65.6
uses: taiki-e/install-action@0e76c5c569f13f7eb21e8e5b26fe710062b57b62 # v2.65.13
with:
tool: cargo-codspeed
@@ -980,7 +980,7 @@ jobs:
run: cargo codspeed build --features "codspeed,ruff_instrumented" --profile profiling --no-default-features -p ruff_benchmark --bench formatter --bench lexer --bench linter --bench parser
- name: "Run benchmarks"
uses: CodSpeedHQ/action@972e3437949c89e1357ebd1a2dbc852fcbc57245 # v4.5.1
uses: CodSpeedHQ/action@dbda7111f8ac363564b0c51b992d4ce76bb89f2f # v4.5.2
with:
mode: simulation
run: cargo codspeed run
@@ -1011,7 +1011,7 @@ jobs:
run: rustup show
- name: "Install codspeed"
uses: taiki-e/install-action@28a9d316db64b78a951f3f8587a5d08cc97ad8eb # v2.65.6
uses: taiki-e/install-action@0e76c5c569f13f7eb21e8e5b26fe710062b57b62 # v2.65.13
with:
tool: cargo-codspeed
@@ -1047,7 +1047,7 @@ jobs:
- uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
- name: "Install codspeed"
uses: taiki-e/install-action@28a9d316db64b78a951f3f8587a5d08cc97ad8eb # v2.65.6
uses: taiki-e/install-action@0e76c5c569f13f7eb21e8e5b26fe710062b57b62 # v2.65.13
with:
tool: cargo-codspeed
@@ -1061,7 +1061,7 @@ jobs:
run: chmod +x target/codspeed/simulation/ruff_benchmark/ty
- name: "Run benchmarks"
uses: CodSpeedHQ/action@972e3437949c89e1357ebd1a2dbc852fcbc57245 # v4.5.1
uses: CodSpeedHQ/action@dbda7111f8ac363564b0c51b992d4ce76bb89f2f # v4.5.2
with:
mode: simulation
run: cargo codspeed run --bench ty "${{ matrix.benchmark }}"
@@ -1098,7 +1098,7 @@ jobs:
run: rustup show
- name: "Install codspeed"
uses: taiki-e/install-action@28a9d316db64b78a951f3f8587a5d08cc97ad8eb # v2.65.6
uses: taiki-e/install-action@0e76c5c569f13f7eb21e8e5b26fe710062b57b62 # v2.65.13
with:
tool: cargo-codspeed
@@ -1136,7 +1136,7 @@ jobs:
- uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
- name: "Install codspeed"
uses: taiki-e/install-action@28a9d316db64b78a951f3f8587a5d08cc97ad8eb # v2.65.6
uses: taiki-e/install-action@0e76c5c569f13f7eb21e8e5b26fe710062b57b62 # v2.65.13
with:
tool: cargo-codspeed
@@ -1150,7 +1150,7 @@ jobs:
run: chmod +x target/codspeed/walltime/ruff_benchmark/ty_walltime
- name: "Run benchmarks"
uses: CodSpeedHQ/action@972e3437949c89e1357ebd1a2dbc852fcbc57245 # v4.5.1
uses: CodSpeedHQ/action@dbda7111f8ac363564b0c51b992d4ce76bb89f2f # v4.5.2
env:
# enabling walltime flamegraphs adds ~6 minutes to the CI time, and they don't
# appear to provide much useful insight for our walltime benchmarks right now

@@ -1,25 +1,18 @@
# Build and publish ruff-api for wasm.
# Publish ruff_wasm to npm.
#
# Assumed to run as a subworkflow of .github/workflows/release.yml; specifically, as a publish
# job within `cargo-dist`.
name: "Build and publish wasm"
name: "Publish wasm"
on:
workflow_dispatch:
workflow_call:
inputs:
plan:
required: true
type: string
env:
CARGO_INCREMENTAL: 0
CARGO_NET_RETRY: 10
CARGO_TERM_COLOR: always
RUSTUP_MAX_RETRIES: 10
jobs:
ruff_wasm:
publish:
runs-on: ubuntu-latest
permissions:
contents: read
@@ -29,31 +22,19 @@ jobs:
target: [web, bundler, nodejs]
fail-fast: false
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
with:
persist-credentials: false
- name: "Install Rust toolchain"
run: rustup target add wasm32-unknown-unknown
- uses: jetli/wasm-pack-action@0d096b08b4e5a7de8c28de67e11e945404e9eefa # v0.4.0
with:
version: v0.13.1
- uses: jetli/wasm-bindgen-action@20b33e20595891ab1a0ed73145d8a21fc96e7c29 # v0.2.0
- name: "Run wasm-pack build"
run: wasm-pack build --target ${{ matrix.target }} crates/ruff_wasm
- name: "Rename generated package"
run: | # Replace the package name w/ jq
jq '.name="@astral-sh/ruff-wasm-${{ matrix.target }}"' crates/ruff_wasm/pkg/package.json > /tmp/package.json
mv /tmp/package.json crates/ruff_wasm/pkg
- run: cp LICENSE crates/ruff_wasm/pkg # wasm-pack does not put the LICENSE file in the pkg
name: artifacts-wasm-${{ matrix.target }}
path: pkg
- uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
with:
node-version: 24
registry-url: "https://registry.npmjs.org"
- name: "Publish (dry-run)"
if: ${{ inputs.plan == '' || fromJson(inputs.plan).announcement_tag_is_implicit }}
run: npm publish --dry-run crates/ruff_wasm/pkg
run: npm publish --dry-run pkg
- name: "Publish"
if: ${{ inputs.plan != '' && !fromJson(inputs.plan).announcement_tag_is_implicit }}
run: npm publish --provenance --access public crates/ruff_wasm/pkg
run: npm publish --provenance --access public pkg
env:
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}

@@ -112,12 +112,22 @@ jobs:
"contents": "read"
"packages": "write"
custom-build-wasm:
needs:
- plan
if: ${{ needs.plan.outputs.publishing == 'true' || fromJson(needs.plan.outputs.val).ci.github.pr_run_mode == 'upload' || inputs.tag == 'dry-run' }}
uses: ./.github/workflows/build-wasm.yml
with:
plan: ${{ needs.plan.outputs.val }}
secrets: inherit
# Build and package all the platform-agnostic(ish) things
build-global-artifacts:
needs:
- plan
- custom-build-binaries
- custom-build-docker
- custom-build-wasm
runs-on: "depot-ubuntu-latest-4"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -165,9 +175,10 @@ jobs:
- plan
- custom-build-binaries
- custom-build-docker
- custom-build-wasm
- build-global-artifacts
# Only run if we're "publishing", and only if plan, local and global didn't fail (skipped is fine)
if: ${{ always() && needs.plan.result == 'success' && needs.plan.outputs.publishing == 'true' && (needs.build-global-artifacts.result == 'skipped' || needs.build-global-artifacts.result == 'success') && (needs.custom-build-binaries.result == 'skipped' || needs.custom-build-binaries.result == 'success') && (needs.custom-build-docker.result == 'skipped' || needs.custom-build-docker.result == 'success') }}
if: ${{ always() && needs.plan.result == 'success' && needs.plan.outputs.publishing == 'true' && (needs.build-global-artifacts.result == 'skipped' || needs.build-global-artifacts.result == 'success') && (needs.custom-build-binaries.result == 'skipped' || needs.custom-build-binaries.result == 'success') && (needs.custom-build-docker.result == 'skipped' || needs.custom-build-docker.result == 'success') && (needs.custom-build-wasm.result == 'skipped' || needs.custom-build-wasm.result == 'success') }}
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
runs-on: "depot-ubuntu-latest-4"

@@ -40,12 +40,12 @@ jobs:
- name: Install the latest version of uv
uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
with:
enable-cache: true # zizmor: ignore[cache-poisoning] acceptable risk for CloudFlare pages artifact
enable-cache: true
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with:
workspaces: "ruff"
lookup-only: false # zizmor: ignore[cache-poisoning] acceptable risk for CloudFlare pages artifact
lookup-only: false
- name: Install Rust toolchain
run: rustup show

@@ -21,15 +21,62 @@ exclude: |
)$
repos:
# Priority 0: Read-only hooks; hooks that modify disjoint file types.
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v6.0.0
hooks:
- id: check-merge-conflict
priority: 0
- repo: https://github.com/abravalheri/validate-pyproject
rev: v0.24.1
hooks:
- id: validate-pyproject
priority: 0
- repo: https://github.com/crate-ci/typos
rev: v1.40.0
hooks:
- id: typos
priority: 0
- repo: local
hooks:
- id: cargo-fmt
name: cargo fmt
entry: cargo fmt --
language: system
types: [rust]
pass_filenames: false # This makes it a lot faster
priority: 0
# Prettier
- repo: https://github.com/rbubley/mirrors-prettier
rev: v3.7.4
hooks:
- id: prettier
types: [yaml]
priority: 0
# zizmor detects security vulnerabilities in GitHub Actions workflows.
# Additional configuration for the tool is found in `.github/zizmor.yml`
- repo: https://github.com/zizmorcore/zizmor-pre-commit
rev: v1.19.0
hooks:
- id: zizmor
priority: 0
- repo: https://github.com/python-jsonschema/check-jsonschema
rev: 0.36.0
hooks:
- id: check-github-workflows
priority: 0
- repo: https://github.com/shellcheck-py/shellcheck-py
rev: v0.11.0.1
hooks:
- id: shellcheck
priority: 0
- repo: https://github.com/executablebooks/mdformat
rev: 1.0.0
@@ -44,7 +91,20 @@ repos:
docs/formatter/black\.md
| docs/\w+\.md
)$
priority: 0
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.14.10
hooks:
- id: ruff-format
priority: 0
- id: ruff-check
args: [--fix, --exit-non-zero-on-fix]
types_or: [python, pyi]
require_serial: true
priority: 1
# Priority 1: Second-pass fixers (e.g., markdownlint-fix runs after mdformat).
- repo: https://github.com/igorshubovych/markdownlint-cli
rev: v0.47.0
hooks:
@@ -54,7 +114,9 @@ repos:
docs/formatter/black\.md
| docs/\w+\.md
)$
priority: 1
# Priority 2: blacken-docs runs after markdownlint-fix (both modify markdown).
- repo: https://github.com/adamchainz/blacken-docs
rev: 1.20.0
hooks:
@@ -68,48 +130,7 @@ repos:
)$
additional_dependencies:
- black==25.12.0
- repo: https://github.com/crate-ci/typos
rev: v1.40.0
hooks:
- id: typos
- repo: local
hooks:
- id: cargo-fmt
name: cargo fmt
entry: cargo fmt --
language: system
types: [rust]
pass_filenames: false # This makes it a lot faster
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.14.10
hooks:
- id: ruff-format
- id: ruff-check
args: [--fix, --exit-non-zero-on-fix]
types_or: [python, pyi]
require_serial: true
# Prettier
- repo: https://github.com/rbubley/mirrors-prettier
rev: v3.7.4
hooks:
- id: prettier
types: [yaml]
# zizmor detects security vulnerabilities in GitHub Actions workflows.
# Additional configuration for the tool is found in `.github/zizmor.yml`
- repo: https://github.com/zizmorcore/zizmor-pre-commit
rev: v1.19.0
hooks:
- id: zizmor
- repo: https://github.com/python-jsonschema/check-jsonschema
rev: 0.36.0
hooks:
- id: check-github-workflows
priority: 2
# `actionlint` hook, for verifying correct syntax in GitHub Actions workflows.
# Some additional configuration for `actionlint` can be found in `.github/actionlint.yaml`.
@@ -119,19 +140,16 @@ repos:
- id: actionlint
stages:
# This hook is disabled by default, since it's quite slow.
# To run all hooks *including* this hook, use `uvx pre-commit run -a --hook-stage=manual`.
# To run *just* this hook, use `uvx pre-commit run -a actionlint --hook-stage=manual`.
# To run all hooks *including* this hook, use `uvx prek run -a --hook-stage=manual`.
# To run *just* this hook, use `uvx prek run -a actionlint --hook-stage=manual`.
- manual
args:
- "-ignore=SC2129" # ignorable stylistic lint from shellcheck
- "-ignore=SC2016" # another shellcheck lint: seems to have false positives?
language: golang # means renovate will also update `additional_dependencies`
additional_dependencies:
# actionlint has a shellcheck integration which extracts shell scripts in `run:` steps from GitHub Actions
# and checks these with shellcheck. This is arguably its most useful feature,
# but the integration only works if shellcheck is installed
- "github.com/wasilibs/go-shellcheck/cmd/shellcheck@v0.11.1"
- repo: https://github.com/shellcheck-py/shellcheck-py
rev: v0.11.0.1
hooks:
- id: shellcheck
priority: 0

@@ -1,5 +1,63 @@
# Changelog
## 0.14.11
Released on 2026-01-08.
### Preview features
- Consolidate diagnostics for matched disable/enable suppression comments ([#22099](https://github.com/astral-sh/ruff/pull/22099))
- Report diagnostics for invalid/unmatched range suppression comments ([#21908](https://github.com/astral-sh/ruff/pull/21908))
- \[`airflow`\] Passing positional argument into `airflow.lineage.hook.HookLineageCollector.create_asset` is not allowed (`AIR303`) ([#22046](https://github.com/astral-sh/ruff/pull/22046))
- \[`refurb`\] Mark `FURB192` fix as always unsafe ([#22210](https://github.com/astral-sh/ruff/pull/22210))
- \[`ruff`\] Add `non-empty-init-module` (`RUF067`) ([#22143](https://github.com/astral-sh/ruff/pull/22143))
### Bug fixes
- Fix GitHub format for multi-line diagnostics ([#22108](https://github.com/astral-sh/ruff/pull/22108))
- \[`flake8-unused-arguments`\] Mark `**kwargs` in `TypeVar` as used (`ARG001`) ([#22214](https://github.com/astral-sh/ruff/pull/22214))
### Rule changes
- Add `help:` subdiagnostics for several Ruff rules that can sometimes appear to disagree with `ty` ([#22331](https://github.com/astral-sh/ruff/pull/22331))
- \[`pylint`\] Demote `PLW1510` fix to display-only ([#22318](https://github.com/astral-sh/ruff/pull/22318))
- \[`pylint`\] Ignore identical members (`PLR1714`) ([#22220](https://github.com/astral-sh/ruff/pull/22220))
- \[`pylint`\] Improve diagnostic range for `PLC0206` ([#22312](https://github.com/astral-sh/ruff/pull/22312))
- \[`ruff`\] Improve fix title for `RUF102` invalid rule code ([#22100](https://github.com/astral-sh/ruff/pull/22100))
- \[`flake8-simplify`\]: Avoid unnecessary builtins import for `SIM105` ([#22358](https://github.com/astral-sh/ruff/pull/22358))
### Configuration
- Allow Python 3.15 as valid `target-version` value in preview ([#22419](https://github.com/astral-sh/ruff/pull/22419))
- Check `required-version` before parsing rules ([#22410](https://github.com/astral-sh/ruff/pull/22410))
- Include configured `src` directories when resolving graphs ([#22451](https://github.com/astral-sh/ruff/pull/22451))
### Documentation
- Update `T201` suggestion to not use root logger to satisfy `LOG015` ([#22059](https://github.com/astral-sh/ruff/pull/22059))
- Fix `iter` example in unsafe fixes doc ([#22118](https://github.com/astral-sh/ruff/pull/22118))
- \[`flake8_print`\] better suggestion for `basicConfig` in `T201` docs ([#22101](https://github.com/astral-sh/ruff/pull/22101))
- \[`pylint`\] Restore the fix safety docs for `PLW0133` ([#22211](https://github.com/astral-sh/ruff/pull/22211))
- Fix Jupyter notebook discovery info for editors ([#22447](https://github.com/astral-sh/ruff/pull/22447))
### Contributors
- [@charliermarsh](https://github.com/charliermarsh)
- [@ntBre](https://github.com/ntBre)
- [@cenviity](https://github.com/cenviity)
- [@njhearp](https://github.com/njhearp)
- [@cbachhuber](https://github.com/cbachhuber)
- [@jelle-openai](https://github.com/jelle-openai)
- [@AlexWaygood](https://github.com/AlexWaygood)
- [@ValdonVitija](https://github.com/ValdonVitija)
- [@BurntSushi](https://github.com/BurntSushi)
- [@Jkhall81](https://github.com/Jkhall81)
- [@PeterJCLaw](https://github.com/PeterJCLaw)
- [@harupy](https://github.com/harupy)
- [@amyreese](https://github.com/amyreese)
- [@sjyangkevin](https://github.com/sjyangkevin)
- [@woodruffw](https://github.com/woodruffw)
## 0.14.10
Released on 2025-12-18.

@@ -65,6 +65,7 @@ When working on ty, PR titles should start with `[ty]` and be tagged with the `t
- All changes must be tested. If you're not testing your changes, you're not done.
- Get your tests to pass. If you didn't run the tests, your code does not work.
- Follow existing code style. Check neighboring files for patterns.
- Always run `uvx pre-commit run -a` at the end of a task.
- Always run `uvx prek run -a` at the end of a task.
- Avoid writing significant amounts of new code. This is often a sign that we're missing an existing method or mechanism that could help solve the problem. Look for existing utilities first.
- Avoid falling back to patterns that require `panic!`, `unreachable!`, or `.unwrap()`. Instead, try to encode those constraints in the type system.
- Prefer let chains (`if let` combined with `&&`) over nested `if let` statements to reduce indentation and improve readability.

@@ -53,12 +53,12 @@ cargo install cargo-insta
You'll need [uv](https://docs.astral.sh/uv/getting-started/installation/) (or `pipx` and `pip`) to
run Python utility commands.
You can optionally install pre-commit hooks to automatically run the validation checks
You can optionally install hooks to automatically run the validation checks
when making a commit:
```shell
uv tool install pre-commit
pre-commit install
uv tool install prek
prek install
```
We recommend [nextest](https://nexte.st/) to run Ruff's test suite (via `cargo nextest run`),
@@ -85,7 +85,7 @@ and that it passes both the lint and test validation checks:
```shell
cargo clippy --workspace --all-targets --all-features -- -D warnings # Rust linting
RUFF_UPDATE_SCHEMA=1 cargo test # Rust testing and updating ruff.schema.json
uvx pre-commit run --all-files --show-diff-on-failure # Rust and Python formatting, Markdown and Python linting, etc.
uvx prek run -a # Rust and Python formatting, Markdown and Python linting, etc.
```
These checks will run on GitHub Actions when you open your pull request, but running them locally
@@ -381,7 +381,7 @@ Commit each step of this process separately for easier review.
- Often labels will be missing from pull requests they will need to be manually organized into the proper section
- Changes should be edited to be user-facing descriptions, avoiding internal details
- Square brackets (eg, `[ruff]` project name) will be automatically escaped by `pre-commit`
- Square brackets (eg, `[ruff]` project name) will be automatically escaped by `prek`
Additionally, for minor releases:

Cargo.lock

@@ -466,9 +466,9 @@ dependencies = [
[[package]]
name = "clap"
version = "4.5.53"
version = "4.5.54"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c9e340e012a1bf4935f5282ed1436d1489548e8f72308207ea5df0e23d2d03f8"
checksum = "c6e6ff9dcd79cff5cd969a17a545d79e84ab086e444102a591e288a8aa3ce394"
dependencies = [
"clap_builder",
"clap_derive",
@@ -476,9 +476,9 @@ dependencies = [
[[package]]
name = "clap_builder"
version = "4.5.53"
version = "4.5.54"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d76b5d13eaa18c901fd2f7fca939fefe3a0727a953561fefdf3b2922b8569d00"
checksum = "fa42cf4d2b7a41bc8f663a7cab4031ebafa1bf3875705bfaf8466dc60ab52c00"
dependencies = [
"anstream",
"anstyle",
@@ -666,7 +666,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "117725a109d387c937a1533ce01b450cbde6b88abceea8473c4d7a85853cda3c"
dependencies = [
"lazy_static",
"windows-sys 0.52.0",
"windows-sys 0.59.0",
]
[[package]]
@@ -675,7 +675,7 @@ version = "3.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fde0e0ec90c9dfb3b4b1a0891a7dcd0e2bffde2f7efed5fe7c9bb00e5bfb915e"
dependencies = [
"windows-sys 0.52.0",
"windows-sys 0.59.0",
]
[[package]]
@@ -1033,7 +1033,7 @@ dependencies = [
"libc",
"option-ext",
"redox_users",
"windows-sys 0.60.2",
"windows-sys 0.61.0",
]
[[package]]
@@ -1125,7 +1125,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "39cab71617ae0d63f51a36d69f866391735b51691dbda63cf6f96d042b63efeb"
dependencies = [
"libc",
"windows-sys 0.52.0",
"windows-sys 0.61.0",
]
[[package]]
@@ -1289,15 +1289,6 @@ dependencies = [
"smallvec",
]
[[package]]
name = "getopts"
version = "0.2.24"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cfe4fbac503b8d1f88e6676011885f34b7174f46e59956bba534ba83abded4df"
dependencies = [
"unicode-width",
]
[[package]]
name = "getrandom"
version = "0.2.16"
@@ -1583,11 +1574,11 @@ dependencies = [
[[package]]
name = "imperative"
version = "1.0.6"
version = "1.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "29a1f6526af721f9aec9ceed7ab8ebfca47f3399d08b80056c2acca3fcb694a9"
checksum = "35e1d0bd9c575c52e59aad8e122a11786e852a154678d0c86e9e243d55273970"
dependencies = [
"phf",
"phf 0.13.1",
"rust-stemmers",
]
@@ -1648,9 +1639,9 @@ dependencies = [
[[package]]
name = "insta"
version = "1.45.1"
version = "1.46.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "983e3b24350c84ab8a65151f537d67afbbf7153bb9f1110e03e9fa9b07f67a5c"
checksum = "1b66886d14d18d420ab5052cbff544fc5d34d0b2cdd35eb5976aaa10a4a472e5"
dependencies = [
"console 0.15.11",
"once_cell",
@@ -1727,7 +1718,7 @@ checksum = "e04d7f318608d35d4b61ddd75cbdaee86b023ebe2bd5a66ee0915f0bf93095a9"
dependencies = [
"hermit-abi",
"libc",
"windows-sys 0.52.0",
"windows-sys 0.59.0",
]
[[package]]
@@ -1791,7 +1782,7 @@ dependencies = [
"portable-atomic",
"portable-atomic-util",
"serde_core",
"windows-sys 0.52.0",
"windows-sys 0.61.0",
]
[[package]]
@@ -1874,9 +1865,9 @@ checksum = "bbd2bcb4c963f2ddae06a2efc7e9f3591312473c50c6685e1f298068316e66fe"
[[package]]
name = "libc"
version = "0.2.178"
version = "0.2.179"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "37c93d8daa9d8a012fd8ab92f088405fb202ea0b6ab73ee2482ae66af4f42091"
checksum = "c5a2d376baa530d1238d133232d15e239abad80d05838b4b59354e5268af431f"
[[package]]
name = "libcst"
@@ -2488,7 +2479,17 @@ version = "0.11.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1fd6780a80ae0c52cc120a26a1a42c1ae51b247a253e4e06113d23d2c2edd078"
dependencies = [
"phf_shared",
"phf_shared 0.11.3",
]
[[package]]
name = "phf"
version = "0.13.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c1562dc717473dbaa4c1f85a36410e03c047b2e7df7f45ee938fbef64ae7fadf"
dependencies = [
"phf_shared 0.13.1",
"serde",
]
[[package]]
@@ -2498,7 +2499,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aef8048c789fa5e851558d709946d6d79a8ff88c0440c587967f8e94bfb1216a"
dependencies = [
"phf_generator",
"phf_shared",
"phf_shared 0.11.3",
]
[[package]]
@@ -2507,7 +2508,7 @@ version = "0.11.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3c80231409c20246a13fddb31776fb942c38553c51e871f8cbd687a4cfb5843d"
dependencies = [
"phf_shared",
"phf_shared 0.11.3",
"rand 0.8.5",
]
@@ -2520,6 +2521,15 @@ dependencies = [
"siphasher",
]
[[package]]
name = "phf_shared"
version = "0.13.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e57fef6bc5981e38c2ce2d63bfa546861309f875b8a75f092d1d54ae2d64f266"
dependencies = [
"siphasher",
]
[[package]]
name = "pin-project-lite"
version = "0.2.16"
@@ -2912,7 +2922,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.14.10"
version = "0.14.11"
dependencies = [
"anyhow",
"argfile",
@@ -2928,6 +2938,7 @@ dependencies = [
"filetime",
"globwalk",
"ignore",
"indexmap",
"indoc",
"insta",
"insta-cmd",
@@ -3171,7 +3182,7 @@ dependencies = [
[[package]]
name = "ruff_linter"
version = "0.14.10"
version = "0.14.11"
dependencies = [
"aho-corasick",
"anyhow",
@@ -3529,7 +3540,7 @@ dependencies = [
[[package]]
name = "ruff_wasm"
version = "0.14.10"
version = "0.14.11"
dependencies = [
"console_error_panic_hook",
"console_log",
@@ -3627,7 +3638,7 @@ dependencies = [
"errno",
"libc",
"linux-raw-sys",
"windows-sys 0.52.0",
"windows-sys 0.61.0",
]
[[package]]
@@ -3645,7 +3656,7 @@ checksum = "28d3b2b1366ec20994f1fd18c3c594f05c5dd4bc44d8bb0c1c632c8d6829481f"
[[package]]
name = "salsa"
version = "0.25.2"
source = "git+https://github.com/salsa-rs/salsa.git?rev=309c249088fdeef0129606fa34ec2eefc74736ff#309c249088fdeef0129606fa34ec2eefc74736ff"
source = "git+https://github.com/salsa-rs/salsa.git?rev=9860ff6ca0f1f8f3a8d6b832020002790b501254#9860ff6ca0f1f8f3a8d6b832020002790b501254"
dependencies = [
"boxcar",
"compact_str",
@@ -3670,12 +3681,12 @@ dependencies = [
[[package]]
name = "salsa-macro-rules"
version = "0.25.2"
source = "git+https://github.com/salsa-rs/salsa.git?rev=309c249088fdeef0129606fa34ec2eefc74736ff#309c249088fdeef0129606fa34ec2eefc74736ff"
source = "git+https://github.com/salsa-rs/salsa.git?rev=9860ff6ca0f1f8f3a8d6b832020002790b501254#9860ff6ca0f1f8f3a8d6b832020002790b501254"
[[package]]
name = "salsa-macros"
version = "0.25.2"
source = "git+https://github.com/salsa-rs/salsa.git?rev=309c249088fdeef0129606fa34ec2eefc74736ff#309c249088fdeef0129606fa34ec2eefc74736ff"
source = "git+https://github.com/salsa-rs/salsa.git?rev=9860ff6ca0f1f8f3a8d6b832020002790b501254#9860ff6ca0f1f8f3a8d6b832020002790b501254"
dependencies = [
"proc-macro2",
"quote",
@@ -3993,9 +4004,9 @@ checksum = "e396b6523b11ccb83120b115a0b7366de372751aa6edf19844dfb13a6af97e91"
[[package]]
name = "syn"
version = "2.0.111"
version = "2.0.113"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "390cc9a294ab71bdb1aa2e99d13be9c753cd2d7bd6560c77118597410c4d2e87"
checksum = "678faa00651c9eb72dd2020cbdf275d92eccb2400d568e419efdd64838145cb4"
dependencies = [
"proc-macro2",
"quote",
@@ -4029,7 +4040,7 @@ dependencies = [
"getrandom 0.3.4",
"once_cell",
"rustix",
"windows-sys 0.52.0",
"windows-sys 0.61.0",
]
[[package]]
@@ -4059,7 +4070,7 @@ checksum = "d4ea810f0692f9f51b382fff5893887bb4580f5fa246fde546e0b13e7fcee662"
dependencies = [
"fnv",
"nom",
"phf",
"phf 0.11.3",
"phf_codegen",
]
@@ -4443,6 +4454,7 @@ version = "0.0.0"
dependencies = [
"bitflags 2.10.0",
"camino",
"compact_str",
"get-size2",
"insta",
"itertools 0.14.0",
@@ -4511,11 +4523,13 @@ dependencies = [
"regex-automata",
"ruff_cache",
"ruff_db",
"ruff_diagnostics",
"ruff_macros",
"ruff_memory_usage",
"ruff_options_metadata",
"ruff_python_ast",
"ruff_python_formatter",
"ruff_python_trivia",
"ruff_text_size",
"rustc-hash",
"salsa",
@@ -4800,22 +4814,20 @@ checksum = "b4ac048d71ede7ee76d585517add45da530660ef4390e49b098733c6e897f254"
[[package]]
name = "unicode_names2"
version = "1.3.0"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d1673eca9782c84de5f81b82e4109dcfb3611c8ba0d52930ec4a9478f547b2dd"
checksum = "d189085656ca1203291e965444e7f6a2723fbdd1dd9f34f8482e79bafd8338a0"
dependencies = [
"phf",
"phf 0.11.3",
"unicode_names2_generator",
]
[[package]]
name = "unicode_names2_generator"
version = "1.3.0"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b91e5b84611016120197efd7dc93ef76774f4e084cd73c9fb3ea4a86c570c56e"
checksum = "1262662dc96937c71115228ce2e1d30f41db71a7a45d3459e98783ef94052214"
dependencies = [
"getopts",
"log",
"phf_codegen",
"rand 0.8.5",
]
@@ -5133,7 +5145,7 @@ version = "0.1.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c2a7b1c03c876122aa43f3020e6c3c3ee5c05081c9a00739faf7503aeba10d22"
dependencies = [
"windows-sys 0.52.0",
"windows-sys 0.61.0",
]
[[package]]

@@ -150,7 +150,7 @@ regex-automata = { version = "0.4.9" }
rustc-hash = { version = "2.0.0" }
rustc-stable-hash = { version = "0.1.2" }
# When updating salsa, make sure to also update the revision in `fuzz/Cargo.toml`
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "309c249088fdeef0129606fa34ec2eefc74736ff", default-features = false, features = [
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "9860ff6ca0f1f8f3a8d6b832020002790b501254", default-features = false, features = [
"compact_str",
"macros",
"salsa_unstable",
@@ -200,7 +200,7 @@ unic-ucd-category = { version = "0.9" }
unicode-ident = { version = "1.0.12" }
unicode-normalization = { version = "0.1.23" }
unicode-width = { version = "0.2.0" }
unicode_names2 = { version = "1.2.2" }
unicode_names2 = { version = "2.0.0" }
url = { version = "2.5.0" }
uuid = { version = "1.6.1", features = ["v4", "fast-rng", "macro-diagnostics"] }
walkdir = { version = "2.3.2" }

@@ -150,8 +150,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
# For a specific version.
curl -LsSf https://astral.sh/ruff/0.14.10/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.14.10/install.ps1 | iex"
curl -LsSf https://astral.sh/ruff/0.14.11/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.14.11/install.ps1 | iex"
```
You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
@@ -184,7 +184,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.14.10
rev: v0.14.11
hooks:
# Run the linter.
- id: ruff-check

@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.14.10"
version = "0.14.11"
publish = true
authors = { workspace = true }
edition = { workspace = true }
@@ -48,6 +48,7 @@ colored = { workspace = true }
filetime = { workspace = true }
globwalk = { workspace = true }
ignore = { workspace = true }
indexmap = { workspace = true }
is-macro = { workspace = true }
itertools = { workspace = true }
jiff = { workspace = true }

@@ -2,6 +2,7 @@ use crate::args::{AnalyzeGraphArgs, ConfigArguments};
use crate::resolve::resolve;
use crate::{ExitStatus, resolve_default_files};
use anyhow::Result;
use indexmap::IndexSet;
use log::{debug, warn};
use path_absolutize::CWD;
use ruff_db::system::{SystemPath, SystemPathBuf};
@@ -11,7 +12,7 @@ use ruff_linter::source_kind::SourceKind;
use ruff_linter::{warn_user, warn_user_once};
use ruff_python_ast::{PySourceType, SourceType};
use ruff_workspace::resolver::{ResolvedFile, match_exclusion, python_files_in_path};
use rustc_hash::FxHashMap;
use rustc_hash::{FxBuildHasher, FxHashMap};
use std::io::Write;
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex};
@@ -59,17 +60,34 @@ pub(crate) fn analyze_graph(
})
.collect::<FxHashMap<_, _>>();
// Create a database from the source roots.
let src_roots = package_roots
.values()
.filter_map(|package| package.as_deref())
.filter_map(|package| package.parent())
.map(Path::to_path_buf)
.filter_map(|path| SystemPathBuf::from_path_buf(path).ok())
.collect();
// Create a database from the source roots, combining configured `src` paths with detected
// package roots. Configured paths are added first so they take precedence, and duplicates
// are removed.
let mut src_roots: IndexSet<SystemPathBuf, FxBuildHasher> = IndexSet::default();
// Add configured `src` paths first (for precedence), filtering to only include existing
// directories.
src_roots.extend(
pyproject_config
.settings
.linter
.src
.iter()
.filter(|path| path.is_dir())
.filter_map(|path| SystemPathBuf::from_path_buf(path.clone()).ok()),
);
// Add detected package roots.
src_roots.extend(
package_roots
.values()
.filter_map(|package| package.as_deref())
.filter_map(|path| path.parent())
.filter_map(|path| SystemPathBuf::from_path_buf(path.to_path_buf()).ok()),
);
let db = ModuleDb::from_src_roots(
src_roots,
src_roots.into_iter().collect(),
pyproject_config
.settings
.analyze
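The comment in this hunk relies on `IndexSet` keeping insertion order and first occurrences. A minimal, self-contained sketch of that behaviour (crate APIs as published by `indexmap` and `rustc-hash`; the path strings are illustrative):

```rust
use indexmap::IndexSet;
use rustc_hash::FxBuildHasher;

fn main() {
    let mut roots: IndexSet<&str, FxBuildHasher> = IndexSet::default();
    // Configured `src` paths are inserted first...
    roots.extend(["lib", "app"]);
    // ...then detected package roots; the duplicate "app" is ignored, so the
    // configured entry keeps its earlier (higher-precedence) position.
    roots.extend(["app", "vendor"]);
    assert_eq!(roots.into_iter().collect::<Vec<_>>(), ["lib", "app", "vendor"]);
}
```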

@@ -29,10 +29,10 @@ pub(crate) fn show_settings(
bail!("No files found under the given path");
};
let settings = resolver.resolve(&path);
let (settings, config_path) = resolver.resolve_with_path(&path);
writeln!(writer, "Resolved settings for: \"{}\"", path.display())?;
if let Some(settings_path) = pyproject_config.path.as_ref() {
if let Some(settings_path) = config_path {
writeln!(writer, "Settings path: \"{}\"", settings_path.display())?;
}
write!(writer, "{settings}")?;

@@ -714,6 +714,121 @@ fn notebook_basic() -> Result<()> {
Ok(())
}
/// Test that the `src` configuration option is respected.
///
/// This is useful for monorepos where there are multiple source directories that need to be
/// included in the module resolution search path.
#[test]
fn src_option() -> Result<()> {
let tempdir = TempDir::new()?;
let root = ChildPath::new(tempdir.path());
// Create a lib directory with a package.
root.child("lib")
.child("mylib")
.child("__init__.py")
.write_str("def helper(): pass")?;
// Create an app directory with a file that imports from mylib.
root.child("app").child("__init__.py").write_str("")?;
root.child("app")
.child("main.py")
.write_str("from mylib import helper")?;
// Without src configured, the import from mylib won't resolve.
insta::with_settings!({
filters => INSTA_FILTERS.to_vec(),
}, {
assert_cmd_snapshot!(command().arg("app").current_dir(&root), @r#"
success: true
exit_code: 0
----- stdout -----
{
"app/__init__.py": [],
"app/main.py": []
}
----- stderr -----
"#);
});
// With src = ["lib"], the import should resolve.
root.child("ruff.toml").write_str(indoc::indoc! {r#"
src = ["lib"]
"#})?;
insta::with_settings!({
filters => INSTA_FILTERS.to_vec(),
}, {
assert_cmd_snapshot!(command().arg("app").current_dir(&root), @r#"
success: true
exit_code: 0
----- stdout -----
{
"app/__init__.py": [],
"app/main.py": [
"lib/mylib/__init__.py"
]
}
----- stderr -----
"#);
});
Ok(())
}
/// Test that glob patterns in `src` are expanded.
#[test]
fn src_glob_expansion() -> Result<()> {
let tempdir = TempDir::new()?;
let root = ChildPath::new(tempdir.path());
// Create multiple lib directories with packages.
root.child("libs")
.child("lib_a")
.child("pkg_a")
.child("__init__.py")
.write_str("def func_a(): pass")?;
root.child("libs")
.child("lib_b")
.child("pkg_b")
.child("__init__.py")
.write_str("def func_b(): pass")?;
// Create an app that imports from both packages.
root.child("app").child("__init__.py").write_str("")?;
root.child("app")
.child("main.py")
.write_str("from pkg_a import func_a\nfrom pkg_b import func_b")?;
// Use a glob pattern to include all lib directories.
root.child("ruff.toml").write_str(indoc::indoc! {r#"
src = ["libs/*"]
"#})?;
insta::with_settings!({
filters => INSTA_FILTERS.to_vec(),
}, {
assert_cmd_snapshot!(command().arg("app").current_dir(&root), @r#"
success: true
exit_code: 0
----- stdout -----
{
"app/__init__.py": [],
"app/main.py": [
"libs/lib_a/pkg_a/__init__.py",
"libs/lib_b/pkg_b/__init__.py"
]
}
----- stderr -----
"#);
});
Ok(())
}
#[test]
fn notebook_with_magic() -> Result<()> {
let tempdir = TempDir::new()?;

@@ -1126,6 +1126,35 @@ import os
Ok(())
}
#[test]
fn required_version_fails_to_parse() -> Result<()> {
let fixture = CliTest::with_file(
"ruff.toml",
r#"
required-version = "pikachu"
"#,
)?;
assert_cmd_snapshot!(fixture
.check_command(), @r#"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: Failed to load configuration `[TMP]/ruff.toml`
Cause: Failed to parse [TMP]/ruff.toml
Cause: TOML parse error at line 2, column 20
|
2 | required-version = "pikachu"
| ^^^^^^^^^
Failed to parse version: Unexpected end of version specifier, expected operator:
pikachu
^^^^^^^
"#);
Ok(())
}
#[test]
fn required_version_exact_mismatch() -> Result<()> {
let version = env!("CARGO_PKG_VERSION");
@@ -1137,10 +1166,10 @@ required-version = "0.1.0"
"#,
)?;
insta::with_settings!({
filters => vec![(version, "[VERSION]")]
}, {
assert_cmd_snapshot!(fixture
let mut settings = insta::Settings::clone_current();
settings.add_filter(version, "[VERSION]");
settings.bind(|| {
assert_cmd_snapshot!(fixture
.check_command()
.arg("--config")
.arg("ruff.toml")
@@ -1154,6 +1183,7 @@ import os
----- stderr -----
ruff failed
Cause: Failed to load configuration `[TMP]/ruff.toml`
Cause: Required version `==0.1.0` does not match the running version `[VERSION]`
");
});
@@ -1212,10 +1242,10 @@ required-version = ">{version}"
),
)?;
insta::with_settings!({
filters => vec![(version, "[VERSION]")]
}, {
assert_cmd_snapshot!(fixture
let mut settings = insta::Settings::clone_current();
settings.add_filter(version, "[VERSION]");
settings.bind(|| {
assert_cmd_snapshot!(fixture
.check_command()
.arg("--config")
.arg("ruff.toml")
@@ -1229,6 +1259,48 @@ import os
----- stderr -----
ruff failed
Cause: Failed to load configuration `[TMP]/ruff.toml`
Cause: Required version `>[VERSION]` does not match the running version `[VERSION]`
");
});
Ok(())
}
#[test]
fn required_version_precedes_rule_validation() -> Result<()> {
let version = env!("CARGO_PKG_VERSION");
let fixture = CliTest::with_file(
"ruff.toml",
&format!(
r#"
required-version = ">{version}"
[lint]
select = ["RUF999"]
"#
),
)?;
let mut settings = insta::Settings::clone_current();
settings.add_filter(version, "[VERSION]");
settings.bind(|| {
assert_cmd_snapshot!(fixture
.check_command()
.arg("--config")
.arg("ruff.toml")
.arg("-")
.pass_stdin(r#"
import os
"#), @"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: Failed to load configuration `[TMP]/ruff.toml`
Cause: Required version `>[VERSION]` does not match the running version `[VERSION]`
");
});
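The `Settings::clone_current`/`bind` pattern that replaces `insta::with_settings!` in these tests can be exercised standalone (the regex filter here is illustrative; `add_filter` requires insta's `filters` feature):

```rust
#[test]
fn redacts_versions() {
    // Clone the ambient insta settings, add a redaction, and bind them for the
    // closure's duration; snapshot assertions inside `bind` see the filter.
    let mut settings = insta::Settings::clone_current();
    settings.add_filter(r"\d+\.\d+\.\d+", "[VERSION]");
    settings.bind(|| {
        insta::assert_snapshot!("ruff 0.14.11", @"ruff [VERSION]");
    });
}
```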

@@ -16,6 +16,7 @@ success: true
exit_code: 0
----- stdout -----
Resolved settings for: "[TMP]/foo/test.py"
Settings path: "[TMP]/foo/pyproject.toml"
# General Settings
cache_dir = "[TMP]/foo/.ruff_cache"

@@ -50,6 +50,56 @@ ignore = [
Ok(())
}
#[test]
fn display_settings_from_nested_directory() -> anyhow::Result<()> {
let tempdir = TempDir::new().context("Failed to create temp directory.")?;
// Tempdir paths on macOS are symlinks, which doesn't play nicely with
// our snapshot filtering.
let project_dir =
dunce::canonicalize(tempdir.path()).context("Failed to canonicalize tempdir path.")?;
// Root pyproject.toml.
std::fs::write(
project_dir.join("pyproject.toml"),
r#"
[tool.ruff]
line-length = 100
[tool.ruff.lint]
select = ["E", "F"]
"#,
)?;
// Create a subdirectory with its own pyproject.toml.
let subdir = project_dir.join("subdir");
std::fs::create_dir(&subdir)?;
std::fs::write(
subdir.join("pyproject.toml"),
r#"
[tool.ruff]
line-length = 120
[tool.ruff.lint]
select = ["E", "F", "I"]
"#,
)?;
std::fs::write(subdir.join("test.py"), r#"import os"#).context("Failed to write test.py.")?;
insta::with_settings!({filters => vec![
(&*tempdir_filter(&project_dir), "<temp_dir>/"),
(r#"\\(\w\w|\s|\.|")"#, "/$1"),
]}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["check", "--show-settings", "subdir/test.py"])
.current_dir(&project_dir));
});
Ok(())
}
fn tempdir_filter(project_dir: &Path) -> String {
format!(r#"{}\\?/?"#, regex::escape(project_dir.to_str().unwrap()))
}

@@ -0,0 +1,410 @@
---
source: crates/ruff/tests/show_settings.rs
info:
program: ruff
args:
- check
- "--show-settings"
- subdir/test.py
---
success: true
exit_code: 0
----- stdout -----
Resolved settings for: "<temp_dir>/subdir/test.py"
Settings path: "<temp_dir>/subdir/pyproject.toml"
# General Settings
cache_dir = "<temp_dir>/subdir/.ruff_cache"
fix = false
fix_only = false
output_format = full
show_fixes = false
unsafe_fixes = hint
# File Resolver Settings
file_resolver.exclude = [
".bzr",
".direnv",
".eggs",
".git",
".git-rewrite",
".hg",
".ipynb_checkpoints",
".mypy_cache",
".nox",
".pants.d",
".pyenv",
".pytest_cache",
".pytype",
".ruff_cache",
".svn",
".tox",
".venv",
".vscode",
"__pypackages__",
"_build",
"buck-out",
"dist",
"node_modules",
"site-packages",
"venv",
]
file_resolver.extend_exclude = []
file_resolver.force_exclude = false
file_resolver.include = [
"*.py",
"*.pyi",
"*.ipynb",
"**/pyproject.toml",
]
file_resolver.extend_include = []
file_resolver.respect_gitignore = true
file_resolver.project_root = "<temp_dir>/subdir"
# Linter Settings
linter.exclude = []
linter.project_root = "<temp_dir>/subdir"
linter.rules.enabled = [
unsorted-imports (I001),
missing-required-import (I002),
mixed-spaces-and-tabs (E101),
multiple-imports-on-one-line (E401),
module-import-not-at-top-of-file (E402),
line-too-long (E501),
multiple-statements-on-one-line-colon (E701),
multiple-statements-on-one-line-semicolon (E702),
useless-semicolon (E703),
none-comparison (E711),
true-false-comparison (E712),
not-in-test (E713),
not-is-test (E714),
type-comparison (E721),
bare-except (E722),
lambda-assignment (E731),
ambiguous-variable-name (E741),
ambiguous-class-name (E742),
ambiguous-function-name (E743),
io-error (E902),
unused-import (F401),
import-shadowed-by-loop-var (F402),
undefined-local-with-import-star (F403),
late-future-import (F404),
undefined-local-with-import-star-usage (F405),
undefined-local-with-nested-import-star-usage (F406),
future-feature-not-defined (F407),
percent-format-invalid-format (F501),
percent-format-expected-mapping (F502),
percent-format-expected-sequence (F503),
percent-format-extra-named-arguments (F504),
percent-format-missing-argument (F505),
percent-format-mixed-positional-and-named (F506),
percent-format-positional-count-mismatch (F507),
percent-format-star-requires-sequence (F508),
percent-format-unsupported-format-character (F509),
string-dot-format-invalid-format (F521),
string-dot-format-extra-named-arguments (F522),
string-dot-format-extra-positional-arguments (F523),
string-dot-format-missing-arguments (F524),
string-dot-format-mixing-automatic (F525),
f-string-missing-placeholders (F541),
multi-value-repeated-key-literal (F601),
multi-value-repeated-key-variable (F602),
expressions-in-star-assignment (F621),
multiple-starred-expressions (F622),
assert-tuple (F631),
is-literal (F632),
invalid-print-syntax (F633),
if-tuple (F634),
break-outside-loop (F701),
continue-outside-loop (F702),
yield-outside-function (F704),
return-outside-function (F706),
default-except-not-last (F707),
forward-annotation-syntax-error (F722),
redefined-while-unused (F811),
undefined-name (F821),
undefined-export (F822),
undefined-local (F823),
unused-variable (F841),
unused-annotation (F842),
raise-not-implemented (F901),
]
linter.rules.should_fix = [
unsorted-imports (I001),
missing-required-import (I002),
mixed-spaces-and-tabs (E101),
multiple-imports-on-one-line (E401),
module-import-not-at-top-of-file (E402),
line-too-long (E501),
multiple-statements-on-one-line-colon (E701),
multiple-statements-on-one-line-semicolon (E702),
useless-semicolon (E703),
none-comparison (E711),
true-false-comparison (E712),
not-in-test (E713),
not-is-test (E714),
type-comparison (E721),
bare-except (E722),
lambda-assignment (E731),
ambiguous-variable-name (E741),
ambiguous-class-name (E742),
ambiguous-function-name (E743),
io-error (E902),
unused-import (F401),
import-shadowed-by-loop-var (F402),
undefined-local-with-import-star (F403),
late-future-import (F404),
undefined-local-with-import-star-usage (F405),
undefined-local-with-nested-import-star-usage (F406),
future-feature-not-defined (F407),
percent-format-invalid-format (F501),
percent-format-expected-mapping (F502),
percent-format-expected-sequence (F503),
percent-format-extra-named-arguments (F504),
percent-format-missing-argument (F505),
percent-format-mixed-positional-and-named (F506),
percent-format-positional-count-mismatch (F507),
percent-format-star-requires-sequence (F508),
percent-format-unsupported-format-character (F509),
string-dot-format-invalid-format (F521),
string-dot-format-extra-named-arguments (F522),
string-dot-format-extra-positional-arguments (F523),
string-dot-format-missing-arguments (F524),
string-dot-format-mixing-automatic (F525),
f-string-missing-placeholders (F541),
multi-value-repeated-key-literal (F601),
multi-value-repeated-key-variable (F602),
expressions-in-star-assignment (F621),
multiple-starred-expressions (F622),
assert-tuple (F631),
is-literal (F632),
invalid-print-syntax (F633),
if-tuple (F634),
break-outside-loop (F701),
continue-outside-loop (F702),
yield-outside-function (F704),
return-outside-function (F706),
default-except-not-last (F707),
forward-annotation-syntax-error (F722),
redefined-while-unused (F811),
undefined-name (F821),
undefined-export (F822),
undefined-local (F823),
unused-variable (F841),
unused-annotation (F842),
raise-not-implemented (F901),
]
linter.per_file_ignores = {}
linter.safety_table.forced_safe = []
linter.safety_table.forced_unsafe = []
linter.unresolved_target_version = none
linter.per_file_target_version = {}
linter.preview = disabled
linter.explicit_preview_rules = false
linter.extension = ExtensionMapping({})
linter.allowed_confusables = []
linter.builtins = []
linter.dummy_variable_rgx = ^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$
linter.external = []
linter.ignore_init_module_imports = true
linter.logger_objects = []
linter.namespace_packages = []
linter.src = [
"<temp_dir>/subdir",
"<temp_dir>/subdir/src",
]
linter.tab_size = 4
linter.line_length = 120
linter.task_tags = [
TODO,
FIXME,
XXX,
]
linter.typing_modules = []
linter.typing_extensions = true
# Linter Plugins
linter.flake8_annotations.mypy_init_return = false
linter.flake8_annotations.suppress_dummy_args = false
linter.flake8_annotations.suppress_none_returning = false
linter.flake8_annotations.allow_star_arg_any = false
linter.flake8_annotations.ignore_fully_untyped = false
linter.flake8_bandit.hardcoded_tmp_directory = [
/tmp,
/var/tmp,
/dev/shm,
]
linter.flake8_bandit.check_typed_exception = false
linter.flake8_bandit.extend_markup_names = []
linter.flake8_bandit.allowed_markup_calls = []
linter.flake8_bugbear.extend_immutable_calls = []
linter.flake8_builtins.allowed_modules = []
linter.flake8_builtins.ignorelist = []
linter.flake8_builtins.strict_checking = false
linter.flake8_comprehensions.allow_dict_calls_with_keyword_arguments = false
linter.flake8_copyright.notice_rgx = (?i)Copyright\s+((?:\(C\)|©)\s+)?\d{4}((-|,\s)\d{4})*
linter.flake8_copyright.author = none
linter.flake8_copyright.min_file_size = 0
linter.flake8_errmsg.max_string_length = 0
linter.flake8_gettext.function_names = [
_,
gettext,
ngettext,
]
linter.flake8_implicit_str_concat.allow_multiline = true
linter.flake8_import_conventions.aliases = {
altair = alt,
holoviews = hv,
matplotlib = mpl,
matplotlib.pyplot = plt,
networkx = nx,
numpy = np,
numpy.typing = npt,
pandas = pd,
panel = pn,
plotly.express = px,
polars = pl,
pyarrow = pa,
seaborn = sns,
tensorflow = tf,
tkinter = tk,
xml.etree.ElementTree = ET,
}
linter.flake8_import_conventions.banned_aliases = {}
linter.flake8_import_conventions.banned_from = []
linter.flake8_pytest_style.fixture_parentheses = false
linter.flake8_pytest_style.parametrize_names_type = tuple
linter.flake8_pytest_style.parametrize_values_type = list
linter.flake8_pytest_style.parametrize_values_row_type = tuple
linter.flake8_pytest_style.raises_require_match_for = [
BaseException,
Exception,
ValueError,
OSError,
IOError,
EnvironmentError,
socket.error,
]
linter.flake8_pytest_style.raises_extend_require_match_for = []
linter.flake8_pytest_style.mark_parentheses = false
linter.flake8_quotes.inline_quotes = double
linter.flake8_quotes.multiline_quotes = double
linter.flake8_quotes.docstring_quotes = double
linter.flake8_quotes.avoid_escape = true
linter.flake8_self.ignore_names = [
_make,
_asdict,
_replace,
_fields,
_field_defaults,
_name_,
_value_,
]
linter.flake8_tidy_imports.ban_relative_imports = "parents"
linter.flake8_tidy_imports.banned_api = {}
linter.flake8_tidy_imports.banned_module_level_imports = []
linter.flake8_type_checking.strict = false
linter.flake8_type_checking.exempt_modules = [
typing,
typing_extensions,
]
linter.flake8_type_checking.runtime_required_base_classes = []
linter.flake8_type_checking.runtime_required_decorators = []
linter.flake8_type_checking.quote_annotations = false
linter.flake8_unused_arguments.ignore_variadic_names = false
linter.isort.required_imports = []
linter.isort.combine_as_imports = false
linter.isort.force_single_line = false
linter.isort.force_sort_within_sections = false
linter.isort.detect_same_package = true
linter.isort.case_sensitive = false
linter.isort.force_wrap_aliases = false
linter.isort.force_to_top = []
linter.isort.known_modules = {}
linter.isort.order_by_type = true
linter.isort.relative_imports_order = furthest_to_closest
linter.isort.single_line_exclusions = []
linter.isort.split_on_trailing_comma = true
linter.isort.classes = []
linter.isort.constants = []
linter.isort.variables = []
linter.isort.no_lines_before = []
linter.isort.lines_after_imports = -1
linter.isort.lines_between_types = 0
linter.isort.forced_separate = []
linter.isort.section_order = [
known { type = future },
known { type = standard_library },
known { type = third_party },
known { type = first_party },
known { type = local_folder },
]
linter.isort.default_section = known { type = third_party }
linter.isort.no_sections = false
linter.isort.from_first = false
linter.isort.length_sort = false
linter.isort.length_sort_straight = false
linter.mccabe.max_complexity = 10
linter.pep8_naming.ignore_names = [
setUp,
tearDown,
setUpClass,
tearDownClass,
setUpModule,
tearDownModule,
asyncSetUp,
asyncTearDown,
setUpTestData,
failureException,
longMessage,
maxDiff,
]
linter.pep8_naming.classmethod_decorators = []
linter.pep8_naming.staticmethod_decorators = []
linter.pycodestyle.max_line_length = 120
linter.pycodestyle.max_doc_length = none
linter.pycodestyle.ignore_overlong_task_comments = false
linter.pyflakes.extend_generics = []
linter.pyflakes.allowed_unused_imports = []
linter.pylint.allow_magic_value_types = [
str,
bytes,
]
linter.pylint.allow_dunder_method_names = []
linter.pylint.max_args = 5
linter.pylint.max_positional_args = 5
linter.pylint.max_returns = 6
linter.pylint.max_bool_expr = 5
linter.pylint.max_branches = 12
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
linter.ruff.strictly_empty_init_modules = false
# Formatter Settings
formatter.exclude = []
formatter.unresolved_target_version = 3.10
formatter.per_file_target_version = {}
formatter.preview = disabled
formatter.line_width = 120
formatter.line_ending = auto
formatter.indent_style = space
formatter.indent_width = 4
formatter.quote_style = double
formatter.magic_trailing_comma = respect
formatter.docstring_code_format = disabled
formatter.docstring_code_line_width = dynamic
# Analyze Settings
analyze.exclude = []
analyze.preview = disabled
analyze.target_version = 3.10
analyze.string_imports = disabled
analyze.extension = ExtensionMapping({})
analyze.include_dependencies = {}
analyze.type_checking_imports = true
----- stderr -----

@@ -15,7 +15,7 @@ use ruff_db::files::{File, system_path_to_file};
use ruff_db::source::source_text;
use ruff_db::system::{InMemorySystem, MemoryFileSystem, SystemPath, SystemPathBuf, TestSystem};
use ruff_python_ast::PythonVersion;
use ty_project::metadata::options::{EnvironmentOptions, Options};
use ty_project::metadata::options::{AnalysisOptions, EnvironmentOptions, Options};
use ty_project::metadata::value::{RangedValue, RelativePathBuf};
use ty_project::watch::{ChangeEvent, ChangedKind};
use ty_project::{CheckMode, Db, ProjectDatabase, ProjectMetadata};
@@ -67,6 +67,7 @@ fn tomllib_path(file: &TestFile) -> SystemPathBuf {
SystemPathBuf::from("src").join(file.name())
}
#[expect(clippy::needless_update)]
fn setup_tomllib_case() -> Case {
let system = TestSystem::default();
let fs = system.memory_file_system().clone();
@@ -85,6 +86,10 @@ fn setup_tomllib_case() -> Case {
python_version: Some(RangedValue::cli(PythonVersion::PY312)),
..EnvironmentOptions::default()
}),
analysis: Some(AnalysisOptions {
respect_type_ignore_comments: Some(false),
..AnalysisOptions::default()
}),
..Options::default()
});
@@ -221,7 +226,7 @@ fn setup_micro_case(code: &str) -> Case {
let file_path = "src/test.py";
fs.write_file_all(
SystemPathBuf::from(file_path),
ruff_python_trivia::textwrap::dedent(code),
&*ruff_python_trivia::textwrap::dedent(code),
)
.unwrap();
@@ -755,7 +760,7 @@ fn datetype(criterion: &mut Criterion) {
max_dep_date: "2025-07-04",
python_version: PythonVersion::PY313,
},
2,
4,
);
bench_project(&benchmark, criterion);

@@ -71,6 +71,8 @@ impl Display for Benchmark<'_> {
}
}
#[track_caller]
#[expect(clippy::cast_precision_loss)]
fn check_project(db: &ProjectDatabase, project_name: &str, max_diagnostics: usize) {
let result = db.check();
let diagnostics = result.len();
@@ -79,6 +81,12 @@ fn check_project(db: &ProjectDatabase, project_name: &str, max_diagnostics: usiz
diagnostics > 1 && diagnostics <= max_diagnostics,
"Expected between 1 and {max_diagnostics} diagnostics on project '{project_name}' but got {diagnostics}",
);
if (max_diagnostics - diagnostics) as f64 / max_diagnostics as f64 > 0.10 {
tracing::warn!(
"The expected diagnostics for project `{project_name}` can be reduced: expected {max_diagnostics} but got {diagnostics}"
);
}
}
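A worked instance of the 10% slack check above; the ALTAIR budget in this change drops from 1000 to 850, consistent with this check firing:

```rust
// If more than 10% of the diagnostics budget goes unused, the limit can be tightened.
fn can_reduce(max_diagnostics: usize, diagnostics: usize) -> bool {
    (max_diagnostics - diagnostics) as f64 / max_diagnostics as f64 > 0.10
}

fn main() {
    assert!(can_reduce(1000, 850)); // (1000 - 850) / 1000 = 0.15 > 0.10: warn
    assert!(!can_reduce(850, 800)); // ~0.059 slack: within tolerance, keep the limit
}
```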
static ALTAIR: Benchmark = Benchmark::new(
@@ -101,7 +109,7 @@ static ALTAIR: Benchmark = Benchmark::new(
max_dep_date: "2025-06-17",
python_version: PythonVersion::PY312,
},
1000,
850,
);
static COLOUR_SCIENCE: Benchmark = Benchmark::new(
@@ -120,7 +128,7 @@ static COLOUR_SCIENCE: Benchmark = Benchmark::new(
max_dep_date: "2025-06-17",
python_version: PythonVersion::PY310,
},
1070,
350,
);
static FREQTRADE: Benchmark = Benchmark::new(
@@ -163,7 +171,7 @@ static PANDAS: Benchmark = Benchmark::new(
max_dep_date: "2025-06-17",
python_version: PythonVersion::PY312,
},
4000,
3800,
);
static PYDANTIC: Benchmark = Benchmark::new(
@@ -181,7 +189,7 @@ static PYDANTIC: Benchmark = Benchmark::new(
max_dep_date: "2025-06-17",
python_version: PythonVersion::PY39,
},
7000,
3200,
);
static SYMPY: Benchmark = Benchmark::new(
@@ -194,7 +202,7 @@ static SYMPY: Benchmark = Benchmark::new(
max_dep_date: "2025-06-17",
python_version: PythonVersion::PY312,
},
13116,
13400,
);
static TANJUN: Benchmark = Benchmark::new(
@@ -207,7 +215,7 @@ static TANJUN: Benchmark = Benchmark::new(
max_dep_date: "2025-06-17",
python_version: PythonVersion::PY312,
},
320,
110,
);
static STATIC_FRAME: Benchmark = Benchmark::new(
@@ -223,7 +231,7 @@ static STATIC_FRAME: Benchmark = Benchmark::new(
max_dep_date: "2025-08-09",
python_version: PythonVersion::PY311,
},
1100,
1700,
);
#[track_caller]

@@ -1,3 +1,4 @@
use std::fmt::Formatter;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
@@ -49,3 +50,15 @@ impl CancellationToken {
self.cancelled.load(std::sync::atomic::Ordering::Relaxed)
}
}
/// The operation was canceled by the provided [`CancellationToken`].
#[derive(Debug)]
pub struct Canceled;
impl std::error::Error for Canceled {}
impl std::fmt::Display for Canceled {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.write_str("operation was canceled")
}
}
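A self-contained sketch of how the new `Canceled` error pairs with a cancellation token (the token below is a minimal stand-in; `is_cancelled` as the accessor name is an assumption):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};

// Minimal stand-ins: `Canceled`'s shape is taken from the change above, while
// the token type itself is simplified for the sketch.
#[derive(Debug)]
struct Canceled;

#[derive(Clone, Default)]
struct CancellationToken(Arc<AtomicBool>);

impl CancellationToken {
    fn cancel(&self) {
        self.0.store(true, Ordering::Relaxed);
    }

    fn is_cancelled(&self) -> bool {
        self.0.load(Ordering::Relaxed)
    }
}

// A long-running operation checks the token at safe points and bails out with
// the dedicated error type instead of a panic or a sentinel value.
fn check_files(files: &[&str], token: &CancellationToken) -> Result<usize, Canceled> {
    let mut checked = 0;
    for _file in files {
        if token.is_cancelled() {
            return Err(Canceled);
        }
        checked += 1;
    }
    Ok(checked)
}

fn main() {
    let token = CancellationToken::default();
    token.cancel();
    assert!(check_files(&["a.py", "b.py"], &token).is_err());
}
```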

@@ -98,6 +98,44 @@ impl Diagnostic {
diag
}
/// Adds sub-diagnostics that tell the user that this is a bug in ty
/// and ask them to open an issue on GitHub.
pub fn add_bug_sub_diagnostics(&mut self, url_encoded_title: &str) {
self.sub(SubDiagnostic::new(
SubDiagnosticSeverity::Info,
"This indicates a bug in ty.",
));
self.sub(SubDiagnostic::new(
SubDiagnosticSeverity::Info,
format_args!(
"If you could open an issue at https://github.com/astral-sh/ty/issues/new?title={url_encoded_title}, we'd be very appreciative!"
),
));
self.sub(SubDiagnostic::new(
SubDiagnosticSeverity::Info,
format!(
"Platform: {os} {arch}",
os = std::env::consts::OS,
arch = std::env::consts::ARCH
),
));
if let Some(version) = crate::program_version() {
self.sub(SubDiagnostic::new(
SubDiagnosticSeverity::Info,
format!("Version: {version}"),
));
}
self.sub(SubDiagnostic::new(
SubDiagnosticSeverity::Info,
format!(
"Args: {args:?}",
args = std::env::args().collect::<Vec<_>>()
),
));
}
/// Add an annotation to this diagnostic.
///
/// Annotations for a diagnostic are optional, but if any are added,
@@ -1019,6 +1057,13 @@ impl DiagnosticId {
matches!(self, DiagnosticId::Lint(_))
}
pub const fn as_lint(&self) -> Option<LintName> {
match self {
DiagnosticId::Lint(name) => Some(*name),
_ => None,
}
}
/// Returns `true` if this `DiagnosticId` represents a lint with the given name.
pub fn is_lint_named(&self, name: &str) -> bool {
matches!(self, DiagnosticId::Lint(self_name) if self_name == name)

@@ -14,6 +14,7 @@ use crate::diagnostic::{Span, UnifiedFile};
use crate::file_revision::FileRevision;
use crate::files::file_root::FileRoots;
use crate::files::private::FileStatus;
use crate::source::SourceText;
use crate::system::{SystemPath, SystemPathBuf, SystemVirtualPath, SystemVirtualPathBuf};
use crate::vendored::{VendoredPath, VendoredPathBuf};
use crate::{Db, FxDashMap, vendored};
@@ -323,6 +324,17 @@ pub struct File {
/// the file has been deleted is to change the status to `Deleted`.
#[default]
status: FileStatus,
/// Overrides the result of [`source_text`](crate::source::source_text).
///
/// This is useful when running queries after modifying a file's content but
/// before the content is written to disk. For example, to verify that the applied fixes
/// didn't introduce any new errors.
///
/// The override gets automatically removed the next time the file changes.
#[default]
#[returns(ref)]
pub source_text_override: Option<SourceText>,
}
// The Salsa heap is tracked separately.
@@ -444,20 +456,28 @@ impl File {
_ => (FileStatus::NotFound, FileRevision::zero(), None),
};
let mut clear_override = false;
if file.status(db) != status {
tracing::debug!("Updating the status of `{}`", file.path(db));
file.set_status(db).to(status);
clear_override = true;
}
if file.revision(db) != revision {
tracing::debug!("Updating the revision of `{}`", file.path(db));
file.set_revision(db).to(revision);
clear_override = true;
}
if file.permissions(db) != permission {
tracing::debug!("Updating the permissions of `{}`", file.path(db));
file.set_permissions(db).to(permission);
}
if clear_override && file.source_text_override(db).is_some() {
file.set_source_text_override(db).to(None);
}
}
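A self-contained model of the override lifecycle in this hunk (the real `File` is a salsa input; the struct and field handling below are illustrative): an override shadows the stored text until `sync` observes a status or revision change, at which point it is dropped.

```rust
#[derive(Default)]
struct FileState {
    revision: u64,
    source_text_override: Option<String>,
}

impl FileState {
    fn sync(&mut self, new_revision: u64) {
        if self.revision != new_revision {
            self.revision = new_revision;
            // The on-disk content changed, so any in-memory override is stale.
            self.source_text_override = None;
        }
    }
}

fn main() {
    let mut file = FileState::default();
    file.source_text_override = Some("fixed source".to_string());
    file.sync(0); // revision unchanged: the override survives
    assert!(file.source_text_override.is_some());
    file.sync(1); // revision bumped: the override is cleared
    assert!(file.source_text_override.is_none());
}
```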
/// Returns `true` if the file exists.

@@ -85,6 +85,13 @@ pub fn max_parallelism() -> NonZeroUsize {
})
}
// Use a reasonably large stack size to avoid running into stack overflows too easily. The
// size was chosen so that we can still handle large expressions involving binary
// operators (x + x + … + x), both during the AST walk in semantic index building and
// during type checking. With this stack size, we can handle expressions that are
// several times larger than the corresponding limits in existing type checkers.
pub const STACK_SIZE: usize = 16 * 1024 * 1024;
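This constant is presumably consumed via `std::thread::Builder`; a sketch under that assumption (whether ty spawns its workers exactly like this is not shown in the change):

```rust
pub const STACK_SIZE: usize = 16 * 1024 * 1024;

fn main() {
    // Spawn a worker with the enlarged stack; deeply recursive work (AST walks,
    // type inference over x + x + … + x chains) runs inside the closure.
    let handle = std::thread::Builder::new()
        .stack_size(STACK_SIZE)
        .spawn(|| {
            // ... recursive analysis here ...
        })
        .expect("failed to spawn worker thread");
    handle.join().unwrap();
}
```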
/// Trait for types that can provide Rust documentation.
///
/// Use `derive(RustDoc)` to automatically implement this trait for types that have a static string documentation.

@@ -1,6 +1,8 @@
use std::borrow::Cow;
use std::ops::Deref;
use std::sync::Arc;
use ruff_diagnostics::SourceMap;
use ruff_notebook::Notebook;
use ruff_python_ast::PySourceType;
use ruff_source_file::LineIndex;
@@ -16,6 +18,10 @@ pub fn source_text(db: &dyn Db, file: File) -> SourceText {
let _span = tracing::trace_span!("source_text", file = %path).entered();
let mut read_error = None;
if let Some(source) = file.source_text_override(db) {
return source.clone();
}
let kind = if is_notebook(db.system(), path) {
file.read_to_notebook(db)
.unwrap_or_else(|error| {
@@ -90,6 +96,45 @@ impl SourceText {
pub fn read_error(&self) -> Option<&SourceTextError> {
self.inner.read_error.as_ref()
}
/// Returns a new instance for this file with the updated source text (Python code).
///
/// Uses the `source_map` to preserve the cell-boundaries.
#[must_use]
pub fn with_text(&self, new_text: String, source_map: &SourceMap) -> Self {
let new_kind = match &self.inner.kind {
SourceTextKind::Text(_) => SourceTextKind::Text(new_text),
SourceTextKind::Notebook { notebook } => {
let mut new_notebook = notebook.as_ref().clone();
new_notebook.update(source_map, new_text);
SourceTextKind::Notebook {
notebook: new_notebook.into(),
}
}
};
Self {
inner: Arc::new(SourceTextInner {
kind: new_kind,
read_error: self.inner.read_error.clone(),
}),
}
}
pub fn to_bytes(&self) -> Cow<'_, [u8]> {
match &self.inner.kind {
SourceTextKind::Text(source) => Cow::Borrowed(source.as_bytes()),
SourceTextKind::Notebook { notebook } => {
let mut output: Vec<u8> = Vec::new();
notebook
.write(&mut output)
.expect("writing to a Vec should never fail");
Cow::Owned(output)
}
}
}
}
impl Deref for SourceText {
@@ -117,13 +162,13 @@ impl std::fmt::Debug for SourceText {
}
}
#[derive(Eq, PartialEq, get_size2::GetSize)]
#[derive(Eq, PartialEq, get_size2::GetSize, Clone)]
struct SourceTextInner {
kind: SourceTextKind,
read_error: Option<SourceTextError>,
}
#[derive(Eq, PartialEq, get_size2::GetSize)]
#[derive(Eq, PartialEq, get_size2::GetSize, Clone)]
enum SourceTextKind {
Text(String),
Notebook {

@@ -271,7 +271,12 @@ pub trait WritableSystem: System {
fn create_new_file(&self, path: &SystemPath) -> Result<()>;
/// Writes the given content to the file at the given path.
fn write_file(&self, path: &SystemPath, content: &str) -> Result<()>;
fn write_file(&self, path: &SystemPath, content: &str) -> Result<()> {
self.write_file_bytes(path, content.as_bytes())
}
/// Writes the given content to the file at the given path.
fn write_file_bytes(&self, path: &SystemPath, content: &[u8]) -> Result<()>;
/// Creates a directory at `path` as well as any intermediate directories.
fn create_directory_all(&self, path: &SystemPath) -> Result<()>;
@@ -311,6 +316,8 @@ pub trait WritableSystem: System {
Ok(Some(cache_path))
}
fn dyn_clone(&self) -> Box<dyn WritableSystem>;
}
#[derive(Clone, Debug, Eq, PartialEq)]
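This hunk turns `write_file` into a provided method, so implementors only supply the byte-level primitive. A self-contained sketch of the pattern (types simplified from the ones above):

```rust
use std::cell::RefCell;
use std::io::Result;

trait WritableSystem {
    /// Provided: the `&str` convenience wrapper delegates to the byte primitive.
    fn write_file(&self, path: &str, content: &str) -> Result<()> {
        self.write_file_bytes(path, content.as_bytes())
    }

    /// Required: implementors only supply the byte-level write.
    fn write_file_bytes(&self, path: &str, content: &[u8]) -> Result<()>;
}

struct InMemory(RefCell<Vec<u8>>);

impl WritableSystem for InMemory {
    fn write_file_bytes(&self, _path: &str, content: &[u8]) -> Result<()> {
        *self.0.borrow_mut() = content.to_vec();
        Ok(())
    }
}

fn main() -> Result<()> {
    let sys = InMemory(RefCell::new(Vec::new()));
    // Callers keep the string API; only one method had to be implemented.
    sys.write_file("a.py", "import os")?;
    assert_eq!(sys.0.borrow().as_slice(), b"import os");
    Ok(())
}
```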

@@ -122,7 +122,9 @@ impl MemoryFileSystem {
let entry = by_path.get(&normalized).ok_or_else(not_found)?;
match entry {
Entry::File(file) => Ok(file.content.clone()),
Entry::File(file) => {
String::from_utf8(file.content.to_vec()).map_err(|_| invalid_utf8())
}
Entry::Directory(_) => Err(is_a_directory()),
}
}
@@ -139,7 +141,7 @@ impl MemoryFileSystem {
.get(&path.as_ref().to_path_buf())
.ok_or_else(not_found)?;
Ok(file.content.clone())
String::from_utf8(file.content.to_vec()).map_err(|_| invalid_utf8())
}
pub fn exists(&self, path: &SystemPath) -> bool {
@@ -161,7 +163,7 @@ impl MemoryFileSystem {
match by_path.entry(normalized) {
btree_map::Entry::Vacant(entry) => {
entry.insert(Entry::File(File {
content: String::new(),
content: Box::default(),
last_modified: file_time_now(),
}));
@@ -177,13 +179,17 @@ impl MemoryFileSystem {
/// Stores a new file in the file system.
///
/// The operation overrides the content for an existing file with the same normalized `path`.
pub fn write_file(&self, path: impl AsRef<SystemPath>, content: impl ToString) -> Result<()> {
pub fn write_file(
&self,
path: impl AsRef<SystemPath>,
content: impl AsRef<[u8]>,
) -> Result<()> {
let mut by_path = self.inner.by_path.write().unwrap();
let normalized = self.normalize_path(path.as_ref());
let file = get_or_create_file(&mut by_path, &normalized)?;
file.content = content.to_string();
file.content = content.as_ref().to_vec().into_boxed_slice();
file.last_modified = file_time_now();
Ok(())
@@ -214,7 +220,7 @@ impl MemoryFileSystem {
pub fn write_file_all(
&self,
path: impl AsRef<SystemPath>,
content: impl ToString,
content: impl AsRef<[u8]>,
) -> Result<()> {
let path = path.as_ref();
@@ -228,19 +234,24 @@ impl MemoryFileSystem {
/// Stores a new virtual file in the file system.
///
/// The operation overrides the content for an existing virtual file with the same `path`.
pub fn write_virtual_file(&self, path: impl AsRef<SystemVirtualPath>, content: impl ToString) {
pub fn write_virtual_file(
&self,
path: impl AsRef<SystemVirtualPath>,
content: impl AsRef<[u8]>,
) {
let path = path.as_ref();
let mut virtual_files = self.inner.virtual_files.write().unwrap();
let content = content.as_ref().to_vec().into_boxed_slice();
match virtual_files.entry(path.to_path_buf()) {
std::collections::hash_map::Entry::Vacant(entry) => {
entry.insert(File {
content: content.to_string(),
content,
last_modified: file_time_now(),
});
}
std::collections::hash_map::Entry::Occupied(mut entry) => {
entry.get_mut().content = content.to_string();
entry.get_mut().content = content;
}
}
}
@@ -468,7 +479,7 @@ impl Entry {
#[derive(Debug)]
struct File {
content: String,
content: Box<[u8]>,
last_modified: FileTime,
}
@@ -497,6 +508,13 @@ fn directory_not_empty() -> std::io::Error {
std::io::Error::other("directory not empty")
}
fn invalid_utf8() -> std::io::Error {
std::io::Error::new(
std::io::ErrorKind::InvalidData,
"stream did not contain valid UTF-8",
)
}
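A quick standalone check of this error mapping (the 0xFF byte is arbitrary invalid UTF-8):

```rust
use std::io;

fn invalid_utf8() -> io::Error {
    io::Error::new(
        io::ErrorKind::InvalidData,
        "stream did not contain valid UTF-8",
    )
}

fn main() {
    // 0xFF can never appear in well-formed UTF-8, so the conversion fails and
    // is surfaced with the same ErrorKind that std uses for non-UTF-8 reads.
    let err = String::from_utf8(vec![0xFF])
        .map_err(|_| invalid_utf8())
        .unwrap_err();
    assert_eq!(err.kind(), io::ErrorKind::InvalidData);
}
```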
fn create_dir_all(
paths: &mut RwLockWriteGuard<BTreeMap<Utf8PathBuf, Entry>>,
normalized: &Utf8Path,
@@ -533,7 +551,7 @@ fn get_or_create_file<'a>(
let entry = paths.entry(normalized.to_path_buf()).or_insert_with(|| {
Entry::File(File {
content: String::new(),
content: Box::default(),
last_modified: file_time_now(),
})
});
@@ -844,7 +862,7 @@ mod tests {
let fs = with_files(["c.py"]);
let error = fs
.write_file(SystemPath::new("a/b.py"), "content".to_string())
.write_file(SystemPath::new("a/b.py"), "content")
.unwrap_err();
assert_eq!(error.kind(), ErrorKind::NotFound);
@@ -855,7 +873,7 @@ mod tests {
let fs = with_files(["a/b.py"]);
let error = fs
.write_file_all(SystemPath::new("a/b.py/c"), "content".to_string())
.write_file_all(SystemPath::new("a/b.py/c"), "content")
.unwrap_err();
assert_eq!(error.kind(), ErrorKind::Other);
@@ -878,7 +896,7 @@ mod tests {
let fs = MemoryFileSystem::new();
let path = SystemPath::new("a.py");
fs.write_file_all(path, "Test content".to_string())?;
fs.write_file_all(path, "Test content")?;
assert_eq!(fs.read_to_string(path)?, "Test content");
@@ -915,9 +933,7 @@ mod tests {
fs.create_directory_all("a")?;
let error = fs
.write_file(SystemPath::new("a"), "content".to_string())
.unwrap_err();
let error = fs.write_file(SystemPath::new("a"), "content").unwrap_err();
assert_eq!(error.kind(), ErrorKind::Other);

@@ -361,13 +361,17 @@ impl WritableSystem for OsSystem {
std::fs::File::create_new(path).map(drop)
}
fn write_file(&self, path: &SystemPath, content: &str) -> Result<()> {
fn write_file_bytes(&self, path: &SystemPath, content: &[u8]) -> Result<()> {
std::fs::write(path.as_std_path(), content)
}
fn create_directory_all(&self, path: &SystemPath) -> Result<()> {
std::fs::create_dir_all(path.as_std_path())
}
fn dyn_clone(&self) -> Box<dyn WritableSystem> {
Box::new(self.clone())
}
}
impl Default for OsSystem {

@@ -205,13 +205,17 @@ impl WritableSystem for TestSystem {
self.system().create_new_file(path)
}
fn write_file(&self, path: &SystemPath, content: &str) -> Result<()> {
self.system().write_file(path, content)
fn write_file_bytes(&self, path: &SystemPath, content: &[u8]) -> Result<()> {
self.system().write_file_bytes(path, content)
}
fn create_directory_all(&self, path: &SystemPath) -> Result<()> {
self.system().create_directory_all(path)
}
fn dyn_clone(&self) -> Box<dyn WritableSystem> {
Box::new(self.clone())
}
}
/// Extension trait for databases that use a [`WritableSystem`].
@@ -283,7 +287,11 @@ pub trait DbWithTestSystem: Db + Sized {
///
/// ## Panics
/// If the db isn't using the [`InMemorySystem`].
fn write_virtual_file(&mut self, path: impl AsRef<SystemVirtualPath>, content: impl ToString) {
fn write_virtual_file(
&mut self,
path: impl AsRef<SystemVirtualPath>,
content: impl AsRef<[u8]>,
) {
let path = path.as_ref();
self.test_system()
.memory_file_system()
@@ -322,23 +330,23 @@ where
}
}
#[derive(Default, Debug)]
#[derive(Clone, Default, Debug)]
pub struct InMemorySystem {
user_config_directory: Mutex<Option<SystemPathBuf>>,
user_config_directory: Arc<Mutex<Option<SystemPathBuf>>>,
memory_fs: MemoryFileSystem,
}
impl InMemorySystem {
pub fn new(cwd: SystemPathBuf) -> Self {
Self {
user_config_directory: Mutex::new(None),
user_config_directory: Mutex::new(None).into(),
memory_fs: MemoryFileSystem::with_current_directory(cwd),
}
}
pub fn from_memory_fs(memory_fs: MemoryFileSystem) -> Self {
Self {
user_config_directory: Mutex::new(None),
user_config_directory: Mutex::new(None).into(),
memory_fs,
}
}
@@ -440,10 +448,7 @@ impl System for InMemorySystem {
}
fn dyn_clone(&self) -> Box<dyn System> {
Box::new(Self {
user_config_directory: Mutex::new(self.user_config_directory.lock().unwrap().clone()),
memory_fs: self.memory_fs.clone(),
})
Box::new(self.clone())
}
}
@@ -452,11 +457,15 @@ impl WritableSystem for InMemorySystem {
self.memory_fs.create_new_file(path)
}
fn write_file(&self, path: &SystemPath, content: &str) -> Result<()> {
fn write_file_bytes(&self, path: &SystemPath, content: &[u8]) -> Result<()> {
self.memory_fs.write_file(path, content)
}
fn create_directory_all(&self, path: &SystemPath) -> Result<()> {
self.memory_fs.create_directory_all(path)
}
fn dyn_clone(&self) -> Box<dyn WritableSystem> {
Box::new(self.clone())
}
}

@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.14.10"
version = "0.14.11"
publish = false
authors = { workspace = true }
edition = { workspace = true }

@@ -26,6 +26,7 @@ use crate::doc_lines::{doc_lines_from_ast, doc_lines_from_tokens};
use crate::fix::{FixResult, fix_file};
use crate::noqa::add_noqa;
use crate::package::PackageRoot;
use crate::preview::is_py315_support_enabled;
use crate::registry::Rule;
#[cfg(any(feature = "test-rules", test))]
use crate::rules::ruff::rules::test_rules::{self, TEST_RULES, TestRule};
@@ -33,7 +34,7 @@ use crate::settings::types::UnsafeFixes;
use crate::settings::{LinterSettings, TargetVersion, flags};
use crate::source_kind::SourceKind;
use crate::suppression::Suppressions;
use crate::{Locator, directives, fs};
use crate::{Locator, directives, fs, warn_user_once};
pub(crate) mod float;
@@ -450,6 +451,14 @@ pub fn lint_only(
) -> LinterResult {
let target_version = settings.resolve_target_version(path);
if matches!(target_version.linter_version(), PythonVersion::PY315)
&& !is_py315_support_enabled(settings)
{
warn_user_once!(
"Support for Python 3.15 is under development and may be unstable. Enable `preview` to remove this warning."
);
}
let parsed = source.into_parsed(source_kind, source_type, target_version.parser_version());
// Map row and column locations to byte slices (lazily).
@@ -555,6 +564,14 @@ pub fn lint_fix<'a>(
let target_version = settings.resolve_target_version(path);
if matches!(target_version.linter_version(), PythonVersion::PY315)
&& !is_py315_support_enabled(settings)
{
warn_user_once!(
"Support for Python 3.15 is under development and may be unstable. Enable `preview` to remove this warning."
);
}
// Continuously fix until the source code stabilizes.
loop {
// Parse once.

@@ -296,3 +296,8 @@ pub(crate) const fn is_s310_resolve_string_literal_bindings_enabled(
pub(crate) const fn is_range_suppressions_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()
}
// https://github.com/astral-sh/ruff/pull/22419
pub(crate) const fn is_py315_support_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()
}
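A minimal model of the gate added above, assuming `warn_user_once!` deduplicates the warning per process (reproduced here with `std::sync::Once`):

```rust
use std::sync::Once;

fn warn_py315_once(target_is_py315: bool, preview_enabled: bool) {
    static WARNED: Once = Once::new();
    if target_is_py315 && !preview_enabled {
        // Linting many files hits this path repeatedly, but the warning is
        // printed at most once per process.
        WARNED.call_once(|| {
            eprintln!(
                "warning: Support for Python 3.15 is under development and may be \
                 unstable. Enable `preview` to remove this warning."
            );
        });
    }
}

fn main() {
    for _ in 0..3 {
        warn_py315_once(true, false);
    }
}
```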

@@ -52,6 +52,7 @@ impl InvalidRuleCodeKind {
pub(crate) struct InvalidRuleCode {
pub(crate) rule_code: String,
pub(crate) kind: InvalidRuleCodeKind,
pub(crate) whole_comment: bool,
}
impl AlwaysFixableViolation for InvalidRuleCode {
@@ -65,7 +66,11 @@ impl AlwaysFixableViolation for InvalidRuleCode {
}
fn fix_title(&self) -> String {
"Remove the rule code".to_string()
if self.whole_comment {
format!("Remove the {} comment", self.kind.as_str())
} else {
format!("Remove the rule code `{}`", self.rule_code)
}
}
}
@@ -122,6 +127,7 @@ fn all_codes_invalid_diagnostic(
.collect::<Vec<_>>()
.join(", "),
kind: InvalidRuleCodeKind::Noqa,
whole_comment: true,
},
directive.range(),
)
@@ -139,6 +145,7 @@ fn some_codes_are_invalid_diagnostic(
InvalidRuleCode {
rule_code: invalid_code.to_string(),
kind: InvalidRuleCodeKind::Noqa,
whole_comment: false,
},
invalid_code.range(),
)

@@ -52,6 +52,25 @@ impl UnusedNOQAKind {
/// foo.bar()
/// ```
///
/// ## Conflict with other linters
/// When using `RUF100` with the `--fix` option, Ruff may remove trailing comments
/// that follow a `# noqa` directive on the same line, as it interprets the
/// remainder of the line as a description for the suppression.
///
/// To prevent Ruff from removing suppressions for other tools (like `pylint`
/// or `mypy`), separate them with a second `#` character:
///
/// ```python
/// # Bad: Ruff --fix will remove the pylint comment
/// def visit_ImportFrom(self, node): # noqa: N802, pylint: disable=invalid-name
/// pass
///
///
/// # Good: Ruff will preserve the pylint comment
/// def visit_ImportFrom(self, node): # noqa: N802 # pylint: disable=invalid-name
/// pass
/// ```
///
/// ## Options
/// - `lint.external`
///

@@ -10,7 +10,7 @@ RUF102 [*] Invalid rule code in `# noqa`: INVALID123
3 | # External code
4 | import re # noqa: V123
|
help: Remove the rule code
help: Remove the `# noqa` comment
1 | # Invalid code
- import os # noqa: INVALID123
2 + import os
@@ -28,7 +28,7 @@ RUF102 [*] Invalid rule code in `# noqa`: V123
5 | # Valid noqa
6 | import sys # noqa: E402
|
help: Remove the rule code
help: Remove the `# noqa` comment
1 | # Invalid code
2 | import os # noqa: INVALID123
3 | # External code
@@ -48,7 +48,7 @@ RUF102 [*] Invalid rule code in `# noqa`: INVALID456
8 | from itertools import product # Preceeding comment # noqa: INVALID789
9 | # Succeeding comment
|
help: Remove the rule code
help: Remove the rule code `INVALID456`
4 | import re # noqa: V123
5 | # Valid noqa
6 | import sys # noqa: E402
@@ -68,7 +68,7 @@ RUF102 [*] Invalid rule code in `# noqa`: INVALID789
9 | # Succeeding comment
10 | import math # noqa: INVALID000 # Succeeding comment
|
help: Remove the rule code
help: Remove the `# noqa` comment
5 | # Valid noqa
6 | import sys # noqa: E402
7 | from functools import cache # Preceeding comment # noqa: F401, INVALID456
@@ -88,7 +88,7 @@ RUF102 [*] Invalid rule code in `# noqa`: INVALID000
11 | # Mixed valid and invalid
12 | from typing import List # noqa: F401, INVALID123
|
help: Remove the rule code
help: Remove the `# noqa` comment
7 | from functools import cache # Preceeding comment # noqa: F401, INVALID456
8 | from itertools import product # Preceeding comment # noqa: INVALID789
9 | # Succeeding comment
@@ -108,7 +108,7 @@ RUF102 [*] Invalid rule code in `# noqa`: INVALID123
13 | # Test for multiple invalid
14 | from collections import defaultdict # noqa: INVALID100, INVALID200, F401
|
help: Remove the rule code
help: Remove the rule code `INVALID123`
9 | # Succeeding comment
10 | import math # noqa: INVALID000 # Succeeding comment
11 | # Mixed valid and invalid
@@ -128,7 +128,7 @@ RUF102 [*] Invalid rule code in `# noqa`: INVALID100
15 | # Test for preserving valid codes when fixing
16 | from itertools import chain # noqa: E402, INVALID300, F401
|
help: Remove the rule code
help: Remove the rule code `INVALID100`
11 | # Mixed valid and invalid
12 | from typing import List # noqa: F401, INVALID123
13 | # Test for multiple invalid
@@ -148,7 +148,7 @@ RUF102 [*] Invalid rule code in `# noqa`: INVALID200
15 | # Test for preserving valid codes when fixing
16 | from itertools import chain # noqa: E402, INVALID300, F401
|
help: Remove the rule code
help: Remove the rule code `INVALID200`
11 | # Mixed valid and invalid
12 | from typing import List # noqa: F401, INVALID123
13 | # Test for multiple invalid
@@ -168,7 +168,7 @@ RUF102 [*] Invalid rule code in `# noqa`: INVALID300
17 | # Test for mixed code types
18 | import json # noqa: E402, INVALID400, V100
|
help: Remove the rule code
help: Remove the rule code `INVALID300`
13 | # Test for multiple invalid
14 | from collections import defaultdict # noqa: INVALID100, INVALID200, F401
15 | # Test for preserving valid codes when fixing
@@ -188,7 +188,7 @@ RUF102 [*] Invalid rule code in `# noqa`: INVALID400
19 | # Test for rule redirects
20 | import pandas as pd # noqa: TCH002
|
help: Remove the rule code
help: Remove the rule code `INVALID400`
15 | # Test for preserving valid codes when fixing
16 | from itertools import chain # noqa: E402, INVALID300, F401
17 | # Test for mixed code types
@@ -207,7 +207,7 @@ RUF102 [*] Invalid rule code in `# noqa`: V100
19 | # Test for rule redirects
20 | import pandas as pd # noqa: TCH002
|
help: Remove the rule code
help: Remove the rule code `V100`
15 | # Test for preserving valid codes when fixing
16 | from itertools import chain # noqa: E402, INVALID300, F401
17 | # Test for mixed code types

@@ -10,7 +10,7 @@ RUF102 [*] Invalid rule code in `# noqa`: INVALID123
3 | # External code
4 | import re # noqa: V123
|
help: Remove the rule code
help: Remove the `# noqa` comment
1 | # Invalid code
- import os # noqa: INVALID123
2 + import os
@@ -28,7 +28,7 @@ RUF102 [*] Invalid rule code in `# noqa`: INVALID456
8 | from itertools import product # Preceeding comment # noqa: INVALID789
9 | # Succeeding comment
|
help: Remove the rule code
help: Remove the rule code `INVALID456`
4 | import re # noqa: V123
5 | # Valid noqa
6 | import sys # noqa: E402
@@ -48,7 +48,7 @@ RUF102 [*] Invalid rule code in `# noqa`: INVALID789
9 | # Succeeding comment
10 | import math # noqa: INVALID000 # Succeeding comment
|
help: Remove the rule code
help: Remove the `# noqa` comment
5 | # Valid noqa
6 | import sys # noqa: E402
7 | from functools import cache # Preceeding comment # noqa: F401, INVALID456
@@ -68,7 +68,7 @@ RUF102 [*] Invalid rule code in `# noqa`: INVALID000
11 | # Mixed valid and invalid
12 | from typing import List # noqa: F401, INVALID123
|
help: Remove the rule code
help: Remove the `# noqa` comment
7 | from functools import cache # Preceeding comment # noqa: F401, INVALID456
8 | from itertools import product # Preceeding comment # noqa: INVALID789
9 | # Succeeding comment
@@ -88,7 +88,7 @@ RUF102 [*] Invalid rule code in `# noqa`: INVALID123
13 | # Test for multiple invalid
14 | from collections import defaultdict # noqa: INVALID100, INVALID200, F401
|
help: Remove the rule code
help: Remove the rule code `INVALID123`
9 | # Succeeding comment
10 | import math # noqa: INVALID000 # Succeeding comment
11 | # Mixed valid and invalid
@@ -108,7 +108,7 @@ RUF102 [*] Invalid rule code in `# noqa`: INVALID100
15 | # Test for preserving valid codes when fixing
16 | from itertools import chain # noqa: E402, INVALID300, F401
|
help: Remove the rule code
help: Remove the rule code `INVALID100`
11 | # Mixed valid and invalid
12 | from typing import List # noqa: F401, INVALID123
13 | # Test for multiple invalid
@@ -128,7 +128,7 @@ RUF102 [*] Invalid rule code in `# noqa`: INVALID200
15 | # Test for preserving valid codes when fixing
16 | from itertools import chain # noqa: E402, INVALID300, F401
|
help: Remove the rule code
help: Remove the rule code `INVALID200`
11 | # Mixed valid and invalid
12 | from typing import List # noqa: F401, INVALID123
13 | # Test for multiple invalid
@@ -148,7 +148,7 @@ RUF102 [*] Invalid rule code in `# noqa`: INVALID300
17 | # Test for mixed code types
18 | import json # noqa: E402, INVALID400, V100
|
help: Remove the rule code
help: Remove the rule code `INVALID300`
13 | # Test for multiple invalid
14 | from collections import defaultdict # noqa: INVALID100, INVALID200, F401
15 | # Test for preserving valid codes when fixing
@@ -168,7 +168,7 @@ RUF102 [*] Invalid rule code in `# noqa`: INVALID400
19 | # Test for rule redirects
20 | import pandas as pd # noqa: TCH002
|
help: Remove the rule code
help: Remove the rule code `INVALID400`
15 | # Test for preserving valid codes when fixing
16 | from itertools import chain # noqa: E402, INVALID300, F401
17 | # Test for mixed code types

@@ -7,7 +7,7 @@ source: crates/ruff_linter/src/rules/ruff/mod.rs
--- Summary ---
Removed: 15
Added: 23
Added: 20
--- Removed ---
E741 Ambiguous variable name: `I`
@@ -301,6 +301,7 @@ RUF100 [*] Unused suppression (non-enabled: `E501`)
| ^^^^^^^^^^^^^^^^^^^^^
47 | I = 1
48 | # ruff: enable[E501]
| --------------------
|
help: Remove unused suppression
43 | def f():
@@ -308,26 +309,10 @@ help: Remove unused suppression
45 | # logged to user
- # ruff: disable[E501]
46 | I = 1
47 | # ruff: enable[E501]
48 |
RUF100 [*] Unused suppression (non-enabled: `E501`)
--> suppressions.py:48:5
|
46 | # ruff: disable[E501]
47 | I = 1
48 | # ruff: enable[E501]
| ^^^^^^^^^^^^^^^^^^^^
|
help: Remove unused suppression
45 | # logged to user
46 | # ruff: disable[E501]
47 | I = 1
- # ruff: enable[E501]
47 |
48 |
49 |
50 | def f():
49 | def f():
RUF100 [*] Unused `noqa` directive (unused: `E741`, `F841`)
@@ -563,8 +548,11 @@ RUF102 [*] Invalid rule code in suppression: YF829
| ^^^^^
94 | # ruff: disable[F841, RQW320]
95 | value = 0
96 | # ruff: enable[F841, RQW320]
97 | # ruff: enable[YF829]
| -----
|
-help: Remove the rule code
+help: Remove the suppression comment
90 |
91 | def f():
92 | # Unknown rule codes
@@ -572,6 +560,10 @@ help: Remove the rule code
93 | # ruff: disable[F841, RQW320]
94 | value = 0
95 | # ruff: enable[F841, RQW320]
- # ruff: enable[YF829]
96 |
97 |
98 | def f():
RUF102 [*] Invalid rule code in suppression: RQW320
@@ -583,30 +575,15 @@ RUF102 [*] Invalid rule code in suppression: RQW320
| ^^^^^^
95 | value = 0
96 | # ruff: enable[F841, RQW320]
| ------
97 | # ruff: enable[YF829]
|
-help: Remove the rule code
+help: Remove the rule code `RQW320`
91 | def f():
92 | # Unknown rule codes
93 | # ruff: disable[YF829]
- # ruff: disable[F841, RQW320]
94 + # ruff: disable[F841]
95 | value = 0
96 | # ruff: enable[F841, RQW320]
97 | # ruff: enable[YF829]
RUF102 [*] Invalid rule code in suppression: RQW320
--> suppressions.py:96:26
|
94 | # ruff: disable[F841, RQW320]
95 | value = 0
96 | # ruff: enable[F841, RQW320]
| ^^^^^^
97 | # ruff: enable[YF829]
|
help: Remove the rule code
93 | # ruff: disable[YF829]
94 | # ruff: disable[F841, RQW320]
95 | value = 0
- # ruff: enable[F841, RQW320]
96 + # ruff: enable[F841]
@@ -615,24 +592,6 @@ help: Remove the rule code
99 |
RUF102 [*] Invalid rule code in suppression: YF829
--> suppressions.py:97:20
|
95 | value = 0
96 | # ruff: enable[F841, RQW320]
97 | # ruff: enable[YF829]
| ^^^^^
|
help: Remove the rule code
94 | # ruff: disable[F841, RQW320]
95 | value = 0
96 | # ruff: enable[F841, RQW320]
- # ruff: enable[YF829]
97 |
98 |
99 | def f():
RUF103 [*] Invalid suppression comment: missing suppression codes like `[E501, ...]`
--> suppressions.py:109:5
|

View File

@@ -36,6 +36,7 @@ pub enum PythonVersion {
Py312,
Py313,
Py314,
Py315,
}
impl Default for PythonVersion {
@@ -58,6 +59,7 @@ impl TryFrom<ast::PythonVersion> for PythonVersion {
ast::PythonVersion::PY312 => Ok(Self::Py312),
ast::PythonVersion::PY313 => Ok(Self::Py313),
ast::PythonVersion::PY314 => Ok(Self::Py314),
ast::PythonVersion::PY315 => Ok(Self::Py315),
_ => Err(format!("unrecognized python version {value}")),
}
}
@@ -88,6 +90,7 @@ impl PythonVersion {
Self::Py312 => (3, 12),
Self::Py313 => (3, 13),
Self::Py314 => (3, 14),
Self::Py315 => (3, 15),
}
}
}
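
Adding `Py315` touches three places that must stay in sync: the enum itself, the `TryFrom<ast::PythonVersion>` conversion, and the tuple mapping. Because the `match` is exhaustive, forgetting one site is a compile error rather than a silent bug. A reduced illustration (hypothetical, trimmed to three variants; `as_tuple` is a stand-in name, since the diff does not show the real method's signature):

```rust
#[derive(Debug, Clone, Copy)]
enum PythonVersion {
    Py313,
    Py314,
    Py315,
}

impl PythonVersion {
    fn as_tuple(self) -> (u32, u32) {
        match self {
            // Omitting the `Py315` arm would fail to compile,
            // which is what keeps the three sites in sync.
            PythonVersion::Py313 => (3, 13),
            PythonVersion::Py314 => (3, 14),
            PythonVersion::Py315 => (3, 15),
        }
    }
}

fn main() {
    assert_eq!(PythonVersion::Py315.as_tuple(), (3, 15));
}
```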
@@ -604,13 +607,21 @@ impl TryFrom<String> for RequiredVersion {
type Error = pep440_rs::VersionSpecifiersParseError;
fn try_from(value: String) -> Result<Self, Self::Error> {
value.parse()
}
}
impl FromStr for RequiredVersion {
type Err = pep440_rs::VersionSpecifiersParseError;
fn from_str(value: &str) -> Result<Self, Self::Err> {
// Treat `0.3.1` as `==0.3.1`, for backwards compatibility.
-        if let Ok(version) = pep440_rs::Version::from_str(&value) {
+        if let Ok(version) = pep440_rs::Version::from_str(value) {
Ok(Self(VersionSpecifiers::from(
VersionSpecifier::equals_version(version),
)))
} else {
-            Ok(Self(VersionSpecifiers::from_str(&value)?))
+            Ok(Self(VersionSpecifiers::from_str(value)?))
}
}
}
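
The `FromStr` implementation above keeps the backwards-compatible rule that a bare version like `0.3.1` means `==0.3.1`. A condensed restatement of that fallback, using only the `pep440_rs` items that appear in the diff:

```rust
use std::str::FromStr;

use pep440_rs::{Version, VersionSpecifier, VersionSpecifiers, VersionSpecifiersParseError};

fn parse_required(value: &str) -> Result<VersionSpecifiers, VersionSpecifiersParseError> {
    if let Ok(version) = Version::from_str(value) {
        // A bare `0.3.1` is treated as the exact specifier `==0.3.1`.
        Ok(VersionSpecifiers::from(VersionSpecifier::equals_version(version)))
    } else {
        // Anything else must already be a specifier set, e.g. `>=0.3,<0.4`.
        VersionSpecifiers::from_str(value)
    }
}
```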

View File

@@ -13,7 +13,6 @@ use ruff_python_trivia::Cursor;
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize, TextSlice};
use smallvec::{SmallVec, smallvec};
-use crate::Locator;
use crate::checkers::ast::LintContext;
use crate::codes::Rule;
use crate::fix::edits::delete_comment;
@@ -24,6 +23,7 @@ use crate::rules::ruff::rules::{
UnmatchedSuppressionComment, UnusedCodes, UnusedNOQA, UnusedNOQAKind, code_is_valid,
};
use crate::settings::LinterSettings;
+use crate::{Locator, Violation};
#[derive(Clone, Debug, Eq, PartialEq)]
enum SuppressionAction {
@@ -85,11 +85,39 @@ pub(crate) struct Suppression {
/// Range for which the suppression applies
range: TextRange,
/// Any comments associated with the suppression
-    comments: SmallVec<[SuppressionComment; 2]>,
/// Whether this suppression actually suppressed a diagnostic
used: Cell<bool>,
+    comments: DisableEnableComments,
}
impl Suppression {
fn codes(&self) -> &[TextRange] {
&self.comments.disable_comment().codes
}
}
#[derive(Debug)]
pub(crate) enum DisableEnableComments {
/// An implicitly closed disable comment without a matching enable comment.
Disable(SuppressionComment),
/// A matching pair of disable and enable comments.
DisableEnable(SuppressionComment, SuppressionComment),
}
impl DisableEnableComments {
pub(crate) fn disable_comment(&self) -> &SuppressionComment {
match self {
DisableEnableComments::Disable(comment) => comment,
DisableEnableComments::DisableEnable(disable, _) => disable,
}
}
pub(crate) fn enable_comment(&self) -> Option<&SuppressionComment> {
match self {
DisableEnableComments::Disable(_) => None,
DisableEnableComments::DisableEnable(_, enable) => Some(enable),
}
}
}
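
Replacing the `SmallVec` of comments with this two-variant enum makes the invariant explicit: every suppression carries exactly one disable comment and at most one enable comment. A standalone sketch of the accessor pattern, with a stand-in comment type since `SuppressionComment` is internal to the linter:

```rust
#[derive(Debug, Clone)]
struct Comment(String);

enum DisableEnableComments {
    // An implicitly closed disable comment without a matching enable comment.
    Disable(Comment),
    // A matching pair of disable and enable comments.
    DisableEnable(Comment, Comment),
}

impl DisableEnableComments {
    fn disable_comment(&self) -> &Comment {
        // Both variants carry a disable comment, so this accessor is total.
        match self {
            Self::Disable(c) | Self::DisableEnable(c, _) => c,
        }
    }

    fn enable_comment(&self) -> Option<&Comment> {
        match self {
            Self::Disable(_) => None,
            Self::DisableEnable(_, e) => Some(e),
        }
    }
}

fn main() {
    let pair = DisableEnableComments::DisableEnable(
        Comment("# ruff: disable[E501]".into()),
        Comment("# ruff: enable[E501]".into()),
    );
    assert!(pair.enable_comment().is_some());
}
```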
#[derive(Copy, Clone, Debug)]
@@ -171,23 +199,17 @@ impl Suppressions {
if !code_is_valid(&suppression.code, &context.settings().external) {
// InvalidRuleCode
if context.is_rule_enabled(Rule::InvalidRuleCode) {
-                    for comment in &suppression.comments {
-                        let (range, edit) = Suppressions::delete_code_or_comment(
-                            locator,
-                            suppression,
-                            comment,
-                            true,
-                        );
-                        context
-                            .report_diagnostic(
-                                InvalidRuleCode {
-                                    rule_code: suppression.code.to_string(),
-                                    kind: InvalidRuleCodeKind::Suppression,
-                                },
-                                range,
-                            )
-                            .set_fix(Fix::safe_edit(edit));
-                    }
+                    Suppressions::report_suppression(
+                        context,
+                        locator,
+                        suppression,
+                        true,
+                        InvalidRuleCode {
+                            rule_code: suppression.code.to_string(),
+                            kind: InvalidRuleCodeKind::Suppression,
+                            whole_comment: suppression.codes().len() == 1,
+                        },
+                    );
}
} else if !suppression.used.get() {
// UnusedNOQA
@@ -197,42 +219,37 @@ impl Suppressions {
) else {
continue; // "external" lint code, don't treat it as unused
};
-                for comment in &suppression.comments {
-                    let (range, edit) = Suppressions::delete_code_or_comment(
-                        locator,
-                        suppression,
-                        comment,
-                        false,
-                    );
-                    let codes = if context.is_rule_enabled(rule) {
-                        UnusedCodes {
-                            unmatched: vec![suppression.code.to_string()],
-                            ..Default::default()
-                        }
-                    } else {
-                        UnusedCodes {
-                            disabled: vec![suppression.code.to_string()],
-                            ..Default::default()
-                        }
-                    };
-                    context
-                        .report_diagnostic(
-                            UnusedNOQA {
-                                codes: Some(codes),
-                                kind: UnusedNOQAKind::Suppression,
-                            },
-                            range,
-                        )
-                        .set_fix(Fix::safe_edit(edit));
-                }
+                let codes = if context.is_rule_enabled(rule) {
+                    UnusedCodes {
+                        unmatched: vec![suppression.code.to_string()],
+                        ..Default::default()
+                    }
+                } else {
+                    UnusedCodes {
+                        disabled: vec![suppression.code.to_string()],
+                        ..Default::default()
+                    }
+                };
+                Suppressions::report_suppression(
+                    context,
+                    locator,
+                    suppression,
+                    false,
+                    UnusedNOQA {
+                        codes: Some(codes),
+                        kind: UnusedNOQAKind::Suppression,
+                    },
+                );
}
-            } else if suppression.comments.len() == 1 {
+            } else if let DisableEnableComments::Disable(comment) = &suppression.comments {
                 // UnmatchedSuppressionComment
-                let range = suppression.comments[0].range;
-                if unmatched_ranges.insert(range) {
-                    context.report_diagnostic_if_enabled(UnmatchedSuppressionComment {}, range);
+                if unmatched_ranges.insert(comment.range) {
+                    context.report_diagnostic_if_enabled(
+                        UnmatchedSuppressionComment {},
+                        comment.range,
+                    );
}
}
}
@@ -267,6 +284,35 @@ impl Suppressions {
}
}
fn report_suppression<T: Violation>(
context: &LintContext,
locator: &Locator,
suppression: &Suppression,
highlight_only_code: bool,
kind: T,
) {
let disable_comment = suppression.comments.disable_comment();
let (range, edit) = Suppressions::delete_code_or_comment(
locator,
suppression,
disable_comment,
highlight_only_code,
);
let mut diagnostic = context.report_diagnostic(kind, range);
if let Some(enable_comment) = suppression.comments.enable_comment() {
let (enable_range, enable_range_edit) = Suppressions::delete_code_or_comment(
locator,
suppression,
enable_comment,
highlight_only_code,
);
diagnostic.secondary_annotation("", enable_range);
diagnostic.set_fix(Fix::safe_edits(edit, [enable_range_edit]));
} else {
diagnostic.set_fix(Fix::safe_edit(edit));
}
}
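
`report_suppression` centralizes what the two inlined loops above duplicated: one diagnostic anchored at the disable comment, an optional secondary annotation at the matching enable comment, and a fix carrying both edits so the pair is always updated together. A toy model of that shape (plain types, not ruff's diagnostic API):

```rust
struct Edit {
    start: usize,
    end: usize,
}

struct Diagnostic {
    primary: (usize, usize),
    secondary: Option<(usize, usize)>,
    edits: Vec<Edit>,
}

fn report(disable: (usize, usize), enable: Option<(usize, usize)>) -> Diagnostic {
    let mut edits = vec![Edit { start: disable.0, end: disable.1 }];
    if let Some(en) = enable {
        // Fixing only the disable half would leave a dangling enable
        // comment behind, so both edits travel in a single fix.
        edits.push(Edit { start: en.0, end: en.1 });
    }
    Diagnostic { primary: disable, secondary: enable, edits }
}

fn main() {
    let d = report((10, 30), Some((50, 68)));
    assert_eq!(d.primary, (10, 30));
    assert!(d.secondary.is_some());
    assert_eq!(d.edits.len(), 2);
}
```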
fn delete_code_or_comment(
locator: &Locator<'_>,
suppression: &Suppression,
@@ -424,7 +470,10 @@ impl<'a> SuppressionsBuilder<'a> {
self.valid.push(Suppression {
code: code.into(),
range: combined_range,
-                    comments: smallvec![comment.comment.clone(), other.comment.clone()],
+                    comments: DisableEnableComments::DisableEnable(
+                        comment.comment.clone(),
+                        other.comment.clone(),
+                    ),
used: false.into(),
});
}
@@ -441,7 +490,7 @@ impl<'a> SuppressionsBuilder<'a> {
self.valid.push(Suppression {
code: code.into(),
range: implicit_range,
-                comments: smallvec![comment.comment.clone()],
+                comments: DisableEnableComments::Disable(comment.comment.clone()),
used: false.into(),
});
}
@@ -643,7 +692,7 @@ mod tests {
use insta::assert_debug_snapshot;
use itertools::Itertools;
use ruff_python_parser::{Mode, ParseOptions, parse};
-    use ruff_text_size::{TextRange, TextSize};
+    use ruff_text_size::{TextLen, TextRange, TextSize};
use similar::DiffableStr;
use crate::{
@@ -705,24 +754,22 @@ print('hello')
Suppression {
covered_source: "# ruff: disable[foo]\nprint('hello')\n# ruff: enable[foo]",
code: "foo",
comments: [
SuppressionComment {
text: "# ruff: disable[foo]",
action: Disable,
codes: [
"foo",
],
reason: "",
},
SuppressionComment {
text: "# ruff: enable[foo]",
action: Enable,
codes: [
"foo",
],
reason: "",
},
],
disable_comment: SuppressionComment {
text: "# ruff: disable[foo]",
action: Disable,
codes: [
"foo",
],
reason: "",
},
enable_comment: SuppressionComment {
text: "# ruff: enable[foo]",
action: Enable,
codes: [
"foo",
],
reason: "",
},
},
],
invalid: [],
@@ -751,30 +798,28 @@ def foo():
Suppression {
covered_source: "# ruff: disable[bar]\n print('hello')\n\n",
code: "bar",
comments: [
SuppressionComment {
text: "# ruff: disable[bar]",
action: Disable,
codes: [
"bar",
],
reason: "",
},
],
disable_comment: SuppressionComment {
text: "# ruff: disable[bar]",
action: Disable,
codes: [
"bar",
],
reason: "",
},
enable_comment: None,
},
Suppression {
covered_source: "# ruff: disable[foo]\nprint('hello')\n\ndef foo():\n # ruff: disable[bar]\n print('hello')\n\n",
code: "foo",
comments: [
SuppressionComment {
text: "# ruff: disable[foo]",
action: Disable,
codes: [
"foo",
],
reason: "",
},
],
disable_comment: SuppressionComment {
text: "# ruff: disable[foo]",
action: Disable,
codes: [
"foo",
],
reason: "",
},
enable_comment: None,
},
],
invalid: [],
@@ -803,46 +848,42 @@ class Foo:
Suppression {
covered_source: "# ruff: disable[bar]\n print('hello')\n # ruff: enable[bar]",
code: "bar",
comments: [
SuppressionComment {
text: "# ruff: disable[bar]",
action: Disable,
codes: [
"bar",
],
reason: "",
},
SuppressionComment {
text: "# ruff: enable[bar]",
action: Enable,
codes: [
"bar",
],
reason: "",
},
],
disable_comment: SuppressionComment {
text: "# ruff: disable[bar]",
action: Disable,
codes: [
"bar",
],
reason: "",
},
enable_comment: SuppressionComment {
text: "# ruff: enable[bar]",
action: Enable,
codes: [
"bar",
],
reason: "",
},
},
Suppression {
covered_source: "# ruff: disable[foo]\n def bar(self):\n # ruff: disable[bar]\n print('hello')\n # ruff: enable[bar]\n # ruff: enable[foo]",
code: "foo",
comments: [
SuppressionComment {
text: "# ruff: disable[foo]",
action: Disable,
codes: [
"foo",
],
reason: "",
},
SuppressionComment {
text: "# ruff: enable[foo]",
action: Enable,
codes: [
"foo",
],
reason: "",
},
],
disable_comment: SuppressionComment {
text: "# ruff: disable[foo]",
action: Disable,
codes: [
"foo",
],
reason: "",
},
enable_comment: SuppressionComment {
text: "# ruff: enable[foo]",
action: Enable,
codes: [
"foo",
],
reason: "",
},
},
],
invalid: [],
@@ -872,46 +913,42 @@ def foo():
Suppression {
covered_source: "# ruff: disable[foo]\n print('hello')\n # ruff: disable[bar]\n print('hello')\n # ruff: enable[foo]",
code: "foo",
comments: [
SuppressionComment {
text: "# ruff: disable[foo]",
action: Disable,
codes: [
"foo",
],
reason: "",
},
SuppressionComment {
text: "# ruff: enable[foo]",
action: Enable,
codes: [
"foo",
],
reason: "",
},
],
disable_comment: SuppressionComment {
text: "# ruff: disable[foo]",
action: Disable,
codes: [
"foo",
],
reason: "",
},
enable_comment: SuppressionComment {
text: "# ruff: enable[foo]",
action: Enable,
codes: [
"foo",
],
reason: "",
},
},
Suppression {
covered_source: "# ruff: disable[bar]\n print('hello')\n # ruff: enable[foo]\n print('hello')\n # ruff: enable[bar]",
code: "bar",
comments: [
SuppressionComment {
text: "# ruff: disable[bar]",
action: Disable,
codes: [
"bar",
],
reason: "",
},
SuppressionComment {
text: "# ruff: enable[bar]",
action: Enable,
codes: [
"bar",
],
reason: "",
},
],
disable_comment: SuppressionComment {
text: "# ruff: disable[bar]",
action: Disable,
codes: [
"bar",
],
reason: "",
},
enable_comment: SuppressionComment {
text: "# ruff: enable[bar]",
action: Enable,
codes: [
"bar",
],
reason: "",
},
},
],
invalid: [],
@@ -936,50 +973,46 @@ print('hello')
Suppression {
covered_source: "# ruff: disable[foo, bar]\nprint('hello')\n# ruff: enable[foo, bar]",
code: "foo",
comments: [
SuppressionComment {
text: "# ruff: disable[foo, bar]",
action: Disable,
codes: [
"foo",
"bar",
],
reason: "",
},
SuppressionComment {
text: "# ruff: enable[foo, bar]",
action: Enable,
codes: [
"foo",
"bar",
],
reason: "",
},
],
disable_comment: SuppressionComment {
text: "# ruff: disable[foo, bar]",
action: Disable,
codes: [
"foo",
"bar",
],
reason: "",
},
enable_comment: SuppressionComment {
text: "# ruff: enable[foo, bar]",
action: Enable,
codes: [
"foo",
"bar",
],
reason: "",
},
},
Suppression {
covered_source: "# ruff: disable[foo, bar]\nprint('hello')\n# ruff: enable[foo, bar]",
code: "bar",
comments: [
SuppressionComment {
text: "# ruff: disable[foo, bar]",
action: Disable,
codes: [
"foo",
"bar",
],
reason: "",
},
SuppressionComment {
text: "# ruff: enable[foo, bar]",
action: Enable,
codes: [
"foo",
"bar",
],
reason: "",
},
],
disable_comment: SuppressionComment {
text: "# ruff: disable[foo, bar]",
action: Disable,
codes: [
"foo",
"bar",
],
reason: "",
},
enable_comment: SuppressionComment {
text: "# ruff: enable[foo, bar]",
action: Enable,
codes: [
"foo",
"bar",
],
reason: "",
},
},
],
invalid: [],
@@ -1005,16 +1038,15 @@ print('world')
Suppression {
covered_source: "# ruff: disable[foo]\nprint('hello')\n# ruff: enable[bar]\nprint('world')\n",
code: "foo",
comments: [
SuppressionComment {
text: "# ruff: disable[foo]",
action: Disable,
codes: [
"foo",
],
reason: "",
},
],
disable_comment: SuppressionComment {
text: "# ruff: disable[foo]",
action: Disable,
codes: [
"foo",
],
reason: "",
},
enable_comment: None,
},
],
invalid: [
@@ -1051,32 +1083,30 @@ print('hello')
Suppression {
covered_source: "# ruff: disable[foo, bar]\nprint('hello')\n# ruff: enable[bar, foo]\n",
code: "foo",
comments: [
SuppressionComment {
text: "# ruff: disable[foo, bar]",
action: Disable,
codes: [
"foo",
"bar",
],
reason: "",
},
],
disable_comment: SuppressionComment {
text: "# ruff: disable[foo, bar]",
action: Disable,
codes: [
"foo",
"bar",
],
reason: "",
},
enable_comment: None,
},
Suppression {
covered_source: "# ruff: disable[foo, bar]\nprint('hello')\n# ruff: enable[bar, foo]\n",
code: "bar",
comments: [
SuppressionComment {
text: "# ruff: disable[foo, bar]",
action: Disable,
codes: [
"foo",
"bar",
],
reason: "",
},
],
disable_comment: SuppressionComment {
text: "# ruff: disable[foo, bar]",
action: Disable,
codes: [
"foo",
"bar",
],
reason: "",
},
enable_comment: None,
},
],
invalid: [
@@ -1116,38 +1146,35 @@ print('hello')
Suppression {
covered_source: "# ruff: disable[foo] first\nprint('hello')\n# ruff: disable[foo] second\nprint('hello')\n# ruff: enable[foo]",
code: "foo",
comments: [
SuppressionComment {
text: "# ruff: disable[foo] first",
action: Disable,
codes: [
"foo",
],
reason: "first",
},
SuppressionComment {
text: "# ruff: enable[foo]",
action: Enable,
codes: [
"foo",
],
reason: "",
},
],
disable_comment: SuppressionComment {
text: "# ruff: disable[foo] first",
action: Disable,
codes: [
"foo",
],
reason: "first",
},
enable_comment: SuppressionComment {
text: "# ruff: enable[foo]",
action: Enable,
codes: [
"foo",
],
reason: "",
},
},
Suppression {
covered_source: "# ruff: disable[foo] second\nprint('hello')\n# ruff: enable[foo]\n",
code: "foo",
comments: [
SuppressionComment {
text: "# ruff: disable[foo] second",
action: Disable,
codes: [
"foo",
],
reason: "second",
},
],
disable_comment: SuppressionComment {
text: "# ruff: disable[foo] second",
action: Disable,
codes: [
"foo",
],
reason: "second",
},
enable_comment: None,
},
],
invalid: [],
@@ -1189,100 +1216,92 @@ def bar():
Suppression {
covered_source: "# ruff: disable[delta] unmatched\n pass\n # ruff: enable[beta,gamma]\n# ruff: enable[alpha]\n\n# ruff: disable # parse error!\n",
code: "delta",
comments: [
SuppressionComment {
text: "# ruff: disable[delta] unmatched",
action: Disable,
codes: [
"delta",
],
reason: "unmatched",
},
],
disable_comment: SuppressionComment {
text: "# ruff: disable[delta] unmatched",
action: Disable,
codes: [
"delta",
],
reason: "unmatched",
},
enable_comment: None,
},
Suppression {
covered_source: "# ruff: disable[beta,gamma]\n if True:\n # ruff: disable[delta] unmatched\n pass\n # ruff: enable[beta,gamma]",
code: "beta",
comments: [
SuppressionComment {
text: "# ruff: disable[beta,gamma]",
action: Disable,
codes: [
"beta",
"gamma",
],
reason: "",
},
SuppressionComment {
text: "# ruff: enable[beta,gamma]",
action: Enable,
codes: [
"beta",
"gamma",
],
reason: "",
},
],
disable_comment: SuppressionComment {
text: "# ruff: disable[beta,gamma]",
action: Disable,
codes: [
"beta",
"gamma",
],
reason: "",
},
enable_comment: SuppressionComment {
text: "# ruff: enable[beta,gamma]",
action: Enable,
codes: [
"beta",
"gamma",
],
reason: "",
},
},
Suppression {
covered_source: "# ruff: disable[beta,gamma]\n if True:\n # ruff: disable[delta] unmatched\n pass\n # ruff: enable[beta,gamma]",
code: "gamma",
comments: [
SuppressionComment {
text: "# ruff: disable[beta,gamma]",
action: Disable,
codes: [
"beta",
"gamma",
],
reason: "",
},
SuppressionComment {
text: "# ruff: enable[beta,gamma]",
action: Enable,
codes: [
"beta",
"gamma",
],
reason: "",
},
],
disable_comment: SuppressionComment {
text: "# ruff: disable[beta,gamma]",
action: Disable,
codes: [
"beta",
"gamma",
],
reason: "",
},
enable_comment: SuppressionComment {
text: "# ruff: enable[beta,gamma]",
action: Enable,
codes: [
"beta",
"gamma",
],
reason: "",
},
},
Suppression {
covered_source: "# ruff: disable[zeta] unmatched\n pass\n# ruff: enable[zeta] underindented\n pass\n",
code: "zeta",
comments: [
SuppressionComment {
text: "# ruff: disable[zeta] unmatched",
action: Disable,
codes: [
"zeta",
],
reason: "unmatched",
},
],
disable_comment: SuppressionComment {
text: "# ruff: disable[zeta] unmatched",
action: Disable,
codes: [
"zeta",
],
reason: "unmatched",
},
enable_comment: None,
},
Suppression {
covered_source: "# ruff: disable[alpha]\ndef foo():\n # ruff: disable[beta,gamma]\n if True:\n # ruff: disable[delta] unmatched\n pass\n # ruff: enable[beta,gamma]\n# ruff: enable[alpha]",
code: "alpha",
comments: [
SuppressionComment {
text: "# ruff: disable[alpha]",
action: Disable,
codes: [
"alpha",
],
reason: "",
},
SuppressionComment {
text: "# ruff: enable[alpha]",
action: Enable,
codes: [
"alpha",
],
reason: "",
},
],
disable_comment: SuppressionComment {
text: "# ruff: disable[alpha]",
action: Disable,
codes: [
"alpha",
],
reason: "",
},
enable_comment: SuppressionComment {
text: "# ruff: enable[alpha]",
action: Enable,
codes: [
"alpha",
],
reason: "",
},
},
],
invalid: [
@@ -1532,10 +1551,8 @@ def bar():
#[test]
fn comment_attributes() {
let source = "# ruff: disable[foo, bar] hello world";
-        let mut parser = SuppressionParser::new(
-            source,
-            TextRange::new(0.into(), TextSize::try_from(source.len()).unwrap()),
-        );
+        let mut parser =
+            SuppressionParser::new(source, TextRange::new(0.into(), source.text_len()));
let comment = parser.parse_comment().unwrap();
assert_eq!(comment.action, SuppressionAction::Disable);
assert_eq!(
@@ -1554,12 +1571,12 @@ def bar():
source: &'_ str,
) -> Result<DebugSuppressionComment<'_>, ParseError> {
let offset = TextSize::new(source.find('#').unwrap_or(0).try_into().unwrap());
-        let mut parser = SuppressionParser::new(
-            source,
-            TextRange::new(offset, TextSize::try_from(source.len()).unwrap()),
-        );
+        let mut parser = SuppressionParser::new(source, TextRange::new(offset, source.text_len()));
         match parser.parse_comment() {
-            Ok(comment) => Ok(DebugSuppressionComment { source, comment }),
+            Ok(comment) => Ok(DebugSuppressionComment {
+                source,
+                comment: Some(comment),
+            }),
Err(error) => Err(error),
}
}
@@ -1639,16 +1656,18 @@ def bar():
.field("covered_source", &&self.source[self.suppression.range])
.field("code", &self.suppression.code)
.field(
"comments",
&self
.suppression
.comments
.iter()
.map(|comment| DebugSuppressionComment {
source: self.source,
comment: comment.clone(),
})
.collect_vec(),
"disable_comment",
&DebugSuppressionComment {
source: self.source,
comment: Some(self.suppression.comments.disable_comment().clone()),
},
)
.field(
"enable_comment",
&DebugSuppressionComment {
source: self.source,
comment: self.suppression.comments.enable_comment().cloned(),
},
)
.finish()
}
@@ -1667,7 +1686,7 @@ def bar():
"comment",
&DebugSuppressionComment {
source: self.source,
-                comment: self.invalid.comment.clone(),
+                comment: Some(self.invalid.comment.clone()),
},
)
.finish()
@@ -1690,23 +1709,27 @@ def bar():
struct DebugSuppressionComment<'a> {
source: &'a str,
-    comment: SuppressionComment,
+    comment: Option<SuppressionComment>,
}
impl fmt::Debug for DebugSuppressionComment<'_> {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
f.debug_struct("SuppressionComment")
.field("text", &&self.source[self.comment.range])
.field("action", &self.comment.action)
.field(
"codes",
&DebugCodes {
source: self.source,
codes: &self.comment.codes,
},
)
.field("reason", &&self.source[self.comment.reason])
.finish()
match &self.comment {
Some(comment) => f
.debug_struct("SuppressionComment")
.field("text", &&self.source[comment.range])
.field("action", &comment.action)
.field(
"codes",
&DebugCodes {
source: self.source,
codes: &comment.codes,
},
)
.field("reason", &&self.source[comment.reason])
.finish(),
None => f.debug_tuple("None").finish(),
}
}
}

View File

@@ -35,6 +35,10 @@ impl PythonVersion {
major: 3,
minor: 14,
};
pub const PY315: PythonVersion = PythonVersion {
major: 3,
minor: 15,
};
pub fn iter() -> impl Iterator<Item = PythonVersion> {
[
@@ -46,6 +50,7 @@ impl PythonVersion {
PythonVersion::PY312,
PythonVersion::PY313,
PythonVersion::PY314,
PythonVersion::PY315,
]
.into_iter()
}
@@ -61,7 +66,7 @@ impl PythonVersion {
/// The latest Python version supported in preview
pub fn latest_preview() -> Self {
-        let latest_preview = Self::PY314;
+        let latest_preview = Self::PY315;
debug_assert!(latest_preview >= Self::latest());
latest_preview
}

View File

@@ -0,0 +1,133 @@
class Simple:
x=1
x=2 # fmt: skip
x=3
class Semicolon:
x=1
x=2;x=3 # fmt: skip
x=4
class TrailingSemicolon:
x=1
x=2;x=3 ; # fmt: skip
x=4
class SemicolonNewLogicalLine:
x=1;
x=2;x=3 # fmt: skip
x=4
class ManySemicolonOneLine:
x=1
x=2;x=3;x=4 # fmt: skip
x=5
class CompoundInSuite:
x=1
def foo(): y=1 # fmt: skip
x=2
class CompoundInSuiteNewline:
x=1
def foo():
y=1 # fmt: skip
x=2
class MultiLineSkip:
x=1
x = [
'1',
'2',
] # fmt: skip
class MultiLineSemicolon:
x=1
x = [
'1',
'2',
]; x=2 # fmt: skip
class LineContinuationSemicolonAfter:
x=1
x = ['a']\
; y=1 # fmt: skip
class LineContinuationSemicolonBefore:
x=1
x = ['a']; \
y=1 # fmt: skip
class LineContinuationSemicolonAndNewline:
x=1
x = ['a']; \
y=1 # fmt: skip
class LineContinuationSemicolonAndNewlineAndComment:
x=1
x = ['a']; \
# 1
y=1 # fmt: skip
class RepeatedLineContinuation:
x=1
x = ['a']; \
\
\
y=1 # fmt: skip
class MultiLineSemicolonComments:
x=1
# 1
x = [ # 2
'1', # 3
'2',
# 4
]; x=2 # 5 # fmt: skip # 6
class DocstringSkipped:
'''This is a docstring''' # fmt: skip
x=1
class MultilineDocstringSkipped:
'''This is a docstring
''' # fmt: skip
x=1
class FirstStatementNewlines:
x=1 # fmt: skip
class ChainingSemicolons:
x=[
'1',
'2',
'3',
];x=1;x=[
'1',
'2',
'3'
];x=1;x=1 # fmt: skip
class LotsOfComments:
# 1
x=[ # 2
'1', # 3
'2',
'3'
] ;x=2;x=3 # 4 # fmt: skip # 5
# 6
class MixingCompound:
def foo(): bar(); import zoo # fmt: skip
# https://github.com/astral-sh/ruff/issues/17331
def main() -> None:
import ipdb; ipdb.set_trace() # noqa: E402,E702,I001 # fmt: skip
# https://github.com/astral-sh/ruff/issues/11430
print(); print() # noqa # fmt: skip

View File

@@ -0,0 +1,22 @@
# 0
'''Module docstring
multiple lines''' # 1 # fmt: skip # 2
import a;import b; from c import (
x,y,z
); import f # fmt: skip
x=1;x=2;x=3;x=4 # fmt: skip
# 1
x=[ # 2
'1', # 3
'2',
'3'
];x=1;x=1 # 4 # fmt: skip # 5
# 6
def foo(): x=[
'1',
'2',
];x=1 # fmt: skip

View File

@@ -0,0 +1,68 @@
class Simple:
# Range comprises skip range
x=1
<RANGE_START>x=2 <RANGE_END># fmt: skip
x=3
class Semicolon:
# Range is part of skip range
x=1
x=2;<RANGE_START>x=3<RANGE_END> # fmt: skip
x=4
class FormatFirst:
x=1
<RANGE_START>x=2<RANGE_END>;x=3 # fmt: skip
x=4
class FormatMiddle:
x=1
x=2;<RANGE_START>x=3<RANGE_END>;x=4 # fmt: skip
x=5
class SemicolonNewLogicalLine:
# Range overlaps on right side
<RANGE_START>x=1;
x=2<RANGE_END>;x=3 # fmt: skip
x=4
class SemicolonNewLogicalLine:
# Range overlaps on left side
x=1;
x=2;<RANGE_START>x=3 # fmt: skip
x=4<RANGE_END>
class ManySemicolonOneLine:
x=1
x=2;x=3;x=4 # fmt: skip
x=5
class CompoundInSuite:
x=1
<RANGE_START>def foo(): y=1 <RANGE_END># fmt: skip
x=2
class CompoundInSuiteNewline:
x=1
def foo():
y=1 # fmt: skip
x=2
class MultiLineSkip:
# Range inside statement
x=1
x = <RANGE_START>[
'1',
'2',<RANGE_END>
] # fmt: skip
class LotsOfComments:
# 1
x=[ # 2
'1', # 3<RANGE_START>
'2',
'3'
] ;x=2;x=3 # 4<RANGE_END> # fmt: skip # 5
# 6

View File

@@ -24,7 +24,6 @@ pub use crate::options::{
};
use crate::range::is_logical_line;
pub use crate::shared_traits::{AsFormat, FormattedIter, FormattedIterExt, IntoFormat};
-use crate::verbatim::suppressed_node;
pub(crate) mod builders;
pub mod cli;
@@ -61,51 +60,39 @@ where
let node_ref = AnyNodeRef::from(node);
let node_comments = comments.leading_dangling_trailing(node_ref);
-        if self.is_suppressed(node_comments.trailing, f.context()) {
-            suppressed_node(node_ref).fmt(f)
-        } else {
-            leading_comments(node_comments.leading).fmt(f)?;
+        leading_comments(node_comments.leading).fmt(f)?;

-            // Emit source map information for nodes that are valid "narrowing" targets
-            // in range formatting. Never emit source map information if they're disabled
-            // for performance reasons.
-            let emit_source_position = (is_logical_line(node_ref) || node_ref.is_mod_module())
-                && f.options().source_map_generation().is_enabled();
+        // Emit source map information for nodes that are valid "narrowing" targets
+        // in range formatting. Never emit source map information if they're disabled
+        // for performance reasons.
+        let emit_source_position = (is_logical_line(node_ref) || node_ref.is_mod_module())
+            && f.options().source_map_generation().is_enabled();

-            emit_source_position
-                .then_some(source_position(node.start()))
-                .fmt(f)?;
+        emit_source_position
+            .then_some(source_position(node.start()))
+            .fmt(f)?;

-            self.fmt_fields(node, f)?;
+        self.fmt_fields(node, f)?;

-            debug_assert!(
-                node_comments
-                    .dangling
-                    .iter()
-                    .all(SourceComment::is_formatted),
-                "The node has dangling comments that need to be formatted manually. Add the special dangling comments handling to `fmt_fields`."
-            );
+        debug_assert!(
+            node_comments
+                .dangling
+                .iter()
+                .all(SourceComment::is_formatted),
+            "The node has dangling comments that need to be formatted manually. Add the special dangling comments handling to `fmt_fields`."
+        );

-            write!(
-                f,
-                [
-                    emit_source_position.then_some(source_position(node.end())),
-                    trailing_comments(node_comments.trailing)
-                ]
-            )
-        }
+        write!(
+            f,
+            [
+                emit_source_position.then_some(source_position(node.end())),
+                trailing_comments(node_comments.trailing)
+            ]
+        )
     }
/// Formats the node's fields.
fn fmt_fields(&self, item: &N, f: &mut PyFormatter) -> FormatResult<()>;
-    fn is_suppressed(
-        &self,
-        _trailing_comments: &[SourceComment],
-        _context: &PyFormatContext,
-    ) -> bool {
-        false
-    }
}
#[derive(Error, Debug, salsa::Update, PartialEq, Eq)]
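
With `is_suppressed` gone from `FormatNodeRule`, skip detection no longer happens per node: the suite decides which contiguous statements share a skipped logical line and formats them verbatim as a group (see `skip_range` and `write_skipped_statements` further down). A schematic sketch of that control-flow change, with plain stand-in types rather than ruff's AST:

```rust
struct Stmt {
    start: usize,
    end: usize,
    has_skip: bool,
}

// Returns the verbatim range if the logical line started by `first` ends in
// a statement carrying a trailing `# fmt: skip` comment.
fn skip_range(
    first: &Stmt,
    rest: &[Stmt],
    same_logical_line: impl Fn(&Stmt, &Stmt) -> bool,
) -> Option<(usize, usize)> {
    let mut last = first;
    for stmt in rest {
        if !same_logical_line(last, stmt) {
            break;
        }
        last = stmt;
    }
    last.has_skip.then(|| (first.start, last.end))
}

fn main() {
    // `x=1;x=2  # fmt: skip` — two statements on one logical line.
    let a = Stmt { start: 0, end: 3, has_skip: false };
    let b = Stmt { start: 4, end: 7, has_skip: true };
    assert_eq!(skip_range(&a, &[b], |_, _| true), Some((0, 7)));
}
```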

View File

@@ -1,9 +1,10 @@
 use ruff_formatter::write;
 use ruff_python_ast::Decorator;
+use ruff_text_size::Ranged;
-use crate::comments::SourceComment;
 use crate::expression::maybe_parenthesize_expression;
 use crate::expression::parentheses::Parenthesize;
+use crate::verbatim::verbatim_text;
 use crate::{has_skip_comment, prelude::*};
#[derive(Default)]
@@ -11,26 +12,27 @@ pub struct FormatDecorator;
impl FormatNodeRule<Decorator> for FormatDecorator {
fn fmt_fields(&self, item: &Decorator, f: &mut PyFormatter) -> FormatResult<()> {
-        let Decorator {
-            expression,
-            range: _,
-            node_index: _,
-        } = item;
-
-        write!(
-            f,
-            [
-                token("@"),
-                maybe_parenthesize_expression(expression, item, Parenthesize::Optional)
-            ]
-        )
-    }
-
-    fn is_suppressed(
-        &self,
-        trailing_comments: &[SourceComment],
-        context: &PyFormatContext,
-    ) -> bool {
-        has_skip_comment(trailing_comments, context.source())
+        let comments = f.context().comments();
+        let trailing = comments.trailing(item);
+
+        if has_skip_comment(trailing, f.context().source()) {
+            comments.mark_verbatim_node_comments_formatted(item.into());
+            verbatim_text(item.range()).fmt(f)
+        } else {
+            let Decorator {
+                expression,
+                range: _,
+                node_index: _,
+            } = item;
+            write!(
+                f,
+                [
+                    token("@"),
+                    maybe_parenthesize_expression(expression, item, Parenthesize::Optional)
+                ]
+            )
+        }
     }
}

View File

@@ -15,7 +15,7 @@ use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
use crate::comments::Comments;
use crate::context::{IndentLevel, NodeLevel};
use crate::prelude::*;
-use crate::statement::suite::DocstringStmt;
+use crate::statement::suite::{DocstringStmt, skip_range};
use crate::verbatim::{ends_suppression, starts_suppression};
use crate::{FormatModuleError, PyFormatOptions, format_module_source};
@@ -251,7 +251,29 @@ impl<'ast> SourceOrderVisitor<'ast> for FindEnclosingNode<'_, 'ast> {
// We only visit statements that aren't suppressed, which is why we don't need to track the suppression
// state in a stack. Assert that this assumption is safe.
debug_assert!(self.suppressed.is_no());
-        walk_body(self, body);
+        let mut iter = body.iter();
+        while let Some(stmt) = iter.next() {
+            // If the range intersects a skip range then we need to
+            // format the entire suite to properly handle the case
+            // where a `fmt: skip` affects multiple statements.
+            //
+            // For example, in the case
+            //
+            // ```
+            // <RANGE_START>x=1<RANGE_END>;x=2 # fmt: skip
+            // ```
+            //
+            // the statement `x=1` does not "know" that it is
+            // suppressed, but the suite does.
+            if let Some(verbatim_range) = skip_range(stmt, iter.as_slice(), self.context)
+                && verbatim_range.intersect(self.range).is_some()
+            {
+                break;
+            }
+            self.visit_stmt(stmt);
+        }
self.suppressed = Suppressed::No;
}
}
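
The loop above narrows range formatting only down to statements that do not touch a skipped logical line; `TextRange::intersect` returning `Some` is the signal to stop and keep the whole suite. A self-contained sketch of that intersection test (reimplementing the `ruff_text_size` semantics for illustration):

```rust
#[derive(Clone, Copy, Debug)]
struct TextRange {
    start: u32,
    end: u32,
}

impl TextRange {
    // `Some` if the ranges overlap or touch, mirroring `ruff_text_size`.
    fn intersect(self, other: TextRange) -> Option<TextRange> {
        let start = self.start.max(other.start);
        let end = self.end.min(other.end);
        (start <= end).then_some(TextRange { start, end })
    }
}

fn main() {
    let requested = TextRange { start: 0, end: 3 }; // <RANGE_START>x=1<RANGE_END>
    let skipped = TextRange { start: 0, end: 11 }; // x=1;x=2  # fmt: skip
    // Overlap means narrowing stops at the enclosing suite.
    assert!(requested.intersect(skipped).is_some());
}
```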
@@ -561,7 +583,7 @@ impl NarrowRange<'_> {
}
pub(crate) const fn is_logical_line(node: AnyNodeRef) -> bool {
-    // Make sure to update [`FormatEnclosingLine`] when changing this.
+    // Make sure to update [`FormatEnclosingNode`] when changing this.
node.is_statement()
|| node.is_decorator()
|| node.is_except_handler()

View File

@@ -1,14 +1,13 @@
use ruff_formatter::write;
use ruff_python_ast::StmtAnnAssign;
-use crate::comments::SourceComment;
 use crate::expression::is_splittable_expression;
 use crate::expression::parentheses::Parentheses;
+use crate::prelude::*;
 use crate::statement::stmt_assign::{
     AnyAssignmentOperator, AnyBeforeOperator, FormatStatementsLastExpression,
 };
 use crate::statement::trailing_semicolon;
-use crate::{has_skip_comment, prelude::*};
#[derive(Default)]
pub struct FormatStmtAnnAssign;
@@ -84,12 +83,4 @@ impl FormatNodeRule<StmtAnnAssign> for FormatStmtAnnAssign {
Ok(())
}
-    fn is_suppressed(
-        &self,
-        trailing_comments: &[SourceComment],
-        context: &PyFormatContext,
-    ) -> bool {
-        has_skip_comment(trailing_comments, context.source())
-    }
}

View File

@@ -2,10 +2,9 @@ use ruff_formatter::prelude::{space, token};
use ruff_formatter::write;
use ruff_python_ast::StmtAssert;
-use crate::comments::SourceComment;
 use crate::expression::maybe_parenthesize_expression;
 use crate::expression::parentheses::Parenthesize;
-use crate::{has_skip_comment, prelude::*};
+use crate::prelude::*;
#[derive(Default)]
pub struct FormatStmtAssert;
@@ -41,12 +40,4 @@ impl FormatNodeRule<StmtAssert> for FormatStmtAssert {
Ok(())
}
-    fn is_suppressed(
-        &self,
-        trailing_comments: &[SourceComment],
-        context: &PyFormatContext,
-    ) -> bool {
-        has_skip_comment(trailing_comments, context.source())
-    }
}

View File

@@ -19,6 +19,7 @@ use crate::expression::{
maybe_parenthesize_expression,
};
use crate::other::interpolated_string::InterpolatedStringLayout;
use crate::prelude::*;
use crate::preview::is_parenthesize_lambda_bodies_enabled;
use crate::statement::trailing_semicolon;
use crate::string::StringLikeExtensions;
@@ -26,7 +27,6 @@ use crate::string::implicit::{
FormatImplicitConcatenatedStringExpanded, FormatImplicitConcatenatedStringFlat,
ImplicitConcatenatedLayout,
};
-use crate::{has_skip_comment, prelude::*};
#[derive(Default)]
pub struct FormatStmtAssign;
@@ -104,14 +104,6 @@ impl FormatNodeRule<StmtAssign> for FormatStmtAssign {
Ok(())
}
-    fn is_suppressed(
-        &self,
-        trailing_comments: &[SourceComment],
-        context: &PyFormatContext,
-    ) -> bool {
-        has_skip_comment(trailing_comments, context.source())
-    }
}
/// Formats a single target with the equal operator.

View File

@@ -1,15 +1,14 @@
use ruff_formatter::write;
use ruff_python_ast::StmtAugAssign;
-use crate::comments::SourceComment;
 use crate::expression::parentheses::is_expression_parenthesized;
+use crate::prelude::*;
 use crate::statement::stmt_assign::{
     AnyAssignmentOperator, AnyBeforeOperator, FormatStatementsLastExpression,
     has_target_own_parentheses,
 };
 use crate::statement::trailing_semicolon;
 use crate::{AsFormat, FormatNodeRule};
-use crate::{has_skip_comment, prelude::*};
#[derive(Default)]
pub struct FormatStmtAugAssign;
@@ -62,12 +61,4 @@ impl FormatNodeRule<StmtAugAssign> for FormatStmtAugAssign {
Ok(())
}
-    fn is_suppressed(
-        &self,
-        trailing_comments: &[SourceComment],
-        context: &PyFormatContext,
-    ) -> bool {
-        has_skip_comment(trailing_comments, context.source())
-    }
}

View File

@@ -1,7 +1,6 @@
use ruff_python_ast::StmtBreak;
-use crate::comments::SourceComment;
-use crate::{has_skip_comment, prelude::*};
+use crate::prelude::*;
#[derive(Default)]
pub struct FormatStmtBreak;
@@ -10,12 +9,4 @@ impl FormatNodeRule<StmtBreak> for FormatStmtBreak {
fn fmt_fields(&self, _item: &StmtBreak, f: &mut PyFormatter) -> FormatResult<()> {
token("break").fmt(f)
}
-    fn is_suppressed(
-        &self,
-        trailing_comments: &[SourceComment],
-        context: &PyFormatContext,
-    ) -> bool {
-        has_skip_comment(trailing_comments, context.source())
-    }
}

View File

@@ -1,7 +1,6 @@
use ruff_python_ast::StmtContinue;
-use crate::comments::SourceComment;
-use crate::{has_skip_comment, prelude::*};
+use crate::prelude::*;
#[derive(Default)]
pub struct FormatStmtContinue;
@@ -10,12 +9,4 @@ impl FormatNodeRule<StmtContinue> for FormatStmtContinue {
fn fmt_fields(&self, _item: &StmtContinue, f: &mut PyFormatter) -> FormatResult<()> {
token("continue").fmt(f)
}
-    fn is_suppressed(
-        &self,
-        trailing_comments: &[SourceComment],
-        context: &PyFormatContext,
-    ) -> bool {
-        has_skip_comment(trailing_comments, context.source())
-    }
}

View File

@@ -3,10 +3,10 @@ use ruff_python_ast::StmtDelete;
use ruff_text_size::Ranged;
use crate::builders::{PyFormatterExtensions, parenthesize_if_expands};
-use crate::comments::{SourceComment, dangling_node_comments};
+use crate::comments::dangling_node_comments;
 use crate::expression::maybe_parenthesize_expression;
 use crate::expression::parentheses::Parenthesize;
-use crate::{has_skip_comment, prelude::*};
+use crate::prelude::*;
#[derive(Default)]
pub struct FormatStmtDelete;
@@ -57,12 +57,4 @@ impl FormatNodeRule<StmtDelete> for FormatStmtDelete {
}
}
}
-    fn is_suppressed(
-        &self,
-        trailing_comments: &[SourceComment],
-        context: &PyFormatContext,
-    ) -> bool {
-        has_skip_comment(trailing_comments, context.source())
-    }
}

View File

@@ -1,12 +1,10 @@
use ruff_python_ast as ast;
use ruff_python_ast::{Expr, Operator, StmtExpr};
-use crate::comments::SourceComment;
 use crate::expression::maybe_parenthesize_expression;
 use crate::expression::parentheses::Parenthesize;
+use crate::prelude::*;
 use crate::statement::trailing_semicolon;
-use crate::{has_skip_comment, prelude::*};
#[derive(Default)]
pub struct FormatStmtExpr;
@@ -30,14 +28,6 @@ impl FormatNodeRule<StmtExpr> for FormatStmtExpr {
Ok(())
}
-    fn is_suppressed(
-        &self,
-        trailing_comments: &[SourceComment],
-        context: &PyFormatContext,
-    ) -> bool {
-        has_skip_comment(trailing_comments, context.source())
-    }
}
const fn is_arithmetic_like(expression: &Expr) -> bool {

View File

@@ -1,8 +1,6 @@
use ruff_formatter::{format_args, write};
use ruff_python_ast::StmtGlobal;
-use crate::comments::SourceComment;
-use crate::has_skip_comment;
use crate::prelude::*;
#[derive(Default)]
@@ -47,12 +45,4 @@ impl FormatNodeRule<StmtGlobal> for FormatStmtGlobal {
)
}
}
-    fn is_suppressed(
-        &self,
-        trailing_comments: &[SourceComment],
-        context: &PyFormatContext,
-    ) -> bool {
-        has_skip_comment(trailing_comments, context.source())
-    }
}

View File

@@ -1,8 +1,7 @@
use ruff_formatter::{format_args, write};
use ruff_python_ast::StmtImport;
-use crate::comments::SourceComment;
-use crate::{has_skip_comment, prelude::*};
+use crate::prelude::*;
#[derive(Default)]
pub struct FormatStmtImport;
@@ -21,12 +20,4 @@ impl FormatNodeRule<StmtImport> for FormatStmtImport {
});
write!(f, [token("import"), space(), names])
}
-    fn is_suppressed(
-        &self,
-        trailing_comments: &[SourceComment],
-        context: &PyFormatContext,
-    ) -> bool {
-        has_skip_comment(trailing_comments, context.source())
-    }
}

View File

@@ -3,9 +3,7 @@ use ruff_python_ast::StmtImportFrom;
use ruff_text_size::Ranged;
use crate::builders::{PyFormatterExtensions, TrailingComma, parenthesize_if_expands};
-use crate::comments::SourceComment;
 use crate::expression::parentheses::parenthesized;
-use crate::has_skip_comment;
use crate::other::identifier::DotDelimitedIdentifier;
use crate::prelude::*;
@@ -72,12 +70,4 @@ impl FormatNodeRule<StmtImportFrom> for FormatStmtImportFrom {
.fmt(f)
}
}
-    fn is_suppressed(
-        &self,
-        trailing_comments: &[SourceComment],
-        context: &PyFormatContext,
-    ) -> bool {
-        has_skip_comment(trailing_comments, context.source())
-    }
}

View File

@@ -1,8 +1,7 @@
use ruff_python_ast::StmtIpyEscapeCommand;
use ruff_text_size::Ranged;
-use crate::comments::SourceComment;
-use crate::{has_skip_comment, prelude::*};
+use crate::prelude::*;
#[derive(Default)]
pub struct FormatStmtIpyEscapeCommand;
@@ -11,12 +10,4 @@ impl FormatNodeRule<StmtIpyEscapeCommand> for FormatStmtIpyEscapeCommand {
fn fmt_fields(&self, item: &StmtIpyEscapeCommand, f: &mut PyFormatter) -> FormatResult<()> {
source_text_slice(item.range()).fmt(f)
}
-    fn is_suppressed(
-        &self,
-        trailing_comments: &[SourceComment],
-        context: &PyFormatContext,
-    ) -> bool {
-        has_skip_comment(trailing_comments, context.source())
-    }
}

View File

@@ -1,8 +1,6 @@
use ruff_formatter::{format_args, write};
use ruff_python_ast::StmtNonlocal;
-use crate::comments::SourceComment;
-use crate::has_skip_comment;
use crate::prelude::*;
#[derive(Default)]
@@ -47,12 +45,4 @@ impl FormatNodeRule<StmtNonlocal> for FormatStmtNonlocal {
)
}
}
-    fn is_suppressed(
-        &self,
-        trailing_comments: &[SourceComment],
-        context: &PyFormatContext,
-    ) -> bool {
-        has_skip_comment(trailing_comments, context.source())
-    }
}

View File

@@ -1,7 +1,6 @@
use ruff_python_ast::StmtPass;
-use crate::comments::SourceComment;
-use crate::{has_skip_comment, prelude::*};
+use crate::prelude::*;
#[derive(Default)]
pub struct FormatStmtPass;
@@ -10,12 +9,4 @@ impl FormatNodeRule<StmtPass> for FormatStmtPass {
fn fmt_fields(&self, _item: &StmtPass, f: &mut PyFormatter) -> FormatResult<()> {
token("pass").fmt(f)
}
-    fn is_suppressed(
-        &self,
-        trailing_comments: &[SourceComment],
-        context: &PyFormatContext,
-    ) -> bool {
-        has_skip_comment(trailing_comments, context.source())
-    }
}

View File

@@ -1,10 +1,9 @@
use ruff_formatter::write;
use ruff_python_ast::StmtRaise;
-use crate::comments::SourceComment;
 use crate::expression::maybe_parenthesize_expression;
 use crate::expression::parentheses::Parenthesize;
-use crate::{has_skip_comment, prelude::*};
+use crate::prelude::*;
#[derive(Default)]
pub struct FormatStmtRaise;
@@ -43,12 +42,4 @@ impl FormatNodeRule<StmtRaise> for FormatStmtRaise {
}
Ok(())
}
-    fn is_suppressed(
-        &self,
-        trailing_comments: &[SourceComment],
-        context: &PyFormatContext,
-    ) -> bool {
-        has_skip_comment(trailing_comments, context.source())
-    }
}

View File

@@ -1,10 +1,9 @@
use ruff_formatter::write;
use ruff_python_ast::{Expr, StmtReturn};
-use crate::comments::SourceComment;
 use crate::expression::expr_tuple::TupleParentheses;
+use crate::prelude::*;
 use crate::statement::stmt_assign::FormatStatementsLastExpression;
-use crate::{has_skip_comment, prelude::*};
#[derive(Default)]
pub struct FormatStmtReturn;
@@ -43,12 +42,4 @@ impl FormatNodeRule<StmtReturn> for FormatStmtReturn {
None => Ok(()),
}
}
-    fn is_suppressed(
-        &self,
-        trailing_comments: &[SourceComment],
-        context: &PyFormatContext,
-    ) -> bool {
-        has_skip_comment(trailing_comments, context.source())
-    }
}

View File

@@ -1,11 +1,10 @@
use ruff_formatter::write;
use ruff_python_ast::StmtTypeAlias;
-use crate::comments::SourceComment;
+use crate::prelude::*;
 use crate::statement::stmt_assign::{
     AnyAssignmentOperator, AnyBeforeOperator, FormatStatementsLastExpression,
 };
-use crate::{has_skip_comment, prelude::*};
#[derive(Default)]
pub struct FormatStmtTypeAlias;
@@ -42,12 +41,4 @@ impl FormatNodeRule<StmtTypeAlias> for FormatStmtTypeAlias {
]
)
}
-    fn is_suppressed(
-        &self,
-        trailing_comments: &[SourceComment],
-        context: &PyFormatContext,
-    ) -> bool {
-        has_skip_comment(trailing_comments, context.source())
-    }
}

View File

@@ -4,11 +4,15 @@ use ruff_formatter::{
use ruff_python_ast::helpers::is_compound_statement;
use ruff_python_ast::{self as ast, Expr, PySourceType, Stmt, Suite};
use ruff_python_ast::{AnyNodeRef, StmtExpr};
-use ruff_python_trivia::{lines_after, lines_after_ignoring_end_of_line_trivia, lines_before};
+use ruff_python_trivia::{
+    SimpleTokenKind, SimpleTokenizer, lines_after, lines_after_ignoring_end_of_line_trivia,
+    lines_before,
+};
use ruff_text_size::{Ranged, TextRange};
use crate::comments::{
-    Comments, LeadingDanglingTrailingComments, leading_comments, trailing_comments,
+    Comments, LeadingDanglingTrailingComments, has_skip_comment, leading_comments,
+    trailing_comments,
};
use crate::context::{NodeLevel, TopLevelStatementPosition, WithIndentLevel, WithNodeLevel};
use crate::other::string_literal::StringLiteralKind;
@@ -16,9 +20,9 @@ use crate::prelude::*;
use crate::preview::{
is_allow_newline_after_block_open_enabled, is_blank_line_before_decorated_class_in_stub_enabled,
};
-use crate::statement::stmt_expr::FormatStmtExpr;
+use crate::statement::trailing_semicolon;
use crate::verbatim::{
-    suppressed_node, write_suppressed_statements_starting_with_leading_comment,
+    write_skipped_statements, write_suppressed_statements_starting_with_leading_comment,
write_suppressed_statements_starting_with_trailing_comment,
};
@@ -152,7 +156,21 @@ impl FormatRule<Suite, PyFormatContext<'_>> for FormatSuite {
let first_comments = comments.leading_dangling_trailing(first);
-        let (mut preceding, mut empty_line_after_docstring) = if first_comments
+        let (mut preceding, mut empty_line_after_docstring) = if let Some(verbatim_range) =
+            skip_range(first.statement(), iter.as_slice(), f.context())
+        {
+            let preceding =
+                write_skipped_statements(first.statement(), &mut iter, verbatim_range, f)?;
+
+            // Insert a newline after a module level docstring, but treat
+            // it as a docstring otherwise. See: https://github.com/psf/black/pull/3932.
+            let empty_line_after_docstring =
+                matches!(self.kind, SuiteKind::TopLevel | SuiteKind::Class)
+                    && DocstringStmt::try_from_statement(preceding, self.kind, f.context())
+                        .is_some();
+            (preceding, empty_line_after_docstring)
+        } else if first_comments
.leading
.iter()
.any(|comment| comment.is_suppression_off_comment(source))
@@ -391,7 +409,10 @@ impl FormatRule<Suite, PyFormatContext<'_>> for FormatSuite {
}
}
-            if following_comments
+            if let Some(verbatim_range) = skip_range(following, iter.as_slice(), f.context()) {
+                preceding = write_skipped_statements(following, &mut iter, verbatim_range, f)?;
+                preceding_comments = comments.leading_dangling_trailing(preceding);
+            } else if following_comments
.leading
.iter()
.any(|comment| comment.is_suppression_off_comment(source))
@@ -840,61 +861,57 @@ impl Format<PyFormatContext<'_>> for DocstringStmt<'_> {
let comments = f.context().comments().clone();
let node_comments = comments.leading_dangling_trailing(self.docstring);
-        if FormatStmtExpr.is_suppressed(node_comments.trailing, f.context()) {
-            suppressed_node(self.docstring).fmt(f)
-        } else {
-            // SAFETY: Safe because `DocStringStmt` guarantees that it only ever wraps a `ExprStmt` containing a `ExprStringLiteral`.
-            let string_literal = self
-                .docstring
-                .as_expr_stmt()
-                .unwrap()
-                .value
-                .as_string_literal_expr()
-                .unwrap();
+        // SAFETY: Safe because `DocStringStmt` guarantees that it only ever wraps a `ExprStmt` containing a `ExprStringLiteral`.
+        let string_literal = self
+            .docstring
+            .as_expr_stmt()
+            .unwrap()
+            .value
+            .as_string_literal_expr()
+            .unwrap();

-            // We format the expression, but the statement carries the comments
-            write!(
-                f,
-                [
-                    leading_comments(node_comments.leading),
-                    f.options()
-                        .source_map_generation()
-                        .is_enabled()
-                        .then_some(source_position(self.docstring.start())),
-                    string_literal
-                        .format()
-                        .with_options(StringLiteralKind::Docstring),
-                    f.options()
-                        .source_map_generation()
-                        .is_enabled()
-                        .then_some(source_position(self.docstring.end())),
-                ]
-            )?;
+        // We format the expression, but the statement carries the comments
+        write!(
+            f,
+            [
+                leading_comments(node_comments.leading),
+                f.options()
+                    .source_map_generation()
+                    .is_enabled()
+                    .then_some(source_position(self.docstring.start())),
+                string_literal
+                    .format()
+                    .with_options(StringLiteralKind::Docstring),
+                f.options()
+                    .source_map_generation()
+                    .is_enabled()
+                    .then_some(source_position(self.docstring.end())),
+            ]
+        )?;

-            if self.suite_kind == SuiteKind::Class {
-                // Comments after class docstrings need a newline between the docstring and the
-                // comment (https://github.com/astral-sh/ruff/issues/7948).
-                // ```python
-                // class ModuleBrowser:
-                //     """Browse module classes and functions in IDLE."""
-                //     # ^ Insert a newline above here
-                //
-                //     def __init__(self, master, path, *, _htest=False, _utest=False):
-                //         pass
-                // ```
-                if let Some(own_line) = node_comments
-                    .trailing
-                    .iter()
-                    .find(|comment| comment.line_position().is_own_line())
-                {
-                    if lines_before(own_line.start(), f.context().source()) < 2 {
-                        empty_line().fmt(f)?;
-                    }
-                }
-            }
-
-            trailing_comments(node_comments.trailing).fmt(f)
-        }
+        if self.suite_kind == SuiteKind::Class {
+            // Comments after class docstrings need a newline between the docstring and the
+            // comment (https://github.com/astral-sh/ruff/issues/7948).
+            // ```python
+            // class ModuleBrowser:
+            //     """Browse module classes and functions in IDLE."""
+            //     # ^ Insert a newline above here
+            //
+            //     def __init__(self, master, path, *, _htest=False, _utest=False):
+            //         pass
+            // ```
+            if let Some(own_line) = node_comments
+                .trailing
+                .iter()
+                .find(|comment| comment.line_position().is_own_line())
+            {
+                if lines_before(own_line.start(), f.context().source()) < 2 {
+                    empty_line().fmt(f)?;
+                }
+            }
+        }
+
+        trailing_comments(node_comments.trailing).fmt(f)
     }
@@ -938,6 +955,58 @@ impl Format<PyFormatContext<'_>> for SuiteChildStatement<'_> {
}
}
pub(crate) fn skip_range(
first: &Stmt,
statements: &[Stmt],
context: &PyFormatContext,
) -> Option<TextRange> {
let start = first.start();
let mut last_statement = first;
let comments = context.comments();
let source = context.source();
for statement in statements {
if new_logical_line_between_statements(
source,
TextRange::new(last_statement.end(), statement.start()),
) {
break;
}
last_statement = statement;
}
if has_skip_comment(comments.trailing(last_statement), source) {
Some(TextRange::new(
start,
trailing_semicolon(last_statement.into(), source)
.map_or_else(|| last_statement.end(), ruff_text_size::TextRange::end),
))
} else {
None
}
}
fn new_logical_line_between_statements(source: &str, between_statement_range: TextRange) -> bool {
let mut tokenizer = SimpleTokenizer::new(source, between_statement_range).map(|tok| tok.kind());
while let Some(token) = tokenizer.next() {
match token {
SimpleTokenKind::Continuation => {
tokenizer.next();
}
SimpleTokenKind::Newline => {
return true;
}
// Since we are between statements, there are
// no non-trivia tokens, so there is no need to check
// for these and do an early return.
_ => {}
}
}
false
}
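
`new_logical_line_between_statements` is what lets `fmt: skip` distinguish `x=1;` followed by a newline (two logical lines) from a backslash continuation (still one). A simplified, character-level version of the same check (the real code walks `SimpleTokenizer` tokens, which also handles comments correctly):

```rust
// `true` if the source between two statements contains a newline that is
// not escaped by a `\` line continuation.
fn has_logical_line_break(between: &str) -> bool {
    let mut chars = between.chars();
    while let Some(c) = chars.next() {
        match c {
            // A continuation consumes the character that follows it, so the
            // escaped newline does not end the logical line.
            '\\' => {
                chars.next();
            }
            '\n' => return true,
            _ => {}
        }
    }
    false
}

fn main() {
    assert!(has_logical_line_break(";\n")); // real line break
    assert!(!has_logical_line_break("; \\\n ")); // continuation only
}
```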
#[cfg(test)]
mod tests {
use ruff_formatter::format;

View File

@@ -2,6 +2,7 @@ use std::borrow::Cow;
use std::iter::FusedIterator;
use std::slice::Iter;
use itertools::PeekingNext;
use ruff_formatter::{FormatError, write};
use ruff_python_ast::AnyNodeRef;
use ruff_python_ast::Stmt;
@@ -451,6 +452,40 @@ fn write_suppressed_statements<'a>(
}
}
#[cold]
pub(crate) fn write_skipped_statements<'a>(
first_skipped: &'a Stmt,
statements: &mut std::slice::Iter<'a, Stmt>,
verbatim_range: TextRange,
f: &mut PyFormatter,
) -> FormatResult<&'a Stmt> {
let comments = f.context().comments().clone();
comments.mark_verbatim_node_comments_formatted(first_skipped.into());
let mut preceding = first_skipped;
while let Some(prec) = statements.peeking_next(|next| next.end() <= verbatim_range.end()) {
comments.mark_verbatim_node_comments_formatted(prec.into());
preceding = prec;
}
let first_leading = comments.leading(first_skipped);
let preceding_trailing = comments.trailing(preceding);
// Write the outer comments and format the node as verbatim
write!(
f,
[
leading_comments(first_leading),
source_position(verbatim_range.start()),
verbatim_text(verbatim_range),
source_position(verbatim_range.end()),
trailing_comments(preceding_trailing)
]
)?;
Ok(preceding)
}
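
`write_skipped_statements` leans on `itertools::PeekingNext` (imported at the top of this diff) to drain exactly the statements that fall inside the verbatim range, while leaving the shared iterator positioned at the first statement after it. The pattern in isolation (requires the `itertools` crate; the real function additionally marks comments as formatted and emits the verbatim text):

```rust
use itertools::PeekingNext;

fn main() {
    // Statement end offsets; the verbatim range ends at 25.
    let stmt_ends = [5usize, 12, 20, 41];
    let mut iter = stmt_ends.iter().peekable();

    let verbatim_end = 25;
    let mut preceding = None;
    // Consume only statements that end inside the verbatim range.
    while let Some(end) = iter.peeking_next(|end| **end <= verbatim_end) {
        preceding = Some(*end);
    }

    assert_eq!(preceding, Some(20));
    // The iterator still yields the first statement after the range.
    assert_eq!(iter.next(), Some(&41));
}
```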
#[derive(Copy, Clone, Debug)]
enum InSuppression {
No,
@@ -893,65 +928,6 @@ impl Format<PyFormatContext<'_>> for VerbatimText {
}
}
-/// Disables formatting for `node` and instead uses the same formatting as the node has in source.
-///
-/// The `node` gets indented as any formatted node to avoid syntax errors when the indentation string changes (e.g. from 2 spaces to 4).
-/// The `node`s leading and trailing comments are formatted as usual, except if they fall into the suppressed node's range.
-#[cold]
-pub(crate) fn suppressed_node<'a, N>(node: N) -> FormatSuppressedNode<'a>
-where
-    N: Into<AnyNodeRef<'a>>,
-{
-    FormatSuppressedNode { node: node.into() }
-}
-
-pub(crate) struct FormatSuppressedNode<'a> {
-    node: AnyNodeRef<'a>,
-}
-
-impl Format<PyFormatContext<'_>> for FormatSuppressedNode<'_> {
-    fn fmt(&self, f: &mut Formatter<PyFormatContext<'_>>) -> FormatResult<()> {
-        let comments = f.context().comments().clone();
-        let node_comments = comments.leading_dangling_trailing(self.node);
-
-        // Mark all comments as formatted that fall into the node range
-        for comment in node_comments.leading {
-            if comment.start() > self.node.start() {
-                comment.mark_formatted();
-            }
-        }
-
-        for comment in node_comments.trailing {
-            if comment.start() < self.node.end() {
-                comment.mark_formatted();
-            }
-        }
-
-        // Some statements may end with a semicolon. Preserve the semicolon
-        let semicolon_range = self
-            .node
-            .is_statement()
-            .then(|| trailing_semicolon(self.node, f.context().source()))
-            .flatten();
-
-        let verbatim_range = semicolon_range.map_or(self.node.range(), |semicolon| {
-            TextRange::new(self.node.start(), semicolon.end())
-        });
-
-        comments.mark_verbatim_node_comments_formatted(self.node);
-
-        // Write the outer comments and format the node as verbatim
-        write!(
-            f,
-            [
-                leading_comments(node_comments.leading),
-                source_position(verbatim_range.start()),
-                verbatim_text(verbatim_range),
-                source_position(verbatim_range.end()),
-                trailing_comments(node_comments.trailing)
-            ]
-        )
-    }
-}
#[cold]
pub(crate) fn write_suppressed_clause_header(
header: ClauseHeader,

View File

@@ -0,0 +1,299 @@
---
source: crates/ruff_python_formatter/tests/fixtures.rs
---
## Input
```python
class Simple:
x=1
x=2 # fmt: skip
x=3
class Semicolon:
x=1
x=2;x=3 # fmt: skip
x=4
class TrailingSemicolon:
x=1
x=2;x=3 ; # fmt: skip
x=4
class SemicolonNewLogicalLine:
x=1;
x=2;x=3 # fmt: skip
x=4
class ManySemicolonOneLine:
x=1
x=2;x=3;x=4 # fmt: skip
x=5
class CompoundInSuite:
x=1
def foo(): y=1 # fmt: skip
x=2
class CompoundInSuiteNewline:
x=1
def foo():
y=1 # fmt: skip
x=2
class MultiLineSkip:
x=1
x = [
'1',
'2',
] # fmt: skip
class MultiLineSemicolon:
x=1
x = [
'1',
'2',
]; x=2 # fmt: skip
class LineContinuationSemicolonAfter:
x=1
x = ['a']\
; y=1 # fmt: skip
class LineContinuationSemicolonBefore:
x=1
x = ['a']; \
y=1 # fmt: skip
class LineContinuationSemicolonAndNewline:
x=1
x = ['a']; \
y=1 # fmt: skip
class LineContinuationSemicolonAndNewlineAndComment:
x=1
x = ['a']; \
# 1
y=1 # fmt: skip
class RepeatedLineContinuation:
x=1
x = ['a']; \
\
\
y=1 # fmt: skip
class MultiLineSemicolonComments:
x=1
# 1
x = [ # 2
'1', # 3
'2',
# 4
]; x=2 # 5 # fmt: skip # 6
class DocstringSkipped:
'''This is a docstring''' # fmt: skip
x=1
class MultilineDocstringSkipped:
'''This is a docstring
''' # fmt: skip
x=1
class FirstStatementNewlines:
x=1 # fmt: skip
class ChainingSemicolons:
x=[
'1',
'2',
'3',
];x=1;x=[
'1',
'2',
'3'
];x=1;x=1 # fmt: skip
class LotsOfComments:
# 1
x=[ # 2
'1', # 3
'2',
'3'
] ;x=2;x=3 # 4 # fmt: skip # 5
# 6
class MixingCompound:
def foo(): bar(); import zoo # fmt: skip
# https://github.com/astral-sh/ruff/issues/17331
def main() -> None:
import ipdb; ipdb.set_trace() # noqa: E402,E702,I001 # fmt: skip
# https://github.com/astral-sh/ruff/issues/11430
print(); print() # noqa # fmt: skip
```
## Output
```python
class Simple:
x = 1
x=2 # fmt: skip
x = 3
class Semicolon:
x = 1
x=2;x=3 # fmt: skip
x = 4
class TrailingSemicolon:
x = 1
x=2;x=3 ; # fmt: skip
x = 4
class SemicolonNewLogicalLine:
x = 1
x=2;x=3 # fmt: skip
x = 4
class ManySemicolonOneLine:
x = 1
x=2;x=3;x=4 # fmt: skip
x = 5
class CompoundInSuite:
x = 1
def foo(): y=1 # fmt: skip
x = 2
class CompoundInSuiteNewline:
x = 1
def foo():
y=1 # fmt: skip
x = 2
class MultiLineSkip:
x = 1
x = [
'1',
'2',
] # fmt: skip
class MultiLineSemicolon:
x = 1
x = [
'1',
'2',
]; x=2 # fmt: skip
class LineContinuationSemicolonAfter:
x = 1
x = ['a']\
; y=1 # fmt: skip
class LineContinuationSemicolonBefore:
x = 1
x = ['a']; \
y=1 # fmt: skip
class LineContinuationSemicolonAndNewline:
x = 1
x = ["a"]
y=1 # fmt: skip
class LineContinuationSemicolonAndNewlineAndComment:
x = 1
x = ["a"]
# 1
y=1 # fmt: skip
class RepeatedLineContinuation:
x = 1
x = ['a']; \
\
\
y=1 # fmt: skip
class MultiLineSemicolonComments:
x = 1
# 1
x = [ # 2
'1', # 3
'2',
# 4
]; x=2 # 5 # fmt: skip # 6
class DocstringSkipped:
'''This is a docstring''' # fmt: skip
x = 1
class MultilineDocstringSkipped:
'''This is a docstring
''' # fmt: skip
x = 1
class FirstStatementNewlines:
x=1 # fmt: skip
class ChainingSemicolons:
x=[
'1',
'2',
'3',
];x=1;x=[
'1',
'2',
'3'
];x=1;x=1 # fmt: skip
class LotsOfComments:
# 1
x=[ # 2
'1', # 3
'2',
'3'
] ;x=2;x=3 # 4 # fmt: skip # 5
# 6
class MixingCompound:
def foo(): bar(); import zoo # fmt: skip
# https://github.com/astral-sh/ruff/issues/17331
def main() -> None:
import ipdb; ipdb.set_trace() # noqa: E402,E702,I001 # fmt: skip
# https://github.com/astral-sh/ruff/issues/11430
print(); print() # noqa # fmt: skip
```

View File

@@ -0,0 +1,56 @@
---
source: crates/ruff_python_formatter/tests/fixtures.rs
---
## Input
```python
# 0
'''Module docstring
multiple lines''' # 1 # fmt: skip # 2
import a;import b; from c import (
x,y,z
); import f # fmt: skip
x=1;x=2;x=3;x=4 # fmt: skip
# 1
x=[ # 2
'1', # 3
'2',
'3'
];x=1;x=1 # 4 # fmt: skip # 5
# 6
def foo(): x=[
'1',
'2',
];x=1 # fmt: skip
```
## Output
```python
# 0
'''Module docstring
multiple lines''' # 1 # fmt: skip # 2
import a;import b; from c import (
x,y,z
); import f # fmt: skip
x=1;x=2;x=3;x=4 # fmt: skip
# 1
x=[ # 2
'1', # 3
'2',
'3'
];x=1;x=1 # 4 # fmt: skip # 5
# 6
def foo():
x=[
'1',
'2',
];x=1 # fmt: skip
```

View File

@@ -0,0 +1,147 @@
---
source: crates/ruff_python_formatter/tests/fixtures.rs
---
## Input
```python
class Simple:
# Range comprises skip range
x=1
<RANGE_START>x=2 <RANGE_END># fmt: skip
x=3
class Semicolon:
# Range is part of skip range
x=1
x=2;<RANGE_START>x=3<RANGE_END> # fmt: skip
x=4
class FormatFirst:
x=1
<RANGE_START>x=2<RANGE_END>;x=3 # fmt: skip
x=4
class FormatMiddle:
x=1
x=2;<RANGE_START>x=3<RANGE_END>;x=4 # fmt: skip
x=5
class SemicolonNewLogicalLine:
# Range overlaps on right side
<RANGE_START>x=1;
x=2<RANGE_END>;x=3 # fmt: skip
x=4
class SemicolonNewLogicalLine:
# Range overlaps on left side
x=1;
x=2;<RANGE_START>x=3 # fmt: skip
x=4<RANGE_END>
class ManySemicolonOneLine:
x=1
x=2;x=3;x=4 # fmt: skip
x=5
class CompoundInSuite:
x=1
<RANGE_START>def foo(): y=1 <RANGE_END># fmt: skip
x=2
class CompoundInSuiteNewline:
x=1
def foo():
y=1 # fmt: skip
x=2
class MultiLineSkip:
# Range inside statement
x=1
x = <RANGE_START>[
'1',
'2',<RANGE_END>
] # fmt: skip
class LotsOfComments:
# 1
x=[ # 2
'1', # 3<RANGE_START>
'2',
'3'
] ;x=2;x=3 # 4<RANGE_END> # fmt: skip # 5
# 6
```
## Output
```python
class Simple:
# Range comprises skip range
x=1
x=2 # fmt: skip
x=3
class Semicolon:
# Range is part of skip range
x=1
x=2;x=3 # fmt: skip
x=4
class FormatFirst:
x=1
x=2;x=3 # fmt: skip
x=4
class FormatMiddle:
x=1
x=2;x=3;x=4 # fmt: skip
x=5
class SemicolonNewLogicalLine:
# Range overlaps on right side
x = 1
x=2;x=3 # fmt: skip
x=4
class SemicolonNewLogicalLine:
# Range overlaps on left side
x=1;
x=2;x=3 # fmt: skip
x = 4
class ManySemicolonOneLine:
x=1
x=2;x=3;x=4 # fmt: skip
x=5
class CompoundInSuite:
x=1
def foo(): y=1 # fmt: skip
x=2
class CompoundInSuiteNewline:
x=1
def foo():
y=1 # fmt: skip
x=2
class MultiLineSkip:
# Range inside statement
x=1
x = [
'1',
'2',
] # fmt: skip
class LotsOfComments:
# 1
x=[ # 2
'1', # 3
'2',
'3'
] ;x=2;x=3 # 4 # fmt: skip # 5
# 6
```

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_wasm"
version = "0.14.10"
version = "0.14.11"
publish = false
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -7,7 +7,6 @@ use std::collections::BTreeMap;
use std::env::VarError;
use std::num::{NonZeroU8, NonZeroU16};
use std::path::{Path, PathBuf};
use std::str::FromStr;
use anyhow::{Context, Result, anyhow};
use glob::{GlobError, Paths, PatternError, glob};
@@ -36,8 +35,7 @@ use ruff_linter::settings::{
DEFAULT_SELECTORS, DUMMY_VARIABLE_RGX, LinterSettings, TASK_TAGS, TargetVersion,
};
use ruff_linter::{
RUFF_PKG_VERSION, RuleSelector, fs, warn_user_once, warn_user_once_by_id,
warn_user_once_by_message,
RuleSelector, fs, warn_user_once, warn_user_once_by_id, warn_user_once_by_message,
};
use ruff_python_ast as ast;
use ruff_python_formatter::{
@@ -53,6 +51,7 @@ use crate::options::{
Flake8UnusedArgumentsOptions, FormatOptions, IsortOptions, LintCommonOptions, LintOptions,
McCabeOptions, Options, Pep8NamingOptions, PyUpgradeOptions, PycodestyleOptions,
PydoclintOptions, PydocstyleOptions, PyflakesOptions, PylintOptions, RuffOptions,
validate_required_version,
};
use crate::settings::{
EXCLUDE, FileResolverSettings, FormatterSettings, INCLUDE, INCLUDE_PREVIEW, LineEnding,
@@ -155,13 +154,7 @@ pub struct Configuration {
impl Configuration {
pub fn into_settings(self, project_root: &Path) -> Result<Settings> {
if let Some(required_version) = &self.required_version {
let ruff_pkg_version = pep440_rs::Version::from_str(RUFF_PKG_VERSION)
.expect("RUFF_PKG_VERSION is not a valid PEP 440 version specifier");
if !required_version.contains(&ruff_pkg_version) {
return Err(anyhow!(
"Required version `{required_version}` does not match the running version `{RUFF_PKG_VERSION}`"
));
}
validate_required_version(required_version)?;
}
let linter_target_version = TargetVersion(self.target_version);

View File

@@ -1,15 +1,19 @@
use anyhow::Result;
use regex::Regex;
use rustc_hash::{FxBuildHasher, FxHashMap, FxHashSet};
use serde::de::{self};
use serde::{Deserialize, Deserializer, Serialize};
use std::collections::{BTreeMap, BTreeSet};
use std::path::PathBuf;
use std::str::FromStr;
use strum::IntoEnumIterator;
use unicode_normalization::UnicodeNormalization;
use crate::settings::LineEnding;
use ruff_formatter::IndentStyle;
use ruff_graph::Direction;
use ruff_linter::RUFF_PKG_VERSION;
use ruff_linter::line_width::{IndentWidth, LineLength};
use ruff_linter::rules::flake8_import_conventions::settings::BannedAliases;
use ruff_linter::rules::flake8_pytest_style::settings::SettingsError;
@@ -556,6 +560,17 @@ pub struct LintOptions {
pub future_annotations: Option<bool>,
}
pub fn validate_required_version(required_version: &RequiredVersion) -> anyhow::Result<()> {
let ruff_pkg_version = pep440_rs::Version::from_str(RUFF_PKG_VERSION)
.expect("RUFF_PKG_VERSION is not a valid PEP 440 version specifier");
if !required_version.contains(&ruff_pkg_version) {
return Err(anyhow::anyhow!(
"Required version `{required_version}` does not match the running version `{RUFF_PKG_VERSION}`"
));
}
Ok(())
}
/// Newtype wrapper for [`LintCommonOptions`] that allows customizing the JSON schema and omitting the fields from the [`OptionsMetadata`].
#[derive(Clone, Debug, PartialEq, Eq, Default, Serialize, Deserialize)]
#[serde(transparent)]

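Editor's note on the hunk above: the new `validate_required_version` helper centralizes the check that was previously inlined in `Configuration::into_settings`. A minimal sketch of the same check against the `pep440_rs` API used in the diff; the standalone function shape and the specifier string are illustrative, not part of this change:

```rust
use std::str::FromStr;

use anyhow::anyhow;
use pep440_rs::{Version, VersionSpecifiers};

/// Sketch: error if the running version does not satisfy `required-version`.
/// The real helper takes ruff's `RequiredVersion` and `RUFF_PKG_VERSION`.
fn validate_required_version_sketch(
    required_version: &VersionSpecifiers,
    running: &str,
) -> anyhow::Result<()> {
    let running = Version::from_str(running)
        .map_err(|err| anyhow!("not a valid PEP 440 version: {err}"))?;
    if !required_version.contains(&running) {
        return Err(anyhow!(
            "Required version `{required_version}` does not match the running version `{running}`"
        ));
    }
    Ok(())
}

fn main() -> anyhow::Result<()> {
    // Illustrative specifier; `0.14.11` matches the version bump in this diff.
    let spec = VersionSpecifiers::from_str(">=0.14,<0.15")
        .map_err(|err| anyhow!("invalid specifier: {err}"))?;
    validate_required_version_sketch(&spec, "0.14.11")
}
```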
View File

@@ -5,12 +5,13 @@ use std::path::{Path, PathBuf};
use anyhow::{Context, Result};
use log::debug;
use pep440_rs::{Operator, Version, VersionSpecifiers};
use serde::de::DeserializeOwned;
use serde::{Deserialize, Serialize};
use strum::IntoEnumIterator;
use ruff_linter::settings::types::PythonVersion;
use ruff_linter::settings::types::{PythonVersion, RequiredVersion};
use crate::options::Options;
use crate::options::{Options, validate_required_version};
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
struct Tools {
@@ -40,20 +41,38 @@ impl Pyproject {
}
}
/// Parse a `ruff.toml` file.
fn parse_ruff_toml<P: AsRef<Path>>(path: P) -> Result<Options> {
fn parse_toml<P: AsRef<Path>, T: DeserializeOwned>(path: P, table_path: &[&str]) -> Result<T> {
let path = path.as_ref();
let contents = std::fs::read_to_string(path)
.with_context(|| format!("Failed to read {}", path.display()))?;
toml::from_str(&contents).with_context(|| format!("Failed to parse {}", path.display()))
// Parse the TOML document once into a spanned representation so we can:
// - Inspect `required-version` without triggering strict deserialization errors.
// - Deserialize with precise spans (line/column and excerpt) on errors.
let root = toml::de::DeTable::parse(&contents)
.with_context(|| format!("Failed to parse {}", path.display()))?;
check_required_version(root.get_ref(), table_path)?;
let deserializer = toml::de::Deserializer::from(root);
T::deserialize(deserializer)
.map_err(|mut err| {
// `Deserializer::from` doesn't have access to the original input, but we do.
// Attach it so TOML errors include line/column and a source excerpt.
err.set_input(Some(&contents));
err
})
.with_context(|| format!("Failed to parse {}", path.display()))
}
/// Parse a `ruff.toml` file.
fn parse_ruff_toml<P: AsRef<Path>>(path: P) -> Result<Options> {
parse_toml(path, &[])
}
/// Parse a `pyproject.toml` file.
fn parse_pyproject_toml<P: AsRef<Path>>(path: P) -> Result<Pyproject> {
let path = path.as_ref();
let contents = std::fs::read_to_string(path)
.with_context(|| format!("Failed to read {}", path.display()))?;
toml::from_str(&contents).with_context(|| format!("Failed to parse {}", path.display()))
parse_toml(path, &["tool", "ruff"])
}
/// Return `true` if a `pyproject.toml` contains a `[tool.ruff]` section.
@@ -98,6 +117,33 @@ pub fn find_settings_toml<P: AsRef<Path>>(path: P) -> Result<Option<PathBuf>> {
Ok(None)
}
fn check_required_version(value: &toml::de::DeTable, table_path: &[&str]) -> Result<()> {
let mut current = value;
for key in table_path {
let Some(next) = current.get(*key) else {
return Ok(());
};
let toml::de::DeValue::Table(next) = next.get_ref() else {
return Ok(());
};
current = next;
}
let required_version = current
.get("required-version")
.and_then(|value| value.get_ref().as_str());
let Some(required_version) = required_version else {
return Ok(());
};
// If it doesn't parse, we just fall through to normal parsing; it will give a nicer error message.
if let Ok(required_version) = required_version.parse::<RequiredVersion>() {
validate_required_version(&required_version)?;
}
Ok(())
}
/// Derive target version from `required-version` in `pyproject.toml`, if
/// such a file exists in an ancestor directory.
pub fn find_fallback_target_version<P: AsRef<Path>>(path: P) -> Option<PythonVersion> {

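Editor's note: `check_required_version` above deliberately tolerates malformed values so that ordinary deserialization can report a nicer error later. A hedged sketch of the same table-path walk using the simpler `toml::Value` API instead of the spanned `DeTable` from the diff; the pyproject snippet is illustrative:

```rust
/// Sketch: find `required-version` under a nested table path, e.g.
/// `["tool", "ruff"]` for `pyproject.toml` or `[]` for `ruff.toml`.
/// Missing tables and unparseable TOML simply yield `None`, mirroring the
/// "fall through to normal parsing" behavior in the diff.
fn find_required_version(contents: &str, table_path: &[&str]) -> Option<String> {
    let root: toml::Value = toml::from_str(contents).ok()?;
    let mut current = &root;
    for key in table_path {
        current = current.get(*key).filter(|value| value.is_table())?;
    }
    current.get("required-version")?.as_str().map(str::to_owned)
}

fn main() {
    let pyproject = r#"
[tool.ruff]
required-version = ">=0.14,<0.15"
"#;
    assert_eq!(
        find_required_version(pyproject, &["tool", "ruff"]).as_deref(),
        Some(">=0.14,<0.15"),
    );
    assert_eq!(find_required_version(pyproject, &["tool", "other"]), None);
}
```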
View File

@@ -102,8 +102,8 @@ impl Relativity {
#[derive(Debug)]
pub struct Resolver<'a> {
pyproject_config: &'a PyprojectConfig,
/// All [`Settings`] that have been added to the resolver.
settings: Vec<Settings>,
/// All [`Settings`] that have been added to the resolver, along with their config file paths.
settings: Vec<(Settings, PathBuf)>,
/// A router from path to index into the `settings` vector.
router: Router<usize>,
}
@@ -146,8 +146,8 @@ impl<'a> Resolver<'a> {
}
/// Add a resolved [`Settings`] under a given [`PathBuf`] scope.
fn add(&mut self, path: &Path, settings: Settings) {
self.settings.push(settings);
fn add(&mut self, path: &Path, settings: Settings, config_path: PathBuf) {
self.settings.push((settings, config_path));
// Normalize the path to use `/` separators and escape the '{' and '}' characters,
// which matchit uses for routing parameters.
@@ -172,13 +172,27 @@ impl<'a> Resolver<'a> {
/// Return the appropriate [`Settings`] for a given [`Path`].
pub fn resolve(&self, path: &Path) -> &Settings {
self.resolve_with_path(path).0
}
/// Return the appropriate [`Settings`] and config file path for a given [`Path`].
pub fn resolve_with_path(&self, path: &Path) -> (&Settings, Option<&Path>) {
match self.pyproject_config.strategy {
PyprojectDiscoveryStrategy::Fixed => &self.pyproject_config.settings,
PyprojectDiscoveryStrategy::Fixed => (
&self.pyproject_config.settings,
self.pyproject_config.path.as_deref(),
),
PyprojectDiscoveryStrategy::Hierarchical => self
.router
.at(path.to_slash_lossy().as_ref())
.map(|Match { value, .. }| &self.settings[*value])
.unwrap_or(&self.pyproject_config.settings),
.map(|Match { value, .. }| {
let (settings, config_path) = &self.settings[*value];
(settings, Some(config_path.as_path()))
})
.unwrap_or((
&self.pyproject_config.settings,
self.pyproject_config.path.as_deref(),
)),
}
}
@@ -255,7 +269,8 @@ impl<'a> Resolver<'a> {
/// Return an iterator over the resolved [`Settings`] in this [`Resolver`].
pub fn settings(&self) -> impl Iterator<Item = &Settings> {
std::iter::once(&self.pyproject_config.settings).chain(&self.settings)
std::iter::once(&self.pyproject_config.settings)
.chain(self.settings.iter().map(|(settings, _)| settings))
}
}
@@ -379,17 +394,17 @@ pub fn resolve_configuration(
/// Extract the project root (scope) and [`Settings`] from a given
/// `pyproject.toml`.
fn resolve_scoped_settings<'a>(
pyproject: &'a Path,
fn resolve_scoped_settings(
pyproject: &Path,
transformer: &dyn ConfigurationTransformer,
origin: ConfigurationOrigin,
) -> Result<(&'a Path, Settings)> {
) -> Result<(PathBuf, Settings)> {
let relativity = Relativity::from(origin);
let configuration = resolve_configuration(pyproject, transformer, origin)?;
let project_root = relativity.resolve(pyproject);
let settings = configuration.into_settings(project_root)?;
Ok((project_root, settings))
Ok((project_root.to_path_buf(), settings))
}
/// Extract the [`Settings`] from a given `pyproject.toml` and process the
@@ -455,7 +470,7 @@ pub fn python_files_in_path<'a>(
transformer,
ConfigurationOrigin::Ancestor,
)?;
resolver.add(root, settings);
resolver.add(&root, settings, pyproject);
// We found the closest configuration.
break;
}
@@ -647,7 +662,11 @@ impl ParallelVisitor for PythonFilesVisitor<'_, '_> {
ConfigurationOrigin::Ancestor,
) {
Ok((root, settings)) => {
self.global.resolver.write().unwrap().add(root, settings);
self.global
.resolver
.write()
.unwrap()
.add(&root, settings, pyproject);
}
Err(err) => {
self.local_error = Err(err);
@@ -767,7 +786,7 @@ pub fn python_file_at_path(
if let Some(pyproject) = settings_toml(ancestor)? {
let (root, settings) =
resolve_scoped_settings(&pyproject, transformer, ConfigurationOrigin::Unknown)?;
resolver.add(root, settings);
resolver.add(&root, settings, pyproject);
break;
}
}

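Editor's note: the resolver change above threads the originating config file path alongside each `Settings`, so `resolve_with_path` can report which file governed a given path. A minimal sketch of that shape, assuming a naive longest-prefix lookup in place of the `matchit` router used in the diff (all names below are stand-ins):

```rust
use std::path::{Path, PathBuf};

/// Stand-in for the real `Settings` type.
#[derive(Debug)]
struct Settings(&'static str);

struct Resolver {
    /// Fallback settings (and optional config path) from the top-level config.
    fallback: (Settings, Option<PathBuf>),
    /// All settings added to the resolver, with their config file paths,
    /// keyed by the directory scope they apply to.
    scoped: Vec<(PathBuf, Settings, PathBuf)>,
}

impl Resolver {
    /// Return the settings and config path for the longest matching scope,
    /// falling back to the top-level configuration.
    fn resolve_with_path(&self, path: &Path) -> (&Settings, Option<&Path>) {
        self.scoped
            .iter()
            .filter(|(scope, _, _)| path.starts_with(scope))
            .max_by_key(|(scope, _, _)| scope.components().count())
            .map(|(_, settings, config)| (settings, Some(config.as_path())))
            .unwrap_or((&self.fallback.0, self.fallback.1.as_deref()))
    }
}

fn main() {
    let resolver = Resolver {
        fallback: (Settings("root"), None),
        scoped: vec![(
            PathBuf::from("project/sub"),
            Settings("sub"),
            PathBuf::from("project/sub/pyproject.toml"),
        )],
    };
    let (settings, config) = resolver.resolve_with_path(Path::new("project/sub/main.py"));
    println!("{settings:?} from {config:?}");
}
```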
View File

@@ -34,12 +34,12 @@ cargo install cargo-insta
You'll need [uv](https://docs.astral.sh/uv/getting-started/installation/) (or `pipx` and `pip`) to
run Python utility commands.
You can optionally install pre-commit hooks to automatically run the validation checks
You can optionally install hooks to automatically run the validation checks
when making a commit:
```shell
uv tool install pre-commit
pre-commit install
uv tool install prek
prek install
```
We recommend [nextest](https://nexte.st/) to run ty's test suite (via `cargo nextest run`),
@@ -66,7 +66,7 @@ and that it passes both the lint and test validation checks:
```shell
cargo clippy --workspace --all-targets --all-features -- -D warnings # Rust linting
cargo test # Rust testing
uvx pre-commit run --all-files --show-diff-on-failure # Rust and Python formatting, Markdown and Python linting, etc.
uvx prek run -a # Rust and Python formatting, Markdown and Python linting, etc.
```
These checks will run on GitHub Actions when you open your pull request, but running them locally

crates/ty/docs/cli.md (generated, 3 lines changed)
View File

@@ -37,7 +37,8 @@ ty check [OPTIONS] [PATH]...
<h3 class="cli-reference">Options</h3>
<dl class="cli-reference"><dt id="ty-check--color"><a href="#ty-check--color"><code>--color</code></a> <i>when</i></dt><dd><p>Control when colored output is used</p>
<dl class="cli-reference"><dt id="ty-check--add-ignore"><a href="#ty-check--add-ignore"><code>--add-ignore</code></a></dt><dd><p>Adds <code>ty: ignore</code> comments to suppress all rule diagnostics</p>
</dd><dt id="ty-check--color"><a href="#ty-check--color"><code>--color</code></a> <i>when</i></dt><dd><p>Control when colored output is used</p>
<p>Possible values:</p>
<ul>
<li><code>auto</code>: Display colors if the output goes to an interactive terminal</li>

View File

@@ -467,7 +467,7 @@ def test(): -> "int":
Default level: <a href="../../rules#rule-levels" title="This lint has a default level of 'warn'."><code>warn</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20ignore-comment-unknown-rule" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Fsuppression.rs#L50" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Fsuppression.rs#L54" target="_blank">View source</a>
</small>
@@ -1047,7 +1047,7 @@ class D(Generic[U, T]): ...
Default level: <a href="../../rules#rule-levels" title="This lint has a default level of 'warn'."><code>warn</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-ignore-comment" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Fsuppression.rs#L75" target="_blank">View source</a>
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Fsuppression.rs#L79" target="_blank">View source</a>
</small>
@@ -2895,7 +2895,7 @@ A() + A() # TypeError: unsupported operand type(s) for +: 'A' and 'A'
## `unused-ignore-comment`
<small>
Default level: <a href="../../rules#rule-levels" title="This lint has a default level of 'ignore'."><code>ignore</code></a> ·
Default level: <a href="../../rules#rule-levels" title="This lint has a default level of 'warn'."><code>warn</code></a> ·
Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.0.1-alpha.1</a> ·
<a href="https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20unused-ignore-comment" target="_blank">Related issues</a> ·
<a href="https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Fsuppression.rs#L25" target="_blank">View source</a>
@@ -2904,11 +2904,11 @@ Added in <a href="https://github.com/astral-sh/ty/releases/tag/0.0.1-alpha.1">0.
**What it does**
Checks for `type: ignore` or `ty: ignore` directives that are no longer applicable.
Checks for `ty: ignore` or `type: ignore` directives that are no longer applicable.
**Why is this bad?**
A `type: ignore` directive that no longer matches any diagnostic violations is likely
A `ty: ignore` directive that no longer matches any diagnostic violations is likely
included by mistake, and should be removed to avoid confusion.
**Examples**
@@ -2923,6 +2923,11 @@ Use instead:
a = 20 / 2
```
**Options**
Set [`analysis.respect-type-ignore-comments`](https://docs.astral.sh/ty/reference/configuration/#respect-type-ignore-comments)
to `false` to prevent this rule from reporting unused `type: ignore` comments.
## `useless-overload-body`
<small>

View File

@@ -54,6 +54,10 @@ pub(crate) struct CheckCommand {
)]
pub paths: Vec<SystemPathBuf>,
/// Adds `ty: ignore` comments to suppress all rule diagnostics.
#[arg(long)]
pub(crate) add_ignore: bool,
/// Run the command within the given project directory.
///
/// All `pyproject.toml` files will be discovered by walking up the directory tree from the given project directory,

View File

@@ -4,37 +4,36 @@ mod printer;
mod python_version;
mod version;
pub use args::Cli;
use ty_project::metadata::settings::TerminalSettings;
use ty_static::EnvVars;
use std::fmt::Write;
use std::process::{ExitCode, Termination};
use std::sync::Mutex;
use anyhow::Result;
use crate::args::{CheckCommand, Command, TerminalColor};
use crate::logging::{VerbosityLevel, setup_tracing};
use crate::printer::Printer;
use anyhow::{Context, anyhow};
use clap::{CommandFactory, Parser};
use colored::Colorize;
use crossbeam::channel as crossbeam_channel;
use rayon::ThreadPoolBuilder;
use ruff_db::cancellation::{CancellationToken, CancellationTokenSource};
use ruff_db::cancellation::{Canceled, CancellationToken, CancellationTokenSource};
use ruff_db::diagnostic::{
Diagnostic, DiagnosticId, DisplayDiagnosticConfig, DisplayDiagnostics, Severity,
};
use ruff_db::files::File;
use ruff_db::max_parallelism;
use ruff_db::system::{OsSystem, SystemPath, SystemPathBuf};
use ruff_db::{STACK_SIZE, max_parallelism};
use salsa::Database;
use ty_project::metadata::options::ProjectOptionsOverrides;
use ty_project::metadata::settings::TerminalSettings;
use ty_project::watch::ProjectWatcher;
use ty_project::{CollectReporter, Db, watch};
use ty_project::{CollectReporter, Db, suppress_all_diagnostics, watch};
use ty_project::{ProjectDatabase, ProjectMetadata};
use ty_server::run_server;
use ty_static::EnvVars;
use crate::args::{CheckCommand, Command, TerminalColor};
use crate::logging::{VerbosityLevel, setup_tracing};
use crate::printer::Printer;
pub use args::Cli;
pub fn run() -> anyhow::Result<ExitStatus> {
setup_rayon();
@@ -112,6 +111,12 @@ fn run_check(args: CheckCommand) -> anyhow::Result<ExitStatus> {
.map(|path| SystemPath::absolute(path, &cwd))
.collect();
let mode = if args.add_ignore {
MainLoopMode::AddIgnore
} else {
MainLoopMode::Check
};
let system = OsSystem::new(&cwd);
let watch = args.watch;
let exit_zero = args.exit_zero;
@@ -144,7 +149,7 @@ fn run_check(args: CheckCommand) -> anyhow::Result<ExitStatus> {
}
let (main_loop, main_loop_cancellation_token) =
MainLoop::new(project_options_overrides, printer);
MainLoop::new(mode, project_options_overrides, printer);
// Listen to Ctrl+C and abort the watch mode.
let main_loop_cancellation_token = Mutex::new(Some(main_loop_cancellation_token));
@@ -215,6 +220,8 @@ impl Termination for ExitStatus {
}
struct MainLoop {
mode: MainLoopMode,
/// Sender that can be used to send messages to the main loop.
sender: crossbeam_channel::Sender<MainLoopMessage>,
@@ -237,6 +244,7 @@ struct MainLoop {
impl MainLoop {
fn new(
mode: MainLoopMode,
project_options_overrides: ProjectOptionsOverrides,
printer: Printer,
) -> (Self, MainLoopCancellationToken) {
@@ -247,6 +255,7 @@ impl MainLoop {
(
Self {
mode,
sender: sender.clone(),
receiver,
watcher: None,
@@ -325,80 +334,78 @@ impl MainLoop {
result,
revision: check_revision,
} => {
let terminal_settings = db.project().settings(db).terminal();
let display_config = DisplayDiagnosticConfig::default()
.format(terminal_settings.output_format.into())
.color(colored::control::SHOULD_COLORIZE.should_colorize())
.with_cancellation_token(Some(self.cancellation_token.clone()))
.show_fix_diff(true);
if check_revision == revision {
if db.project().files(db).is_empty() {
tracing::warn!("No python files found under the given path(s)");
}
// TODO: We should have an official flag to silence workspace diagnostics.
if std::env::var("TY_MEMORY_REPORT").as_deref() == Ok("mypy_primer") {
return Ok(ExitStatus::Success);
}
let is_human_readable = terminal_settings.output_format.is_human_readable();
if result.is_empty() {
if is_human_readable {
writeln!(
self.printer.stream_for_success_summary(),
"{}",
"All checks passed!".green().bold()
)?;
}
if self.watcher.is_none() {
return Ok(ExitStatus::Success);
}
} else {
let diagnostics_count = result.len();
let mut stdout = self.printer.stream_for_details().lock();
let exit_status =
exit_status_from_diagnostics(&result, terminal_settings);
// Only render diagnostics if they're going to be displayed, since doing
// so is expensive.
if stdout.is_enabled() {
write!(
stdout,
"{}",
DisplayDiagnostics::new(db, &display_config, &result)
)?;
}
if !self.cancellation_token.is_cancelled() {
if is_human_readable {
writeln!(
self.printer.stream_for_failure_summary(),
"Found {} diagnostic{}",
diagnostics_count,
if diagnostics_count > 1 { "s" } else { "" }
)?;
}
if exit_status.is_internal_error() {
tracing::warn!(
"A fatal error occurred while checking some files. Not all project files were analyzed. See the diagnostics list above for details."
);
}
}
if self.watcher.is_none() {
return Ok(exit_status);
}
}
} else {
if check_revision != revision {
tracing::debug!(
"Discarding check result for outdated revision: current: {revision}, result revision: {check_revision}"
);
continue;
}
if db.project().files(db).is_empty() {
tracing::warn!("No python files found under the given path(s)");
}
let result = match self.mode {
MainLoopMode::Check => {
// TODO: We should have an official flag to silence workspace diagnostics.
if std::env::var("TY_MEMORY_REPORT").as_deref() == Ok("mypy_primer") {
return Ok(ExitStatus::Success);
}
self.write_diagnostics(db, &result)?;
if self.cancellation_token.is_cancelled() {
Err(Canceled)
} else {
Ok(result)
}
}
MainLoopMode::AddIgnore => {
if let Ok(result) =
suppress_all_diagnostics(db, result, &self.cancellation_token)
{
self.write_diagnostics(db, &result.diagnostics)?;
let terminal_settings = db.project().settings(db).terminal();
let is_human_readable =
terminal_settings.output_format.is_human_readable();
if is_human_readable {
writeln!(
self.printer.stream_for_failure_summary(),
"Added {} ignore comment{}",
result.count,
if result.count > 1 { "s" } else { "" }
)?;
}
Ok(result.diagnostics)
} else {
Err(Canceled)
}
}
};
let exit_status = match result.as_deref() {
Ok([]) => ExitStatus::Success,
Ok(diagnostics) => {
let terminal_settings = db.project().settings(db).terminal();
exit_status_from_diagnostics(diagnostics, terminal_settings)
}
Err(Canceled) => ExitStatus::Success,
};
if exit_status.is_internal_error() {
tracing::warn!(
"A fatal error occurred while checking some files. Not all project files were analyzed. See the diagnostics list above for details."
);
}
if self.watcher.is_some() {
continue;
}
return Ok(exit_status);
}
MainLoopMessage::ApplyChanges(changes) => {
@@ -425,6 +432,65 @@ impl MainLoop {
Ok(ExitStatus::Success)
}
fn write_diagnostics(
&self,
db: &ProjectDatabase,
diagnostics: &[Diagnostic],
) -> anyhow::Result<()> {
let terminal_settings = db.project().settings(db).terminal();
let is_human_readable = terminal_settings.output_format.is_human_readable();
match diagnostics {
[] => {
if is_human_readable {
writeln!(
self.printer.stream_for_success_summary(),
"{}",
"All checks passed!".green().bold()
)?;
}
}
diagnostics => {
let diagnostics_count = diagnostics.len();
let mut stdout = self.printer.stream_for_details().lock();
// Only render diagnostics if they're going to be displayed, since doing
// so is expensive.
if stdout.is_enabled() {
let display_config = DisplayDiagnosticConfig::default()
.format(terminal_settings.output_format.into())
.color(colored::control::SHOULD_COLORIZE.should_colorize())
.with_cancellation_token(Some(self.cancellation_token.clone()))
.show_fix_diff(true);
write!(
stdout,
"{}",
DisplayDiagnostics::new(db, &display_config, diagnostics)
)?;
}
if !self.cancellation_token.is_cancelled() && is_human_readable {
writeln!(
self.printer.stream_for_failure_summary(),
"Found {} diagnostic{}",
diagnostics_count,
if diagnostics_count > 1 { "s" } else { "" }
)?;
}
}
}
Ok(())
}
}
#[derive(Copy, Clone, Debug)]
enum MainLoopMode {
Check,
AddIgnore,
}
fn exit_status_from_diagnostics(
@@ -559,12 +625,7 @@ fn set_colored_override(color: Option<TerminalColor>) {
fn setup_rayon() {
ThreadPoolBuilder::default()
.num_threads(max_parallelism().get())
// Use a reasonably large stack size to avoid running into stack overflows too easily. The
// size was chosen in such a way as to still be able to handle large expressions involving
// binary operators (x + x + … + x) both during the AST walk in semantic index building as
// well as during type checking. Using this stack size, we can handle expressions
// that are several times larger than the corresponding limits in existing type checkers.
.stack_size(16 * 1024 * 1024)
.stack_size(STACK_SIZE)
.build_global()
.unwrap();
}

View File

@@ -160,6 +160,65 @@ fn configuration_include() -> anyhow::Result<()> {
Ok(())
}
/// Files without extensions can be included by adding a literal glob to `include` that matches
/// the path exactly. A literal glob is a glob without any meta characters.
#[test]
fn configuration_include_no_extension() -> anyhow::Result<()> {
let case = CliTest::with_files([(
"src/main",
r#"
print(undefined_var) # error: unresolved-reference
"#,
)])?;
// By default, `src/main` is excluded because the file has no supported extension.
case.write_file(
"ty.toml",
r#"
[src]
include = ["src"]
"#,
)?;
assert_cmd_snapshot!(case.command(), @r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
WARN No python files found under the given path(s)
");
// The file can be included by adding an exactly matching pattern
case.write_file(
"ty.toml",
r#"
[src]
include = ["src", "src/main"]
"#,
)?;
assert_cmd_snapshot!(case.command(), @r"
success: false
exit_code: 1
----- stdout -----
error[unresolved-reference]: Name `undefined_var` used when not defined
--> src/main:2:7
|
2 | print(undefined_var) # error: unresolved-reference
| ^^^^^^^^^^^^^
|
info: rule `unresolved-reference` is enabled by default
Found 1 diagnostic
----- stderr -----
");
Ok(())
}
/// Test configuration file exclude functionality
#[test]
fn configuration_exclude() -> anyhow::Result<()> {

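Editor's note: the new test above exercises literal globs, i.e. patterns with no meta characters, which match one path exactly; that is how an extension-less file like `src/main` can be pulled into `include`. A hedged illustration with the `globset` crate (chosen here purely for illustration; the paths come from the test above):

```rust
use globset::Glob;

fn main() -> Result<(), globset::Error> {
    // A literal glob (no `*`, `?`, `[`, `{`) matches one path exactly,
    // which is how a file without a supported extension can be included.
    let matcher = Glob::new("src/main")?.compile_matcher();

    assert!(matcher.is_match("src/main"));        // exact path: included
    assert!(!matcher.is_match("src/main.py"));    // different path: no match
    assert!(!matcher.is_match("src/main/other")); // not a prefix match
    Ok(())
}
```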
View File

@@ -0,0 +1,114 @@
use insta_cmd::assert_cmd_snapshot;
use crate::CliTest;
#[test]
fn add_ignore() -> anyhow::Result<()> {
let case = CliTest::with_file(
"different_violations.py",
r#"
import sys
x = 1 + a
if sys.does_not_exist:
...
def test(a, b): ...
test(x = 10, b = 12)
"#,
)?;
assert_cmd_snapshot!(case.command().arg("--add-ignore"), @r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
Added 4 ignore comments
----- stderr -----
");
// There should be no diagnostics when running ty again
assert_cmd_snapshot!(case.command(), @r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
");
Ok(())
}
#[test]
fn add_ignore_unfixable() -> anyhow::Result<()> {
let case = CliTest::with_files([
("has_syntax_error.py", r"print(x # [unresolved-reference]"),
(
"different_violations.py",
r#"
import sys
x = 1 + a
reveal_type(x)
if sys.does_not_exist:
...
"#,
),
(
"repeated_violations.py",
r#"
x = (
1 +
a * b
)
y = y # ty: ignore[unresolved-reference]
"#,
),
])?;
assert_cmd_snapshot!(case.command().arg("--add-ignore").env("RUST_BACKTRACE", "1"), @r"
success: false
exit_code: 1
----- stdout -----
info[revealed-type]: Revealed type
--> different_violations.py:6:13
|
4 | x = 1 + a # ty:ignore[unresolved-reference]
5 |
6 | reveal_type(x) # ty:ignore[undefined-reveal]
| ^ `Unknown`
7 |
8 | if sys.does_not_exist: # ty:ignore[unresolved-attribute]
|
error[unresolved-reference]: Name `x` used when not defined
--> has_syntax_error.py:1:7
|
1 | print(x # [unresolved-reference]
| ^
|
info: rule `unresolved-reference` is enabled by default
error[invalid-syntax]: unexpected EOF while parsing
--> has_syntax_error.py:1:34
|
1 | print(x # [unresolved-reference]
| ^
|
Found 3 diagnostics
Added 5 ignore comments
----- stderr -----
WARN Skipping file `<temp_dir>/has_syntax_error.py` with syntax errors
");
Ok(())
}

View File

@@ -2,6 +2,7 @@ mod analysis_options;
mod config_option;
mod exit_code;
mod file_selection;
mod fixes;
mod python_environment;
mod rule_selection;

View File

@@ -1,9 +1,10 @@
name,file,index,rank
auto-import-includes-modules,main.py,0,1
auto-import-includes-modules,main.py,1,7
auto-import-includes-modules,main.py,1,2
auto-import-includes-modules,main.py,2,1
auto-import-skips-current-module,main.py,0,1
class-arg-completion,main.py,0,1
exact-over-fuzzy,main.py,0,1
fstring-completions,main.py,0,1
higher-level-symbols-preferred,main.py,0,
higher-level-symbols-preferred,main.py,1,1
@@ -16,8 +17,10 @@ import-deprioritizes-type_check_only,main.py,3,2
import-deprioritizes-type_check_only,main.py,4,3
import-keyword-completion,main.py,0,1
internal-typeshed-hidden,main.py,0,2
local-over-auto-import,main.py,0,1
modules-over-other-symbols,main.py,0,1
none-completion,main.py,0,1
numpy-array,main.py,0,159
numpy-array,main.py,0,57
numpy-array,main.py,1,1
object-attr-instance-methods,main.py,0,1
object-attr-instance-methods,main.py,1,1
@@ -26,7 +29,12 @@ raise-uses-base-exception,main.py,0,1
scope-existing-over-new-import,main.py,0,1
scope-prioritize-closer,main.py,0,2
scope-simple-long-identifier,main.py,0,1
third-party-over-stdlib,main.py,0,1
tighter-over-looser-scope,main.py,0,3
tstring-completions,main.py,0,1
ty-extensions-lower-stdlib,main.py,0,9
type-var-typing-over-ast,main.py,0,3
type-var-typing-over-ast,main.py,1,253
ty-extensions-lower-stdlib,main.py,0,1
typing-gets-priority,main.py,0,1
typing-gets-priority,main.py,1,1
typing-gets-priority,main.py,2,1
typing-gets-priority,main.py,3,1
typing-gets-priority,main.py,4,1

View File

@@ -540,16 +540,17 @@ fn copy_project(src_dir: &SystemPath, dst_dir: &SystemPath) -> anyhow::Result<Ve
std::fs::create_dir_all(dst_dir).with_context(|| dst_dir.to_string())?;
let mut cursors = vec![];
for result in walkdir::WalkDir::new(src_dir.as_std_path()) {
let it = walkdir::WalkDir::new(src_dir.as_std_path())
.into_iter()
.filter_entry(|dent| {
!dent
.file_name()
.to_str()
.is_some_and(|name| name.starts_with('.'))
});
for result in it {
let dent =
result.with_context(|| format!("failed to get directory entry from {src_dir}"))?;
if dent
.file_name()
.to_str()
.is_some_and(|name| name.starts_with('.'))
{
continue;
}
let src = SystemPath::from_std_path(dent.path()).ok_or_else(|| {
anyhow::anyhow!("path `{}` is not valid UTF-8", dent.path().display())

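Editor's note: the hunk above is more than a refactor. Filtering with a `continue` inside the loop still descends into hidden directories; `filter_entry` prunes them from the walk entirely, so their contents are never visited. A self-contained sketch of the pattern (the directory path is illustrative):

```rust
use walkdir::{DirEntry, WalkDir};

/// True if the entry's file name starts with a dot.
fn is_hidden(entry: &DirEntry) -> bool {
    entry
        .file_name()
        .to_str()
        .is_some_and(|name| name.starts_with('.'))
}

fn main() {
    // `filter_entry` skips hidden directories and everything beneath them,
    // e.g. the contents of `.git/`; a `continue` in the loop body would only
    // skip the `.git` entry itself, not its descendants.
    for entry in WalkDir::new("some/project/dir")
        .into_iter()
        .filter_entry(|entry| !is_hidden(entry))
    {
        match entry {
            Ok(entry) => println!("{}", entry.path().display()),
            Err(err) => eprintln!("walk error: {err}"),
        }
    }
}
```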
View File

@@ -0,0 +1,3 @@
pattern = 1
pttn = 1
pttn<CURSOR: pttn>

View File

@@ -0,0 +1,2 @@
[settings]
auto-import = true

View File

@@ -0,0 +1,11 @@
def foo(x):
# We specifically want the local `x` to be
# suggested first here, and NOT an `x` or an
# `X` from some other module (via auto-import).
# We'd also like this to come before `except`,
# which is a keyword that contains `x`, but is
# not an exact match (whereas `x` is). `except`
# also isn't legal in this context, although
# that sort of context sensitivity is a bit
# trickier.
return x<CURSOR: x>

View File

@@ -0,0 +1,5 @@
[project]
name = "test"
version = "0.1.0"
requires-python = ">=3.13"
dependencies = []

View File

@@ -0,0 +1,8 @@
version = 1
revision = 3
requires-python = ">=3.13"
[[package]]
name = "test"
version = "0.1.0"
source = { virtual = "." }

View File

@@ -0,0 +1,2 @@
[settings]
auto-import = true

Some files were not shown because too many files have changed in this diff.