Compare commits

...

151 Commits

Author SHA1 Message Date
Alex Waygood
68bdf9fa88 [ty] Make special cases for subscript inference exhaustive 2026-01-14 11:47:19 +00:00
Charlie Marsh
b5814b91c1 [ty] Add diagnostics to validate TypeIs and TypeGuard definitions (#22300)
## Summary

Closes https://github.com/astral-sh/ty/issues/2267.
2026-01-13 20:24:05 -05:00
Charlie Marsh
ea46426157 [ty] Apply narrowing to walrus targets (#22369)
## Summary

Closes https://github.com/astral-sh/ty/issues/2300.
2026-01-14 00:56:47 +00:00
Alex Waygood
ddd2fc7a90 [ty] Use "typeguard constraints" for two kinds of tuple narrowing (#22348)
## Summary

Since we've already filtered the union in these locations, it seems like
needless overhead to then intersect the previous union with the filtered
union. We know what that intersection will simplify to: it will simplify
to the filtered union. So rather than using a regular intersection-based
constraint, we can use a "typeguard constraint", which will just
directly replace the previous type with the new type instead of creating
an intersection.

## Test Plan

- Existing tests all pass
- The primer report should be clean
2026-01-13 23:37:09 +00:00
Will Duke
4ebf10cf1b [ty] Add a conformance script to compare ty diagnostics with expected errors (#22231) 2026-01-13 22:19:41 +00:00
Charlie Marsh
9a676bbeb7 [ty] Add diagnostic to catch generic enums (#22482)
## Summary

Closes https://github.com/astral-sh/ty/issues/2416.

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2026-01-13 16:55:46 -05:00
Bhuminjay Soni
d9028a098b [isort] Insert imports in alphabetical order (I002) (#22493)

## Summary

This PR fixes #20811. The current approach reverses the order in the `BTreeSet`;
however, as pointed out in
https://github.com/astral-sh/ruff/issues/20811#issuecomment-3398958832,
we cannot use `IndexSet` to preserve the configured order, since `Settings`
derives `CacheKey`, which isn't implemented for `IndexSet`. Another approach
to preserve the original order would be to use a `Vec`, but lookup time
complexity might suffer as a result.

I have tested it locally and it's working as expected. (Screenshot:
https://github.com/user-attachments/assets/7d97b488-1552-4a42-9d90-92acf55ec493)

---------

Signed-off-by: Bhuminjay <bhuminjaysoni@gmail.com>
2026-01-13 21:21:18 +00:00
Alex Waygood
56077ee9a9 [ty] Fix @Todo type for starred expressions (#22503) 2026-01-13 21:09:29 +00:00
Alex Waygood
20c01d2553 [ty] Use the top materialization of classes for if type(x) is y narrowing (#22553) 2026-01-13 20:53:52 +00:00
Harutaka Kawamura
c98ea1bc24 [flake8-pytest-style] Add check parameter example to PT017 docs (#22546)
## Summary

- Adds an alternative example to the PT017 (`pytest-assert-in-except`)
rule documentation showing pytest's `check` parameter for validating
exceptions, available since pytest 8.4.0

Closes #22529

## Test plan

Documentation-only change. Verified with `uvx prek run -a`.

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 15:45:53 -05:00
GiGaGon
9a2990b2a1 [ruff] Make example error out-of-the-box (RUF103) (#22558)

## Summary

Part of #18972

This PR makes [invalid-suppression-comment
(RUF103)](https://docs.astral.sh/ruff/rules/invalid-suppression-comment/#invalid-suppression-comment-ruf103)'s
example error out-of-the-box.

[Old example](https://play.ruff.rs/3ff757f3-04ae-4d27-986d-49972338fa24)
```py
ruff: disable # missing codes
```

[New example](https://play.ruff.rs/4a9970c4-3b33-4533-8ffa-f15d481b1e6f)
```py
# ruff: disable # missing codes
```

## Test Plan

N/A, no functionality/tests affected
2026-01-13 15:06:54 -05:00
Carl Meyer
a697050a83 [ty] Fix stack overflow with recursive type aliases containing tuple … (#22543)
This fixes issue #2470 where recursive type aliases like `type
RecursiveT = int | tuple[RecursiveT, ...]` caused a stack overflow when
used in return type checking with constructors like `list()`.

The fix moves all type mapping processing for `UniqueSpecialization`
(and other non-EagerExpansion mappings) inside the `visitor.visit()`
closure. This ensures that if we encounter the same TypeAlias
recursively during type mapping, the cycle detector will properly detect
it and return the fallback value instead of recursing infinitely.

The key insight is that the previous code called
`apply_function_specialization` followed by another
`apply_type_mapping_impl` AFTER the visitor closure returned. At that
point, the TypeAlias was no longer in the visitor's `seen` set, so
recursive references would not be detected as cycles.
2026-01-13 11:25:01 -08:00
Dex Devlon
2f64ef9c72 [ty] Include type parameters in generic callable display (#22435) 2026-01-13 17:29:08 +00:00
RasmusNygren
fde7d72fbb [ty] Add diagnostics for __init_subclass__ argument mismatch (#22185) 2026-01-13 16:15:51 +00:00
drbh
d13b5db066 [ty] narrow the right-hand side of ==, !=, is and is not conditions when the left-hand side is not narrowable (#22511)
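A minimal illustration of the new behavior (illustrative only; exact revealed types may differ):

```py
def f(x: int | None) -> None:
    # The literal on the left-hand side cannot be narrowed, so the check
    # narrows `x` on the right-hand side instead.
    if None is x:
        reveal_type(x)  # None
```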
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2026-01-13 16:01:54 +00:00
Alex Waygood
c7b41060f4 [ty] Improve disambiguation of types (#22547) 2026-01-13 14:56:56 +00:00
Charlie Marsh
3878701265 [ty] Support own instance members for type(...) classes (#22480)
## Summary

Addresses
https://github.com/astral-sh/ruff/pull/22291#discussion_r2674467950.
2026-01-13 09:36:03 -05:00
Enric Calabuig
6e89e0abff [ty] Fix classmethod + contextmanager + Self (#22407)

## Summary

The test I've added illustrates the fix. Copying it here too:

```python
from contextlib import contextmanager
from typing import Iterator
from typing_extensions import Self

class Base:
    @classmethod
    @contextmanager
    def create(cls) -> Iterator[Self]:
        yield cls()

class Child(Base): ...

with Base.create() as base:
    reveal_type(base)  # revealed: Base (after the fix, None before)

with Child.create() as child:
    reveal_type(child)  # revealed: Child (after the fix, None before)
```

Full disclosure: I've used LLMs for this PR, but I thoroughly reviewed the
result before submitting. I'm excited about my first Rust contribution to
Astral tools and will address feedback quickly.

Related to https://github.com/astral-sh/ty/issues/2030: I am working on
a fix for the TypeVar case also reported in that issue (by me).

## Test Plan

Updated mdtests

---------

Co-authored-by: Douglas Creager <dcreager@dcreager.net>
2026-01-13 09:29:58 -05:00
Manuel Jacob
cb31883c5f Correct comment about public functions starting with an underscore. (#22550) 2026-01-13 14:04:19 +00:00
Charlie Marsh
6d8f2864c3 [ty] Rename MRO structs to match static nomenclature (#22549)
## Summary

I didn't want to make the "dynamic" `type(...)` PR any larger, but it
probably makes sense to rename these now that we have `Dynamic`
variants.
2026-01-13 08:53:49 -05:00
Dhruv Manilawala
990d0a8999 [ty] Improve log guidance message for Zed (#22530)
## Summary

closes: https://github.com/astral-sh/ty/issues/1717

## Test Plan


https://github.com/user-attachments/assets/5d8d6d03-3451-4403-a6cd-2deba9e796fc
2026-01-13 08:13:11 +00:00
Carl Meyer
99beabdde8 [ty] Fix false positive for bounded type parameters with NewType (#22542)
Fixes https://github.com/astral-sh/ty/issues/2467

When calling a method on an instance of a generic class with bounded
type parameters (e.g., `C[T: K]` where `K` is a NewType), ty was
incorrectly reporting: "Argument type `C[K]` does not satisfy upper
bound `C[T@C]` of type variable `Self`"

The issue was introduced by PR #22105, which moved the catch-all case
for NewType assignments that falls back to the concrete base type. This
case was moved before the TypeVar handling cases, so when checking `K <:
T@C` (where K is a NewType and T@C is a TypeVar with upper bound K):

1. The NewType fallback matched first
2. It delegated to `int` (K's concrete base type)
3. Then checked `int <: T@C`, which checks if `int` satisfies bound `K`
4. But `int` is not assignable to `K` (NewTypes are distinct from their
bases)

The fix moves the NewType fallback case after the TypeVar cases, so
TypeVar handling takes precedence. Now when checking `K <: T@C`, we use
the TypeVar case at line 828 which returns `false` for non-inferable
typevars - but this is correct because the *other* direction (`T@C <:
K`) passes, and for the overall specialization comparison both
directions are checked.
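A minimal reconstruction of the reported scenario (names are illustrative; the diagnostic text is quoted from the description above):

```py
from typing import Generic, NewType, TypeVar

K = NewType("K", int)
T = TypeVar("T", bound=K)

class C(Generic[T]):
    def method(self) -> None: ...

c = C[K]()
# Previously flagged: "Argument type `C[K]` does not satisfy upper bound `C[T@C]`
# of type variable `Self`"; accepted after this fix.
c.method()
```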
2026-01-12 17:23:31 -08:00
Ibraheem Ahmed
3ae4db3ccd [ty] Support assignment to unions of TypedDicts (#22294)
## Summary

Resolves https://github.com/astral-sh/ty/issues/2265.
2026-01-12 16:10:58 -05:00
Ibraheem Ahmed
8ac5f9d8bc [ty] Use key and value parameter types as type context for __setitem__ dunder calls (#22148)
## Summary

Resolves https://github.com/astral-sh/ty/issues/2136.
2026-01-12 16:05:05 -05:00
Charlie Marsh
4abc5fe2f1 [ty] Add support for dynamic type() classes (#22291)
## Summary

This PR adds support for dynamic classes created via `type()`. The core
of the change is that `ClassLiteral` is now an enum:

```rust
pub enum ClassLiteral<'db> {
    /// A class defined via a `class` statement.
    Stmt(StmtClassLiteral<'db>),
    /// A class created via the functional form `type(name, bases, dict)`.
    Functional(FunctionalClassLiteral<'db>),
}
```

And, in turn, various methods on `ClassLiteral` like `body_scope` now
return `Option` or similar (and callers must adjust to that change in
signature).
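For reference, the "functional form" that the new `Functional` variant models looks like this (illustrative; the exact inferred types aren't spelled out here):

```py
# Functional (dynamic) class creation via `type(name, bases, dict)`.
Foo = type("Foo", (), {"x": 1})

reveal_type(Foo)  # now a class literal for `Foo`, rather than just `type`
```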

Over time, we can expand the enum to include functional namedtuples,
etc. (I already have this working in a separate branch, and I believe it
slots in well.)

(I'd love help with the names -- I think `StmtClassLiteral` is kind of
lame. Maybe `DeclarativeClassLiteral`?)

Closes https://github.com/astral-sh/ty/issues/740.

---------

Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2026-01-12 15:20:42 -05:00
renovate[bot]
78ef241200 Update actions/checkout digest to 0c366fd (#22513)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2026-01-12 18:36:50 +01:00
Charlie Marsh
e4ba29392b [ty] Fix __file__ type in completions to show str instead of str | None (#22510)
## Summary

The type inference system already correctly special-cases `__file__` to
return `str` for the current module (since the code is executing from an
existing file). However, the completion system was bypassing this logic
and pulling `__file__: str | None` directly from `types.ModuleType` in
typeshed.

This PR adds implicit module globals (like `__file__`, `__name__`, etc.)
with their correctly-typed values to completions, reusing the existing
`module_type_implicit_global_symbol` function that already handles the
special-casing.
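An illustrative example of the special-casing being reused (within any module ty is checking):

```py
# typeshed declares `__file__: str | None` on `types.ModuleType`, but for the
# current module ty knows the file exists, so both inference and completions
# now report `str`.
reveal_type(__file__)  # str
```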

Closes https://github.com/astral-sh/ty/issues/2445.

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2026-01-12 14:20:32 +00:00
Charlie Marsh
29064034ba Use rustfmt directly in prek (#22508)
## Summary

Apparently this is about ~18x faster (2.25s → 0.12s) for a single-file
change, and ~2x faster (2.36s → 1.25s) for `prek run -a`.
2026-01-12 08:59:20 -05:00
Charlie Marsh
f1db842821 [ty] Avoid panic for comparison on synthesized variants (#22509)
## Summary

Like `ProtocolInstance`, we now use `left.cmp(right)` by deriving
`PartialOrd` and `Ord`. IIUC, this uses Salsa ID for Salsa-interned
types, but avoids `None.cmp(None)` for synthesized variants.

Closes https://github.com/astral-sh/ty/issues/2451.
2026-01-12 13:35:56 +00:00
Alex Waygood
5a3deee353 [ty] Fix incorrect narrowing for if type(x) == y (#22531) 2026-01-12 12:26:47 +00:00
Dex Devlon
e15f88ff21 [ty] Fix contravariant type variable bound checking in specialization inference (#22488)
## Summary

Correctly handle upper bounds for contravariant type variables during
specialization inference. Previously, the type checker incorrectly
applied covariant subtyping rules, requiring the actual type to directly
satisfy the bound rather than checking for a valid intersection.

In contravariant positions, subtyping relationships are inverted. The
bug caused valid code like `f(x: Contra[str])` where `f` expects
`Contra[T: int]` to be incorrectly rejected, when it should solve `T` to
`Never` (the intersection of `int` and `str`).
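A hedged sketch of the case described above (names and exact signatures are illustrative):

```py
from typing import Generic, TypeVar

In = TypeVar("In", contravariant=True)

class Contra(Generic[In]): ...

T = TypeVar("T", bound=int)

def f(x: Contra[T]) -> None: ...

def g(c: Contra[str]) -> None:
    # `Contra` is contravariant, so solving `T` requires `T <: str` while also
    # satisfying the bound `int`; `Never` works, so the call should be accepted.
    f(c)
```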

Closes https://github.com/astral-sh/ty/issues/2427

## Details

- Added `is_contravariant()` helper to `TypeVarVariance` in
`variance.rs`
- Updated `SpecializationBuilder::infer_map_impl` in `generics.rs` to
treat bounds and constraints differently based on variance:
  * Skip immediate `ty <: bound` check for contravariant upper bounds
* Flip constraint check to `constraint <: ty` for contravariant
positions
- Added test case for bounded contravariant type variables in
`variance.md`
- All 308 mdtest cases pass & 150 ty_python_semantic unit tests pass

---------

Co-authored-by: Douglas Creager <dcreager@dcreager.net>
2026-01-12 05:00:35 -05:00
renovate[bot]
a559275c3e Update prek dependencies (#22526)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2026-01-12 08:11:01 +00:00
renovate[bot]
95199c4217 Update astral-sh/setup-uv action to v7.2.0 (#22523)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [astral-sh/setup-uv](https://redirect.github.com/astral-sh/setup-uv) | action | minor | `v7.1.6` → `v7.2.0` |

---

### Release Notes

<details>
<summary>astral-sh/setup-uv (astral-sh/setup-uv)</summary>

###
[`v7.2.0`](https://redirect.github.com/astral-sh/setup-uv/releases/tag/v7.2.0):
🌈 add outputs python-version and python-cache-hit

[Compare
Source](https://redirect.github.com/astral-sh/setup-uv/compare/v7.1.6...v7.2.0)

##### Changes

Alongside some minor typo fixes and quality-of-life features for developers
of actions, the main features of this release are the new outputs:

- **python-version:** The Python version that was set (same content as
existing `UV_PYTHON`)
- **python-cache-hit:** A boolean value to indicate the Python cache
entry was found

While implementing this, it became clear that it is easier to handle the
Python binaries in a separate cache entry. The added benefit for users
is that the "normal" cache containing the dependencies can be used in
all runs, no matter whether they cache the Python binaries or not.

> [!NOTE]
> This release will invalidate caches that contain the Python binaries.
> This happens a single time.

##### 🐛 Bug fixes

- chore: remove stray space from UV_PYTHON_INSTALL_DIR message
[@akx](https://redirect.github.com/akx)
([#720](https://redirect.github.com/astral-sh/setup-uv/issues/720))

##### 🚀 Enhancements

- add outputs python-version and python-cache-hit
[@eifinger](https://redirect.github.com/eifinger)
([#728](https://redirect.github.com/astral-sh/setup-uv/issues/728))
- Add action typings with validation
[@krzema12](https://redirect.github.com/krzema12)
([#721](https://redirect.github.com/astral-sh/setup-uv/issues/721))

##### 🧰 Maintenance

- fix: use uv_build backend for old-python-constraint-project
[@eifinger](https://redirect.github.com/eifinger)
([#729](https://redirect.github.com/astral-sh/setup-uv/issues/729))
- chore: update known checksums for 0.9.22
[@github-actions\[bot\]](https://redirect.github.com/apps/github-actions)
([#727](https://redirect.github.com/astral-sh/setup-uv/issues/727))
- chore: update known checksums for 0.9.21
[@github-actions\[bot\]](https://redirect.github.com/apps/github-actions)
([#726](https://redirect.github.com/astral-sh/setup-uv/issues/726))
- chore: update known checksums for 0.9.20
[@github-actions\[bot\]](https://redirect.github.com/apps/github-actions)
([#725](https://redirect.github.com/astral-sh/setup-uv/issues/725))
- chore: update known checksums for 0.9.18
[@github-actions\[bot\]](https://redirect.github.com/apps/github-actions)
([#718](https://redirect.github.com/astral-sh/setup-uv/issues/718))

##### ⬆️ Dependency updates

- Bump peter-evans/create-pull-request from 7.0.9 to 8.0.0
[@dependabot\[bot\]](https://redirect.github.com/apps/dependabot)
([#719](https://redirect.github.com/astral-sh/setup-uv/issues/719))
- Bump github/codeql-action from 4.31.6 to 4.31.9
[@dependabot\[bot\]](https://redirect.github.com/apps/dependabot)
([#723](https://redirect.github.com/astral-sh/setup-uv/issues/723))

</details>


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 08:07:15 +00:00
renovate[bot]
f92a818fdd Update Rust crate insta to v1.46.0 (#22527)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 07:56:12 +00:00
renovate[bot]
39ec97df79 Update taiki-e/install-action action to v2.65.13 (#22522)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 08:54:28 +01:00
renovate[bot]
a21988d820 Update NPM Development dependencies (#22525)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 08:53:56 +01:00
renovate[bot]
eab41d5a4c Update dependency ruff to v0.14.11 (#22517)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 08:52:14 +01:00
renovate[bot]
52f4a529f7 Update CodSpeedHQ/action action to v4.5.2 (#22516)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 08:49:54 +01:00
renovate[bot]
8fd142f4ef Update Rust crate libc to v0.2.179 (#22520)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 08:48:43 +01:00
renovate[bot]
5dca6d22df Update Rust crate clap to v4.5.54 (#22518)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 08:48:30 +01:00
renovate[bot]
4c5846c6fe Update Rust crate imperative to v1.0.7 (#22519)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 08:48:06 +01:00
renovate[bot]
0289d1b163 Update Rust crate syn to v2.0.113 (#22521)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-12 08:47:25 +01:00
Charlie Marsh
09ff3e7056 Set priority for prek to enable concurrent execution (#22506)
## Summary

prek allows you to set priorities, and can run tasks of the same
priority concurrently (e.g., we can run Ruff's Python formatting and
`cargo fmt` at the same time). On my machine, this takes `uvx prek run
-a` from 19.4s to 5.0s (roughly a 4x speed-up).
2026-01-11 19:21:38 +00:00
Charlie Marsh
7bacca9b62 Use prek in documentation and CI (#22505)
## Summary

AFAIK, many of us on the team are using prek now. It seems appropriate
to modify the docs to formalize it.
2026-01-11 14:17:10 -05:00
Charlie Marsh
2c68057c4b [ty] Preserve argument signature in @total_ordering (#22496)
## Summary

Closes https://github.com/astral-sh/ty/issues/2435.
2026-01-10 14:35:58 -05:00
Charlie Marsh
8e29be9c1c Add let chains preference to development guidelines (#22497) 2026-01-10 12:29:07 -05:00
Dylan
880513a013 Respect fmt: skip for multiple statements on same logical line (#22119)
This PR adjusts the logic for skipping formatting so that a `fmt: skip`
can affect multiple statements if they lie on the same line.

Specifically, a `fmt: skip` comment will now suppress all the statements
in the suite in which it appears whose range intersects the line
containing the skip directive. For example:

```python
x=[
'1'
];x=2 # fmt: skip
```

remains unchanged after formatting.

(Note that compound statements are somewhat special and were handled in
a previous PR - see #20633).


Closes #17331 and #11430.

Simplest to review commit by commit - the key diffs of interest are the
commit introducing the core logic, and the diff between the snapshots
introduced in the last commit (compared to the second commit).

# Implementation

On `main` we format a suite of statements by iterating through them. If
we meet a statement with a leading or trailing (own-line)`fmt: off`
comment, then we suppress formatting until we meet a `fmt: on` comment.
Otherwise we format the statement using its own formatting rule.

How are `fmt: skip` comments handled then? They are handled internally
to the formatting of each statement. Specifically, calling `.fmt` on a
statement node will first check to see if there is a trailing,
end-of-line `fmt: skip` (or `fmt: off`/`yapf: off`), and if so then
write the node with suppressed formatting.

In this PR we move the responsibility for handling `fmt: skip` into the
formatting logic of the suite itself. This is done as follows:

- Before beginning to format the suite, we do a pass through the
statements and collect the data of ranges with skipped formatting. More
specifically, we create a map with key given by the _first_ skipped
statement in a block and value a pair consisting of the _last_ skipped
statement and the _range_ to write verbatim.
- We iterate as before, but if we meet a statement that is a key in the
map constructed above, we pause to write the associated range verbatim.
We then advance the iterator to the last statement in the block and
proceed as before.

## Addendum on range formatting

We also had to make some changes to range formatting in order to support
this new behavior. For example, we want to make sure that

```python
<RANGE_START>x=1<RANGE_END>;x=2 # fmt: skip
```

formats verbatim, rather than becoming 

```python
x = 1;x=2 # fmt: skip
```

Recall that range formatting proceeds in two steps:
1. Find the smallest enclosing node containing the range AND that has
enough info to format the range (so it may be larger than you think,
e.g. a docstring has enclosing node given by the suite, not the string
itself.)
2. Carve out the formatted range from the result of formatting that
enclosing node.

We had to modify (1), since the suite knows how to format skipped nodes,
but nodes may not "know" they are skipped. To do this we altered the
`visit_body` method of the `FindEnclosingNode` visitor: now we iterate
through the statements and check for skipped ranges intersecting the
format range. If we find them, we return without descending. The result
is to consider the statement containing the suite as the enclosing node
in this case.
2026-01-10 14:56:58 +00:00
Charlie Marsh
046c5a46d8 [ty] Support dataclass_transform as a function call (#22378)
## Summary

Instead of just as a decorator.
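Presumably this covers usage like the following (a hedged sketch; the decorated function is illustrative):

```py
from typing import dataclass_transform

def create_model(cls: type) -> type:
    return cls

# Applied as a plain call rather than with `@dataclass_transform()` decorator syntax.
create_model = dataclass_transform()(create_model)
```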

Closes https://github.com/astral-sh/ty/issues/2319.
2026-01-10 08:45:45 -05:00
Micha Reiser
cfed34334c Update mypy primer pin (#22490) 2026-01-10 14:00:00 +01:00
Charlie Marsh
11cc324449 [ty] Detect invalid @total_ordering applications in non-decorator contexts (#22486)
## Summary

E.g., `ValidOrderedClass = total_ordering(HasOrderingMethod)`.
2026-01-09 19:37:58 -05:00
Charlie Marsh
c88e1a0663 [ty] Avoid emitting Liskov repeated violations from grandparent to child (#22484)
## Summary

If parent violates LSP against grandparent, and child has the same
violation (but matches parent), we no longer flag the LSP violation on
child, since it can't be fixed without violating parent.

If parent violates LSP against grandparent, and child violates LSP
against both parent and grandparent, we emit two diagnostics (one for
each violation).

If parent violates LSP against grandparent, and child violates LSP
against parent (but not grandparent), we flag it.
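A sketch of the first case above (signatures are illustrative):

```py
class Grandparent:
    def method(self, x: int) -> None: ...

class Parent(Grandparent):
    def method(self) -> None: ...  # violates Liskov against Grandparent: flagged

class Child(Parent):
    def method(self) -> None: ...  # same violation, but matches Parent: no longer flagged
```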

Closes https://github.com/astral-sh/ty/issues/2000.
2026-01-09 19:27:57 -05:00
William Woodruff
2c7ac17b1e Remove two old zizmor ignore comments (#22485) 2026-01-09 18:02:44 -05:00
Matthias Schoettle
a0f2cd0ded Add language: golang to actionlint pre-commit hook (#22483) 2026-01-09 19:30:00 +00:00
Alex Waygood
dc61104726 [ty] Derive Default in a few more places in place.rs (#22481) 2026-01-09 18:37:36 +00:00
Jason K Hall
f40c578ffb [ruff] document RUF100 trailing comment fix behavior (#22479)
## Summary
Ruff's `--fix` for `RUF100` can inadvertently remove trailing comments
(e.g., `pylint` or `mypy` suppressions) by interpreting them as
descriptions. This PR adds a "Conflict with other linters" section to
the rule documentation to clarify this behavior and provide the
double-hash (`# noqa # pylint`) workaround.
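A hedged sketch of the workaround (rule codes are illustrative): the extra `#` keeps the trailing suppression from being read as a `noqa` description.

```py
import os  # noqa: F401  # pylint: disable=unused-import
```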

## Fixes
Fixes #20762
2026-01-09 09:01:09 -08:00
Leo Hayashi
baaf6966f6 [ruff] Split WASM build and publish into separate workflows (#12387) (#22476)
Co-authored-by: Hayashi Reo <reo@wantedly.com>
2026-01-09 17:41:13 +01:00
Zanie Blue
a6df4a3be7 Add llms.txt support for documentation (#22463)
Matching uv.

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-09 09:56:41 -06:00
Micha Reiser
1094009790 Fix configuration path in --show-settings (#22478) 2026-01-09 16:28:57 +01:00
Andrew Gallant
10eb3d52d5 [ty] Rank top-level module symbols above most other symbols
This makes it so, e.g., `os<CURSOR>` will suggest the top-level stdlib
`os` module even if there is an `os` symbol elsewhere in your project.

The way this is done is somewhat overwrought, but it's done to avoid
suggesting top-level modules over other symbols already in scope.

Fixes astral-sh/issues#1852
2026-01-09 10:11:37 -05:00
Andrew Gallant
91d24ebb92 [ty] Give exact completion matches very high ranking
Aside from ruling out "impossible" completions in the current context,
we give completions with an exact match the highest possible ranking.
2026-01-09 09:55:32 -05:00
Andrew Gallant
e68fba20a9 [ty] Remove type-var-typing-over-ast evaluation task
This is now strictly redundant with the `typing-gets-priority` task.
2026-01-09 09:55:32 -05:00
Andrew Gallant
64117c1146 [ty] Improve completion ranking based on origin of symbols
Part of this was already done, but it was half-assed. We now look at the
search path that a symbol came from and centralize a symbol's origin
classification.

The preference ordering here is maybe not the right one, but we can
iterate as users give us feedback. Note also that the preference
ordering based on the origin is pretty low in the relevance sorting.
This means that other more specific criteria will and can override this.

This results in some nice improvements to our evaluation tasks.
2026-01-09 09:55:32 -05:00
Andrew Gallant
2196ef3a33 [ty] Add package priority completion evaluation tasks
This commit adds two new tests. One checks that a symbol in the current
project gets priority over a symbol in the standard library. Another
checks that a symbol in a third party dependency gets priority over a
symbol in the standard library. We don't get either of these right
today.

Note that these comparisons are done ceteris paribus. A symbol from the
standard library could still be ranked above a symbol elsewhere.
(Although I believe currently this is somewhat rare.)
2026-01-09 09:55:32 -05:00
Andrew Gallant
c36397031b [ty] Actually ignore hidden directories in completion evaluation
I apparently don't know how to use my own API. Previously,
we would skip, e.g., `.venv`, but still descend into it.

This was annoying in practice because I sometimes have an
environment in one of the truth task directories. The eval
command should ignore that entirely, but it ended up
choking on it without properly ignoring hidden files
and directories.
2026-01-09 09:55:32 -05:00
Andrew Gallant
eef34958f9 [ty] More sensible sorting for results returned by all_symbols
Previously, we would only sort by name and file path (unless the symbol
refers to an entire module). This led to somewhat confusing ordering,
since the file path is absolute and can be anything.

Sorting by module name after symbol name gives a more predictable
ordering in common practice.

This is mostly just an improvement for debugging purposes, i.e., when
looking at the output of `all_symbols` directly. This mostly shouldn't
impact completion ordering since completions do their own ranking.
2026-01-09 09:55:32 -05:00
Andrew Gallant
f1bd5f1941 [ty] Rank unimported symbols from typing higher
This addresses a number of minor annoyances where users really want
imports from `typing` most of the time.

For example:
https://github.com/astral-sh/ty/issues/1274#issuecomment-3345884227
2026-01-09 09:55:32 -05:00
Andrew Gallant
56862f8241 [ty] Refactor ranking
This moves the information we want to use to rank completions into
`Completion` itself. (This was the primary motivation for creating a
`CompletionBuilder` in the first place.)

The principal advantage here is that we now only need to compute the
relevance information for each completion exactly once. Previously, we
were computing it on every comparison, which might end up doing
redundant work for a not insignificant number of completions.

The relevance information is also specifically constructed from the
builder so that, in the future, we might choose to short-circuit
construction if we know we'll never send it back to the client (e.g.,
its ranking is worse than the lowest ranked completion). But we don't
implement that optimization yet.
2026-01-09 09:55:32 -05:00
Andrew Gallant
e9cc2f6f42 [ty] Refactor completion construction to use a builder
I want to be able to attach extra data to each `Completion`, but not
burden callers with the need to construct it. This commit helps get us
to that point by requiring callers to use a `CompletionBuilder` for
construction instead of a `Completion` itself.

I think this will also help in the future if it proves to be the case
that we can improve performance by delaying work until we actually build
a `Completion`, which might only happen if we know we won't throw it
out. But we aren't quite there yet.

This also lets us tighten things up a little bit and makes completion
construction less noisy. The downside is that callers no longer need to
consider "every" completion field.

There should not be any behavior changes here.
2026-01-09 09:55:32 -05:00
Andrew Gallant
8dd56d4264 [ty] Rename internal completion Rank type to Relevance
I think "relevance" captures the meaning of this type more precisely.
2026-01-09 09:55:32 -05:00
Andrew Gallant
d4c1b0ccc7 [ty] Add evaluation tasks for a few cases
I went through https://github.com/astral-sh/ty/issues/1274 and tried to
extract what I could into eval tasks. Some of the suggestions from that
issue have already been done, but most haven't.

This captures the status quo.
2026-01-09 09:55:32 -05:00
Micha Reiser
e61657ff3c [ty] Enable unused-type-ignore-comment by default (#22474) 2026-01-09 10:58:43 +00:00
Micha Reiser
ba5dd5837c [ty] Pass slice to specialize (#22421) 2026-01-09 09:45:39 +01:00
Micha Reiser
c5f6a74da5 [ty] Fix goto definition for relative imports in third-party files (#22457) 2026-01-09 09:30:31 +01:00
Micha Reiser
b3cde98cd1 [ty] Update salsa (#22473) 2026-01-09 09:21:45 +01:00
Carl Meyer
f9f7a6901b Complete minor TODO in ty_python_semantic crate (#22468)
This TODO is very old -- we have long since recorded this definition.
Updating the test to actually assert the declaration requires a new
helper method for declarations, to complement the existing
`first_public_binding` helper.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-08 16:04:34 -08:00
Dylan
c920cf8cdb Bump 0.14.11 (#22462) 2026-01-08 12:51:47 -06:00
Micha Reiser
bb757b5a79 [ty] Don't show diagnostics for excluded files (#22455) 2026-01-08 18:27:28 +01:00
Charlie Marsh
1f49e8ef51 Include configured src directories when resolving graphs (#22451)
## Summary

This PR augments the detected source paths with the user-configured
`src` when computing roots for `ruff analyze graph`.
2026-01-08 15:19:15 +00:00
Douglas Creager
701f5134ab [ty] Only consider fully static pivots when deriving transitive constraints (#22444)
When working with constraint sets, we track transitive relationships
between the constraints in the set. For instance, in `S ≤ int ∧ int ≤
T`, we can infer that `S ≤ T`. However, we should only consider fully
static types when looking for a "pivot" for this kind of transitive
relationship. The same pattern does not hold for `S ≤ Any ∧ Any ≤ T`;
because the two `Any`s can materialize to different types, we cannot
infer that `S ≤ T`.

Fixes https://github.com/astral-sh/ty/issues/2371
2026-01-08 09:31:55 -05:00
Micha Reiser
eea9ad8352 Pin maturin version (#22454) 2026-01-08 12:39:51 +01:00
Alex Waygood
eeac2bd3ee [ty] Optimize union building for unions with many enum-literal members (#22363) 2026-01-08 10:50:04 +00:00
Jason K Hall
7319c37f4e docs: fix jupyter notebook discovery info for editors (#22447)
Resolves #21892

## Summary

This PR updates `docs/editors/features.md` to clarify that Jupyter
Notebooks are now included by default as of version 0.6.0.
2026-01-08 11:52:01 +05:30
Amethyst Reese
805503c19a [ruff] Improve fix title for RUF102 invalid rule code (#22100)
## Summary

Updates the fix title for RUF102 to either specify which rule code to
remove, or clarify
that the entire suppression comment should be removed.

## Test Plan

Updated test snapshots.
2026-01-07 17:23:18 -08:00
Charlie Marsh
68a2f6c57d [ty] Fix super() with TypeVar-annotated self and cls parameter (#22208)
## Summary

This PR fixes `super()` handling when the first parameter (`self` or
`cls`) is annotated with a TypeVar, like `Self`.

Previously, `super()` would incorrectly resolve TypeVars to their bounds
before creating the `BoundSuperType`. So if you had `self: Self` where
`Self` is bounded by `Parent`, we'd process `Parent` as a
`NominalInstance` and end up with `SuperOwnerKind::Instance(Parent)`.

As a result:

```python
class Parent:
    @classmethod
    def create(cls) -> Self:
        return cls()

class Child(Parent):
    @classmethod
    def create(cls) -> Self:
        return super().create()  # Error: Argument type `Self@create` does not satisfy upper bound `Parent`
```

We now track two additional variants on `SuperOwnerKind` for TypeVar
owners:

- `InstanceTypeVar`: for instance methods where self is a TypeVar (e.g.,
`self: Self`).
- `ClassTypeVar`: for classmethods where `cls` is a `TypeVar` wrapped in
`type[...]` (e.g., `cls: type[Self]`).

Closes https://github.com/astral-sh/ty/issues/2122.

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2026-01-07 19:56:09 -05:00
Alex Waygood
abaa735e1d [ty] Improve UnionBuilder performance by changing Type::is_subtype_of calls to Type::is_redundant_with (#22337) 2026-01-07 22:17:44 +00:00
Jelle Zijlstra
c02d164357 Check required-version before parsing rules (#22410)
Co-authored-by: Micha Reiser <micha@reiser.io>
2026-01-07 17:29:27 +00:00
Andrew Gallant
88aa3f82f0 [ty] Fix generally poor ranking in playground completions
We enabled [`CompletionList::isIncomplete`] in our LSP server a while back
in order to have more of a say in how we rank and filter completions.
When it isn't set, the client tends to ask for completions less
frequently and will instead do its own filtering.

But... we did not enable it for the playground. Which I guess didn't
result in anything noticeably bad until we started limiting completions
to 1,000 suggestions. This meant that if the _initial_ completion
response didn't include the ultimate desired answer, then it would never
show up in the results until the client requested completions again.
This in turn led to some very poor completions in some cases.

This all gets fixed by simply enabling `isIncomplete` for Monaco.

Fixes astral-sh/ty#2340

[`CompletionList::isIncomplete`]: https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#completionList
2026-01-07 12:25:26 -05:00
Carl Meyer
30902497db [ty] Make signature return and parameter types non-optional (#22425)
## Summary

Fixes https://github.com/astral-sh/ty/issues/2363
Fixes https://github.com/astral-sh/ty/issues/2013

And several other bugs with the same root cause. And makes any similar
bugs impossible by construction.

Previously we distinguished "no annotation" (Rust `None`) from
"explicitly annotated with something of type `Unknown`" (which is not an
error, and results in the annotation being of Rust type
`Some(Type::DynamicType(Unknown))`), even though semantically these
should be treated the same.

This was a bit of a bug magnet, because it was easy to forget to make
this `None` -> `Unknown` translation everywhere we needed to. And in
fact we did fail to do it in the case of materializing a callable,
leading to a top-materialized callable still having (rust) `None` return
type, which should have instead materialized to `object`.

This also fixes several other bugs related to not handling un-annotated
return types correctly:
1. We previously considered the return type of an unannotated `async
def` to be `Unknown`, where it should be `CoroutineType[Any, Any,
Unknown]`.
2. We previously failed to infer a ParamSpec if the return type of the
callable we are inferring against was not annotated.
3. We previously wrongly returned `Unknown` from `some_dict.get("key",
None)` if the value type of `some_dict` included a callable type with
un-annotated return type.

We now make signature return types and annotated parameter types
required, and we eagerly insert `Unknown` if there's no annotation. Most
of the diff is just a bunch of mechanical code changes where we
construct these types, and simplifications where we use them.
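Point 1 above, illustrated (a hedged sketch; the exact revealed form may differ):

```py
async def f(): ...

# With no return annotation, the call is now typed as a coroutine whose return
# type is `Unknown`, rather than the whole call being `Unknown`.
reveal_type(f())  # roughly: CoroutineType[Any, Any, Unknown]
```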

One exception is type display: when a callable type has un-annotated
parameters, we want to display them as un-annotated, but if it has a
parameter explicitly annotated with something of `Unknown` type, we want
to display that parameter as `x: Unknown` (it would be confusing if it
looked like your annotation just disappeared entirely).

Fortunately, we already have a mechanism in place for handling this: the
`inferred_annotation` flag, which suppresses display of an annotation.
Previously we used it only for `self` and `cls` parameters with an
inferred annotated type -- but we now also set it for any un-annotated
parameter, for which we infer `Unknown` type.

We also need to normalize `inferred_annotation`, since it's display-only
and shouldn't impact type equivalence. (This is technically a
previously-existing bug, it just never came up when it only affected
self types -- now it comes up because we have tests asserting that `def
f(x)` and `def g(x: Unknown)` are equivalent.)

## Test Plan

Added mdtests.
2026-01-07 09:18:39 -08:00
Alex Waygood
3ad99fb1f4 [ty] Fix an mdtest title (#22439) 2026-01-07 16:34:56 +00:00
Micha Reiser
d0ff59cfe5 [ty] Use Pool from regex_automata to reuse the matches allocations (#22438) 2026-01-07 17:22:35 +01:00
Andrew Gallant
952193e0c6 [ty] Offer completions for T when a value has type Unknown | T
Fixes astral-sh/ty#2197
2026-01-07 10:15:36 -05:00
Alex Waygood
4cba2e8f91 [ty] Generalize len() narrowing somewhat (#22330) 2026-01-07 13:57:50 +00:00
Alex Waygood
1a7f53022a [ty] Link to Callable __name__ FAQ directly from unresolved-attribute diagnostic (#22437) 2026-01-07 13:22:53 +00:00
Micha Reiser
266a7bc4c5 [ty] Fix stack overflow due to too small stack size (#22433) 2026-01-07 13:55:23 +01:00
Micha Reiser
3b7a5e4de8 [ty] Allow including files with no extension (#22243) 2026-01-07 11:38:02 +01:00
Micha Reiser
93039d055d [ty] Add --add-ignore CLI option (#21696) 2026-01-07 11:17:05 +01:00
Jason K Hall
3b61da0da3 Allow Python 3.15 as valid target-version value in preview (#22419) 2026-01-07 09:38:36 +01:00
Alex Waygood
5933cc0101 [ty] Optimize and simplify some object-related code (#22366)
## Summary

I wondered if this might improve performance a little. It doesn't seem
to, but it's a net reduction in LOC and I think the changes make sense.
I think it's worth it anyway just in terms of simplifying the code.

## Test Plan

Our existing tests all pass and the primer report is clean (aside from
our usual flakes).
2026-01-07 08:35:26 +00:00
Dhruv Manilawala
2190fcebe0 [ty] Substitute ParamSpec in overloaded functions (#22416)
## Summary

fixes: https://github.com/astral-sh/ty/issues/2027

This PR fixes a bug where the type mapping for a `ParamSpec` was not
being applied in an overloaded function.

This PR also fixes https://github.com/astral-sh/ty/issues/2081 and
reveals new diagnostics which don't look related to the bug:

```py
from prefect import flow, task

@task
def task_get() -> int:
    """Task get integer."""
    return 42

@task
def task_add(x: int, y: int) -> int:
    """Task add two integers."""
    print(f"Adding {x} and {y}")
    return x + y

@flow
def my_flow():
    """My flow."""
    x = 23
    future_y = task_get.submit()

	# error: [no-matching-overload]
    task_add(future_y, future_y)
	# error: [no-matching-overload]
    task_add(x, future_y)
```

The reason is that the type of `future_y` is `PrefectFuture[int]` while
the type of `task_add` is `Task[(x: int, y: int), int]` which means that
the assignment between `int` and `PrefectFuture[int]` fails which
results in no overload matching. Pyright also raises the invalid
argument type error on all three usages of `future_y` in those two
calls.

## Test Plan

Add regression mdtest from the linked issue.
2026-01-07 13:30:34 +05:30
Douglas Creager
df9d6886d4 [ty] Remove redundant apply_specialization type mappings (#22422)
@dhruvmanila encountered this in #22416 — there are two different
`TypeMapping` variants for applying a specialization to a type. One
operates on a full `Specialization` instance, the other on a partially
constructed one. If we move this enum-ness "down a level" it reduces
some copy/paste in places where we are operating on a `TypeMapping`.
2026-01-07 13:10:26 +05:30
Aria Desires
5133fa4516 [ty] fix typo in CODEOWNERS (#22430) 2026-01-07 07:44:46 +01:00
Amethyst Reese
21c5cfe236 Consolidate diagnostics for matched disable/enable suppression comments (#22099)
## Summary

Combines diagnostics for matched suppression comments, so that ranges
and autofixes for both
the `#ruff:disable` and `#ruff:enable` comments will be reported as a
single diagnostic.

## Test Plan

Snapshot changes, added new snapshot for full output from preview mode
rather than just a diff.

Issue #3711
2026-01-06 18:42:51 -08:00
Carl Meyer
f97da18267 [ty] improve typevar solving from constraint sets (#22411)
## Summary

Fixes https://github.com/astral-sh/ty/issues/2292

When solving a bounded typevar, we preferred the upper bound over the
actual type seen in the call. This change fixes that.

## Test Plan

Added mdtest, existing tests pass.
2026-01-06 13:10:51 -08:00
Alex Waygood
bc191f59b9 Convert more ty snapshots to the new format (#22424) 2026-01-06 20:01:41 +00:00
Alex Waygood
00f86c39e0 Add Alex Waygood back as a ty_ide codeowner (#22423) 2026-01-06 19:24:13 +00:00
Alex Waygood
2ec29b7418 [ty] Optimize Type::negate() (#22402) 2026-01-06 19:17:59 +00:00
Jack O'Connor
ab1ac254d9 [ty] fix comparisons and arithmetic with NewTypes of float (#22105)
Fixes https://github.com/astral-sh/ty/issues/2077.
2026-01-06 09:32:22 -08:00
Charlie Marsh
01de8bef3e [ty] Add named fields for Place enum (#22172)
## Summary

Mechanical refactor to migrate this enum to named fields. No functional
changes.

See:
https://github.com/astral-sh/ruff/pull/22093#discussion_r2636050127.

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 17:24:51 +00:00
Charlie Marsh
b59f6eb5e9 [ty] Support comparisons between variable-length tuples (#21824)
## Summary

Closes https://github.com/astral-sh/ty/issues/1741.
2026-01-06 12:09:40 -05:00
Aria Desires
9ca78bdf76 [ty] Add Gankra as a CODEOWNER for lsp and imports work (#22420)
Co-authored-by: Micha Reiser <micha@reiser.io>
2026-01-06 16:50:16 +00:00
Charlie Marsh
d65542c05e [ty] Make tuple intersection a fallible operation (#22094)
## Summary

This PR attempts to address a TODO in
https://github.com/astral-sh/ruff/pull/21965#discussion_r2635378498.
2026-01-06 10:47:04 -05:00
Aria Desires
98728b2c98 [ty] improve indented codefence rendering in docstrings (#22408)
By stripping leading indents from codefence lines to ensure they're
properly understood by markdown (but otherwise preserving the indent in
the codeblock so all the code renders roughly at the right indent).

As described in [this
comment](https://github.com/astral-sh/ty/issues/2352#issuecomment-3711686053)
this solution is very "do what I mean" for when a user has an explicit
markdown codeblock in e.g. a `Returns:` section which "has" to be
indented but that indent makes the verbatim codefence invalid markdown.

* Fixes https://github.com/astral-sh/ty/issues/2352
2026-01-06 10:44:31 -05:00
Dylan
924b2972f2 Update Black tests (#22405)
I am updating these because we didn't have test coverage for the
different handling of `fmt: skip` comments applied to multiple
statements on the same line. This is in preparation for #22119 (to show
before/after deviations).

Follows the same procedure as in #20794

Edit: As it happens, the new fixtures do not even cover the case
relevant to #22119 - they just deal with the already handled case of a
one-line compound statement. Nevertheless, it seems worthwhile to make
this update, especially since it uncovered a (possible?) bug.
2026-01-06 09:09:05 -06:00
Andrew Gallant
d035744959 [ty] Include = in completion suggestions in playground
This was an accidental omission in #21988 and identified in
astral-sh/ty#2203.
2026-01-06 09:26:29 -05:00
RasmusNygren
ce059c4857 [ty] Sort keyword argument completions higher (#22297) 2026-01-06 10:57:10 +00:00
Micha Reiser
acbc83d6d2 [ty] Fix stale semantic tokens after opening the same document with new content (#22414) 2026-01-06 11:52:51 +01:00
RasmusNygren
a9e5246786 [ty] Ensure the ty playground module is only ever loaded once (#22409) 2026-01-06 10:52:02 +01:00
Charlie Marsh
8b8b174e4f [ty] Add a diagnostic for @functools.total_ordering without a defined comparison method (#22183)
## Summary

This raises a `ValueError` at runtime:

```python
from functools import total_ordering

@total_ordering
class NoOrdering:
    def __eq__(self, other: object) -> bool:
        return True
```

Specifically:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/functools.py", line 193, in total_ordering
    raise ValueError('must define at least one ordering operation: < > <= >=')
ValueError: must define at least one ordering operation: < > <= >=
```

See: https://github.com/astral-sh/ty/issues/1202.
2026-01-06 04:14:06 +00:00
Charlie Marsh
28fa02129b [ty] Add support for @total_ordering (#22181)
## Summary

We have some suppressions in the pyx codebase related to this, so wanted
to resolve.

Closes https://github.com/astral-sh/ty/issues/1202.
2026-01-05 22:47:03 -05:00
Brent Westbrook
a10e42294b [pylint] Demote PLW1510 fix to display-only (#22318)
Summary
--

Closes #17091. `PLW1510` checks for `subprocess.run` calls without a
`check`
keyword argument and previously had a safe fix to add `check=False`.
That's the
default value, so technically it preserved the code's behavior, but as
discussed
in #17091 and #17087, Ruff can't actually know what the author intended.

I don't think it hurts to keep this as a display-only fix instead of
removing it
entirely, but it definitely shouldn't be safe at the very least.
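What the rule and the (now display-only) fix look like, illustratively:

```py
import subprocess

subprocess.run(["ls"])               # PLW1510: no explicit `check` argument
subprocess.run(["ls"], check=False)  # what the old safe fix produced; now offered as display-only
```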

Test Plan
--

Existing tests
2026-01-05 19:36:16 -05:00
Amethyst Reese
12a4ca003f [flake8_print] better suggestion for basicConfig in T201 docs (#22101)
`logging.basicConfig` should not be called at a global module scope,
as that produces a race condition to configure logging based on which
module gets imported first.  Logging should instead be initialized
in an entrypoint to the program, either in a `main()` or in the
typical `if __name__ == "__main__"` block.
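A sketch of the recommended pattern (illustrative module):

```py
import logging

logger = logging.getLogger(__name__)

def main() -> None:
    # Configure logging in the program's entrypoint, not at import time.
    logging.basicConfig(level=logging.INFO)
    logger.info("starting")

if __name__ == "__main__":
    main()
```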
2026-01-05 11:42:47 -08:00
Charlie Marsh
60f7ec90ef Add a fast-test profile (#22382)
## Summary

We use this profile in uv to great success, as an optimization for the
iterative test loop. We include `opt-level=1` because it ends up being
"worth it" for testing (empirically), even though it means the build is
actually a bit slower than `dev` (if you remove `opt-level=1`, a clean
compile is about 22% faster than `dev`).

Here are some benchmarks I generated with Claude -- the main motivator
here is the incremental testing for `ty_python_semantic` which is 2.4x
faster:

### `ty_python_semantic`

Full test suite (471 tests):
| Scenario    | dev   | fast-test | Improvement |
|-------------|-------|------------|-------------|
| Clean       | 53s   | 49s        | 8% faster   |
| Incremental | 17.8s | 6.8s       | 2.4x faster |

Single test:
| Scenario    | dev   | fast-test | Improvement |
|-------------|-------|------------|-------------|
| Clean       | 42.5s | 55.3s      | 30% slower  |
| Incremental | 6.5s  | 6.1s       | ~same       |

### `ruff_linter`

Full test suite (2622 tests):
| Scenario    | dev   | fast-test | Improvement |
|-------------|-------|------------|-------------|
| Clean       | 31s   | 41s        | 32% slower  |
| Incremental | 11.9s | 10.5s      | 12% faster  |

Single test:
| Scenario    | dev  | fast-test | Improvement |
|-------------|------|------------|-------------|
| Clean       | 26s  | 36.5s      | 40% slower  |
| Incremental | 4.5s | 5.5s       | 22% slower  |
2026-01-05 19:35:43 +00:00
Jack O'Connor
922d964bcb [ty] emit diagnostics for method definitions and other invalid statements in TypedDict class bodies (#22351)
Fixes https://github.com/astral-sh/ty/issues/2277.
2026-01-05 11:28:04 -08:00
Jack O'Connor
4712503c6d [ty] cargo insta test --force-update-snapshots (#22313)
Snapshot tests recently started reporting this warning:

> Snapshot test passes but the existing value is in a legacy format.
> Please run cargo insta test --force-update-snapshots to update to a
> newer format.

This PR is the result of that forced update.

One file (crates/ruff_db/src/diagnostic/render/full.rs) seems to get
corrupted, because it contains strings with unprintable characters that
trigger some bug in cargo-insta. I've manually reverted that file, and
also manually reverted the `input_file:` lines, which we like.
2026-01-05 07:55:47 -08:00
Alex Waygood
6b3de1517a [ty] Improve tracebacks when installing dependencies fails in ty_benchmark (#22399) 2026-01-05 14:55:08 +00:00
Alex Waygood
f3dea6e5c9 [ty] Optimize IntersectionType for the common case of a single negated element (#22344)
Co-authored-by: Micha Reiser <micha@reiser.io>
2026-01-05 13:41:50 +00:00
Micha Reiser
24dd149e03 [ty] Extract relation module from types.rs (#22232) 2026-01-05 13:16:49 +00:00
Alex Waygood
b8d527ff46 [ty] Optimize and simplify UnionElement::try_reduce (#22339) 2026-01-05 12:54:44 +00:00
Aria Desires
e63cf978ae [ty] Implement support for explicit markdown code fences in docstring rendering (#22373)
* Fixes https://github.com/astral-sh/ty/issues/2291
2026-01-05 07:13:24 -05:00
Rob Hand
3dab4ff8ad [ty] (docs) - Note insta is required for working with ty tests in ty CONTRIBUTING.md (#22332) 2026-01-05 11:05:13 +01:00
Jason K Hall
24580e2ee8 flake8-simplify: avoid unnecessary builtins import for SIM105 (#22358) 2026-01-05 10:58:46 +01:00
renovate[bot]
3d3af6f7c8 Update pre-commit dependencies (#22393)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2026-01-05 10:24:40 +01:00
renovate[bot]
7cc34c081a Update CodSpeedHQ/action action to v4.5.1 (#22389)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-05 09:10:06 +01:00
renovate[bot]
7a95013f56 Update Rust crate serde_json to v1.0.148 (#22387)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-05 09:09:23 +01:00
renovate[bot]
2395954d9a Update Rust crate schemars to v1.2.0 (#22391)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2026-01-05 09:08:58 +01:00
renovate[bot]
670bd01fb5 Update Rust crate arc-swap to v1.8.0 (#22390)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-05 09:08:32 +01:00
renovate[bot]
eae5c685f8 Update Rust crate proc-macro2 to v1.0.104 (#22386)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-05 09:05:21 +01:00
renovate[bot]
994f05f3ca Update Rust crate tempfile to v3.24.0 (#22392)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-05 09:04:59 +01:00
renovate[bot]
8dcecf323b Update taiki-e/install-action action to v2.65.6 (#22388)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-05 08:03:31 +00:00
renovate[bot]
b12c94e411 Update Rust crate jiff to v0.2.17 (#22384)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-05 07:57:34 +00:00
renovate[bot]
a9c3ea9674 Update Rust crate matchit to v0.9.1 (#22385)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-05 07:56:54 +00:00
renovate[bot]
704c57f491 Update Rust crate insta to v1.45.1 (#22383)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-05 07:56:36 +00:00
renovate[bot]
7a27662eca Update cargo-bins/cargo-binstall action to v1.16.6 (#22380)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-05 08:30:34 +01:00
renovate[bot]
ce2490ee93 Update dependency @cloudflare/workers-types to v4.20251229.0 (#22381)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-05 08:30:16 +01:00
Charlie Marsh
92a2f2c992 [ty] Apply class decorators via try_call() (#22375)
## Summary

Decorators are now called with the class as an argument, and the return
type becomes the class's type. This mirrors how function decorators
already work.
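An illustrative example of the new behavior (names are made up; exact diagnostics may differ):

```py
def replace_with_int(cls: type) -> int:
    return 0

@replace_with_int
class C: ...

reveal_type(C)  # the decorator's return type (`int`), mirroring function decorators
```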

Closes https://github.com/astral-sh/ty/issues/2313.
2026-01-04 17:11:00 -05:00
Charlie Marsh
11b551c2be Add a CLAUDE.md (#22370)
## Summary

This is a starting point based on my own experiments. Feedback and
changes welcome -- I think we should iterate on this a lot as we go.
2026-01-04 15:00:31 -05:00
Micha Reiser
b85c0190c5 [ty] Use upstream GetSize implementation for OrderMap and OrderSet (#22374) 2026-01-04 19:54:03 +00:00
Micha Reiser
46a4bfc478 [ty] Use default HashSet for TypeCollector (#22368) 2026-01-04 18:58:30 +00:00
Alex Waygood
0c53395917 [ty] Add a second benchmark for enums with many members (#22364) 2026-01-04 17:58:20 +00:00
Alex Waygood
8464aca795 Bump docstring-adder pin (#22361) 2026-01-03 20:21:45 +00:00
Alex Waygood
e1439beab2 [ty] Use UnionType helper methods more consistently (#22357) 2026-01-03 14:19:06 +00:00
1243 changed files with 25498 additions and 9597 deletions


@@ -5,4 +5,4 @@ rustup component add clippy rustfmt
cargo install cargo-insta
cargo fetch
-pip install maturin pre-commit
+pip install maturin prek

.github/CODEOWNERS (vendored, 10 lines changed)

@@ -20,9 +20,11 @@
# ty
/crates/ty* @carljm @MichaReiser @AlexWaygood @sharkdp @dcreager
/crates/ruff_db/ @carljm @MichaReiser @sharkdp @dcreager
/crates/ty_project/ @carljm @MichaReiser @sharkdp @dcreager
/crates/ty_server/ @carljm @MichaReiser @sharkdp @dcreager
/crates/ty_project/ @carljm @MichaReiser @sharkdp @dcreager @Gankra
/crates/ty_ide/ @carljm @MichaReiser @AlexWaygood @sharkdp @dcreager @Gankra
/crates/ty_server/ @carljm @MichaReiser @sharkdp @dcreager @Gankra
/crates/ty/ @carljm @MichaReiser @sharkdp @dcreager
/crates/ty_wasm/ @carljm @MichaReiser @sharkdp @dcreager
/crates/ty_wasm/ @carljm @MichaReiser @sharkdp @dcreager @Gankra
/scripts/ty_benchmark/ @carljm @MichaReiser @AlexWaygood @sharkdp @dcreager
/crates/ty_python_semantic @carljm @AlexWaygood @sharkdp @dcreager
/crates/ty_python_semantic/ @carljm @AlexWaygood @sharkdp @dcreager
/crates/ty_module_resolver/ @carljm @MichaReiser @AlexWaygood @Gankra

View File

@@ -1,4 +1,4 @@
# Configuration for the actionlint tool, which we run via pre-commit
# Configuration for the actionlint tool, which we run via prek
# to verify the correctness of the syntax in our GitHub Actions workflows.
self-hosted-runner:
@@ -17,4 +17,4 @@ self-hosted-runner:
paths:
".github/workflows/mypy_primer.yaml":
ignore:
- 'condition "false" is always evaluated to false. remove the if: section'
- 'constant expression "false" in condition. remove the if: section'

View File

@@ -5,5 +5,5 @@
[rules]
possibly-unresolved-reference = "warn"
possibly-missing-import = "warn"
unused-ignore-comment = "warn"
division-by-zero = "warn"
unsupported-dynamic-base = "warn"

View File

@@ -76,9 +76,9 @@
enabled: false,
},
{
groupName: "pre-commit dependencies",
groupName: "prek dependencies",
matchManagers: ["pre-commit"],
description: "Weekly update of pre-commit dependencies",
description: "Weekly update of prek dependencies",
},
{
groupName: "NPM Development dependencies",

View File

@@ -51,6 +51,7 @@ jobs:
- name: "Build sdist"
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
maturin-version: v1.9.6
command: sdist
args: --out dist
- name: "Test sdist"
@@ -81,6 +82,7 @@ jobs:
- name: "Build wheels - x86_64"
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
maturin-version: v1.9.6
target: x86_64
args: --release --locked --out dist
- name: "Upload wheels"
@@ -123,6 +125,7 @@ jobs:
- name: "Build wheels - aarch64"
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
maturin-version: v1.9.6
target: aarch64
args: --release --locked --out dist
- name: "Test wheel - aarch64"
@@ -179,6 +182,7 @@ jobs:
- name: "Build wheels"
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
maturin-version: v1.9.6
target: ${{ matrix.platform.target }}
args: --release --locked --out dist
env:
@@ -232,6 +236,7 @@ jobs:
- name: "Build wheels"
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
maturin-version: v1.9.6
target: ${{ matrix.target }}
manylinux: auto
args: --release --locked --out dist
@@ -308,6 +313,7 @@ jobs:
- name: "Build wheels"
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
maturin-version: v1.9.6
target: ${{ matrix.platform.target }}
manylinux: auto
docker-options: ${{ matrix.platform.maturin_docker_options }}
@@ -374,6 +380,7 @@ jobs:
- name: "Build wheels"
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
maturin-version: v1.9.6
target: ${{ matrix.target }}
manylinux: musllinux_1_2
args: --release --locked --out dist
@@ -439,6 +446,7 @@ jobs:
- name: "Build wheels"
uses: PyO3/maturin-action@86b9d133d34bc1b40018696f782949dac11bd380 # v1.49.4
with:
maturin-version: v1.9.6
target: ${{ matrix.platform.target }}
manylinux: musllinux_1_2
args: --release --locked --out dist

58
.github/workflows/build-wasm.yml vendored Normal file
View File

@@ -0,0 +1,58 @@
# Build ruff_wasm for npm.
#
# Assumed to run as a subworkflow of .github/workflows/release.yml; specifically, as a local
# artifacts job within `cargo-dist`.
name: "Build wasm"
on:
workflow_call:
inputs:
plan:
required: true
type: string
pull_request:
paths:
- .github/workflows/build-wasm.yml
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions: {}
env:
CARGO_INCREMENTAL: 0
CARGO_NET_RETRY: 10
CARGO_TERM_COLOR: always
RUSTUP_MAX_RETRIES: 10
jobs:
build:
if: ${{ !contains(github.event.pull_request.labels.*.name, 'no-build') }}
runs-on: ubuntu-latest
strategy:
matrix:
target: [web, bundler, nodejs]
fail-fast: false
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: "Install Rust toolchain"
run: rustup target add wasm32-unknown-unknown
- uses: jetli/wasm-pack-action@0d096b08b4e5a7de8c28de67e11e945404e9eefa # v0.4.0
with:
version: v0.13.1
- uses: jetli/wasm-bindgen-action@20b33e20595891ab1a0ed73145d8a21fc96e7c29 # v0.2.0
- name: "Run wasm-pack build"
run: wasm-pack build --target ${{ matrix.target }} crates/ruff_wasm
- name: "Rename generated package"
run: | # Replace the package name w/ jq
jq '.name="@astral-sh/ruff-wasm-${{ matrix.target }}"' crates/ruff_wasm/pkg/package.json > /tmp/package.json
mv /tmp/package.json crates/ruff_wasm/pkg
- run: cp LICENSE crates/ruff_wasm/pkg # wasm-pack does not put the LICENSE file in the pkg
- name: "Upload wasm artifact"
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: artifacts-wasm-${{ matrix.target }}
path: crates/ruff_wasm/pkg

View File

@@ -281,15 +281,15 @@ jobs:
- name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@b9c5db3aef04caffaf95a1d03931de10fb2a140f # v2.65.1
uses: taiki-e/install-action@0e76c5c569f13f7eb21e8e5b26fe710062b57b62 # v2.65.13
with:
tool: cargo-nextest
- name: "Install cargo insta"
uses: taiki-e/install-action@b9c5db3aef04caffaf95a1d03931de10fb2a140f # v2.65.1
uses: taiki-e/install-action@0e76c5c569f13f7eb21e8e5b26fe710062b57b62 # v2.65.13
with:
tool: cargo-insta
- name: "Install uv"
uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
with:
enable-cache: "true"
- name: ty mdtests (GitHub annotations)
@@ -343,11 +343,11 @@ jobs:
- name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@b9c5db3aef04caffaf95a1d03931de10fb2a140f # v2.65.1
uses: taiki-e/install-action@0e76c5c569f13f7eb21e8e5b26fe710062b57b62 # v2.65.13
with:
tool: cargo-nextest
- name: "Install uv"
uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
with:
enable-cache: "true"
- name: "Run tests"
@@ -376,11 +376,11 @@ jobs:
- name: "Install Rust toolchain"
run: rustup show
- name: "Install cargo nextest"
uses: taiki-e/install-action@b9c5db3aef04caffaf95a1d03931de10fb2a140f # v2.65.1
uses: taiki-e/install-action@0e76c5c569f13f7eb21e8e5b26fe710062b57b62 # v2.65.13
with:
tool: cargo-nextest
- name: "Install uv"
uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
with:
enable-cache: "true"
- name: "Run tests"
@@ -468,7 +468,7 @@ jobs:
- name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Install cargo-binstall"
uses: cargo-bins/cargo-binstall@4a9028576ed64318f7b24193a62695e96dcbe015 # v1.16.5
uses: cargo-bins/cargo-binstall@80aaafe04903087c333980fa2686259ddd34b2d9 # v1.16.6
- name: "Install cargo-fuzz"
# Download the latest version from quick install and not the github releases because github releases only has MUSL targets.
run: cargo binstall cargo-fuzz --force --disable-strategies crate-meta-data --no-confirm
@@ -486,7 +486,7 @@ jobs:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
- uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with:
shared-key: ruff-linux-debug
@@ -521,7 +521,7 @@ jobs:
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
- uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
- name: "Install Rust toolchain"
run: rustup component add rustfmt
# Run all code generation scripts, and verify that the current output is
@@ -561,7 +561,7 @@ jobs:
ref: ${{ github.event.pull_request.base.ref }}
persist-credentials: false
- uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
- uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
with:
python-version: ${{ env.PYTHON_VERSION }}
activate-environment: true
@@ -667,7 +667,7 @@ jobs:
with:
fetch-depth: 0
persist-credentials: false
- uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
- uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
@@ -713,7 +713,7 @@ jobs:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- uses: cargo-bins/cargo-binstall@4a9028576ed64318f7b24193a62695e96dcbe015 # v1.16.5
- uses: cargo-bins/cargo-binstall@80aaafe04903087c333980fa2686259ddd34b2d9 # v1.16.6
- run: cargo binstall --no-confirm cargo-shear
- run: cargo shear
@@ -726,7 +726,7 @@ jobs:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
- uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
@@ -769,32 +769,29 @@ jobs:
- name: "Remove wheels from cache"
run: rm -rf target/wheels
pre-commit:
name: "pre-commit"
prek:
name: "prek"
runs-on: ${{ github.repository == 'astral-sh/ruff' && 'depot-ubuntu-22.04-16' || 'ubuntu-latest' }}
timeout-minutes: 10
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
- uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
with:
node-version: 24
- name: "Cache pre-commit"
- name: "Cache prek"
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
with:
path: ~/.cache/pre-commit
key: pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
- name: "Run pre-commit"
path: ~/.cache/prek
key: prek-${{ hashFiles('.pre-commit-config.yaml') }}
- name: "Run prek"
run: |
echo '```console' > "$GITHUB_STEP_SUMMARY"
# Enable color output for pre-commit and remove it for the summary
# Use --hook-stage=manual to enable slower pre-commit hooks that are skipped by default
SKIP=cargo-fmt uvx --python="${PYTHON_VERSION}" pre-commit run --all-files --show-diff-on-failure --color=always --hook-stage=manual | \
# Enable color output for prek and remove it for the summary
# Use --hook-stage=manual to enable slower hooks that are skipped by default
SKIP=rustfmt uvx prek run --all-files --show-diff-on-failure --color always --hook-stage manual | \
tee >(sed -E 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})*)?[mGK]//g' >> "$GITHUB_STEP_SUMMARY") >&1
exit_code="${PIPESTATUS[0]}"
echo '```' >> "$GITHUB_STEP_SUMMARY"
@@ -814,7 +811,7 @@ jobs:
- name: "Install Rust toolchain"
run: rustup show
- name: Install uv
uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
with:
python-version: 3.13
activate-environment: true
@@ -966,13 +963,13 @@ jobs:
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
- uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
- name: "Install Rust toolchain"
run: rustup show
- name: "Install codspeed"
uses: taiki-e/install-action@b9c5db3aef04caffaf95a1d03931de10fb2a140f # v2.65.1
uses: taiki-e/install-action@0e76c5c569f13f7eb21e8e5b26fe710062b57b62 # v2.65.13
with:
tool: cargo-codspeed
@@ -980,7 +977,7 @@ jobs:
run: cargo codspeed build --features "codspeed,ruff_instrumented" --profile profiling --no-default-features -p ruff_benchmark --bench formatter --bench lexer --bench linter --bench parser
- name: "Run benchmarks"
uses: CodSpeedHQ/action@346a2d8a8d9d38909abd0bc3d23f773110f076ad # v4.4.1
uses: CodSpeedHQ/action@dbda7111f8ac363564b0c51b992d4ce76bb89f2f # v4.5.2
with:
mode: simulation
run: cargo codspeed run
@@ -1011,7 +1008,7 @@ jobs:
run: rustup show
- name: "Install codspeed"
uses: taiki-e/install-action@b9c5db3aef04caffaf95a1d03931de10fb2a140f # v2.65.1
uses: taiki-e/install-action@0e76c5c569f13f7eb21e8e5b26fe710062b57b62 # v2.65.13
with:
tool: cargo-codspeed
@@ -1044,10 +1041,10 @@ jobs:
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
- uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
- name: "Install codspeed"
uses: taiki-e/install-action@b9c5db3aef04caffaf95a1d03931de10fb2a140f # v2.65.1
uses: taiki-e/install-action@0e76c5c569f13f7eb21e8e5b26fe710062b57b62 # v2.65.13
with:
tool: cargo-codspeed
@@ -1061,7 +1058,7 @@ jobs:
run: chmod +x target/codspeed/simulation/ruff_benchmark/ty
- name: "Run benchmarks"
uses: CodSpeedHQ/action@346a2d8a8d9d38909abd0bc3d23f773110f076ad # v4.4.1
uses: CodSpeedHQ/action@dbda7111f8ac363564b0c51b992d4ce76bb89f2f # v4.5.2
with:
mode: simulation
run: cargo codspeed run --bench ty "${{ matrix.benchmark }}"
@@ -1092,13 +1089,13 @@ jobs:
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
- uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
- name: "Install Rust toolchain"
run: rustup show
- name: "Install codspeed"
uses: taiki-e/install-action@b9c5db3aef04caffaf95a1d03931de10fb2a140f # v2.65.1
uses: taiki-e/install-action@0e76c5c569f13f7eb21e8e5b26fe710062b57b62 # v2.65.13
with:
tool: cargo-codspeed
@@ -1133,10 +1130,10 @@ jobs:
with:
persist-credentials: false
- uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
- uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
- name: "Install codspeed"
uses: taiki-e/install-action@b9c5db3aef04caffaf95a1d03931de10fb2a140f # v2.65.1
uses: taiki-e/install-action@0e76c5c569f13f7eb21e8e5b26fe710062b57b62 # v2.65.13
with:
tool: cargo-codspeed
@@ -1150,7 +1147,7 @@ jobs:
run: chmod +x target/codspeed/walltime/ruff_benchmark/ty_walltime
- name: "Run benchmarks"
uses: CodSpeedHQ/action@346a2d8a8d9d38909abd0bc3d23f773110f076ad # v4.4.1
uses: CodSpeedHQ/action@dbda7111f8ac363564b0c51b992d4ce76bb89f2f # v4.5.2
env:
# enabling walltime flamegraphs adds ~6 minutes to the CI time, and they don't
# appear to provide much useful insight for our walltime benchmarks right now

View File

@@ -34,7 +34,7 @@ jobs:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
- uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"

View File

@@ -48,7 +48,7 @@ jobs:
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with:
@@ -87,7 +87,7 @@ jobs:
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with:
@@ -129,7 +129,7 @@ jobs:
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with:

View File

@@ -22,7 +22,7 @@ jobs:
id-token: write
steps:
- name: "Install uv"
uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
- uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
with:
pattern: wheels-*

View File

@@ -1,25 +1,18 @@
# Build and publish ruff-api for wasm.
# Publish ruff_wasm to npm.
#
# Assumed to run as a subworkflow of .github/workflows/release.yml; specifically, as a publish
# job within `cargo-dist`.
name: "Build and publish wasm"
name: "Publish wasm"
on:
workflow_dispatch:
workflow_call:
inputs:
plan:
required: true
type: string
env:
CARGO_INCREMENTAL: 0
CARGO_NET_RETRY: 10
CARGO_TERM_COLOR: always
RUSTUP_MAX_RETRIES: 10
jobs:
ruff_wasm:
publish:
runs-on: ubuntu-latest
permissions:
contents: read
@@ -29,31 +22,19 @@ jobs:
target: [web, bundler, nodejs]
fail-fast: false
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
with:
persist-credentials: false
- name: "Install Rust toolchain"
run: rustup target add wasm32-unknown-unknown
- uses: jetli/wasm-pack-action@0d096b08b4e5a7de8c28de67e11e945404e9eefa # v0.4.0
with:
version: v0.13.1
- uses: jetli/wasm-bindgen-action@20b33e20595891ab1a0ed73145d8a21fc96e7c29 # v0.2.0
- name: "Run wasm-pack build"
run: wasm-pack build --target ${{ matrix.target }} crates/ruff_wasm
- name: "Rename generated package"
run: | # Replace the package name w/ jq
jq '.name="@astral-sh/ruff-wasm-${{ matrix.target }}"' crates/ruff_wasm/pkg/package.json > /tmp/package.json
mv /tmp/package.json crates/ruff_wasm/pkg
- run: cp LICENSE crates/ruff_wasm/pkg # wasm-pack does not put the LICENSE file in the pkg
name: artifacts-wasm-${{ matrix.target }}
path: pkg
- uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
with:
node-version: 24
registry-url: "https://registry.npmjs.org"
- name: "Publish (dry-run)"
if: ${{ inputs.plan == '' || fromJson(inputs.plan).announcement_tag_is_implicit }}
run: npm publish --dry-run crates/ruff_wasm/pkg
run: npm publish --dry-run pkg
- name: "Publish"
if: ${{ inputs.plan != '' && !fromJson(inputs.plan).announcement_tag_is_implicit }}
run: npm publish --provenance --access public crates/ruff_wasm/pkg
run: npm publish --provenance --access public pkg
env:
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}

View File

@@ -60,7 +60,7 @@ jobs:
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd
with:
persist-credentials: false
submodules: recursive
@@ -112,18 +112,28 @@ jobs:
"contents": "read"
"packages": "write"
custom-build-wasm:
needs:
- plan
if: ${{ needs.plan.outputs.publishing == 'true' || fromJson(needs.plan.outputs.val).ci.github.pr_run_mode == 'upload' || inputs.tag == 'dry-run' }}
uses: ./.github/workflows/build-wasm.yml
with:
plan: ${{ needs.plan.outputs.val }}
secrets: inherit
# Build and package all the platform-agnostic(ish) things
build-global-artifacts:
needs:
- plan
- custom-build-binaries
- custom-build-docker
- custom-build-wasm
runs-on: "depot-ubuntu-latest-4"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
BUILD_MANIFEST_NAME: target/distrib/global-dist-manifest.json
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd
with:
persist-credentials: false
submodules: recursive
@@ -165,16 +175,17 @@ jobs:
- plan
- custom-build-binaries
- custom-build-docker
- custom-build-wasm
- build-global-artifacts
# Only run if we're "publishing", and only if plan, local and global didn't fail (skipped is fine)
if: ${{ always() && needs.plan.result == 'success' && needs.plan.outputs.publishing == 'true' && (needs.build-global-artifacts.result == 'skipped' || needs.build-global-artifacts.result == 'success') && (needs.custom-build-binaries.result == 'skipped' || needs.custom-build-binaries.result == 'success') && (needs.custom-build-docker.result == 'skipped' || needs.custom-build-docker.result == 'success') }}
if: ${{ always() && needs.plan.result == 'success' && needs.plan.outputs.publishing == 'true' && (needs.build-global-artifacts.result == 'skipped' || needs.build-global-artifacts.result == 'success') && (needs.custom-build-binaries.result == 'skipped' || needs.custom-build-binaries.result == 'success') && (needs.custom-build-docker.result == 'skipped' || needs.custom-build-docker.result == 'success') && (needs.custom-build-wasm.result == 'skipped' || needs.custom-build-wasm.result == 'success') }}
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
runs-on: "depot-ubuntu-latest-4"
outputs:
val: ${{ steps.host.outputs.manifest }}
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd
with:
persist-credentials: false
submodules: recursive
@@ -250,7 +261,7 @@ jobs:
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd
with:
persist-credentials: false
submodules: recursive

View File

@@ -76,7 +76,7 @@ jobs:
run: |
git config --global user.name typeshedbot
git config --global user.email '<>'
- uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
- uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
- name: Sync typeshed stubs
run: |
rm -rf "ruff/${VENDORED_TYPESHED}"
@@ -130,7 +130,7 @@ jobs:
with:
persist-credentials: true
ref: ${{ env.UPSTREAM_BRANCH}}
- uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
- uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
- name: Setup git
run: |
git config --global user.name typeshedbot
@@ -169,7 +169,7 @@ jobs:
with:
persist-credentials: true
ref: ${{ env.UPSTREAM_BRANCH}}
- uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
- uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
- name: Setup git
run: |
git config --global user.name typeshedbot

View File

@@ -38,14 +38,14 @@ jobs:
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
with:
enable-cache: true # zizmor: ignore[cache-poisoning] acceptable risk for CloudFlare pages artifact
enable-cache: true
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with:
workspaces: "ruff"
lookup-only: false # zizmor: ignore[cache-poisoning] acceptable risk for CloudFlare pages artifact
lookup-only: false
- name: Install Rust toolchain
run: rustup show

View File

@@ -32,7 +32,7 @@ jobs:
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
with:
enable-cache: true

View File

@@ -21,30 +21,89 @@ exclude: |
)$
repos:
# Priority 0: Read-only hooks; hooks that modify disjoint file types.
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v6.0.0
hooks:
- id: check-merge-conflict
priority: 0
- repo: https://github.com/abravalheri/validate-pyproject
rev: v0.24.1
hooks:
- id: validate-pyproject
priority: 0
- repo: https://github.com/crate-ci/typos
rev: v1.41.0
hooks:
- id: typos
priority: 0
- repo: local
hooks:
- id: rustfmt
name: rustfmt
entry: rustfmt
language: system
types: [rust]
priority: 0
# Prettier
- repo: https://github.com/rbubley/mirrors-prettier
rev: v3.7.4
hooks:
- id: prettier
types: [yaml]
priority: 0
# zizmor detects security vulnerabilities in GitHub Actions workflows.
# Additional configuration for the tool is found in `.github/zizmor.yml`
- repo: https://github.com/zizmorcore/zizmor-pre-commit
rev: v1.19.0
hooks:
- id: zizmor
priority: 0
- repo: https://github.com/python-jsonschema/check-jsonschema
rev: 0.36.0
hooks:
- id: check-github-workflows
priority: 0
- repo: https://github.com/shellcheck-py/shellcheck-py
rev: v0.11.0.1
hooks:
- id: shellcheck
priority: 0
- repo: https://github.com/executablebooks/mdformat
rev: 0.7.22
rev: 1.0.0
hooks:
- id: mdformat
language: python # means renovate will also update `additional_dependencies`
additional_dependencies:
- mdformat-mkdocs==4.0.0
- mdformat-footnote==0.1.1
- mdformat-mkdocs==5.0.0
- mdformat-footnote==0.1.2
exclude: |
(?x)^(
docs/formatter/black\.md
| docs/\w+\.md
)$
priority: 0
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.14.10
hooks:
- id: ruff-format
priority: 0
- id: ruff-check
args: [--fix, --exit-non-zero-on-fix]
types_or: [python, pyi]
require_serial: true
priority: 1
# Priority 1: Second-pass fixers (e.g., markdownlint-fix runs after mdformat).
- repo: https://github.com/igorshubovych/markdownlint-cli
rev: v0.47.0
hooks:
@@ -54,7 +113,9 @@ repos:
docs/formatter/black\.md
| docs/\w+\.md
)$
priority: 1
# Priority 2: blacken-docs runs after markdownlint-fix (both modify markdown).
- repo: https://github.com/adamchainz/blacken-docs
rev: 1.20.0
hooks:
@@ -68,70 +129,26 @@ repos:
)$
additional_dependencies:
- black==25.12.0
- repo: https://github.com/crate-ci/typos
rev: v1.40.0
hooks:
- id: typos
- repo: local
hooks:
- id: cargo-fmt
name: cargo fmt
entry: cargo fmt --
language: system
types: [rust]
pass_filenames: false # This makes it a lot faster
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.14.10
hooks:
- id: ruff-format
- id: ruff-check
args: [--fix, --exit-non-zero-on-fix]
types_or: [python, pyi]
require_serial: true
# Prettier
- repo: https://github.com/rbubley/mirrors-prettier
rev: v3.7.4
hooks:
- id: prettier
types: [yaml]
# zizmor detects security vulnerabilities in GitHub Actions workflows.
# Additional configuration for the tool is found in `.github/zizmor.yml`
- repo: https://github.com/zizmorcore/zizmor-pre-commit
rev: v1.19.0
hooks:
- id: zizmor
- repo: https://github.com/python-jsonschema/check-jsonschema
rev: 0.36.0
hooks:
- id: check-github-workflows
priority: 2
# `actionlint` hook, for verifying correct syntax in GitHub Actions workflows.
# Some additional configuration for `actionlint` can be found in `.github/actionlint.yaml`.
- repo: https://github.com/rhysd/actionlint
rev: v1.7.9
rev: v1.7.10
hooks:
- id: actionlint
stages:
# This hook is disabled by default, since it's quite slow.
# To run all hooks *including* this hook, use `uvx pre-commit run -a --hook-stage=manual`.
# To run *just* this hook, use `uvx pre-commit run -a actionlint --hook-stage=manual`.
# To run all hooks *including* this hook, use `uvx prek run -a --hook-stage=manual`.
# To run *just* this hook, use `uvx prek run -a actionlint --hook-stage=manual`.
- manual
args:
- "-ignore=SC2129" # ignorable stylistic lint from shellcheck
- "-ignore=SC2016" # another shellcheck lint: seems to have false positives?
language: golang # means renovate will also update `additional_dependencies`
additional_dependencies:
# actionlint has a shellcheck integration which extracts shell scripts in `run:` steps from GitHub Actions
# and checks these with shellcheck. This is arguably its most useful feature,
# but the integration only works if shellcheck is installed
- "github.com/wasilibs/go-shellcheck/cmd/shellcheck@v0.11.1"
- repo: https://github.com/shellcheck-py/shellcheck-py
rev: v0.11.0.1
hooks:
- id: shellcheck
priority: 0

View File

@@ -1,5 +1,63 @@
# Changelog
## 0.14.11
Released on 2026-01-08.
### Preview features
- Consolidate diagnostics for matched disable/enable suppression comments ([#22099](https://github.com/astral-sh/ruff/pull/22099))
- Report diagnostics for invalid/unmatched range suppression comments ([#21908](https://github.com/astral-sh/ruff/pull/21908))
- \[`airflow`\] Passing positional argument into `airflow.lineage.hook.HookLineageCollector.create_asset` is not allowed (`AIR303`) ([#22046](https://github.com/astral-sh/ruff/pull/22046))
- \[`refurb`\] Mark `FURB192` fix as always unsafe ([#22210](https://github.com/astral-sh/ruff/pull/22210))
- \[`ruff`\] Add `non-empty-init-module` (`RUF067`) ([#22143](https://github.com/astral-sh/ruff/pull/22143))
### Bug fixes
- Fix GitHub format for multi-line diagnostics ([#22108](https://github.com/astral-sh/ruff/pull/22108))
- \[`flake8-unused-arguments`\] Mark `**kwargs` in `TypeVar` as used (`ARG001`) ([#22214](https://github.com/astral-sh/ruff/pull/22214))
### Rule changes
- Add `help:` subdiagnostics for several Ruff rules that can sometimes appear to disagree with `ty` ([#22331](https://github.com/astral-sh/ruff/pull/22331))
- \[`pylint`\] Demote `PLW1510` fix to display-only ([#22318](https://github.com/astral-sh/ruff/pull/22318))
- \[`pylint`\] Ignore identical members (`PLR1714`) ([#22220](https://github.com/astral-sh/ruff/pull/22220))
- \[`pylint`\] Improve diagnostic range for `PLC0206` ([#22312](https://github.com/astral-sh/ruff/pull/22312))
- \[`ruff`\] Improve fix title for `RUF102` invalid rule code ([#22100](https://github.com/astral-sh/ruff/pull/22100))
- \[`flake8-simplify`\] Avoid unnecessary builtins import for `SIM105` ([#22358](https://github.com/astral-sh/ruff/pull/22358))
### Configuration
- Allow Python 3.15 as valid `target-version` value in preview ([#22419](https://github.com/astral-sh/ruff/pull/22419))
- Check `required-version` before parsing rules ([#22410](https://github.com/astral-sh/ruff/pull/22410))
- Include configured `src` directories when resolving graphs ([#22451](https://github.com/astral-sh/ruff/pull/22451))
### Documentation
- Update `T201` suggestion to not use root logger to satisfy `LOG015` ([#22059](https://github.com/astral-sh/ruff/pull/22059))
- Fix `iter` example in unsafe fixes doc ([#22118](https://github.com/astral-sh/ruff/pull/22118))
- \[`flake8-print`\] Better suggestion for `basicConfig` in `T201` docs ([#22101](https://github.com/astral-sh/ruff/pull/22101))
- \[`pylint`\] Restore the fix safety docs for `PLW0133` ([#22211](https://github.com/astral-sh/ruff/pull/22211))
- Fix Jupyter notebook discovery info for editors ([#22447](https://github.com/astral-sh/ruff/pull/22447))
### Contributors
- [@charliermarsh](https://github.com/charliermarsh)
- [@ntBre](https://github.com/ntBre)
- [@cenviity](https://github.com/cenviity)
- [@njhearp](https://github.com/njhearp)
- [@cbachhuber](https://github.com/cbachhuber)
- [@jelle-openai](https://github.com/jelle-openai)
- [@AlexWaygood](https://github.com/AlexWaygood)
- [@ValdonVitija](https://github.com/ValdonVitija)
- [@BurntSushi](https://github.com/BurntSushi)
- [@Jkhall81](https://github.com/Jkhall81)
- [@PeterJCLaw](https://github.com/PeterJCLaw)
- [@harupy](https://github.com/harupy)
- [@amyreese](https://github.com/amyreese)
- [@sjyangkevin](https://github.com/sjyangkevin)
- [@woodruffw](https://github.com/woodruffw)
## 0.14.10
Released on 2025-12-18.

71
CLAUDE.md Normal file
View File

@@ -0,0 +1,71 @@
# Ruff Repository
This repository contains both Ruff (a Python linter and formatter) and ty (a Python type checker). The crates follow a naming convention: `ruff_*` for Ruff-specific code and `ty_*` for ty-specific code. ty reuses several Ruff crates, including the Python parser (`ruff_python_parser`) and AST definitions (`ruff_python_ast`).
## Running Tests
Run all tests (using `nextest` for faster execution):
```sh
cargo nextest run
```
For faster test execution, use the `fast-test` profile which enables optimizations while retaining debug info:
```sh
cargo nextest run --cargo-profile fast-test
```
Run tests for a specific crate:
```sh
cargo nextest run -p ty_python_semantic
```
Run a specific mdtest (use a substring of the test name):
```sh
MDTEST_TEST_FILTER="<filter>" cargo nextest run -p ty_python_semantic mdtest
```
Update snapshots after running tests:
```sh
cargo insta accept
```
## Running Clippy
```sh
cargo clippy --workspace --all-targets --all-features -- -D warnings
```
## Running Debug Builds
Use debug builds (not `--release`) when developing, as release builds lack debug assertions and have slower compile times.
Run Ruff:
```sh
cargo run --bin ruff -- check path/to/file.py
```
Run ty:
```sh
cargo run --bin ty -- check path/to/file.py
```
## Pull Requests
When working on ty, PR titles should start with `[ty]` and be tagged with the `ty` GitHub label.
## Development Guidelines
- All changes must be tested. If you're not testing your changes, you're not done.
- Get your tests to pass. If you didn't run the tests, your code does not work.
- Follow existing code style. Check neighboring files for patterns.
- Always run `uvx prek run -a` at the end of a task.
- Avoid writing significant amounts of new code. This is often a sign that we're missing an existing method or mechanism that could help solve the problem. Look for existing utilities first.
- Avoid falling back to patterns that require `panic!`, `unreachable!`, or `.unwrap()`. Instead, try to encode those constraints in the type system.
- Prefer let chains (`if let` combined with `&&`) over nested `if let` statements to reduce indentation and improve readability.

View File

@@ -53,12 +53,12 @@ cargo install cargo-insta
You'll need [uv](https://docs.astral.sh/uv/getting-started/installation/) (or `pipx` and `pip`) to
run Python utility commands.
You can optionally install pre-commit hooks to automatically run the validation checks
You can optionally install hooks to automatically run the validation checks
when making a commit:
```shell
uv tool install pre-commit
pre-commit install
uv tool install prek
prek install
```
We recommend [nextest](https://nexte.st/) to run Ruff's test suite (via `cargo nextest run`),
@@ -85,7 +85,7 @@ and that it passes both the lint and test validation checks:
```shell
cargo clippy --workspace --all-targets --all-features -- -D warnings # Rust linting
RUFF_UPDATE_SCHEMA=1 cargo test # Rust testing and updating ruff.schema.json
uvx pre-commit run --all-files --show-diff-on-failure # Rust and Python formatting, Markdown and Python linting, etc.
uvx prek run -a # Rust and Python formatting, Markdown and Python linting, etc.
```
These checks will run on GitHub Actions when you open your pull request, but running them locally
@@ -381,7 +381,7 @@ Commit each step of this process separately for easier review.
- Often labels will be missing from pull requests; they will need to be manually organized into the proper section
- Changes should be edited to be user-facing descriptions, avoiding internal details
- Square brackets (eg, `[ruff]` project name) will be automatically escaped by `pre-commit`
- Square brackets (eg, `[ruff]` project name) will be automatically escaped by `prek`
Additionally, for minor releases:

136
Cargo.lock generated
View File

@@ -146,9 +146,12 @@ dependencies = [
[[package]]
name = "arc-swap"
version = "1.7.1"
version = "1.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "69f7f8c3906b62b754cd5326047894316021dcfe5a194c8ea52bdd94934a3457"
checksum = "51d03449bb8ca2cc2ef70869af31463d1ae5ccc8fa3e334b307203fbf815207e"
dependencies = [
"rustversion",
]
[[package]]
name = "argfile"
@@ -463,9 +466,9 @@ dependencies = [
[[package]]
name = "clap"
version = "4.5.53"
version = "4.5.54"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c9e340e012a1bf4935f5282ed1436d1489548e8f72308207ea5df0e23d2d03f8"
checksum = "c6e6ff9dcd79cff5cd969a17a545d79e84ab086e444102a591e288a8aa3ce394"
dependencies = [
"clap_builder",
"clap_derive",
@@ -473,9 +476,9 @@ dependencies = [
[[package]]
name = "clap_builder"
version = "4.5.53"
version = "4.5.54"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d76b5d13eaa18c901fd2f7fca939fefe3a0727a953561fefdf3b2922b8569d00"
checksum = "fa42cf4d2b7a41bc8f663a7cab4031ebafa1bf3875705bfaf8466dc60ab52c00"
dependencies = [
"anstream",
"anstyle",
@@ -1030,7 +1033,7 @@ dependencies = [
"libc",
"option-ext",
"redox_users",
"windows-sys 0.59.0",
"windows-sys 0.60.2",
]
[[package]]
@@ -1122,7 +1125,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "39cab71617ae0d63f51a36d69f866391735b51691dbda63cf6f96d042b63efeb"
dependencies = [
"libc",
"windows-sys 0.59.0",
"windows-sys 0.52.0",
]
[[package]]
@@ -1580,11 +1583,11 @@ dependencies = [
[[package]]
name = "imperative"
version = "1.0.6"
version = "1.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "29a1f6526af721f9aec9ceed7ab8ebfca47f3399d08b80056c2acca3fcb694a9"
checksum = "35e1d0bd9c575c52e59aad8e122a11786e852a154678d0c86e9e243d55273970"
dependencies = [
"phf",
"phf 0.13.1",
"rust-stemmers",
]
@@ -1645,9 +1648,9 @@ dependencies = [
[[package]]
name = "insta"
version = "1.45.0"
version = "1.46.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b76866be74d68b1595eb8060cb9191dca9c021db2316558e52ddc5d55d41b66c"
checksum = "1b66886d14d18d420ab5052cbff544fc5d34d0b2cdd35eb5976aaa10a4a472e5"
dependencies = [
"console 0.15.11",
"once_cell",
@@ -1778,9 +1781,9 @@ checksum = "4a5f13b858c8d314ee3e8f639011f7ccefe71f97f96e50151fb991f267928e2c"
[[package]]
name = "jiff"
version = "0.2.16"
version = "0.2.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "49cce2b81f2098e7e3efc35bc2e0a6b7abec9d34128283d7a26fa8f32a6dbb35"
checksum = "a87d9b8105c23642f50cbbae03d1f75d8422c5cb98ce7ee9271f7ff7505be6b8"
dependencies = [
"jiff-static",
"jiff-tzdb-platform",
@@ -1788,14 +1791,14 @@ dependencies = [
"portable-atomic",
"portable-atomic-util",
"serde_core",
"windows-sys 0.59.0",
"windows-sys 0.52.0",
]
[[package]]
name = "jiff-static"
version = "0.2.16"
version = "0.2.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "980af8b43c3ad5d8d349ace167ec8170839f753a42d233ba19e08afe1850fa69"
checksum = "b787bebb543f8969132630c51fd0afab173a86c6abae56ff3b9e5e3e3f9f6e58"
dependencies = [
"proc-macro2",
"quote",
@@ -1871,9 +1874,9 @@ checksum = "bbd2bcb4c963f2ddae06a2efc7e9f3591312473c50c6685e1f298068316e66fe"
[[package]]
name = "libc"
version = "0.2.178"
version = "0.2.179"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "37c93d8daa9d8a012fd8ab92f088405fb202ea0b6ab73ee2482ae66af4f42091"
checksum = "c5a2d376baa530d1238d133232d15e239abad80d05838b4b59354e5268af431f"
[[package]]
name = "libcst"
@@ -2057,9 +2060,9 @@ checksum = "2532096657941c2fea9c289d370a250971c689d4f143798ff67113ec042024a5"
[[package]]
name = "matchit"
version = "0.9.0"
version = "0.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9ea5f97102eb9e54ab99fb70bb175589073f554bdadfb74d9bd656482ea73e2a"
checksum = "b3eede3bdf92f3b4f9dc04072a9ce5ab557d5ec9038773bf9ffcd5588b3cc05b"
[[package]]
name = "memchr"
@@ -2485,7 +2488,17 @@ version = "0.11.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1fd6780a80ae0c52cc120a26a1a42c1ae51b247a253e4e06113d23d2c2edd078"
dependencies = [
"phf_shared",
"phf_shared 0.11.3",
]
[[package]]
name = "phf"
version = "0.13.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c1562dc717473dbaa4c1f85a36410e03c047b2e7df7f45ee938fbef64ae7fadf"
dependencies = [
"phf_shared 0.13.1",
"serde",
]
[[package]]
@@ -2495,7 +2508,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aef8048c789fa5e851558d709946d6d79a8ff88c0440c587967f8e94bfb1216a"
dependencies = [
"phf_generator",
"phf_shared",
"phf_shared 0.11.3",
]
[[package]]
@@ -2504,7 +2517,7 @@ version = "0.11.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3c80231409c20246a13fddb31776fb942c38553c51e871f8cbd687a4cfb5843d"
dependencies = [
"phf_shared",
"phf_shared 0.11.3",
"rand 0.8.5",
]
@@ -2517,6 +2530,15 @@ dependencies = [
"siphasher",
]
[[package]]
name = "phf_shared"
version = "0.13.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e57fef6bc5981e38c2ce2d63bfa546861309f875b8a75f092d1d54ae2d64f266"
dependencies = [
"siphasher",
]
[[package]]
name = "pin-project-lite"
version = "0.2.16"
@@ -2631,9 +2653,9 @@ dependencies = [
[[package]]
name = "proc-macro2"
version = "1.0.103"
version = "1.0.104"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5ee95bc4ef87b8d5ba32e8b7714ccc834865276eab0aed5c9958d00ec45f49e8"
checksum = "9695f8df41bb4f3d222c95a67532365f569318332d03d5f3f67f37b20e6ebdf0"
dependencies = [
"unicode-ident",
]
@@ -2909,7 +2931,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.14.10"
version = "0.14.11"
dependencies = [
"anyhow",
"argfile",
@@ -2925,6 +2947,7 @@ dependencies = [
"filetime",
"globwalk",
"ignore",
"indexmap",
"indoc",
"insta",
"insta-cmd",
@@ -3168,7 +3191,7 @@ dependencies = [
[[package]]
name = "ruff_linter"
version = "0.14.10"
version = "0.14.11"
dependencies = [
"aho-corasick",
"anyhow",
@@ -3246,7 +3269,6 @@ name = "ruff_memory_usage"
version = "0.0.0"
dependencies = [
"get-size2",
"ordermap",
]
[[package]]
@@ -3283,7 +3305,6 @@ dependencies = [
"compact_str",
"get-size2",
"is-macro",
"itertools 0.14.0",
"memchr",
"ruff_cache",
"ruff_macros",
@@ -3527,7 +3548,7 @@ dependencies = [
[[package]]
name = "ruff_wasm"
version = "0.14.10"
version = "0.14.11"
dependencies = [
"console_error_panic_hook",
"console_log",
@@ -3617,15 +3638,15 @@ checksum = "781442f29170c5c93b7185ad559492601acdc71d5bb0706f5868094f45cfcd08"
[[package]]
name = "rustix"
version = "1.1.2"
version = "1.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cd15f8a2c5551a84d56efdc1cd049089e409ac19a3072d5037a17fd70719ff3e"
checksum = "146c9e247ccc180c1f61615433868c99f3de3ae256a30a43b49f67c2d9171f34"
dependencies = [
"bitflags 2.10.0",
"errno",
"libc",
"linux-raw-sys",
"windows-sys 0.59.0",
"windows-sys 0.52.0",
]
[[package]]
@@ -3643,7 +3664,7 @@ checksum = "28d3b2b1366ec20994f1fd18c3c594f05c5dd4bc44d8bb0c1c632c8d6829481f"
[[package]]
name = "salsa"
version = "0.25.2"
source = "git+https://github.com/salsa-rs/salsa.git?rev=309c249088fdeef0129606fa34ec2eefc74736ff#309c249088fdeef0129606fa34ec2eefc74736ff"
source = "git+https://github.com/salsa-rs/salsa.git?rev=9860ff6ca0f1f8f3a8d6b832020002790b501254#9860ff6ca0f1f8f3a8d6b832020002790b501254"
dependencies = [
"boxcar",
"compact_str",
@@ -3668,12 +3689,12 @@ dependencies = [
[[package]]
name = "salsa-macro-rules"
version = "0.25.2"
source = "git+https://github.com/salsa-rs/salsa.git?rev=309c249088fdeef0129606fa34ec2eefc74736ff#309c249088fdeef0129606fa34ec2eefc74736ff"
source = "git+https://github.com/salsa-rs/salsa.git?rev=9860ff6ca0f1f8f3a8d6b832020002790b501254#9860ff6ca0f1f8f3a8d6b832020002790b501254"
[[package]]
name = "salsa-macros"
version = "0.25.2"
source = "git+https://github.com/salsa-rs/salsa.git?rev=309c249088fdeef0129606fa34ec2eefc74736ff#309c249088fdeef0129606fa34ec2eefc74736ff"
source = "git+https://github.com/salsa-rs/salsa.git?rev=9860ff6ca0f1f8f3a8d6b832020002790b501254#9860ff6ca0f1f8f3a8d6b832020002790b501254"
dependencies = [
"proc-macro2",
"quote",
@@ -3692,9 +3713,9 @@ dependencies = [
[[package]]
name = "schemars"
version = "1.1.0"
version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9558e172d4e8533736ba97870c4b2cd63f84b382a3d6eb063da41b91cce17289"
checksum = "54e910108742c57a770f492731f99be216a52fadd361b06c8fb59d74ccc267d2"
dependencies = [
"dyn-clone",
"ref-cast",
@@ -3705,9 +3726,9 @@ dependencies = [
[[package]]
name = "schemars_derive"
version = "1.1.0"
version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "301858a4023d78debd2353c7426dc486001bddc91ae31a76fb1f55132f7e2633"
checksum = "4908ad288c5035a8eb12cfdf0d49270def0a268ee162b75eeee0f85d155a7c45"
dependencies = [
"proc-macro2",
"quote",
@@ -3781,15 +3802,15 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.146"
version = "1.0.148"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "217ca874ae0207aac254aa02c957ded05585a90892cc8d87f9e5fa49669dadd8"
checksum = "3084b546a1dd6289475996f182a22aba973866ea8e8b02c51d9f46b1336a22da"
dependencies = [
"itoa",
"memchr",
"ryu",
"serde",
"serde_core",
"zmij",
]
[[package]]
@@ -3991,9 +4012,9 @@ checksum = "e396b6523b11ccb83120b115a0b7366de372751aa6edf19844dfb13a6af97e91"
[[package]]
name = "syn"
version = "2.0.111"
version = "2.0.113"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "390cc9a294ab71bdb1aa2e99d13be9c753cd2d7bd6560c77118597410c4d2e87"
checksum = "678faa00651c9eb72dd2020cbdf275d92eccb2400d568e419efdd64838145cb4"
dependencies = [
"proc-macro2",
"quote",
@@ -4019,15 +4040,15 @@ checksum = "55937e1799185b12863d447f42597ed69d9928686b8d88a1df17376a097d8369"
[[package]]
name = "tempfile"
version = "3.23.0"
version = "3.24.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2d31c77bdf42a745371d260a26ca7163f1e0924b64afa0b688e61b5a9fa02f16"
checksum = "655da9c7eb6305c55742045d5a8d2037996d61d8de95806335c7c86ce0f82e9c"
dependencies = [
"fastrand",
"getrandom 0.3.4",
"once_cell",
"rustix",
"windows-sys 0.59.0",
"windows-sys 0.52.0",
]
[[package]]
@@ -4057,7 +4078,7 @@ checksum = "d4ea810f0692f9f51b382fff5893887bb4580f5fa246fde546e0b13e7fcee662"
dependencies = [
"fnv",
"nom",
"phf",
"phf 0.11.3",
"phf_codegen",
]
@@ -4441,6 +4462,7 @@ version = "0.0.0"
dependencies = [
"bitflags 2.10.0",
"camino",
"compact_str",
"get-size2",
"insta",
"itertools 0.14.0",
@@ -4509,11 +4531,13 @@ dependencies = [
"regex-automata",
"ruff_cache",
"ruff_db",
"ruff_diagnostics",
"ruff_macros",
"ruff_memory_usage",
"ruff_options_metadata",
"ruff_python_ast",
"ruff_python_formatter",
"ruff_python_trivia",
"ruff_text_size",
"rustc-hash",
"salsa",
@@ -4802,7 +4826,7 @@ version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d1673eca9782c84de5f81b82e4109dcfb3611c8ba0d52930ec4a9478f547b2dd"
dependencies = [
"phf",
"phf 0.11.3",
"unicode_names2_generator",
]
@@ -5131,7 +5155,7 @@ version = "0.1.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c2a7b1c03c876122aa43f3020e6c3c3ee5c05081c9a00739faf7503aeba10d22"
dependencies = [
"windows-sys 0.59.0",
"windows-sys 0.52.0",
]
[[package]]
@@ -5523,6 +5547,12 @@ dependencies = [
"zstd",
]
[[package]]
name = "zmij"
version = "1.0.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "30e0d8dffbae3d840f64bda38e28391faef673a7b5a6017840f2a106c8145868"
[[package]]
name = "zstd"
version = "0.11.2+zstd.1.5.2"

View File

@@ -93,6 +93,7 @@ get-size2 = { version = "0.7.3", features = [
"smallvec",
"hashbrown",
"compact-str",
"ordermap"
] }
getrandom = { version = "0.3.1" }
glob = { version = "0.3.1" }
@@ -149,7 +150,7 @@ regex-automata = { version = "0.4.9" }
rustc-hash = { version = "2.0.0" }
rustc-stable-hash = { version = "0.1.2" }
# When updating salsa, make sure to also update the revision in `fuzz/Cargo.toml`
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "309c249088fdeef0129606fa34ec2eefc74736ff", default-features = false, features = [
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "9860ff6ca0f1f8f3a8d6b832020002790b501254", default-features = false, features = [
"compact_str",
"macros",
"salsa_unstable",
@@ -334,6 +335,11 @@ strip = false
debug = "full"
lto = false
# Profile for faster iteration: applies minimal optimizations for faster tests.
[profile.fast-test]
inherits = "dev"
opt-level = 1
# The profile that 'cargo dist' will build with.
[profile.dist]
inherits = "release"

View File

@@ -150,8 +150,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
# For a specific version.
curl -LsSf https://astral.sh/ruff/0.14.10/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.14.10/install.ps1 | iex"
curl -LsSf https://astral.sh/ruff/0.14.11/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.14.11/install.ps1 | iex"
```
You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
@@ -184,7 +184,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.14.10
rev: v0.14.11
hooks:
# Run the linter.
- id: ruff-check

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.14.10"
version = "0.14.11"
publish = true
authors = { workspace = true }
edition = { workspace = true }
@@ -48,6 +48,7 @@ colored = { workspace = true }
filetime = { workspace = true }
globwalk = { workspace = true }
ignore = { workspace = true }
indexmap = { workspace = true }
is-macro = { workspace = true }
itertools = { workspace = true }
jiff = { workspace = true }

View File

@@ -2,6 +2,7 @@ use crate::args::{AnalyzeGraphArgs, ConfigArguments};
use crate::resolve::resolve;
use crate::{ExitStatus, resolve_default_files};
use anyhow::Result;
use indexmap::IndexSet;
use log::{debug, warn};
use path_absolutize::CWD;
use ruff_db::system::{SystemPath, SystemPathBuf};
@@ -11,7 +12,7 @@ use ruff_linter::source_kind::SourceKind;
use ruff_linter::{warn_user, warn_user_once};
use ruff_python_ast::{PySourceType, SourceType};
use ruff_workspace::resolver::{ResolvedFile, match_exclusion, python_files_in_path};
use rustc_hash::FxHashMap;
use rustc_hash::{FxBuildHasher, FxHashMap};
use std::io::Write;
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex};
@@ -59,17 +60,34 @@ pub(crate) fn analyze_graph(
})
.collect::<FxHashMap<_, _>>();
// Create a database from the source roots.
let src_roots = package_roots
.values()
.filter_map(|package| package.as_deref())
.filter_map(|package| package.parent())
.map(Path::to_path_buf)
.filter_map(|path| SystemPathBuf::from_path_buf(path).ok())
.collect();
// Create a database from the source roots, combining configured `src` paths with detected
// package roots. Configured paths are added first so they take precedence, and duplicates
// are removed.
let mut src_roots: IndexSet<SystemPathBuf, FxBuildHasher> = IndexSet::default();
// Add configured `src` paths first (for precedence), filtering to only include existing
// directories.
src_roots.extend(
pyproject_config
.settings
.linter
.src
.iter()
.filter(|path| path.is_dir())
.filter_map(|path| SystemPathBuf::from_path_buf(path.clone()).ok()),
);
// Add detected package roots.
src_roots.extend(
package_roots
.values()
.filter_map(|package| package.as_deref())
.filter_map(|path| path.parent())
.filter_map(|path| SystemPathBuf::from_path_buf(path.to_path_buf()).ok()),
);
let db = ModuleDb::from_src_roots(
src_roots,
src_roots.into_iter().collect(),
pyproject_config
.settings
.analyze

View File

@@ -1305,7 +1305,7 @@ mod tests {
settings.add_filter(r"(Panicked at) [^:]+:\d+:\d+", "$1 <location>");
let _s = settings.bind_to_scope();
assert_snapshot!(str::from_utf8(&buf)?, @r"
assert_snapshot!(str::from_utf8(&buf)?, @"
io: test.py: Permission denied
--> test.py:1:1

View File

@@ -29,10 +29,10 @@ pub(crate) fn show_settings(
bail!("No files found under the given path");
};
let settings = resolver.resolve(&path);
let (settings, config_path) = resolver.resolve_with_path(&path);
writeln!(writer, "Resolved settings for: \"{}\"", path.display())?;
if let Some(settings_path) = pyproject_config.path.as_ref() {
if let Some(settings_path) = config_path {
writeln!(writer, "Settings path: \"{}\"", settings_path.display())?;
}
write!(writer, "{settings}")?;

View File

@@ -4,4 +4,3 @@ source: crates/ruff/src/commands/check.rs
/home/ferris/project/code.py:1:1: E902 Permission denied (os error 13)
/home/ferris/project/notebook.ipynb:1:1: E902 Permission denied (os error 13)
/home/ferris/project/pyproject.toml:1:1: E902 Permission denied (os error 13)

View File

@@ -1,6 +1,5 @@
---
source: crates/ruff/src/version.rs
expression: version
snapshot_kind: text
---
0.0.0

View File

@@ -1,6 +1,5 @@
---
source: crates/ruff/src/version.rs
expression: version
snapshot_kind: text
---
0.0.0 (53b0f5d92 2023-10-19)

View File

@@ -1,6 +1,5 @@
---
source: crates/ruff/src/version.rs
expression: version
snapshot_kind: text
---
0.0.0+24 (53b0f5d92 2023-10-19)

View File

@@ -1,7 +1,6 @@
---
source: crates/ruff/src/version.rs
expression: version
snapshot_kind: text
---
{
"version": "0.0.0",

View File

@@ -132,29 +132,29 @@ fn dependents() -> Result<()> {
insta::with_settings!({
filters => INSTA_FILTERS.to_vec(),
}, {
assert_cmd_snapshot!(command().arg("--direction").arg("dependents").current_dir(&root), @r###"
success: true
exit_code: 0
----- stdout -----
{
"ruff/__init__.py": [],
"ruff/a.py": [],
"ruff/b.py": [
"ruff/a.py"
],
"ruff/c.py": [
"ruff/b.py"
],
"ruff/d.py": [
"ruff/c.py"
],
"ruff/e.py": [
"ruff/d.py"
]
}
assert_cmd_snapshot!(command().arg("--direction").arg("dependents").current_dir(&root), @r#"
success: true
exit_code: 0
----- stdout -----
{
"ruff/__init__.py": [],
"ruff/a.py": [],
"ruff/b.py": [
"ruff/a.py"
],
"ruff/c.py": [
"ruff/b.py"
],
"ruff/d.py": [
"ruff/c.py"
],
"ruff/e.py": [
"ruff/d.py"
]
}
----- stderr -----
"###);
----- stderr -----
"#);
});
Ok(())
@@ -184,21 +184,21 @@ fn string_detection() -> Result<()> {
insta::with_settings!({
filters => INSTA_FILTERS.to_vec(),
}, {
assert_cmd_snapshot!(command().current_dir(&root), @r###"
success: true
exit_code: 0
----- stdout -----
{
"ruff/__init__.py": [],
"ruff/a.py": [
"ruff/b.py"
],
"ruff/b.py": [],
"ruff/c.py": []
}
assert_cmd_snapshot!(command().current_dir(&root), @r#"
success: true
exit_code: 0
----- stdout -----
{
"ruff/__init__.py": [],
"ruff/a.py": [
"ruff/b.py"
],
"ruff/b.py": [],
"ruff/c.py": []
}
----- stderr -----
"###);
----- stderr -----
"#);
});
insta::with_settings!({
@@ -319,7 +319,7 @@ fn globs() -> Result<()> {
insta::with_settings!({
filters => INSTA_FILTERS.to_vec(),
}, {
assert_cmd_snapshot!(command().current_dir(&root), @r###"
assert_cmd_snapshot!(command().current_dir(&root), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -340,7 +340,7 @@ fn globs() -> Result<()> {
}
----- stderr -----
"###);
"#);
});
Ok(())
@@ -368,7 +368,7 @@ fn exclude() -> Result<()> {
insta::with_settings!({
filters => INSTA_FILTERS.to_vec(),
}, {
assert_cmd_snapshot!(command().current_dir(&root), @r###"
assert_cmd_snapshot!(command().current_dir(&root), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -381,7 +381,7 @@ fn exclude() -> Result<()> {
}
----- stderr -----
"###);
"#);
});
Ok(())
@@ -421,7 +421,7 @@ fn wildcard() -> Result<()> {
insta::with_settings!({
filters => INSTA_FILTERS.to_vec(),
}, {
assert_cmd_snapshot!(command().current_dir(&root), @r###"
assert_cmd_snapshot!(command().current_dir(&root), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -443,7 +443,7 @@ fn wildcard() -> Result<()> {
}
----- stderr -----
"###);
"#);
});
Ok(())
@@ -639,7 +639,7 @@ fn venv() -> Result<()> {
}, {
assert_cmd_snapshot!(
command().args(["--python", "none"]).arg("packages/albatross").current_dir(&root),
@r"
@"
success: false
exit_code: 2
----- stdout -----
@@ -695,7 +695,7 @@ fn notebook_basic() -> Result<()> {
insta::with_settings!({
filters => INSTA_FILTERS.to_vec(),
}, {
assert_cmd_snapshot!(command().current_dir(&root), @r###"
assert_cmd_snapshot!(command().current_dir(&root), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -708,7 +708,122 @@ fn notebook_basic() -> Result<()> {
}
----- stderr -----
"###);
"#);
});
Ok(())
}
/// Test that the `src` configuration option is respected.
///
/// This is useful for monorepos where there are multiple source directories that need to be
/// included in the module resolution search path.
#[test]
fn src_option() -> Result<()> {
let tempdir = TempDir::new()?;
let root = ChildPath::new(tempdir.path());
// Create a lib directory with a package.
root.child("lib")
.child("mylib")
.child("__init__.py")
.write_str("def helper(): pass")?;
// Create an app directory with a file that imports from mylib.
root.child("app").child("__init__.py").write_str("")?;
root.child("app")
.child("main.py")
.write_str("from mylib import helper")?;
// Without src configured, the import from mylib won't resolve.
insta::with_settings!({
filters => INSTA_FILTERS.to_vec(),
}, {
assert_cmd_snapshot!(command().arg("app").current_dir(&root), @r#"
success: true
exit_code: 0
----- stdout -----
{
"app/__init__.py": [],
"app/main.py": []
}
----- stderr -----
"#);
});
// With src = ["lib"], the import should resolve.
root.child("ruff.toml").write_str(indoc::indoc! {r#"
src = ["lib"]
"#})?;
insta::with_settings!({
filters => INSTA_FILTERS.to_vec(),
}, {
assert_cmd_snapshot!(command().arg("app").current_dir(&root), @r#"
success: true
exit_code: 0
----- stdout -----
{
"app/__init__.py": [],
"app/main.py": [
"lib/mylib/__init__.py"
]
}
----- stderr -----
"#);
});
Ok(())
}
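// For context, the layout exercised above (and the glob form exercised by the
// test below) boils down to a ruff.toml along these lines; the directory names
// are the fixtures' own, illustrative ones:
//
//     # ruff.toml at the project root
//     # Add extra source roots to the module resolution search path.
//     src = ["lib"]
//
//     # Glob patterns are expanded, so a monorepo can list every member library at once:
//     # src = ["libs/*"]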
/// Test that glob patterns in `src` are expanded.
#[test]
fn src_glob_expansion() -> Result<()> {
let tempdir = TempDir::new()?;
let root = ChildPath::new(tempdir.path());
// Create multiple lib directories with packages.
root.child("libs")
.child("lib_a")
.child("pkg_a")
.child("__init__.py")
.write_str("def func_a(): pass")?;
root.child("libs")
.child("lib_b")
.child("pkg_b")
.child("__init__.py")
.write_str("def func_b(): pass")?;
// Create an app that imports from both packages.
root.child("app").child("__init__.py").write_str("")?;
root.child("app")
.child("main.py")
.write_str("from pkg_a import func_a\nfrom pkg_b import func_b")?;
// Use a glob pattern to include all lib directories.
root.child("ruff.toml").write_str(indoc::indoc! {r#"
src = ["libs/*"]
"#})?;
insta::with_settings!({
filters => INSTA_FILTERS.to_vec(),
}, {
assert_cmd_snapshot!(command().arg("app").current_dir(&root), @r#"
success: true
exit_code: 0
----- stdout -----
{
"app/__init__.py": [],
"app/main.py": [
"libs/lib_a/pkg_a/__init__.py",
"libs/lib_b/pkg_b/__init__.py"
]
}
----- stderr -----
"#);
});
Ok(())
@@ -765,7 +880,7 @@ fn notebook_with_magic() -> Result<()> {
insta::with_settings!({
filters => INSTA_FILTERS.to_vec(),
}, {
assert_cmd_snapshot!(command().current_dir(&root), @r###"
assert_cmd_snapshot!(command().current_dir(&root), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -778,7 +893,7 @@ fn notebook_with_magic() -> Result<()> {
}
----- stderr -----
"###);
"#);
});
Ok(())

View File

@@ -29,7 +29,7 @@ fn type_checking_imports() -> anyhow::Result<()> {
("ruff/c.py", ""),
])?;
assert_cmd_snapshot!(test.command(), @r###"
assert_cmd_snapshot!(test.command(), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -46,12 +46,12 @@ fn type_checking_imports() -> anyhow::Result<()> {
}
----- stderr -----
"###);
"#);
assert_cmd_snapshot!(
test.command()
.arg("--no-type-checking-imports"),
@r###"
@r#"
success: true
exit_code: 0
----- stdout -----
@@ -65,7 +65,7 @@ fn type_checking_imports() -> anyhow::Result<()> {
}
----- stderr -----
"###
"#
);
Ok(())
@@ -103,7 +103,7 @@ fn type_checking_imports_from_config() -> anyhow::Result<()> {
),
])?;
assert_cmd_snapshot!(test.command(), @r###"
assert_cmd_snapshot!(test.command(), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -117,7 +117,7 @@ fn type_checking_imports_from_config() -> anyhow::Result<()> {
}
----- stderr -----
"###);
"#);
test.write_file(
"ruff.toml",
@@ -127,7 +127,7 @@ fn type_checking_imports_from_config() -> anyhow::Result<()> {
"#,
)?;
assert_cmd_snapshot!(test.command(), @r###"
assert_cmd_snapshot!(test.command(), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -144,7 +144,7 @@ fn type_checking_imports_from_config() -> anyhow::Result<()> {
}
----- stderr -----
"###
"#
);
Ok(())

View File

@@ -51,7 +51,7 @@ fn default_files() -> Result<()> {
assert_cmd_snapshot!(test.format_command()
.arg("--isolated")
.arg("--check"), @r"
.arg("--check"), @"
success: false
exit_code: 1
----- stdout -----
@@ -71,7 +71,7 @@ fn format_warn_stdin_filename_with_files() -> Result<()> {
assert_cmd_snapshot!(test.format_command()
.args(["--isolated", "--stdin-filename", "foo.py"])
.arg("foo.py")
.pass_stdin("foo = 1"), @r"
.pass_stdin("foo = 1"), @"
success: true
exit_code: 0
----- stdout -----
@@ -87,7 +87,7 @@ fn format_warn_stdin_filename_with_files() -> Result<()> {
fn nonexistent_config_file() -> Result<()> {
let test = CliTest::new()?;
assert_cmd_snapshot!(test.format_command()
.args(["--config", "foo.toml", "."]), @r"
.args(["--config", "foo.toml", "."]), @"
success: false
exit_code: 2
----- stdout -----
@@ -111,7 +111,7 @@ fn nonexistent_config_file() -> Result<()> {
fn config_override_rejected_if_invalid_toml() -> Result<()> {
let test = CliTest::new()?;
assert_cmd_snapshot!(test.format_command()
.args(["--config", "foo = bar", "."]), @r"
.args(["--config", "foo = bar", "."]), @"
success: false
exit_code: 2
----- stdout -----
@@ -145,7 +145,7 @@ fn too_many_config_files() -> Result<()> {
.arg("ruff.toml")
.arg("--config")
.arg("ruff2.toml")
.arg("."), @r"
.arg("."), @"
success: false
exit_code: 2
----- stdout -----
@@ -168,7 +168,7 @@ fn config_file_and_isolated() -> Result<()> {
.arg("--isolated")
.arg("--config")
.arg("ruff.toml")
.arg("."), @r"
.arg("."), @"
success: false
exit_code: 2
----- stdout -----
@@ -390,7 +390,7 @@ fn mixed_line_endings() -> Result<()> {
assert_cmd_snapshot!(test.format_command()
.arg("--diff")
.arg("--isolated")
.arg("."), @r"
.arg("."), @"
success: true
exit_code: 0
----- stdout -----
@@ -446,7 +446,7 @@ OTHER = "OTHER"
// Explicitly pass test.py, should be formatted regardless of it being excluded by format.exclude
.arg("test.py")
// Format all other files in the directory, should respect the `exclude` and `format.exclude` options
.arg("."), @r"
.arg("."), @"
success: false
exit_code: 1
----- stdout -----
@@ -469,7 +469,7 @@ fn deduplicate_directory_and_explicit_file() -> Result<()> {
.arg("--check")
.arg(".")
.arg("main.py"),
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -495,7 +495,7 @@ from module import =
assert_cmd_snapshot!(test.format_command()
.arg("--check")
.arg("--isolated")
.arg("main.py"), @r"
.arg("main.py"), @"
success: false
exit_code: 2
----- stdout -----
@@ -522,7 +522,7 @@ if __name__ == "__main__":
assert_cmd_snapshot!(test.format_command()
.arg("--isolated")
.arg("--check")
.arg("main.py"), @r"
.arg("main.py"), @"
success: false
exit_code: 1
----- stdout -----
@@ -534,7 +534,7 @@ if __name__ == "__main__":
assert_cmd_snapshot!(test.format_command()
.arg("--isolated")
.arg("main.py"), @r"
.arg("main.py"), @"
success: true
exit_code: 0
----- stdout -----
@@ -545,7 +545,7 @@ if __name__ == "__main__":
assert_cmd_snapshot!(test.format_command()
.arg("--isolated")
.arg("main.py"), @r"
.arg("main.py"), @"
success: true
exit_code: 0
----- stdout -----
@@ -614,7 +614,7 @@ fn output_format_notebook() -> Result<()> {
assert_cmd_snapshot!(
test.format_command().args(["--isolated", "--preview", "--check"]).arg(path),
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -672,7 +672,7 @@ if __name__ == "__main__":
assert_cmd_snapshot!(test.format_command()
.arg("--isolated")
.arg("--exit-non-zero-on-format")
.arg("main.py"), @r"
.arg("main.py"), @"
success: false
exit_code: 1
----- stdout -----
@@ -685,7 +685,7 @@ if __name__ == "__main__":
assert_cmd_snapshot!(test.format_command()
.arg("--isolated")
.arg("--exit-non-zero-on-format")
.arg("main.py"), @r"
.arg("main.py"), @"
success: true
exit_code: 0
----- stdout -----
@@ -701,7 +701,7 @@ if __name__ == "__main__":
assert_cmd_snapshot!(test.format_command()
.arg("--isolated")
.arg("--exit-non-zero-on-fix")
.arg("main.py"), @r"
.arg("main.py"), @"
success: false
exit_code: 1
----- stdout -----
@@ -714,7 +714,7 @@ if __name__ == "__main__":
assert_cmd_snapshot!(test.format_command()
.arg("--isolated")
.arg("--exit-non-zero-on-fix")
.arg("main.py"), @r"
.arg("main.py"), @"
success: true
exit_code: 0
----- stdout -----
@@ -771,7 +771,7 @@ OTHER = "OTHER"
// Explicitly pass test.py, should not be formatted because of --force-exclude
.arg("test.py")
// Format all other files in the directory, should respect the `exclude` and `format.exclude` options
.arg("."), @r"
.arg("."), @"
success: false
exit_code: 1
----- stdout -----
@@ -931,7 +931,7 @@ tab-size = 2
.pass_stdin(r"
if True:
pass
"), @r"
"), @"
success: false
exit_code: 2
----- stdout -----
@@ -1144,7 +1144,7 @@ def say_hy(name: str):
assert_cmd_snapshot!(test.format_command()
.arg("--config")
.arg("ruff.toml")
.arg("test.py"), @r"
.arg("test.py"), @"
success: true
exit_code: 0
----- stdout -----
@@ -1184,7 +1184,7 @@ def say_hy(name: str):
assert_cmd_snapshot!(test.format_command()
.arg("--config")
.arg("ruff.toml")
.arg("test.py"), @r"
.arg("test.py"), @"
success: true
exit_code: 0
----- stdout -----
@@ -1216,7 +1216,7 @@ def say_hy(name: str):
assert_cmd_snapshot!(test.format_command()
.arg("--config")
.arg("ruff.toml")
.arg("test.py"), @r"
.arg("test.py"), @"
success: true
exit_code: 0
----- stdout -----
@@ -1246,7 +1246,7 @@ fn test_diff() -> Result<()> {
assert_cmd_snapshot!(
test.format_command().args(["--isolated", "--diff"]).args(paths),
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -1311,7 +1311,7 @@ fn test_diff_no_change() -> Result<()> {
let paths = [fixtures.join("unformatted.py")];
assert_cmd_snapshot!(
test.format_command().args(["--isolated", "--diff"]).args(paths),
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -1341,7 +1341,7 @@ fn test_diff_stdin_unformatted() -> Result<()> {
test.format_command()
.args(["--isolated", "--diff", "-", "--stdin-filename", "unformatted.py"])
.pass_stdin(unformatted),
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -1366,7 +1366,7 @@ fn test_diff_stdin_formatted() -> Result<()> {
let unformatted = fs::read(fixtures.join("formatted.py")).unwrap();
assert_cmd_snapshot!(
test.format_command().args(["--isolated", "--diff", "-"]).pass_stdin(unformatted),
@r"
@"
success: true
exit_code: 0
----- stdout -----
@@ -1873,7 +1873,7 @@ include = ["*.ipy"]
assert_cmd_snapshot!(test.format_command()
.args(["--config", "ruff.toml"])
.args(["--extension", "ipy:ipynb"])
.arg("."), @r"
.arg("."), @"
success: false
exit_code: 2
----- stdout -----
@@ -1938,7 +1938,7 @@ include = ["*.ipy"]
assert_cmd_snapshot!(test.format_command()
.args(["--config", "ruff.toml"])
.args(["--extension", "ipy:ipynb"])
.arg("."), @r"
.arg("."), @"
success: true
exit_code: 0
----- stdout -----
@@ -2021,7 +2021,7 @@ def file2(arg1, arg2,):
assert_cmd_snapshot!(test.format_command()
.args(["--isolated", "--range=1:8-1:15"])
.arg("file1.py")
.arg("file2.py"), @r"
.arg("file2.py"), @"
success: false
exit_code: 2
----- stdout -----
@@ -2068,7 +2068,7 @@ fn range_start_larger_than_end() -> Result<()> {
def foo(arg1, arg2,):
print("Shouldn't format this" )
"#), @r"
"#), @"
success: false
exit_code: 2
----- stdout -----
@@ -2168,7 +2168,7 @@ fn range_missing_line() -> Result<()> {
def foo(arg1, arg2,):
print("Should format this" )
"#), @r"
"#), @"
success: false
exit_code: 2
----- stdout -----
@@ -2192,7 +2192,7 @@ fn zero_line_number() -> Result<()> {
def foo(arg1, arg2,):
print("Should format this" )
"#), @r"
"#), @"
success: false
exit_code: 2
----- stdout -----
@@ -2217,7 +2217,7 @@ fn column_and_line_zero() -> Result<()> {
def foo(arg1, arg2,):
print("Should format this" )
"#), @r"
"#), @"
success: false
exit_code: 2
----- stdout -----
@@ -2274,7 +2274,7 @@ fn range_formatting_notebook() -> Result<()> {
"nbformat": 4,
"nbformat_minor": 5
}
"#), @r"
"#), @"
success: false
exit_code: 2
----- stdout -----
@@ -2355,7 +2355,7 @@ fn cookiecutter_globbing() -> Result<()> {
])?;
assert_cmd_snapshot!(test.format_command()
.args(["--isolated", "--diff", "."]), @r"
.args(["--isolated", "--diff", "."]), @"
success: true
exit_code: 0
----- stdout -----
@@ -2374,7 +2374,7 @@ fn stable_output_format_warning() -> Result<()> {
test.format_command()
.args(["--output-format=full", "-"])
.pass_stdin(""),
@r"
@"
success: true
exit_code: 0
----- stdout -----

File diff suppressed because it is too large

View File

@@ -1,5 +1,5 @@
---
source: crates/ruff/tests/lint.rs
source: crates/ruff/tests/cli/lint.rs
info:
program: ruff
args:
@@ -17,7 +17,6 @@ info:
- "--fix"
- "-"
stdin: "1"
snapshot_kind: text
---
success: false
exit_code: 2

View File

@@ -1,5 +1,5 @@
---
source: crates/ruff/tests/lint.rs
source: crates/ruff/tests/cli/lint.rs
info:
program: ruff
args:

View File

@@ -1,5 +1,5 @@
---
source: crates/ruff/tests/lint.rs
source: crates/ruff/tests/cli/lint.rs
info:
program: ruff
args:

View File

@@ -1,5 +1,5 @@
---
source: crates/ruff/tests/lint.rs
source: crates/ruff/tests/cli/lint.rs
info:
program: ruff
args:

View File

@@ -1,5 +1,5 @@
---
source: crates/ruff/tests/lint.rs
source: crates/ruff/tests/cli/lint.rs
info:
program: ruff
args:

View File

@@ -1,5 +1,5 @@
---
source: crates/ruff/tests/lint.rs
source: crates/ruff/tests/cli/lint.rs
info:
program: ruff
args:

View File

@@ -1,5 +1,5 @@
---
source: crates/ruff/tests/lint.rs
source: crates/ruff/tests/cli/lint.rs
info:
program: ruff
args:

View File

@@ -1,5 +1,5 @@
---
source: crates/ruff/tests/lint.rs
source: crates/ruff/tests/cli/lint.rs
info:
program: ruff
args:

View File

@@ -1,5 +1,5 @@
---
source: crates/ruff/tests/lint.rs
source: crates/ruff/tests/cli/lint.rs
info:
program: ruff
args:

View File

@@ -1,5 +1,5 @@
---
source: crates/ruff/tests/lint.rs
source: crates/ruff/tests/cli/lint.rs
info:
program: ruff
args:

View File

@@ -1,5 +1,5 @@
---
source: crates/ruff/tests/lint.rs
source: crates/ruff/tests/cli/lint.rs
info:
program: ruff
args:

View File

@@ -1,5 +1,5 @@
---
source: crates/ruff/tests/lint.rs
source: crates/ruff/tests/cli/lint.rs
info:
program: ruff
args:

View File

@@ -1,5 +1,5 @@
---
source: crates/ruff/tests/lint.rs
source: crates/ruff/tests/cli/lint.rs
info:
program: ruff
args:

View File

@@ -12,7 +12,6 @@ info:
- "--target-version"
- py39
- input.py
snapshot_kind: text
---
success: false
exit_code: 1

View File

@@ -1,5 +1,5 @@
---
source: crates/ruff/tests/lint.rs
source: crates/ruff/tests/cli/lint.rs
info:
program: ruff
args:

View File

@@ -1,5 +1,5 @@
---
source: crates/ruff/tests/lint.rs
source: crates/ruff/tests/cli/lint.rs
info:
program: ruff
args:

View File

@@ -1,5 +1,5 @@
---
source: crates/ruff/tests/lint.rs
source: crates/ruff/tests/cli/lint.rs
info:
program: ruff
args:

View File

@@ -16,6 +16,7 @@ success: true
exit_code: 0
----- stdout -----
Resolved settings for: "[TMP]/foo/test.py"
Settings path: "[TMP]/foo/pyproject.toml"
# General Settings
cache_dir = "[TMP]/foo/.ruff_cache"

View File

@@ -1,5 +1,5 @@
---
source: crates/ruff/tests/lint.rs
source: crates/ruff/tests/cli/lint.rs
info:
program: ruff
args:

View File

@@ -18,13 +18,13 @@ fn check_in_deleted_directory_errors() {
set_current_dir(&temp_path).unwrap();
drop(temp_dir);
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME)).arg("check"), @r###"
success: false
exit_code: 2
----- stdout -----
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME)).arg("check"), @"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: Working directory does not exist
"###);
----- stderr -----
ruff failed
Cause: Working directory does not exist
");
}

View File

@@ -97,7 +97,7 @@ impl<'a> RuffCheck<'a> {
fn stdin_success() {
let mut cmd = RuffCheck::default().args([]).build();
assert_cmd_snapshot!(cmd
.pass_stdin(""), @r"
.pass_stdin(""), @"
success: true
exit_code: 0
----- stdout -----
@@ -111,7 +111,7 @@ fn stdin_success() {
fn stdin_error() {
let mut cmd = RuffCheck::default().args([]).build();
assert_cmd_snapshot!(cmd
.pass_stdin("import os\n"), @r"
.pass_stdin("import os\n"), @"
success: false
exit_code: 1
----- stdout -----
@@ -136,7 +136,7 @@ fn stdin_filename() {
.args(["--stdin-filename", "F401.py"])
.build();
assert_cmd_snapshot!(cmd
.pass_stdin("import os\n"), @r"
.pass_stdin("import os\n"), @"
success: false
exit_code: 1
----- stdout -----
@@ -172,7 +172,7 @@ import bar # unused import
)?;
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["check", "--isolated", "--no-cache", "--select", "F401"]).current_dir(tempdir.path()), @r"
.args(["check", "--isolated", "--no-cache", "--select", "F401"]).current_dir(tempdir.path()), @"
success: false
exit_code: 1
----- stdout -----
@@ -208,7 +208,7 @@ fn check_warn_stdin_filename_with_files() {
.filename("foo.py")
.build();
assert_cmd_snapshot!(cmd
.pass_stdin("import os\n"), @r"
.pass_stdin("import os\n"), @"
success: false
exit_code: 1
----- stdout -----
@@ -235,7 +235,7 @@ fn stdin_source_type_py() {
.args(["--stdin-filename", "TCH.py"])
.build();
assert_cmd_snapshot!(cmd
.pass_stdin("import os\n"), @r"
.pass_stdin("import os\n"), @"
success: false
exit_code: 1
----- stdout -----
@@ -261,7 +261,7 @@ fn stdin_source_type_pyi() {
.args(["--stdin-filename", "TCH.pyi", "--select", "TCH"])
.build();
assert_cmd_snapshot!(cmd
.pass_stdin("import os\n"), @r"
.pass_stdin("import os\n"), @"
success: true
exit_code: 0
----- stdout -----
@@ -294,7 +294,7 @@ fn stdin_json() {
fn stdin_fix_py() {
let mut cmd = RuffCheck::default().args(["--fix"]).build();
assert_cmd_snapshot!(cmd
.pass_stdin("import os\nimport sys\n\nprint(sys.version)\n"), @r"
.pass_stdin("import os\nimport sys\n\nprint(sys.version)\n"), @"
success: true
exit_code: 0
----- stdout -----
@@ -572,7 +572,7 @@ fn stdin_override_parser_ipynb() {
},
"nbformat": 4,
"nbformat_minor": 5
}"#), @r"
}"#), @"
success: false
exit_code: 1
----- stdout -----
@@ -610,7 +610,7 @@ fn stdin_override_parser_py() {
])
.build();
assert_cmd_snapshot!(cmd
.pass_stdin("import os\n"), @r"
.pass_stdin("import os\n"), @"
success: false
exit_code: 1
----- stdout -----
@@ -633,7 +633,7 @@ fn stdin_override_parser_py() {
fn stdin_fix_when_not_fixable_should_still_print_contents() {
let mut cmd = RuffCheck::default().args(["--fix"]).build();
assert_cmd_snapshot!(cmd
.pass_stdin("import os\nimport sys\n\nif (1, 2):\n print(sys.version)\n"), @r###"
.pass_stdin("import os\nimport sys\n\nif (1, 2):\n print(sys.version)\n"), @"
success: false
exit_code: 1
----- stdout -----
@@ -654,14 +654,14 @@ fn stdin_fix_when_not_fixable_should_still_print_contents() {
|
Found 2 errors (1 fixed, 1 remaining).
"###);
");
}
#[test]
fn stdin_fix_when_no_issues_should_still_print_contents() {
let mut cmd = RuffCheck::default().args(["--fix"]).build();
assert_cmd_snapshot!(cmd
.pass_stdin("import sys\n\nprint(sys.version)\n"), @r"
.pass_stdin("import sys\n\nprint(sys.version)\n"), @"
success: true
exit_code: 0
----- stdout -----
@@ -805,7 +805,7 @@ fn stdin_format_jupyter() {
fn stdin_parse_error() {
let mut cmd = RuffCheck::default().build();
assert_cmd_snapshot!(cmd
.pass_stdin("from foo import\n"), @r"
.pass_stdin("from foo import\n"), @"
success: false
exit_code: 1
----- stdout -----
@@ -826,7 +826,7 @@ fn stdin_parse_error() {
fn stdin_multiple_parse_error() {
let mut cmd = RuffCheck::default().build();
assert_cmd_snapshot!(cmd
.pass_stdin("from foo import\nbar =\n"), @r"
.pass_stdin("from foo import\nbar =\n"), @"
success: false
exit_code: 1
----- stdout -----
@@ -857,7 +857,7 @@ fn parse_error_not_included() {
// Parse errors are always shown
let mut cmd = RuffCheck::default().args(["--select=I"]).build();
assert_cmd_snapshot!(cmd
.pass_stdin("foo =\n"), @r"
.pass_stdin("foo =\n"), @"
success: false
exit_code: 1
----- stdout -----
@@ -878,7 +878,7 @@ fn parse_error_not_included() {
fn full_output_preview() {
let mut cmd = RuffCheck::default().args(["--preview"]).build();
assert_cmd_snapshot!(cmd
.pass_stdin("l = 1"), @r"
.pass_stdin("l = 1"), @"
success: false
exit_code: 1
----- stdout -----
@@ -907,7 +907,7 @@ preview = true
",
)?;
let mut cmd = RuffCheck::default().config(&pyproject_toml).build();
assert_cmd_snapshot!(cmd.pass_stdin("l = 1"), @r"
assert_cmd_snapshot!(cmd.pass_stdin("l = 1"), @"
success: false
exit_code: 1
----- stdout -----
@@ -929,7 +929,7 @@ preview = true
fn full_output_format() {
let mut cmd = RuffCheck::default().output_format("full").build();
assert_cmd_snapshot!(cmd
.pass_stdin("l = 1"), @r"
.pass_stdin("l = 1"), @"
success: false
exit_code: 1
----- stdout -----
@@ -967,7 +967,7 @@ fn rule_f401_output_text() {
#[test]
fn rule_invalid_rule_name() {
assert_cmd_snapshot!(ruff_cmd().args(["rule", "RUF404"]), @r"
assert_cmd_snapshot!(ruff_cmd().args(["rule", "RUF404"]), @"
success: false
exit_code: 2
----- stdout -----
@@ -981,7 +981,7 @@ fn rule_invalid_rule_name() {
#[test]
fn rule_invalid_rule_name_output_json() {
assert_cmd_snapshot!(ruff_cmd().args(["rule", "RUF404", "--output-format", "json"]), @r"
assert_cmd_snapshot!(ruff_cmd().args(["rule", "RUF404", "--output-format", "json"]), @"
success: false
exit_code: 2
----- stdout -----
@@ -995,7 +995,7 @@ fn rule_invalid_rule_name_output_json() {
#[test]
fn rule_invalid_rule_name_output_text() {
assert_cmd_snapshot!(ruff_cmd().args(["rule", "RUF404", "--output-format", "text"]), @r"
assert_cmd_snapshot!(ruff_cmd().args(["rule", "RUF404", "--output-format", "text"]), @"
success: false
exit_code: 2
----- stdout -----
@@ -1016,7 +1016,7 @@ fn show_statistics() {
.pass_stdin(r#"
def mvce(keys, values):
return {key: value for key, value in zip(keys, values)}
"#), @r"
"#), @"
success: false
exit_code: 1
----- stdout -----
@@ -1037,7 +1037,7 @@ fn show_statistics_unsafe_fixes() {
.pass_stdin(r#"
def mvce(keys, values):
return {key: value for key, value in zip(keys, values)}
"#), @r"
"#), @"
success: false
exit_code: 1
----- stdout -----
@@ -1152,7 +1152,7 @@ fn show_statistics_partial_fix() {
.args(["--select", "UP035", "--statistics"])
.build();
assert_cmd_snapshot!(cmd
.pass_stdin("from typing import List, AsyncGenerator"), @r"
.pass_stdin("from typing import List, AsyncGenerator"), @"
success: false
exit_code: 1
----- stdout -----
@@ -1173,7 +1173,7 @@ fn show_statistics_syntax_errors() {
// ParseError
assert_cmd_snapshot!(
cmd.pass_stdin("x ="),
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -1186,7 +1186,7 @@ fn show_statistics_syntax_errors() {
// match before 3.10, UnsupportedSyntaxError
assert_cmd_snapshot!(
cmd.pass_stdin("match 2:\n case 1: ..."),
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -1199,7 +1199,7 @@ fn show_statistics_syntax_errors() {
// rebound comprehension variable, SemanticSyntaxError
assert_cmd_snapshot!(
cmd.pass_stdin("[x := 1 for x in range(0)]"),
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -1216,7 +1216,7 @@ fn preview_enabled_prefix() {
let mut cmd = RuffCheck::default()
.args(["--select", "RUF9", "--output-format=concise", "--preview"])
.build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: false
exit_code: 1
----- stdout -----
@@ -1238,7 +1238,7 @@ fn preview_enabled_all() {
let mut cmd = RuffCheck::default()
.args(["--select", "ALL", "--output-format=concise", "--preview"])
.build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: false
exit_code: 1
----- stdout -----
@@ -1265,7 +1265,7 @@ fn preview_enabled_direct() {
let mut cmd = RuffCheck::default()
.args(["--select", "RUF911", "--output-format=concise", "--preview"])
.build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: false
exit_code: 1
----- stdout -----
@@ -1282,7 +1282,7 @@ fn preview_disabled_direct() {
let mut cmd = RuffCheck::default()
.args(["--select", "RUF911", "--output-format=concise"])
.build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: true
exit_code: 0
----- stdout -----
@@ -1299,7 +1299,7 @@ fn preview_disabled_prefix_empty() {
let mut cmd = RuffCheck::default()
.args(["--select", "RUF91", "--output-format=concise"])
.build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: true
exit_code: 0
----- stdout -----
@@ -1316,7 +1316,7 @@ fn preview_disabled_does_not_warn_for_empty_ignore_selections() {
let mut cmd = RuffCheck::default()
.args(["--ignore", "RUF9", "--output-format=concise"])
.build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: true
exit_code: 0
----- stdout -----
@@ -1332,7 +1332,7 @@ fn preview_disabled_does_not_warn_for_empty_fixable_selections() {
let mut cmd = RuffCheck::default()
.args(["--fixable", "RUF9", "--output-format=concise"])
.build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: true
exit_code: 0
----- stdout -----
@@ -1354,7 +1354,7 @@ fn preview_group_selector() {
])
.build();
assert_cmd_snapshot!(cmd
.pass_stdin("I=42\n"), @r"
.pass_stdin("I=42\n"), @"
success: false
exit_code: 2
----- stdout -----
@@ -1379,7 +1379,7 @@ fn preview_enabled_group_ignore() {
"--output-format=concise",
])
.build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: false
exit_code: 1
----- stdout -----
@@ -1400,7 +1400,7 @@ fn preview_enabled_group_ignore() {
fn removed_direct() {
// Selection of a removed rule should fail
let mut cmd = RuffCheck::default().args(["--select", "RUF931"]).build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: false
exit_code: 2
----- stdout -----
@@ -1418,7 +1418,7 @@ fn removed_direct_multiple() {
let mut cmd = RuffCheck::default()
.args(["--select", "RUF930", "--select", "RUF931"])
.build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: false
exit_code: 2
----- stdout -----
@@ -1436,7 +1436,7 @@ fn removed_indirect() {
// Selection _including_ a removed rule without matching should not fail
// nor should the rule be used
let mut cmd = RuffCheck::default().args(["--select", "RUF93"]).build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: true
exit_code: 0
----- stdout -----
@@ -1449,7 +1449,7 @@ fn removed_indirect() {
#[test]
fn removed_ignore_direct() {
let mut cmd = RuffCheck::default().args(["--ignore", "UP027"]).build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: true
exit_code: 0
----- stdout -----
@@ -1466,7 +1466,7 @@ fn removed_ignore_multiple_direct() {
let mut cmd = RuffCheck::default()
.args(["--ignore", "UP027", "--ignore", "PLR1706"])
.build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: true
exit_code: 0
----- stdout -----
@@ -1482,7 +1482,7 @@ fn removed_ignore_multiple_direct() {
#[test]
fn removed_ignore_remapped_direct() {
let mut cmd = RuffCheck::default().args(["--ignore", "PGH001"]).build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: true
exit_code: 0
----- stdout -----
@@ -1498,7 +1498,7 @@ fn removed_ignore_indirect() {
// `PLR170` includes removed rules but should not select or warn
// since it is not a "direct" selection
let mut cmd = RuffCheck::default().args(["--ignore", "PLR170"]).build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: true
exit_code: 0
----- stdout -----
@@ -1512,7 +1512,7 @@ fn removed_ignore_indirect() {
fn redirect_direct() {
// Selection of a redirected rule directly should use the new rule and warn
let mut cmd = RuffCheck::default().args(["--select", "RUF940"]).build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: false
exit_code: 1
----- stdout -----
@@ -1531,7 +1531,7 @@ fn redirect_indirect() {
// Selection _including_ a redirected rule without matching should not fail
// nor should the rule be used
let mut cmd = RuffCheck::default().args(["--select", "RUF94"]).build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: true
exit_code: 0
----- stdout -----
@@ -1546,7 +1546,7 @@ fn redirect_prefix() {
// Selection using a redirected prefix should switch to all rules in the
// new prefix
let mut cmd = RuffCheck::default().args(["--select", "RUF96"]).build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: false
exit_code: 1
----- stdout -----
@@ -1565,7 +1565,7 @@ fn deprecated_direct() {
// Selection of a deprecated rule without preview enabled should still work
// but a warning should be displayed
let mut cmd = RuffCheck::default().args(["--select", "RUF920"]).build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: false
exit_code: 1
----- stdout -----
@@ -1586,7 +1586,7 @@ fn deprecated_multiple_direct() {
let mut cmd = RuffCheck::default()
.args(["--select", "RUF920", "--select", "RUF921"])
.build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: false
exit_code: 1
----- stdout -----
@@ -1609,7 +1609,7 @@ fn deprecated_indirect() {
// `RUF92` includes deprecated rules but should not warn
// since it is not a "direct" selection
let mut cmd = RuffCheck::default().args(["--select", "RUF92"]).build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: true
exit_code: 0
----- stdout -----
@@ -1625,7 +1625,7 @@ fn deprecated_direct_preview_enabled() {
let mut cmd = RuffCheck::default()
.args(["--select", "RUF920", "--preview"])
.build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: false
exit_code: 2
----- stdout -----
@@ -1642,7 +1642,7 @@ fn deprecated_indirect_preview_enabled() {
let mut cmd = RuffCheck::default()
.args(["--select", "RUF92", "--preview"])
.build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: true
exit_code: 0
----- stdout -----
@@ -1659,7 +1659,7 @@ fn deprecated_multiple_direct_preview_enabled() {
let mut cmd = RuffCheck::default()
.args(["--select", "RUF920", "--select", "RUF921", "--preview"])
.build();
assert_cmd_snapshot!(cmd, @r"
assert_cmd_snapshot!(cmd, @"
success: false
exit_code: 2
----- stdout -----
@@ -1720,7 +1720,7 @@ fn unreadable_dir() -> Result<()> {
.filename(unreadable_dir.to_str().unwrap())
.args([])
.build();
assert_cmd_snapshot!(cmd, @r###"
assert_cmd_snapshot!(cmd, @"
success: true
exit_code: 0
----- stdout -----
@@ -1728,7 +1728,7 @@ fn unreadable_dir() -> Result<()> {
----- stderr -----
warning: Encountered error: Permission denied (os error 13)
"###);
");
Ok(())
}
@@ -1758,7 +1758,7 @@ fn check_input_from_argfile() -> Result<()> {
(file_a_path.display().to_string().as_str(), "/path/to/a.py"),
]}, {
assert_cmd_snapshot!(cmd
.pass_stdin(""), @r"
.pass_stdin(""), @"
success: false
exit_code: 1
----- stdout -----
@@ -1787,17 +1787,17 @@ fn missing_argfile_reports_error() {
insta::with_settings!({filters => vec![
("The system cannot find the file specified.", "No such file or directory")
]}, {
assert_cmd_snapshot!(cmd, @r"
success: false
exit_code: 2
----- stdout -----
assert_cmd_snapshot!(cmd, @"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: Failed to read CLI arguments from files
Cause: failed to open file `!.txt`
Cause: No such file or directory (os error 2)
");
----- stderr -----
ruff failed
Cause: Failed to read CLI arguments from files
Cause: failed to open file `!.txt`
Cause: No such file or directory (os error 2)
");
});
}
@@ -1807,7 +1807,7 @@ fn check_hints_hidden_unsafe_fixes() {
.args(["--select", "RUF901,RUF902"])
.build();
assert_cmd_snapshot!(cmd,
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -1829,7 +1829,7 @@ fn check_hints_hidden_unsafe_fixes_with_no_safe_fixes() {
let mut cmd = RuffCheck::default().args(["--select", "RUF902"]).build();
assert_cmd_snapshot!(cmd
.pass_stdin("x = {'a': 1, 'a': 1}\n"),
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -1849,7 +1849,7 @@ fn check_no_hint_for_hidden_unsafe_fixes_when_disabled() {
.args(["--select", "RUF901,RUF902", "--no-unsafe-fixes"])
.build();
assert_cmd_snapshot!(cmd,
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -1873,7 +1873,7 @@ fn check_no_hint_for_hidden_unsafe_fixes_with_no_safe_fixes_when_disabled() {
.build();
assert_cmd_snapshot!(cmd
.pass_stdin("x = {'a': 1, 'a': 1}\n"),
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -1892,7 +1892,7 @@ fn check_shows_unsafe_fixes_with_opt_in() {
.args(["--select", "RUF901,RUF902", "--unsafe-fixes"])
.build();
assert_cmd_snapshot!(cmd,
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -1915,7 +1915,7 @@ fn fix_applies_safe_fixes_by_default() {
.args(["--select", "RUF901,RUF902", "--fix"])
.build();
assert_cmd_snapshot!(cmd,
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -1936,7 +1936,7 @@ fn fix_applies_unsafe_fixes_with_opt_in() {
.args(["--select", "RUF901,RUF902", "--fix", "--unsafe-fixes"])
.build();
assert_cmd_snapshot!(cmd,
@r"
@"
success: true
exit_code: 0
----- stdout -----
@@ -1955,7 +1955,7 @@ fn fix_does_not_apply_display_only_fixes() {
.build();
assert_cmd_snapshot!(cmd
.pass_stdin("def add_to_list(item, some_list=[]): ..."),
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -1975,7 +1975,7 @@ fn fix_does_not_apply_display_only_fixes_with_unsafe_fixes_enabled() {
.build();
assert_cmd_snapshot!(cmd
.pass_stdin("def add_to_list(item, some_list=[]): ..."),
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -1994,7 +1994,7 @@ fn fix_only_unsafe_fixes_available() {
.args(["--select", "RUF902", "--fix"])
.build();
assert_cmd_snapshot!(cmd,
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -2014,7 +2014,7 @@ fn fix_only_flag_applies_safe_fixes_by_default() {
.args(["--select", "RUF901,RUF902", "--fix-only"])
.build();
assert_cmd_snapshot!(cmd,
@r"
@"
success: true
exit_code: 0
----- stdout -----
@@ -2031,7 +2031,7 @@ fn fix_only_flag_applies_unsafe_fixes_with_opt_in() {
.args(["--select", "RUF901,RUF902", "--fix-only", "--unsafe-fixes"])
.build();
assert_cmd_snapshot!(cmd,
@r"
@"
success: true
exit_code: 0
----- stdout -----
@@ -2049,7 +2049,7 @@ fn diff_shows_safe_fixes_by_default() {
.args(["--select", "RUF901,RUF902", "--diff"])
.build();
assert_cmd_snapshot!(cmd,
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -2069,7 +2069,7 @@ fn diff_shows_unsafe_fixes_with_opt_in() {
.args(["--select", "RUF901,RUF902", "--diff", "--unsafe-fixes"])
.build();
assert_cmd_snapshot!(cmd,
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -2091,7 +2091,7 @@ fn diff_does_not_show_display_only_fixes_with_unsafe_fixes_enabled() {
.build();
assert_cmd_snapshot!(cmd
.pass_stdin("def add_to_list(item, some_list=[]): ..."),
@r"
@"
success: true
exit_code: 0
----- stdout -----
@@ -2106,7 +2106,7 @@ fn diff_only_unsafe_fixes_available() {
.args(["--select", "RUF902", "--diff"])
.build();
assert_cmd_snapshot!(cmd,
@r"
@"
success: true
exit_code: 0
----- stdout -----
@@ -2134,7 +2134,7 @@ extend-unsafe-fixes = ["RUF901"]
.args(["--select", "RUF901,RUF902"])
.build();
assert_cmd_snapshot!(cmd,
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -2170,7 +2170,7 @@ extend-safe-fixes = ["RUF902"]
.args(["--select", "RUF901,RUF902"])
.build();
assert_cmd_snapshot!(cmd,
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -2208,7 +2208,7 @@ extend-safe-fixes = ["RUF902"]
.args(["--select", "RUF901,RUF902"])
.build();
assert_cmd_snapshot!(cmd,
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -2248,7 +2248,7 @@ extend-safe-fixes = ["RUF9"]
.build();
assert_cmd_snapshot!(cmd
.pass_stdin("x = {'a': 1, 'a': 1}\nprint(('foo'))\nprint(str('foo'))\nisinstance(x, (int, str))\n"),
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -2307,7 +2307,7 @@ def log(x, base) -> float:
.args(["--select", "D41"])
.build();
assert_cmd_snapshot!(cmd
.pass_stdin(stdin), @r"
.pass_stdin(stdin), @"
success: true
exit_code: 0
----- stdout -----
@@ -2360,7 +2360,7 @@ select = ["RUF017"]
let mut cmd = RuffCheck::default().config(&ruff_toml).build();
assert_cmd_snapshot!(cmd
.pass_stdin("x = [1, 2, 3]\ny = [4, 5, 6]\nsum([x, y], [])"),
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -2401,7 +2401,7 @@ unfixable = ["RUF"]
let mut cmd = RuffCheck::default().config(&ruff_toml).build();
assert_cmd_snapshot!(cmd
.pass_stdin("x = [1, 2, 3]\ny = [4, 5, 6]\nsum([x, y], [])"),
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -2431,7 +2431,7 @@ fn pyproject_toml_stdin_syntax_error() {
assert_cmd_snapshot!(
cmd.pass_stdin("[project"),
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -2457,7 +2457,7 @@ fn pyproject_toml_stdin_schema_error() {
assert_cmd_snapshot!(
cmd.pass_stdin("[project]\nname = 1"),
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -2484,7 +2484,7 @@ fn pyproject_toml_stdin_no_applicable_rules_selected() {
assert_cmd_snapshot!(
cmd.pass_stdin("[project"),
@r"
@"
success: true
exit_code: 0
----- stdout -----
@@ -2503,7 +2503,7 @@ fn pyproject_toml_stdin_no_applicable_rules_selected_2() {
assert_cmd_snapshot!(
cmd.pass_stdin("[project"),
@r"
@"
success: true
exit_code: 0
----- stdout -----
@@ -2522,7 +2522,7 @@ fn pyproject_toml_stdin_no_errors() {
assert_cmd_snapshot!(
cmd.pass_stdin(r#"[project]\nname = "ruff"\nversion = "0.0.0""#),
@r"
@"
success: true
exit_code: 0
----- stdout -----
@@ -2547,7 +2547,7 @@ fn pyproject_toml_stdin_schema_error_fix() {
assert_cmd_snapshot!(
cmd.pass_stdin("[project]\nname = 1"),
@r"
@"
success: false
exit_code: 1
----- stdout -----
@@ -2581,7 +2581,7 @@ fn pyproject_toml_stdin_schema_error_fix_only() {
assert_cmd_snapshot!(
cmd.pass_stdin("[project]\nname = 1"),
@r"
@"
success: true
exit_code: 0
----- stdout -----
@@ -2607,7 +2607,7 @@ fn pyproject_toml_stdin_schema_error_fix_diff() {
assert_cmd_snapshot!(
cmd.pass_stdin("[project]\nname = 1"),
@r"
@"
success: true
exit_code: 0
----- stdout -----

View File

@@ -29,7 +29,7 @@ fn check_project_include_defaults() {
filters => TEST_FILTERS.to_vec()
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["check", "--show-files"]).current_dir(Path::new("./resources/test/fixtures/include-test")), @r"
.args(["check", "--show-files"]).current_dir(Path::new("./resources/test/fixtures/include-test")), @"
success: true
exit_code: 0
----- stdout -----
@@ -53,7 +53,7 @@ fn check_project_respects_direct_paths() {
filters => TEST_FILTERS.to_vec()
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["check", "--show-files", "b.py"]).current_dir(Path::new("./resources/test/fixtures/include-test")), @r"
.args(["check", "--show-files", "b.py"]).current_dir(Path::new("./resources/test/fixtures/include-test")), @"
success: true
exit_code: 0
----- stdout -----
@@ -72,7 +72,7 @@ fn check_project_respects_subdirectory_includes() {
filters => TEST_FILTERS.to_vec()
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["check", "--show-files", "subdirectory"]).current_dir(Path::new("./resources/test/fixtures/include-test")), @r"
.args(["check", "--show-files", "subdirectory"]).current_dir(Path::new("./resources/test/fixtures/include-test")), @"
success: true
exit_code: 0
----- stdout -----
@@ -91,7 +91,7 @@ fn check_project_from_project_subdirectory_respects_includes() {
filters => TEST_FILTERS.to_vec()
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["check", "--show-files"]).current_dir(Path::new("./resources/test/fixtures/include-test/subdirectory")), @r"
.args(["check", "--show-files"]).current_dir(Path::new("./resources/test/fixtures/include-test/subdirectory")), @"
success: true
exit_code: 0
----- stdout -----

View File

@@ -50,6 +50,56 @@ ignore = [
Ok(())
}
#[test]
fn display_settings_from_nested_directory() -> anyhow::Result<()> {
let tempdir = TempDir::new().context("Failed to create temp directory.")?;
// Tempdir path's on macos are symlinks, which doesn't play nicely with
// our snapshot filtering.
let project_dir =
dunce::canonicalize(tempdir.path()).context("Failed to canonical tempdir path.")?;
// Root pyproject.toml.
std::fs::write(
project_dir.join("pyproject.toml"),
r#"
[tool.ruff]
line-length = 100
[tool.ruff.lint]
select = ["E", "F"]
"#,
)?;
// Create a subdirectory with its own pyproject.toml.
let subdir = project_dir.join("subdir");
std::fs::create_dir(&subdir)?;
std::fs::write(
subdir.join("pyproject.toml"),
r#"
[tool.ruff]
line-length = 120
[tool.ruff.lint]
select = ["E", "F", "I"]
"#,
)?;
std::fs::write(subdir.join("test.py"), r#"import os"#).context("Failed to write test.py.")?;
insta::with_settings!({filters => vec![
(&*tempdir_filter(&project_dir), "<temp_dir>/"),
(r#"\\(\w\w|\s|\.|")"#, "/$1"),
]}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["check", "--show-settings", "subdir/test.py"])
.current_dir(&project_dir));
});
Ok(())
}
fn tempdir_filter(project_dir: &Path) -> String {
format!(r#"{}\\?/?"#, regex::escape(project_dir.to_str().unwrap()))
}

View File

@@ -3,11 +3,12 @@ source: crates/ruff/tests/integration_test.rs
info:
program: ruff
args:
- "-"
- "--isolated"
- "--no-cache"
- check
- "--output-format"
- json
- "--no-cache"
- "--isolated"
- "-"
- "--stdin-filename"
- F401.py
stdin: "import os\n"
@@ -51,4 +52,3 @@ exit_code: 1
}
]
----- stderr -----

View File

@@ -0,0 +1,410 @@
---
source: crates/ruff/tests/show_settings.rs
info:
program: ruff
args:
- check
- "--show-settings"
- subdir/test.py
---
success: true
exit_code: 0
----- stdout -----
Resolved settings for: "<temp_dir>/subdir/test.py"
Settings path: "<temp_dir>/subdir/pyproject.toml"
# General Settings
cache_dir = "<temp_dir>/subdir/.ruff_cache"
fix = false
fix_only = false
output_format = full
show_fixes = false
unsafe_fixes = hint
# File Resolver Settings
file_resolver.exclude = [
".bzr",
".direnv",
".eggs",
".git",
".git-rewrite",
".hg",
".ipynb_checkpoints",
".mypy_cache",
".nox",
".pants.d",
".pyenv",
".pytest_cache",
".pytype",
".ruff_cache",
".svn",
".tox",
".venv",
".vscode",
"__pypackages__",
"_build",
"buck-out",
"dist",
"node_modules",
"site-packages",
"venv",
]
file_resolver.extend_exclude = []
file_resolver.force_exclude = false
file_resolver.include = [
"*.py",
"*.pyi",
"*.ipynb",
"**/pyproject.toml",
]
file_resolver.extend_include = []
file_resolver.respect_gitignore = true
file_resolver.project_root = "<temp_dir>/subdir"
# Linter Settings
linter.exclude = []
linter.project_root = "<temp_dir>/subdir"
linter.rules.enabled = [
unsorted-imports (I001),
missing-required-import (I002),
mixed-spaces-and-tabs (E101),
multiple-imports-on-one-line (E401),
module-import-not-at-top-of-file (E402),
line-too-long (E501),
multiple-statements-on-one-line-colon (E701),
multiple-statements-on-one-line-semicolon (E702),
useless-semicolon (E703),
none-comparison (E711),
true-false-comparison (E712),
not-in-test (E713),
not-is-test (E714),
type-comparison (E721),
bare-except (E722),
lambda-assignment (E731),
ambiguous-variable-name (E741),
ambiguous-class-name (E742),
ambiguous-function-name (E743),
io-error (E902),
unused-import (F401),
import-shadowed-by-loop-var (F402),
undefined-local-with-import-star (F403),
late-future-import (F404),
undefined-local-with-import-star-usage (F405),
undefined-local-with-nested-import-star-usage (F406),
future-feature-not-defined (F407),
percent-format-invalid-format (F501),
percent-format-expected-mapping (F502),
percent-format-expected-sequence (F503),
percent-format-extra-named-arguments (F504),
percent-format-missing-argument (F505),
percent-format-mixed-positional-and-named (F506),
percent-format-positional-count-mismatch (F507),
percent-format-star-requires-sequence (F508),
percent-format-unsupported-format-character (F509),
string-dot-format-invalid-format (F521),
string-dot-format-extra-named-arguments (F522),
string-dot-format-extra-positional-arguments (F523),
string-dot-format-missing-arguments (F524),
string-dot-format-mixing-automatic (F525),
f-string-missing-placeholders (F541),
multi-value-repeated-key-literal (F601),
multi-value-repeated-key-variable (F602),
expressions-in-star-assignment (F621),
multiple-starred-expressions (F622),
assert-tuple (F631),
is-literal (F632),
invalid-print-syntax (F633),
if-tuple (F634),
break-outside-loop (F701),
continue-outside-loop (F702),
yield-outside-function (F704),
return-outside-function (F706),
default-except-not-last (F707),
forward-annotation-syntax-error (F722),
redefined-while-unused (F811),
undefined-name (F821),
undefined-export (F822),
undefined-local (F823),
unused-variable (F841),
unused-annotation (F842),
raise-not-implemented (F901),
]
linter.rules.should_fix = [
unsorted-imports (I001),
missing-required-import (I002),
mixed-spaces-and-tabs (E101),
multiple-imports-on-one-line (E401),
module-import-not-at-top-of-file (E402),
line-too-long (E501),
multiple-statements-on-one-line-colon (E701),
multiple-statements-on-one-line-semicolon (E702),
useless-semicolon (E703),
none-comparison (E711),
true-false-comparison (E712),
not-in-test (E713),
not-is-test (E714),
type-comparison (E721),
bare-except (E722),
lambda-assignment (E731),
ambiguous-variable-name (E741),
ambiguous-class-name (E742),
ambiguous-function-name (E743),
io-error (E902),
unused-import (F401),
import-shadowed-by-loop-var (F402),
undefined-local-with-import-star (F403),
late-future-import (F404),
undefined-local-with-import-star-usage (F405),
undefined-local-with-nested-import-star-usage (F406),
future-feature-not-defined (F407),
percent-format-invalid-format (F501),
percent-format-expected-mapping (F502),
percent-format-expected-sequence (F503),
percent-format-extra-named-arguments (F504),
percent-format-missing-argument (F505),
percent-format-mixed-positional-and-named (F506),
percent-format-positional-count-mismatch (F507),
percent-format-star-requires-sequence (F508),
percent-format-unsupported-format-character (F509),
string-dot-format-invalid-format (F521),
string-dot-format-extra-named-arguments (F522),
string-dot-format-extra-positional-arguments (F523),
string-dot-format-missing-arguments (F524),
string-dot-format-mixing-automatic (F525),
f-string-missing-placeholders (F541),
multi-value-repeated-key-literal (F601),
multi-value-repeated-key-variable (F602),
expressions-in-star-assignment (F621),
multiple-starred-expressions (F622),
assert-tuple (F631),
is-literal (F632),
invalid-print-syntax (F633),
if-tuple (F634),
break-outside-loop (F701),
continue-outside-loop (F702),
yield-outside-function (F704),
return-outside-function (F706),
default-except-not-last (F707),
forward-annotation-syntax-error (F722),
redefined-while-unused (F811),
undefined-name (F821),
undefined-export (F822),
undefined-local (F823),
unused-variable (F841),
unused-annotation (F842),
raise-not-implemented (F901),
]
linter.per_file_ignores = {}
linter.safety_table.forced_safe = []
linter.safety_table.forced_unsafe = []
linter.unresolved_target_version = none
linter.per_file_target_version = {}
linter.preview = disabled
linter.explicit_preview_rules = false
linter.extension = ExtensionMapping({})
linter.allowed_confusables = []
linter.builtins = []
linter.dummy_variable_rgx = ^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$
linter.external = []
linter.ignore_init_module_imports = true
linter.logger_objects = []
linter.namespace_packages = []
linter.src = [
"<temp_dir>/subdir",
"<temp_dir>/subdir/src",
]
linter.tab_size = 4
linter.line_length = 120
linter.task_tags = [
TODO,
FIXME,
XXX,
]
linter.typing_modules = []
linter.typing_extensions = true
# Linter Plugins
linter.flake8_annotations.mypy_init_return = false
linter.flake8_annotations.suppress_dummy_args = false
linter.flake8_annotations.suppress_none_returning = false
linter.flake8_annotations.allow_star_arg_any = false
linter.flake8_annotations.ignore_fully_untyped = false
linter.flake8_bandit.hardcoded_tmp_directory = [
/tmp,
/var/tmp,
/dev/shm,
]
linter.flake8_bandit.check_typed_exception = false
linter.flake8_bandit.extend_markup_names = []
linter.flake8_bandit.allowed_markup_calls = []
linter.flake8_bugbear.extend_immutable_calls = []
linter.flake8_builtins.allowed_modules = []
linter.flake8_builtins.ignorelist = []
linter.flake8_builtins.strict_checking = false
linter.flake8_comprehensions.allow_dict_calls_with_keyword_arguments = false
linter.flake8_copyright.notice_rgx = (?i)Copyright\s+((?:\(C\)|©)\s+)?\d{4}((-|,\s)\d{4})*
linter.flake8_copyright.author = none
linter.flake8_copyright.min_file_size = 0
linter.flake8_errmsg.max_string_length = 0
linter.flake8_gettext.function_names = [
_,
gettext,
ngettext,
]
linter.flake8_implicit_str_concat.allow_multiline = true
linter.flake8_import_conventions.aliases = {
altair = alt,
holoviews = hv,
matplotlib = mpl,
matplotlib.pyplot = plt,
networkx = nx,
numpy = np,
numpy.typing = npt,
pandas = pd,
panel = pn,
plotly.express = px,
polars = pl,
pyarrow = pa,
seaborn = sns,
tensorflow = tf,
tkinter = tk,
xml.etree.ElementTree = ET,
}
linter.flake8_import_conventions.banned_aliases = {}
linter.flake8_import_conventions.banned_from = []
linter.flake8_pytest_style.fixture_parentheses = false
linter.flake8_pytest_style.parametrize_names_type = tuple
linter.flake8_pytest_style.parametrize_values_type = list
linter.flake8_pytest_style.parametrize_values_row_type = tuple
linter.flake8_pytest_style.raises_require_match_for = [
BaseException,
Exception,
ValueError,
OSError,
IOError,
EnvironmentError,
socket.error,
]
linter.flake8_pytest_style.raises_extend_require_match_for = []
linter.flake8_pytest_style.mark_parentheses = false
linter.flake8_quotes.inline_quotes = double
linter.flake8_quotes.multiline_quotes = double
linter.flake8_quotes.docstring_quotes = double
linter.flake8_quotes.avoid_escape = true
linter.flake8_self.ignore_names = [
_make,
_asdict,
_replace,
_fields,
_field_defaults,
_name_,
_value_,
]
linter.flake8_tidy_imports.ban_relative_imports = "parents"
linter.flake8_tidy_imports.banned_api = {}
linter.flake8_tidy_imports.banned_module_level_imports = []
linter.flake8_type_checking.strict = false
linter.flake8_type_checking.exempt_modules = [
typing,
typing_extensions,
]
linter.flake8_type_checking.runtime_required_base_classes = []
linter.flake8_type_checking.runtime_required_decorators = []
linter.flake8_type_checking.quote_annotations = false
linter.flake8_unused_arguments.ignore_variadic_names = false
linter.isort.required_imports = []
linter.isort.combine_as_imports = false
linter.isort.force_single_line = false
linter.isort.force_sort_within_sections = false
linter.isort.detect_same_package = true
linter.isort.case_sensitive = false
linter.isort.force_wrap_aliases = false
linter.isort.force_to_top = []
linter.isort.known_modules = {}
linter.isort.order_by_type = true
linter.isort.relative_imports_order = furthest_to_closest
linter.isort.single_line_exclusions = []
linter.isort.split_on_trailing_comma = true
linter.isort.classes = []
linter.isort.constants = []
linter.isort.variables = []
linter.isort.no_lines_before = []
linter.isort.lines_after_imports = -1
linter.isort.lines_between_types = 0
linter.isort.forced_separate = []
linter.isort.section_order = [
known { type = future },
known { type = standard_library },
known { type = third_party },
known { type = first_party },
known { type = local_folder },
]
linter.isort.default_section = known { type = third_party }
linter.isort.no_sections = false
linter.isort.from_first = false
linter.isort.length_sort = false
linter.isort.length_sort_straight = false
linter.mccabe.max_complexity = 10
linter.pep8_naming.ignore_names = [
setUp,
tearDown,
setUpClass,
tearDownClass,
setUpModule,
tearDownModule,
asyncSetUp,
asyncTearDown,
setUpTestData,
failureException,
longMessage,
maxDiff,
]
linter.pep8_naming.classmethod_decorators = []
linter.pep8_naming.staticmethod_decorators = []
linter.pycodestyle.max_line_length = 120
linter.pycodestyle.max_doc_length = none
linter.pycodestyle.ignore_overlong_task_comments = false
linter.pyflakes.extend_generics = []
linter.pyflakes.allowed_unused_imports = []
linter.pylint.allow_magic_value_types = [
str,
bytes,
]
linter.pylint.allow_dunder_method_names = []
linter.pylint.max_args = 5
linter.pylint.max_positional_args = 5
linter.pylint.max_returns = 6
linter.pylint.max_bool_expr = 5
linter.pylint.max_branches = 12
linter.pylint.max_statements = 50
linter.pylint.max_public_methods = 20
linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
linter.ruff.strictly_empty_init_modules = false
# Formatter Settings
formatter.exclude = []
formatter.unresolved_target_version = 3.10
formatter.per_file_target_version = {}
formatter.preview = disabled
formatter.line_width = 120
formatter.line_ending = auto
formatter.indent_style = space
formatter.indent_width = 4
formatter.quote_style = double
formatter.magic_trailing_comma = respect
formatter.docstring_code_format = disabled
formatter.docstring_code_line_width = dynamic
# Analyze Settings
analyze.exclude = []
analyze.preview = disabled
analyze.target_version = 3.10
analyze.string_imports = disabled
analyze.extension = ExtensionMapping({})
analyze.include_dependencies = {}
analyze.type_checking_imports = true
----- stderr -----
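Worth noting in the snapshot above: the resolved settings come from subdir/pyproject.toml rather than the root configuration, so linter.line_length is 120 and the isort rules (I001, I002) appear in the enabled set, exactly as the nested pyproject.toml in the test selects.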

View File

@@ -16,7 +16,7 @@ const VERSION_FILTER: [(&str, &str); 1] = [(
fn version_basics() {
insta::with_settings!({filters => VERSION_FILTER.to_vec()}, {
assert_cmd_snapshot!(
Command::new(get_cargo_bin(BIN_NAME)).arg("version"), @r"
Command::new(get_cargo_bin(BIN_NAME)).arg("version"), @"
success: true
exit_code: 0
----- stdout -----
@@ -42,7 +42,7 @@ fn config_option_allowed_but_ignored() -> Result<()> {
.arg("version")
.arg("--config")
.arg(&ruff_dot_toml)
.args(["--config", "lint.isort.extra-standard-library = ['foo', 'bar']"]), @r"
.args(["--config", "lint.isort.extra-standard-library = ['foo', 'bar']"]), @"
success: true
exit_code: 0
----- stdout -----
@@ -60,7 +60,7 @@ fn config_option_ignored_but_validated() {
assert_cmd_snapshot!(
Command::new(get_cargo_bin(BIN_NAME))
.arg("version")
.args(["--config", "foo = bar"]), @r"
.args(["--config", "foo = bar"]), @"
success: false
exit_code: 2
----- stdout -----
@@ -91,7 +91,7 @@ fn config_option_ignored_but_validated() {
fn isolated_option_allowed() {
insta::with_settings!({filters => VERSION_FILTER.to_vec()}, {
assert_cmd_snapshot!(
Command::new(get_cargo_bin(BIN_NAME)).arg("version").arg("--isolated"), @r"
Command::new(get_cargo_bin(BIN_NAME)).arg("version").arg("--isolated"), @"
success: true
exit_code: 0
----- stdout -----

View File

@@ -15,7 +15,7 @@ use ruff_db::files::{File, system_path_to_file};
use ruff_db::source::source_text;
use ruff_db::system::{InMemorySystem, MemoryFileSystem, SystemPath, SystemPathBuf, TestSystem};
use ruff_python_ast::PythonVersion;
use ty_project::metadata::options::{EnvironmentOptions, Options};
use ty_project::metadata::options::{AnalysisOptions, EnvironmentOptions, Options};
use ty_project::metadata::value::{RangedValue, RelativePathBuf};
use ty_project::watch::{ChangeEvent, ChangedKind};
use ty_project::{CheckMode, Db, ProjectDatabase, ProjectMetadata};
@@ -67,6 +67,7 @@ fn tomllib_path(file: &TestFile) -> SystemPathBuf {
SystemPathBuf::from("src").join(file.name())
}
#[expect(clippy::needless_update)]
fn setup_tomllib_case() -> Case {
let system = TestSystem::default();
let fs = system.memory_file_system().clone();
@@ -85,6 +86,10 @@ fn setup_tomllib_case() -> Case {
python_version: Some(RangedValue::cli(PythonVersion::PY312)),
..EnvironmentOptions::default()
}),
analysis: Some(AnalysisOptions {
respect_type_ignore_comments: Some(false),
..AnalysisOptions::default()
}),
..Options::default()
});
@@ -221,7 +226,7 @@ fn setup_micro_case(code: &str) -> Case {
let file_path = "src/test.py";
fs.write_file_all(
SystemPathBuf::from(file_path),
ruff_python_trivia::textwrap::dedent(code),
&*ruff_python_trivia::textwrap::dedent(code),
)
.unwrap();
@@ -557,6 +562,60 @@ fn benchmark_many_enum_members(criterion: &mut Criterion) {
});
}
fn benchmark_many_enum_members_2(criterion: &mut Criterion) {
const NUM_ENUM_MEMBERS: usize = 48;
setup_rayon();
let mut code = "\
from enum import Enum
from typing_extensions import assert_never
class E(Enum):
"
.to_string();
for i in 0..NUM_ENUM_MEMBERS {
writeln!(&mut code, " m{i} = {i}").ok();
}
code.push_str(
"
def method(self):
match self:",
);
for i in 0..NUM_ENUM_MEMBERS {
write!(
&mut code,
"
case E.m{i}:
pass"
)
.ok();
}
write!(
&mut code,
"
case _:
assert_never(self)"
)
.ok();
criterion.bench_function("ty_micro[many_enum_members_2]", |b| {
b.iter_batched_ref(
|| setup_micro_case(&code),
|case| {
let Case { db, .. } = case;
let result = db.check();
assert_eq!(result.len(), 0);
},
BatchSize::SmallInput,
);
});
}
struct ProjectBenchmark<'a> {
project: InstalledProject<'a>,
fs: MemoryFileSystem,
@@ -701,7 +760,7 @@ fn datetype(criterion: &mut Criterion) {
max_dep_date: "2025-07-04",
python_version: PythonVersion::PY313,
},
2,
4,
);
bench_project(&benchmark, criterion);
@@ -717,6 +776,7 @@ criterion_group!(
benchmark_complex_constrained_attributes_2,
benchmark_complex_constrained_attributes_3,
benchmark_many_enum_members,
benchmark_many_enum_members_2,
);
criterion_group!(project, anyio, attrs, hydra, datetype);
criterion_main!(check_file, micro, project);

View File

@@ -71,6 +71,8 @@ impl Display for Benchmark<'_> {
}
}
#[track_caller]
#[expect(clippy::cast_precision_loss)]
fn check_project(db: &ProjectDatabase, project_name: &str, max_diagnostics: usize) {
let result = db.check();
let diagnostics = result.len();
@@ -79,6 +81,12 @@ fn check_project(db: &ProjectDatabase, project_name: &str, max_diagnostics: usiz
diagnostics > 1 && diagnostics <= max_diagnostics,
"Expected between 1 and {max_diagnostics} diagnostics on project '{project_name}' but got {diagnostics}",
);
if (max_diagnostics - diagnostics) as f64 / max_diagnostics as f64 > 0.10 {
tracing::warn!(
"The expected diagnostics for project `{project_name}` can be reduced: expected {max_diagnostics} but got {diagnostics}"
);
}
}
static ALTAIR: Benchmark = Benchmark::new(
@@ -101,7 +109,7 @@ static ALTAIR: Benchmark = Benchmark::new(
max_dep_date: "2025-06-17",
python_version: PythonVersion::PY312,
},
1000,
850,
);
static COLOUR_SCIENCE: Benchmark = Benchmark::new(
@@ -120,7 +128,7 @@ static COLOUR_SCIENCE: Benchmark = Benchmark::new(
max_dep_date: "2025-06-17",
python_version: PythonVersion::PY310,
},
1070,
350,
);
static FREQTRADE: Benchmark = Benchmark::new(
@@ -163,7 +171,7 @@ static PANDAS: Benchmark = Benchmark::new(
max_dep_date: "2025-06-17",
python_version: PythonVersion::PY312,
},
4000,
3800,
);
static PYDANTIC: Benchmark = Benchmark::new(
@@ -181,7 +189,7 @@ static PYDANTIC: Benchmark = Benchmark::new(
max_dep_date: "2025-06-17",
python_version: PythonVersion::PY39,
},
7000,
3200,
);
static SYMPY: Benchmark = Benchmark::new(
@@ -194,7 +202,7 @@ static SYMPY: Benchmark = Benchmark::new(
max_dep_date: "2025-06-17",
python_version: PythonVersion::PY312,
},
13116,
13400,
);
static TANJUN: Benchmark = Benchmark::new(
@@ -207,7 +215,7 @@ static TANJUN: Benchmark = Benchmark::new(
max_dep_date: "2025-06-17",
python_version: PythonVersion::PY312,
},
320,
110,
);
static STATIC_FRAME: Benchmark = Benchmark::new(
@@ -223,7 +231,7 @@ static STATIC_FRAME: Benchmark = Benchmark::new(
max_dep_date: "2025-08-09",
python_version: PythonVersion::PY311,
},
1100,
1700,
);
#[track_caller]

View File

@@ -1,3 +1,4 @@
use std::fmt::Formatter;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
@@ -49,3 +50,15 @@ impl CancellationToken {
self.cancelled.load(std::sync::atomic::Ordering::Relaxed)
}
}
/// The operation was canceled by the provided [`CancellationToken`].
#[derive(Debug)]
pub struct Canceled;
impl std::error::Error for Canceled {}
impl std::fmt::Display for Canceled {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.write_str("operation was canceled")
}
}
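
The new `Canceled` error pairs naturally with the existing `CancellationToken`: long-running work can poll the token and bail out with this error. Below is a minimal standalone sketch of that pattern; the stand-in types only mirror the ones added above (the real ones live in this crate, and their method names may differ), and the `run_checked` helper is hypothetical.

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};

/// Stand-ins mirroring the `CancellationToken` and `Canceled` added above.
#[derive(Clone, Default)]
struct CancellationToken {
    cancelled: Arc<AtomicBool>,
}

impl CancellationToken {
    fn cancel(&self) {
        self.cancelled.store(true, Ordering::Relaxed);
    }

    fn is_cancelled(&self) -> bool {
        self.cancelled.load(Ordering::Relaxed)
    }
}

#[derive(Debug)]
struct Canceled;

impl std::fmt::Display for Canceled {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.write_str("operation was canceled")
    }
}

impl std::error::Error for Canceled {}

/// Hypothetical long-running operation that bails out with `Canceled`
/// whenever the token has been triggered.
fn run_checked(items: &[u32], token: &CancellationToken) -> Result<u32, Canceled> {
    let mut sum = 0;
    for &item in items {
        if token.is_cancelled() {
            return Err(Canceled);
        }
        sum += item;
    }
    Ok(sum)
}

fn main() {
    let token = CancellationToken::default();
    token.cancel();
    // The operation observes the cancellation and returns the new error type.
    assert!(run_checked(&[1, 2, 3], &token).is_err());
}
```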

View File

@@ -98,6 +98,44 @@ impl Diagnostic {
diag
}
/// Adds sub diagnostics that tell the user that this is a bug in ty
/// and asks them to open an issue on GitHub.
pub fn add_bug_sub_diagnostics(&mut self, url_encoded_title: &str) {
self.sub(SubDiagnostic::new(
SubDiagnosticSeverity::Info,
"This indicates a bug in ty.",
));
self.sub(SubDiagnostic::new(
SubDiagnosticSeverity::Info,
format_args!(
"If you could open an issue at https://github.com/astral-sh/ty/issues/new?title={url_encoded_title}, we'd be very appreciative!"
),
));
self.sub(SubDiagnostic::new(
SubDiagnosticSeverity::Info,
format!(
"Platform: {os} {arch}",
os = std::env::consts::OS,
arch = std::env::consts::ARCH
),
));
if let Some(version) = crate::program_version() {
self.sub(SubDiagnostic::new(
SubDiagnosticSeverity::Info,
format!("Version: {version}"),
));
}
self.sub(SubDiagnostic::new(
SubDiagnosticSeverity::Info,
format!(
"Args: {args:?}",
args = std::env::args().collect::<Vec<_>>()
),
));
}
/// Add an annotation to this diagnostic.
///
/// Annotations for a diagnostic are optional, but if any are added,
@@ -1019,6 +1057,13 @@ impl DiagnosticId {
matches!(self, DiagnosticId::Lint(_))
}
pub const fn as_lint(&self) -> Option<LintName> {
match self {
DiagnosticId::Lint(name) => Some(*name),
_ => None,
}
}
/// Returns `true` if this `DiagnosticId` represents a lint with the given name.
pub fn is_lint_named(&self, name: &str) -> bool {
matches!(self, DiagnosticId::Lint(self_name) if self_name == name)
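
A minimal sketch of how a caller might use the new `add_bug_sub_diagnostics` helper. It assumes a `Diagnostic` has already been constructed elsewhere, and the `ruff_db::diagnostic` import path is an assumption; the title string is illustrative and must already be URL-encoded, since it is spliced verbatim into the GitHub new-issue link.

```rust
use ruff_db::diagnostic::Diagnostic;

/// Hypothetical helper: append the standard "this is a bug in ty"
/// sub-diagnostics (issue link, platform, version, args) to an existing
/// diagnostic that reports an internal error.
fn attach_bug_report_info(diag: &mut Diagnostic) {
    // Illustrative, pre-encoded issue title ("[panic] internal error").
    let url_encoded_title = "%5Bpanic%5D%20internal%20error";
    diag.add_bug_sub_diagnostics(url_encoded_title);
}
```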

View File

@@ -1284,7 +1284,7 @@ watermelon
let diag = env.err().primary("animals", "5", "5", "").build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
@@ -1308,7 +1308,7 @@ watermelon
.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
warning[test-diagnostic]: main diagnostic message
--> animals:5:1
|
@@ -1328,7 +1328,7 @@ watermelon
.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
info[test-diagnostic]: main diagnostic message
--> animals:5:1
|
@@ -1355,7 +1355,7 @@ watermelon
let diag = builder.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
@@ -1374,7 +1374,7 @@ watermelon
let diag = builder.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
@@ -1395,7 +1395,7 @@ watermelon
let diag = env.err().primary("non-ascii", "5", "5", "").build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> non-ascii:5:1
|
@@ -1414,7 +1414,7 @@ watermelon
let diag = env.err().primary("non-ascii", "2:4", "2:8", "").build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> non-ascii:2:2
|
@@ -1438,7 +1438,7 @@ watermelon
env.context(1);
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
@@ -1455,7 +1455,7 @@ watermelon
env.context(0);
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
@@ -1470,7 +1470,7 @@ watermelon
env.context(2);
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
@@ -1487,7 +1487,7 @@ watermelon
env.context(2);
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:11:1
|
@@ -1504,7 +1504,7 @@ watermelon
env.context(200);
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
@@ -1537,7 +1537,7 @@ watermelon
.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
@@ -1581,7 +1581,7 @@ watermelon
.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
@@ -1606,7 +1606,7 @@ watermelon
.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
@@ -1634,7 +1634,7 @@ watermelon
.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
@@ -1662,7 +1662,7 @@ watermelon
.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
@@ -1687,7 +1687,7 @@ watermelon
.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
@@ -1718,7 +1718,7 @@ watermelon
// window.
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
@@ -1756,7 +1756,7 @@ watermelon
let diag = env.err().primary("spacey-animals", "8", "8", "").build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> spacey-animals:8:1
|
@@ -1773,7 +1773,7 @@ watermelon
let diag = env.err().primary("spacey-animals", "12", "12", "").build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> spacey-animals:12:1
|
@@ -1791,7 +1791,7 @@ watermelon
let diag = env.err().primary("spacey-animals", "13", "13", "").build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> spacey-animals:13:1
|
@@ -1831,7 +1831,7 @@ watermelon
// instead of special casing the snippet assembly.
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> spacey-animals:3:1
|
@@ -1860,7 +1860,7 @@ watermelon
.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:3:1
|
@@ -1897,7 +1897,7 @@ watermelon
);
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:3:1
|
@@ -1934,7 +1934,7 @@ watermelon
);
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:3:1
|
@@ -1962,7 +1962,7 @@ watermelon
diag.sub(env.sub_warn().primary("fruits", "3", "3", "").build());
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:3:1
|
@@ -1998,7 +1998,7 @@ watermelon
diag.sub(env.sub_warn().primary("animals", "11", "11", "").build());
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:3:1
|
@@ -2037,7 +2037,7 @@ watermelon
diag.sub(env.sub_warn().primary("fruits", "3", "3", "").build());
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:3:1
|
@@ -2085,7 +2085,7 @@ watermelon
diag.sub(env.sub_warn().secondary("animals", "3", "3", "").build());
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:3:1
|
@@ -2121,7 +2121,7 @@ watermelon
let diag = env.err().primary("animals", "5", "6", "").build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
@@ -2144,7 +2144,7 @@ watermelon
let diag = env.err().primary("animals", "5", "7:0", "").build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
@@ -2164,7 +2164,7 @@ watermelon
let diag = env.err().primary("animals", "5", "7:1", "").build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
@@ -2184,7 +2184,7 @@ watermelon
let diag = env.err().primary("animals", "5:3", "8:8", "").build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:5:4
|
@@ -2206,7 +2206,7 @@ watermelon
let diag = env.err().secondary("animals", "5:3", "8:8", "").build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:5:4
|
@@ -2238,7 +2238,7 @@ watermelon
.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:4:1
|
@@ -2267,7 +2267,7 @@ watermelon
.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:4:1
|
@@ -2298,7 +2298,7 @@ watermelon
.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
@@ -2333,7 +2333,7 @@ watermelon
// better using only ASCII art.
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
@@ -2361,7 +2361,7 @@ watermelon
.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
@@ -2393,7 +2393,7 @@ watermelon
.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:5:3
|
@@ -2415,7 +2415,7 @@ watermelon
.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:5:3
|
@@ -2448,7 +2448,7 @@ watermelon
.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:8:1
|
@@ -2488,7 +2488,7 @@ watermelon
.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
@@ -2532,7 +2532,7 @@ watermelon
.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> fruits:1:1
|
@@ -2567,7 +2567,7 @@ watermelon
.build();
insta::assert_snapshot!(
env.render(&diag),
@r"
@"
error[test-diagnostic]: main diagnostic message
--> animals:11:1
|

View File

@@ -137,7 +137,7 @@ mod tests {
#[test]
fn output() {
let (env, diagnostics) = create_diagnostics(DiagnosticFormat::Concise);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @"
fib.py:1:8: error[unused-import] `os` imported but unused
fib.py:6:5: error[unused-variable] Local variable `x` is assigned to but never used
undef.py:1:4: error[undefined-name] Undefined name `a`
@@ -150,7 +150,7 @@ mod tests {
env.hide_severity(true);
env.show_fix_status(true);
env.fix_applicability(Applicability::DisplayOnly);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @"
fib.py:1:8: F401 [*] `os` imported but unused
fib.py:6:5: F841 [*] Local variable `x` is assigned to but never used
undef.py:1:4: F821 Undefined name `a`
@@ -164,7 +164,7 @@ mod tests {
env.show_fix_status(true);
env.fix_applicability(Applicability::DisplayOnly);
env.preview(true);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @"
fib.py:1:8: F401 [*] `os` imported but unused
fib.py:6:5: F841 [*] Local variable `x` is assigned to but never used
undef.py:1:4: F821 Undefined name `a`
@@ -177,7 +177,7 @@ mod tests {
env.hide_severity(true);
env.show_fix_status(true);
env.fix_applicability(Applicability::DisplayOnly);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @"
syntax_errors.py:1:15: invalid-syntax: Expected one or more symbol names after import
syntax_errors.py:3:12: invalid-syntax: Expected ')', found newline
");
@@ -186,7 +186,7 @@ mod tests {
#[test]
fn syntax_errors() {
let (env, diagnostics) = create_syntax_error_diagnostics(DiagnosticFormat::Concise);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @"
syntax_errors.py:1:15: error[invalid-syntax] Expected one or more symbol names after import
syntax_errors.py:3:12: error[invalid-syntax] Expected ')', found newline
");
@@ -195,7 +195,7 @@ mod tests {
#[test]
fn notebook_output() {
let (env, diagnostics) = create_notebook_diagnostics(DiagnosticFormat::Concise);
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @r"
insta::assert_snapshot!(env.render_diagnostics(&diagnostics), @"
notebook.ipynb:cell 1:2:8: error[unused-import] `os` imported but unused
notebook.ipynb:cell 2:2:8: error[unused-import] `math` imported but unused
notebook.ipynb:cell 3:4:5: error[unused-variable] Local variable `x` is assigned to but never used

View File

@@ -14,6 +14,7 @@ use crate::diagnostic::{Span, UnifiedFile};
use crate::file_revision::FileRevision;
use crate::files::file_root::FileRoots;
use crate::files::private::FileStatus;
use crate::source::SourceText;
use crate::system::{SystemPath, SystemPathBuf, SystemVirtualPath, SystemVirtualPathBuf};
use crate::vendored::{VendoredPath, VendoredPathBuf};
use crate::{Db, FxDashMap, vendored};
@@ -323,6 +324,17 @@ pub struct File {
/// the file has been deleted is to change the status to `Deleted`.
#[default]
status: FileStatus,
/// Overrides the result of [`source_text`](crate::source::source_text).
///
/// This is useful when running queries after modifying a file's content but
/// before the content is written to disk. For example, to verify that the applied fixes
/// didn't introduce any new errors.
///
/// The override gets automatically removed the next time the file changes.
#[default]
#[returns(ref)]
pub source_text_override: Option<SourceText>,
}
// The Salsa heap is tracked separately.
@@ -444,20 +456,28 @@ impl File {
_ => (FileStatus::NotFound, FileRevision::zero(), None),
};
let mut clear_override = false;
if file.status(db) != status {
tracing::debug!("Updating the status of `{}`", file.path(db));
file.set_status(db).to(status);
clear_override = true;
}
if file.revision(db) != revision {
tracing::debug!("Updating the revision of `{}`", file.path(db));
file.set_revision(db).to(revision);
clear_override = true;
}
if file.permissions(db) != permission {
tracing::debug!("Updating the permissions of `{}`", file.path(db));
file.set_permissions(db).to(permission);
}
if clear_override && file.source_text_override(db).is_some() {
file.set_source_text_override(db).to(None);
}
}
/// Returns `true` if the file exists.
@@ -526,7 +546,7 @@ impl VirtualFile {
}
/// Increments the revision of the underlying [`File`].
fn sync(&self, db: &mut dyn Db) {
pub fn sync(&self, db: &mut dyn Db) {
let file = self.0;
tracing::debug!("Updating the revision of `{}`", file.path(db));
let current_revision = file.revision(db);

View File

@@ -85,6 +85,13 @@ pub fn max_parallelism() -> NonZeroUsize {
})
}
// Use a reasonably large stack size to avoid running into stack overflows too easily. The
// size was chosen in such a way as to still be able to handle large expressions involving
// binary operators (x + x + … + x) both during the AST walk in semantic index building as
// well as during type checking. Using this stack size, we can handle expressions
// that are several times larger than the corresponding limits in existing type checkers.
pub const STACK_SIZE: usize = 16 * 1024 * 1024;
/// Trait for types that can provide Rust documentation.
///
/// Use `derive(RustDoc)` to automatically implement this trait for types that have a static string documentation.
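
The new `STACK_SIZE` constant only documents the chosen size; the thread-spawning side isn't part of this hunk, so the sketch below is illustrative and simply shows how a worker could be run on a thread with that stack size using the standard library.

```rust
use std::thread;

// Mirrors the constant added above (16 MiB); the real one lives in ruff_db.
const STACK_SIZE: usize = 16 * 1024 * 1024;

/// Hypothetical helper: run a possibly deeply recursive task (e.g. walking a
/// large `x + x + … + x` expression) on a thread with the enlarged stack.
fn run_with_large_stack<T, F>(task: F) -> T
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    thread::Builder::new()
        .stack_size(STACK_SIZE)
        .spawn(task)
        .expect("failed to spawn worker thread")
        .join()
        .expect("worker thread panicked")
}

fn main() {
    let sum = run_with_large_stack(|| (0..1_000u32).sum::<u32>());
    assert_eq!(sum, 499_500);
}
```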

View File

@@ -1,6 +1,8 @@
use std::borrow::Cow;
use std::ops::Deref;
use std::sync::Arc;
use ruff_diagnostics::SourceMap;
use ruff_notebook::Notebook;
use ruff_python_ast::PySourceType;
use ruff_source_file::LineIndex;
@@ -16,6 +18,10 @@ pub fn source_text(db: &dyn Db, file: File) -> SourceText {
let _span = tracing::trace_span!("source_text", file = %path).entered();
let mut read_error = None;
if let Some(source) = file.source_text_override(db) {
return source.clone();
}
let kind = if is_notebook(db.system(), path) {
file.read_to_notebook(db)
.unwrap_or_else(|error| {
@@ -90,6 +96,45 @@ impl SourceText {
pub fn read_error(&self) -> Option<&SourceTextError> {
self.inner.read_error.as_ref()
}
/// Returns a new instance for this file with the updated source text (Python code).
///
    /// Uses the `source_map` to preserve the cell boundaries.
#[must_use]
pub fn with_text(&self, new_text: String, source_map: &SourceMap) -> Self {
let new_kind = match &self.inner.kind {
SourceTextKind::Text(_) => SourceTextKind::Text(new_text),
SourceTextKind::Notebook { notebook } => {
let mut new_notebook = notebook.as_ref().clone();
new_notebook.update(source_map, new_text);
SourceTextKind::Notebook {
notebook: new_notebook.into(),
}
}
};
Self {
inner: Arc::new(SourceTextInner {
kind: new_kind,
read_error: self.inner.read_error.clone(),
}),
}
}
pub fn to_bytes(&self) -> Cow<'_, [u8]> {
match &self.inner.kind {
SourceTextKind::Text(source) => Cow::Borrowed(source.as_bytes()),
SourceTextKind::Notebook { notebook } => {
let mut output: Vec<u8> = Vec::new();
notebook
.write(&mut output)
.expect("writing to a Vec should never fail");
Cow::Owned(output)
}
}
}
}
impl Deref for SourceText {
@@ -117,13 +162,13 @@ impl std::fmt::Debug for SourceText {
}
}
#[derive(Eq, PartialEq, get_size2::GetSize)]
#[derive(Eq, PartialEq, get_size2::GetSize, Clone)]
struct SourceTextInner {
kind: SourceTextKind,
read_error: Option<SourceTextError>,
}
#[derive(Eq, PartialEq, get_size2::GetSize)]
#[derive(Eq, PartialEq, get_size2::GetSize, Clone)]
enum SourceTextKind {
Text(String),
Notebook {
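
A minimal sketch of how the new `SourceText::with_text` might be used after applying fixes, assuming the `SourceMap` was produced while the fixes were applied (its construction is not shown in this diff) and that `SourceText` is importable from `ruff_db::source`:

```rust
use ruff_db::source::SourceText;
use ruff_diagnostics::SourceMap;

/// Sketch only: build an updated `SourceText` from fixed code. For plain
/// Python files the map is effectively pass-through; for notebooks it lets
/// the cell boundaries be re-established from the concatenated source.
fn apply_fixed_text(old: &SourceText, fixed_code: String, source_map: &SourceMap) -> SourceText {
    old.with_text(fixed_code, source_map)
}
```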

View File

@@ -271,7 +271,12 @@ pub trait WritableSystem: System {
fn create_new_file(&self, path: &SystemPath) -> Result<()>;
/// Writes the given content to the file at the given path.
fn write_file(&self, path: &SystemPath, content: &str) -> Result<()>;
fn write_file(&self, path: &SystemPath, content: &str) -> Result<()> {
self.write_file_bytes(path, content.as_bytes())
}
/// Writes the given content to the file at the given path.
fn write_file_bytes(&self, path: &SystemPath, content: &[u8]) -> Result<()>;
/// Creates a directory at `path` as well as any intermediate directories.
fn create_directory_all(&self, path: &SystemPath) -> Result<()>;
@@ -311,6 +316,8 @@ pub trait WritableSystem: System {
Ok(Some(cache_path))
}
fn dyn_clone(&self) -> Box<dyn WritableSystem>;
}
#[derive(Clone, Debug, Eq, PartialEq)]
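
With this change, `write_file_bytes` becomes the required method and `write_file` is a provided method that forwards the `&str` as bytes, so implementors only need the byte-level primitive. Implementing the real `WritableSystem` here would drag in the rest of the `System` surface, so the sketch below reproduces just the default-method pattern on a standalone trait with hypothetical names:

```rust
use std::collections::HashMap;
use std::io::Result;

/// Standalone sketch of the pattern above: only the byte-level method is
/// required; the `&str` convenience method forwards to it by default.
trait WritableLike {
    fn write_file_bytes(&mut self, path: &str, content: &[u8]) -> Result<()>;

    fn write_file(&mut self, path: &str, content: &str) -> Result<()> {
        self.write_file_bytes(path, content.as_bytes())
    }
}

#[derive(Default)]
struct InMemory {
    files: HashMap<String, Box<[u8]>>,
}

impl WritableLike for InMemory {
    fn write_file_bytes(&mut self, path: &str, content: &[u8]) -> Result<()> {
        self.files.insert(path.to_string(), content.into());
        Ok(())
    }
}

fn main() -> Result<()> {
    let mut fs = InMemory::default();
    // Callers keep the string-based API even though only the byte method was implemented.
    fs.write_file("a.py", "print('hi')")?;
    fs.write_file_bytes("b.bin", &[0xDE, 0xAD])?;
    assert_eq!(fs.files.len(), 2);
    Ok(())
}
```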

View File

@@ -122,7 +122,9 @@ impl MemoryFileSystem {
let entry = by_path.get(&normalized).ok_or_else(not_found)?;
match entry {
Entry::File(file) => Ok(file.content.clone()),
Entry::File(file) => {
String::from_utf8(file.content.to_vec()).map_err(|_| invalid_utf8())
}
Entry::Directory(_) => Err(is_a_directory()),
}
}
@@ -139,7 +141,7 @@ impl MemoryFileSystem {
.get(&path.as_ref().to_path_buf())
.ok_or_else(not_found)?;
Ok(file.content.clone())
String::from_utf8(file.content.to_vec()).map_err(|_| invalid_utf8())
}
pub fn exists(&self, path: &SystemPath) -> bool {
@@ -161,7 +163,7 @@ impl MemoryFileSystem {
match by_path.entry(normalized) {
btree_map::Entry::Vacant(entry) => {
entry.insert(Entry::File(File {
content: String::new(),
content: Box::default(),
last_modified: file_time_now(),
}));
@@ -177,13 +179,17 @@ impl MemoryFileSystem {
/// Stores a new file in the file system.
///
/// The operation overrides the content for an existing file with the same normalized `path`.
pub fn write_file(&self, path: impl AsRef<SystemPath>, content: impl ToString) -> Result<()> {
pub fn write_file(
&self,
path: impl AsRef<SystemPath>,
content: impl AsRef<[u8]>,
) -> Result<()> {
let mut by_path = self.inner.by_path.write().unwrap();
let normalized = self.normalize_path(path.as_ref());
let file = get_or_create_file(&mut by_path, &normalized)?;
file.content = content.to_string();
file.content = content.as_ref().to_vec().into_boxed_slice();
file.last_modified = file_time_now();
Ok(())
@@ -214,7 +220,7 @@ impl MemoryFileSystem {
pub fn write_file_all(
&self,
path: impl AsRef<SystemPath>,
content: impl ToString,
content: impl AsRef<[u8]>,
) -> Result<()> {
let path = path.as_ref();
@@ -228,19 +234,24 @@ impl MemoryFileSystem {
/// Stores a new virtual file in the file system.
///
/// The operation overrides the content for an existing virtual file with the same `path`.
pub fn write_virtual_file(&self, path: impl AsRef<SystemVirtualPath>, content: impl ToString) {
pub fn write_virtual_file(
&self,
path: impl AsRef<SystemVirtualPath>,
content: impl AsRef<[u8]>,
) {
let path = path.as_ref();
let mut virtual_files = self.inner.virtual_files.write().unwrap();
let content = content.as_ref().to_vec().into_boxed_slice();
match virtual_files.entry(path.to_path_buf()) {
std::collections::hash_map::Entry::Vacant(entry) => {
entry.insert(File {
content: content.to_string(),
content,
last_modified: file_time_now(),
});
}
std::collections::hash_map::Entry::Occupied(mut entry) => {
entry.get_mut().content = content.to_string();
entry.get_mut().content = content;
}
}
}
@@ -468,7 +479,7 @@ impl Entry {
#[derive(Debug)]
struct File {
content: String,
content: Box<[u8]>,
last_modified: FileTime,
}
@@ -497,6 +508,13 @@ fn directory_not_empty() -> std::io::Error {
std::io::Error::other("directory not empty")
}
fn invalid_utf8() -> std::io::Error {
std::io::Error::new(
std::io::ErrorKind::InvalidData,
"stream did not contain valid UTF-8",
)
}
fn create_dir_all(
paths: &mut RwLockWriteGuard<BTreeMap<Utf8PathBuf, Entry>>,
normalized: &Utf8Path,
@@ -533,7 +551,7 @@ fn get_or_create_file<'a>(
let entry = paths.entry(normalized.to_path_buf()).or_insert_with(|| {
Entry::File(File {
content: String::new(),
content: Box::default(),
last_modified: file_time_now(),
})
});
@@ -844,7 +862,7 @@ mod tests {
let fs = with_files(["c.py"]);
let error = fs
.write_file(SystemPath::new("a/b.py"), "content".to_string())
.write_file(SystemPath::new("a/b.py"), "content")
.unwrap_err();
assert_eq!(error.kind(), ErrorKind::NotFound);
@@ -855,7 +873,7 @@ mod tests {
let fs = with_files(["a/b.py"]);
let error = fs
.write_file_all(SystemPath::new("a/b.py/c"), "content".to_string())
.write_file_all(SystemPath::new("a/b.py/c"), "content")
.unwrap_err();
assert_eq!(error.kind(), ErrorKind::Other);
@@ -878,7 +896,7 @@ mod tests {
let fs = MemoryFileSystem::new();
let path = SystemPath::new("a.py");
fs.write_file_all(path, "Test content".to_string())?;
fs.write_file_all(path, "Test content")?;
assert_eq!(fs.read_to_string(path)?, "Test content");
@@ -915,9 +933,7 @@ mod tests {
fs.create_directory_all("a")?;
let error = fs
.write_file(SystemPath::new("a"), "content".to_string())
.unwrap_err();
let error = fs.write_file(SystemPath::new("a"), "content").unwrap_err();
assert_eq!(error.kind(), ErrorKind::Other);
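
Since `MemoryFileSystem` now stores `Box<[u8]>` and the write methods take `impl AsRef<[u8]>`, both strings and raw bytes can be written; reading non-UTF-8 bytes back through `read_to_string` surfaces the new invalid-UTF-8 error. A small sketch using only methods visible in this diff (it assumes `ruff_db` is available as a dependency):

```rust
use ruff_db::system::{MemoryFileSystem, SystemPath};

fn demo() -> std::io::Result<()> {
    let fs = MemoryFileSystem::new();

    // `&str` still works because it satisfies `AsRef<[u8]>`...
    fs.write_file_all(SystemPath::new("a.py"), "print('hi')")?;
    assert_eq!(fs.read_to_string(SystemPath::new("a.py"))?, "print('hi')");

    // ...and raw bytes are now accepted too. Reading them back as a string
    // reports the invalid-UTF-8 error instead of succeeding.
    fs.write_file_all(SystemPath::new("b.bin"), [0xFF, 0xFE])?;
    assert!(fs.read_to_string(SystemPath::new("b.bin")).is_err());
    Ok(())
}
```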

View File

@@ -361,13 +361,17 @@ impl WritableSystem for OsSystem {
std::fs::File::create_new(path).map(drop)
}
fn write_file(&self, path: &SystemPath, content: &str) -> Result<()> {
fn write_file_bytes(&self, path: &SystemPath, content: &[u8]) -> Result<()> {
std::fs::write(path.as_std_path(), content)
}
fn create_directory_all(&self, path: &SystemPath) -> Result<()> {
std::fs::create_dir_all(path.as_std_path())
}
fn dyn_clone(&self) -> Box<dyn WritableSystem> {
Box::new(self.clone())
}
}
impl Default for OsSystem {

View File

@@ -205,13 +205,17 @@ impl WritableSystem for TestSystem {
self.system().create_new_file(path)
}
fn write_file(&self, path: &SystemPath, content: &str) -> Result<()> {
self.system().write_file(path, content)
fn write_file_bytes(&self, path: &SystemPath, content: &[u8]) -> Result<()> {
self.system().write_file_bytes(path, content)
}
fn create_directory_all(&self, path: &SystemPath) -> Result<()> {
self.system().create_directory_all(path)
}
fn dyn_clone(&self) -> Box<dyn WritableSystem> {
Box::new(self.clone())
}
}
/// Extension trait for databases that use a [`WritableSystem`].
@@ -283,7 +287,11 @@ pub trait DbWithTestSystem: Db + Sized {
///
/// ## Panics
/// If the db isn't using the [`InMemorySystem`].
fn write_virtual_file(&mut self, path: impl AsRef<SystemVirtualPath>, content: impl ToString) {
fn write_virtual_file(
&mut self,
path: impl AsRef<SystemVirtualPath>,
content: impl AsRef<[u8]>,
) {
let path = path.as_ref();
self.test_system()
.memory_file_system()
@@ -322,23 +330,23 @@ where
}
}
#[derive(Default, Debug)]
#[derive(Clone, Default, Debug)]
pub struct InMemorySystem {
user_config_directory: Mutex<Option<SystemPathBuf>>,
user_config_directory: Arc<Mutex<Option<SystemPathBuf>>>,
memory_fs: MemoryFileSystem,
}
impl InMemorySystem {
pub fn new(cwd: SystemPathBuf) -> Self {
Self {
user_config_directory: Mutex::new(None),
user_config_directory: Mutex::new(None).into(),
memory_fs: MemoryFileSystem::with_current_directory(cwd),
}
}
pub fn from_memory_fs(memory_fs: MemoryFileSystem) -> Self {
Self {
user_config_directory: Mutex::new(None),
user_config_directory: Mutex::new(None).into(),
memory_fs,
}
}
@@ -440,10 +448,7 @@ impl System for InMemorySystem {
}
fn dyn_clone(&self) -> Box<dyn System> {
Box::new(Self {
user_config_directory: Mutex::new(self.user_config_directory.lock().unwrap().clone()),
memory_fs: self.memory_fs.clone(),
})
Box::new(self.clone())
}
}
@@ -452,11 +457,15 @@ impl WritableSystem for InMemorySystem {
self.memory_fs.create_new_file(path)
}
fn write_file(&self, path: &SystemPath, content: &str) -> Result<()> {
fn write_file_bytes(&self, path: &SystemPath, content: &[u8]) -> Result<()> {
self.memory_fs.write_file(path, content)
}
fn create_directory_all(&self, path: &SystemPath) -> Result<()> {
self.memory_fs.create_directory_all(path)
}
fn dyn_clone(&self) -> Box<dyn WritableSystem> {
Box::new(self.clone())
}
}

View File

@@ -619,19 +619,19 @@ pub(crate) mod tests {
fn read_directory_stdlib() {
let mock_typeshed = mock_typeshed();
assert_snapshot!(readdir_snapshot(&mock_typeshed, "stdlib"), @r"
assert_snapshot!(readdir_snapshot(&mock_typeshed, "stdlib"), @"
vendored://stdlib/asyncio/
vendored://stdlib/functools.pyi
");
assert_snapshot!(readdir_snapshot(&mock_typeshed, "stdlib/"), @r"
assert_snapshot!(readdir_snapshot(&mock_typeshed, "stdlib/"), @"
vendored://stdlib/asyncio/
vendored://stdlib/functools.pyi
");
assert_snapshot!(readdir_snapshot(&mock_typeshed, "./stdlib"), @r"
assert_snapshot!(readdir_snapshot(&mock_typeshed, "./stdlib"), @"
vendored://stdlib/asyncio/
vendored://stdlib/functools.pyi
");
assert_snapshot!(readdir_snapshot(&mock_typeshed, "./stdlib/"), @r"
assert_snapshot!(readdir_snapshot(&mock_typeshed, "./stdlib/"), @"
vendored://stdlib/asyncio/
vendored://stdlib/functools.pyi
");

View File

@@ -13,7 +13,10 @@ impl<T, C> AsFormat<C> for &T
where
T: AsFormat<C>,
{
type Format<'a> = T::Format<'a> where Self: 'a;
type Format<'a>
= T::Format<'a>
where
Self: 'a;
fn format(&self) -> Self::Format<'_> {
AsFormat::format(&**self)

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.14.10"
version = "0.14.11"
publish = false
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -1,6 +1,5 @@
---
source: crates/ruff_linter/src/comments/shebang.rs
expression: "ShebangDirective::try_extract(source)"
snapshot_kind: text
---
None

View File

@@ -1,6 +1,5 @@
---
source: crates/ruff_linter/src/comments/shebang.rs
expression: "ShebangDirective::try_extract(source)"
snapshot_kind: text
---
None

View File

@@ -1,7 +1,6 @@
---
source: crates/ruff_linter/src/comments/shebang.rs
expression: "ShebangDirective::try_extract(source)"
snapshot_kind: text
---
Some(
ShebangDirective(

View File

@@ -1,7 +1,6 @@
---
source: crates/ruff_linter/src/comments/shebang.rs
expression: "ShebangDirective::try_extract(source)"
snapshot_kind: text
---
Some(
ShebangDirective(

View File

@@ -1,6 +1,5 @@
---
source: crates/ruff_linter/src/comments/shebang.rs
expression: "ShebangDirective::try_extract(source)"
snapshot_kind: text
---
None

View File

@@ -26,6 +26,7 @@ use crate::doc_lines::{doc_lines_from_ast, doc_lines_from_tokens};
use crate::fix::{FixResult, fix_file};
use crate::noqa::add_noqa;
use crate::package::PackageRoot;
use crate::preview::is_py315_support_enabled;
use crate::registry::Rule;
#[cfg(any(feature = "test-rules", test))]
use crate::rules::ruff::rules::test_rules::{self, TEST_RULES, TestRule};
@@ -33,7 +34,7 @@ use crate::settings::types::UnsafeFixes;
use crate::settings::{LinterSettings, TargetVersion, flags};
use crate::source_kind::SourceKind;
use crate::suppression::Suppressions;
use crate::{Locator, directives, fs};
use crate::{Locator, directives, fs, warn_user_once};
pub(crate) mod float;
@@ -450,6 +451,14 @@ pub fn lint_only(
) -> LinterResult {
let target_version = settings.resolve_target_version(path);
if matches!(target_version.linter_version(), PythonVersion::PY315)
&& !is_py315_support_enabled(settings)
{
warn_user_once!(
"Support for Python 3.15 is under development and may be unstable. Enable `preview` to remove this warning."
);
}
let parsed = source.into_parsed(source_kind, source_type, target_version.parser_version());
// Map row and column locations to byte slices (lazily).
@@ -555,6 +564,14 @@ pub fn lint_fix<'a>(
let target_version = settings.resolve_target_version(path);
if matches!(target_version.linter_version(), PythonVersion::PY315)
&& !is_py315_support_enabled(settings)
{
warn_user_once!(
"Support for Python 3.15 is under development and may be unstable. Enable `preview` to remove this warning."
);
}
// Continuously fix until the source code stabilizes.
loop {
// Parse once.

View File

@@ -1,7 +1,6 @@
---
source: crates/ruff_linter/src/message/grouped.rs
expression: content
snapshot_kind: text
---
fib.py:
1:8 F401 `os` imported but unused

View File

@@ -1326,7 +1326,7 @@ mod tests {
fn noqa_all() {
let source = "# noqa";
let directive = lex_inline_noqa(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(directive, @r"
assert_debug_snapshot!(directive, @"
Ok(
Some(
NoqaLexerOutput {
@@ -1347,7 +1347,7 @@ mod tests {
fn noqa_no_code() {
let source = "# noqa:";
let directive = lex_inline_noqa(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(directive, @r"
assert_debug_snapshot!(directive, @"
Err(
MissingCodes,
)
@@ -1359,7 +1359,7 @@ mod tests {
fn noqa_no_code_invalid_suffix() {
let source = "# noqa: foo";
let directive = lex_inline_noqa(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(directive, @r"
assert_debug_snapshot!(directive, @"
Err(
MissingCodes,
)
@@ -1371,7 +1371,7 @@ mod tests {
fn noqa_no_code_trailing_content() {
let source = "# noqa: # Foo";
let directive = lex_inline_noqa(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(directive, @r"
assert_debug_snapshot!(directive, @"
Err(
MissingCodes,
)
@@ -1383,7 +1383,7 @@ mod tests {
fn malformed_code_1() {
let source = "# noqa: F";
let directive = lex_inline_noqa(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(directive, @r"
assert_debug_snapshot!(directive, @"
Err(
MissingCodes,
)
@@ -1422,7 +1422,7 @@ mod tests {
fn malformed_code_3() {
let source = "# noqa: RUF001F";
let directive = lex_inline_noqa(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(directive, @r"
assert_debug_snapshot!(directive, @"
Err(
InvalidCodeSuffix,
)
@@ -1492,7 +1492,7 @@ mod tests {
fn noqa_all_case_insensitive() {
let source = "# NOQA";
let directive = lex_inline_noqa(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(directive, @r"
assert_debug_snapshot!(directive, @"
Ok(
Some(
NoqaLexerOutput {
@@ -1625,7 +1625,7 @@ mod tests {
fn noqa_all_no_space() {
let source = "#noqa";
let directive = lex_inline_noqa(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(directive, @r"
assert_debug_snapshot!(directive, @"
Ok(
Some(
NoqaLexerOutput {
@@ -1704,7 +1704,7 @@ mod tests {
fn noqa_all_multi_space() {
let source = "# noqa";
let directive = lex_inline_noqa(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(directive, @r"
assert_debug_snapshot!(directive, @"
Ok(
Some(
NoqaLexerOutput {
@@ -1837,7 +1837,7 @@ mod tests {
fn noqa_all_leading_comment() {
let source = "# Some comment describing the noqa # noqa";
let directive = lex_inline_noqa(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(directive, @r"
assert_debug_snapshot!(directive, @"
Ok(
Some(
NoqaLexerOutput {
@@ -1916,7 +1916,7 @@ mod tests {
fn noqa_all_trailing_comment() {
let source = "# noqa # Some comment describing the noqa";
let directive = lex_inline_noqa(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(directive, @r"
assert_debug_snapshot!(directive, @"
Ok(
Some(
NoqaLexerOutput {
@@ -1995,7 +1995,7 @@ mod tests {
fn noqa_invalid_codes() {
let source = "# noqa: unused-import, F401, some other code";
let directive = lex_inline_noqa(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(directive, @r"
assert_debug_snapshot!(directive, @"
Err(
MissingCodes,
)
@@ -2139,7 +2139,7 @@ mod tests {
fn noqa_code_invalid_code_suffix() {
let source = "# noqa: F401abc";
let directive = lex_inline_noqa(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(directive, @r"
assert_debug_snapshot!(directive, @"
Err(
InvalidCodeSuffix,
)
@@ -2151,7 +2151,7 @@ mod tests {
fn noqa_invalid_suffix() {
let source = "# noqa[F401]";
let directive = lex_inline_noqa(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(directive, @r"
assert_debug_snapshot!(directive, @"
Err(
InvalidSuffix,
)
@@ -2163,7 +2163,7 @@ mod tests {
fn flake8_exemption_all() {
let source = "# flake8: noqa";
let exemption = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(exemption, @r"
assert_debug_snapshot!(exemption, @"
Ok(
Some(
NoqaLexerOutput {
@@ -2184,7 +2184,7 @@ mod tests {
fn flake8_noqa_no_code() {
let source = "# flake8: noqa:";
let exemption = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(exemption, @r"
assert_debug_snapshot!(exemption, @"
Err(
MissingCodes,
)
@@ -2196,7 +2196,7 @@ mod tests {
fn flake8_noqa_no_code_invalid_suffix() {
let source = "# flake8: noqa: foo";
let exemption = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(exemption, @r"
assert_debug_snapshot!(exemption, @"
Err(
MissingCodes,
)
@@ -2208,7 +2208,7 @@ mod tests {
fn flake8_noqa_no_code_trailing_content() {
let source = "# flake8: noqa: # Foo";
let exemption = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(exemption, @r"
assert_debug_snapshot!(exemption, @"
Err(
MissingCodes,
)
@@ -2220,7 +2220,7 @@ mod tests {
fn flake8_malformed_code_1() {
let source = "# flake8: noqa: F";
let directive = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(directive, @r"
assert_debug_snapshot!(directive, @"
Err(
MissingCodes,
)
@@ -2259,7 +2259,7 @@ mod tests {
fn flake8_malformed_code_3() {
let source = "# flake8: noqa: RUF001F";
let directive = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(directive, @r"
assert_debug_snapshot!(directive, @"
Err(
InvalidCodeSuffix,
)
@@ -2271,7 +2271,7 @@ mod tests {
fn ruff_exemption_all() {
let source = "# ruff: noqa";
let exemption = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(exemption, @r"
assert_debug_snapshot!(exemption, @"
Ok(
Some(
NoqaLexerOutput {
@@ -2292,7 +2292,7 @@ mod tests {
fn ruff_noqa_no_code() {
let source = "# ruff: noqa:";
let exemption = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(exemption, @r"
assert_debug_snapshot!(exemption, @"
Err(
MissingCodes,
)
@@ -2304,7 +2304,7 @@ mod tests {
fn ruff_noqa_no_code_invalid_suffix() {
let source = "# ruff: noqa: foo";
let exemption = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(exemption, @r"
assert_debug_snapshot!(exemption, @"
Err(
MissingCodes,
)
@@ -2316,7 +2316,7 @@ mod tests {
fn ruff_noqa_no_code_trailing_content() {
let source = "# ruff: noqa: # Foo";
let exemption = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(exemption, @r"
assert_debug_snapshot!(exemption, @"
Err(
MissingCodes,
)
@@ -2328,7 +2328,7 @@ mod tests {
fn ruff_malformed_code_1() {
let source = "# ruff: noqa: F";
let directive = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(directive, @r"
assert_debug_snapshot!(directive, @"
Err(
MissingCodes,
)
@@ -2367,7 +2367,7 @@ mod tests {
fn ruff_malformed_code_3() {
let source = "# ruff: noqa: RUF001F";
let directive = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(directive, @r"
assert_debug_snapshot!(directive, @"
Err(
InvalidCodeSuffix,
)
@@ -2379,7 +2379,7 @@ mod tests {
fn flake8_exemption_all_no_space() {
let source = "#flake8:noqa";
let exemption = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(exemption, @r"
assert_debug_snapshot!(exemption, @"
Ok(
Some(
NoqaLexerOutput {
@@ -2400,7 +2400,7 @@ mod tests {
fn ruff_exemption_all_no_space() {
let source = "#ruff:noqa";
let exemption = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(exemption, @r"
assert_debug_snapshot!(exemption, @"
Ok(
Some(
NoqaLexerOutput {
@@ -2619,7 +2619,7 @@ mod tests {
fn ruff_exemption_invalid_code_suffix() {
let source = "# ruff: noqa: F401abc";
let exemption = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(exemption, @r"
assert_debug_snapshot!(exemption, @"
Err(
InvalidCodeSuffix,
)
@@ -2685,7 +2685,7 @@ mod tests {
fn ruff_exemption_all_leading_comment() {
let source = "# Leading comment # ruff: noqa";
let exemption = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(exemption, @r"
assert_debug_snapshot!(exemption, @"
Ok(
Some(
NoqaLexerOutput {
@@ -2706,7 +2706,7 @@ mod tests {
fn ruff_exemption_all_trailing_comment() {
let source = "# ruff: noqa # Trailing comment";
let exemption = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(exemption, @r"
assert_debug_snapshot!(exemption, @"
Ok(
Some(
NoqaLexerOutput {
@@ -2754,7 +2754,7 @@ mod tests {
fn ruff_exemption_all_trailing_comment_no_space() {
let source = "# ruff: noqa# Trailing comment";
let exemption = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(exemption, @r"
assert_debug_snapshot!(exemption, @"
Ok(
Some(
NoqaLexerOutput {
@@ -2775,7 +2775,7 @@ mod tests {
fn ruff_exemption_all_trailing_comment_no_hash() {
let source = "# ruff: noqa Trailing comment";
let exemption = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(exemption, @r"
assert_debug_snapshot!(exemption, @"
Ok(
Some(
NoqaLexerOutput {
@@ -2823,7 +2823,7 @@ mod tests {
fn flake8_exemption_all_case_insensitive() {
let source = "# flake8: NoQa";
let exemption = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(exemption, @r"
assert_debug_snapshot!(exemption, @"
Ok(
Some(
NoqaLexerOutput {
@@ -2844,7 +2844,7 @@ mod tests {
fn ruff_exemption_all_case_insensitive() {
let source = "# ruff: NoQa";
let exemption = lex_file_exemption(TextRange::up_to(source.text_len()), source);
assert_debug_snapshot!(exemption, @r"
assert_debug_snapshot!(exemption, @"
Ok(
Some(
NoqaLexerOutput {

View File

@@ -296,3 +296,8 @@ pub(crate) const fn is_s310_resolve_string_literal_bindings_enabled(
pub(crate) const fn is_range_suppressions_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()
}
// https://github.com/astral-sh/ruff/pull/22419
pub(crate) const fn is_py315_support_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()
}

View File

@@ -1,5 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_annotations/mod.rs
snapshot_kind: text
---

View File

@@ -1,5 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_annotations/mod.rs
snapshot_kind: text
---

View File

@@ -1,5 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_async/mod.rs
snapshot_kind: text
---

View File

@@ -1,5 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
snapshot_kind: text
---

View File

@@ -1,5 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
snapshot_kind: text
---

View File

@@ -1,5 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
snapshot_kind: text
---

View File

@@ -1,5 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
snapshot_kind: text
---

View File

@@ -1,5 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
snapshot_kind: text
---

View File

@@ -1,5 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
snapshot_kind: text
---

View File

@@ -1,5 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
snapshot_kind: text
---

View File

@@ -1,5 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
snapshot_kind: text
---

View File

@@ -1,5 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
snapshot_kind: text
---

View File

@@ -1,5 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
snapshot_kind: text
---

Some files were not shown because too many files have changed in this diff.