Compare commits


209 Commits

Author SHA1 Message Date
David Peter
9220598fc8 [ty] Experiment: half-baked typing.TypeAlias support 2025-05-20 11:33:59 +02:00
Micha Reiser
f9ca6eb63e Fix rendering of admonition in docs (#18163) 2025-05-20 08:22:06 +02:00
Adam Aaronson
8729cb208f [ty] Raise invalid-exception-caught even when exception is not captured (#18202)
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
Co-authored-by: Carl Meyer <carl@astral.sh>
2025-05-19 18:13:34 -04:00
Emily B. Zhang
a2c87c2bc1 [ty] Add note to unresolved-import hinting to users to configure their Python environment (#18207)
Closes https://github.com/astral-sh/ty/issues/453.

## Summary

Add an additional info diagnostic to the `unresolved-import` check, hinting
to users that they should make sure their Python environment is properly
configured for ty, and linking them to the corresponding doc. This
diagnostic is only shown when an import is not relative, e.g., `import
maturin` but not `import .maturin`.

## Test Plan

Updated snapshots with new info message and reran tests.
2025-05-19 17:24:25 -04:00
Vasco Schiavo
b302d89da3 [flake8-simplify] add fix safety section (SIM110) (#18114)
The PR adds the `fix safety` section for rule `SIM110` (#15584).

### Unsafe Fix Example

```python
def predicate(item):
    global called
    called += 1
    if called == 1:
        # after first call we change the method
        def new_predicate(_): return False
        globals()['predicate'] = new_predicate
    return True

def foo():
    for item in range(10):
        if predicate(item):
            return True
    return False

def foo_gen():
    return any(predicate(item) for item in range(10))

called = 0
print(foo())      # True – returns immediately on the first call

called = 0
print(foo_gen())  # False – the second call uses the new `predicate`
```

### Note

I noticed that
[here](46be305ad2/crates/ruff_linter/src/rules/flake8_simplify/rules/reimplemented_builtin.rs (L60))
we have two rules, `SIM110` & `SIM111`. The second one no longer seems to be
active. Should I delete `SIM111`?
2025-05-19 16:38:08 -04:00
Douglas Creager
ce43dbab58 [ty] Promote literals when inferring class specializations from constructors (#18102)
This implements the stopgap approach described in
https://github.com/astral-sh/ty/issues/336#issuecomment-2880532213 for
handling literal types in generic class specializations.

With this approach, we will promote any literal to its instance type,
but _only_ when inferring a generic class specialization from a
constructor call:

```py
class C[T]:
    def __init__(self, x: T) -> None: ...

reveal_type(C("string"))  # revealed: C[str]
```

If you specialize the class explicitly, we still use whatever type you
provide, even if it's a literal:

```py
from typing import Literal

reveal_type(C[Literal[5]](5))  # revealed: C[Literal[5]]
```

And this doesn't apply at all to generic functions:

```py
def f[T](x: T) -> T:
    return x

reveal_type(f(5))  # revealed: Literal[5]
```

---

As part of making this happen, we also generalize the `TypeMapping`
machinery. This provides a way to apply a function to a type, returning a
new type. Complicating matters is that for function literals, we have to
apply the mapping lazily, since the function's signature is not created
until (and if) someone calls its `signature` method. That means we have to
stash away the mappings that we want to apply to the signature's
parameter/return annotations once we do create it. This requires some
minor `Cow` shenanigans to continue working for partial specializations.
2025-05-19 15:42:54 -04:00
Felix Scherz
fb589730ef [ty]: Consider a class with a dynamic element in its MRO assignable to any subtype of type (#18205) 2025-05-19 19:30:30 +00:00
Douglas Creager
4fad15805b [ty] Use first matching constructor overload when inferring specializations (#18204)
This is a follow-on to #18155. For the example raised in
https://github.com/astral-sh/ty/issues/370:

```py
import tempfile

with tempfile.TemporaryDirectory() as tmp: ...
```

the new logic would notice that both overloads of `TemporaryDirectory`
match, and combine their specializations, resulting in an inferred type
of `str | bytes`.

This PR updates the logic to match how we handle other calls,
where we only keep the _first_ matching overload. The result for this
example then becomes `str`, matching the runtime behavior. (We still do
not implement the full [overload resolution
algorithm](https://typing.python.org/en/latest/spec/overload.html#overload-call-evaluation)
from the spec.)
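
A hedged sketch of the inference this change aims for, per the description above:

```py
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    reveal_type(tmp)  # revealed: str  (only the first matching overload is kept)
```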
2025-05-19 15:12:28 -04:00
David Peter
0ede831a3f [ty] Add hint that PEP 604 union syntax is only available in 3.10+ (#18192)
## Summary

Add a new diagnostic hint if you try to use PEP 604 `X | Y` union syntax
in a non-type-expression before 3.10.

closes https://github.com/astral-sh/ty/issues/437
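
A hypothetical snippet that would trigger the new hint, assuming a configured Python version below 3.10:

```py
# On Python 3.9, `int | str` here is evaluated at runtime (a non-type-expression context),
# so the resulting diagnostic now carries a hint that PEP 604 syntax needs 3.10+.
IntOrStr = int | str
```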

## Test Plan

New snapshot test
2025-05-19 19:47:31 +02:00
Brent Westbrook
d6009eb942 Unify Message variants (#18051)
## Summary

This PR unifies the ruff `Message` enum variants for syntax errors and
rule violations into a single `Message` struct consisting of a shared
`db::Diagnostic` and some additional, optional fields used for some rule
violations.

This version of `Message` is nearly a drop-in replacement for
`ruff_diagnostics::Diagnostic`, which is the next step I have in mind
for the refactor.

I think this is also a useful checkpoint because we could possibly add
some of these optional fields to the new `Diagnostic` type. I think
we've previously discussed wanting support for `Fix`es, but the other
fields seem less relevant, so we may just need to preserve the `Message`
wrapper for a bit longer.

## Test plan

Existing tests

---------

Co-authored-by: Micha Reiser <micha@reiser.io>
2025-05-19 13:34:04 -04:00
Wei Lee
236633cd42 [airflow] Update AIR301 and AIR311 with the latest Airflow implementations (#17985)

## Summary

* Remove the following rules
    * name
        * `airflow.auth.managers.base_auth_manager.is_authorized_dataset` → `airflow.api_fastapi.auth.managers.base_auth_manager.is_authorized_asset`
        * `airflow.providers.fab.auth_manager.fab_auth_manager.is_authorized_dataset` → `airflow.providers.fab.auth_manager.fab_auth_manager.is_authorized_asset`
* Update the following rules
    * name
        * `airflow.models.baseoperatorlink.BaseOperatorLink` → `airflow.sdk.BaseOperatorLink`
        * `airflow.api_connexion.security.requires_access` → "Use `airflow.api_fastapi.core_api.security.requires_access_*` instead"
        * `airflow.api_connexion.security.requires_access_dataset` → `airflow.api_fastapi.core_api.security.requires_access_asset`
        * `airflow.notifications.basenotifier.BaseNotifier` → `airflow.sdk.bases.notifier.BaseNotifier`
        * `airflow.www.auth.has_access` → None
        * `airflow.www.auth.has_access_dataset` → None
        * `airflow.www.utils.get_sensitive_variables_fields` → None
        * `airflow.www.utils.should_hide_value_for_key` → None
    * class attribute
        * `airflow.sensors.weekday.DayOfWeekSensor`
            * `use_task_execution_day` removed
        * `airflow.providers.amazon.aws.auth_manager.aws_auth_manager.AwsAuthManager`
            * `is_authorized_dataset`
* Add the following rules
    * class attribute
        * `airflow.auth.managers.base_auth_manager.BaseAuthManager` | `airflow.providers.fab.auth_manager.fab_auth_manager.FabAuthManager`
            * `is_authorized_dataset` → `is_authorized_asset`
    * name
        * `airflow.auth.managers.base_auth_manager.BaseAuthManager` → `airflow.api_fastapi.auth.managers.base_auth_manager.BaseAuthManager`
* refactor
    * simplify unnecessary match with if/else
    * rename `Replacement::Name` as `Replacement::AttrName`

## Test Plan

The test fixtures have been revised and updated.
2025-05-19 13:28:04 -04:00
Wei Lee
99cb89f90f [airflow] Move rules from AIR312 to AIR302 (#17940)

## Summary

In the later development of Airflow 3.0, backward compatibility was not
added for some cases. Thus, the following rules have been moved back to AIR302:

* airflow.hooks.subprocess.SubprocessResult → airflow.providers.standard.hooks.subprocess.SubprocessResult
* airflow.hooks.subprocess.working_directory → airflow.providers.standard.hooks.subprocess.working_directory
* airflow.operators.datetime.target_times_as_dates → airflow.providers.standard.operators.datetime.target_times_as_dates
* airflow.operators.trigger_dagrun.TriggerDagRunLink → airflow.providers.standard.operators.trigger_dagrun.TriggerDagRunLink
* airflow.sensors.external_task.ExternalTaskSensorLink → airflow.providers.standard.sensors.external_task.ExternalDagLink (**this one contains a minor change**)
* airflow.sensors.time_delta.WaitSensor → airflow.providers.standard.sensors.time_delta.WaitSensor

## Test Plan

2025-05-19 13:20:21 -04:00
Micha Reiser
ac5df56aa3 [ty] Small LSP cleanups (#18201) 2025-05-19 17:08:59 +00:00
Micha Reiser
6985de4c40 [ty] Show related information in diagnostic (#17359) 2025-05-19 18:52:12 +02:00
Micha Reiser
55a410a885 Default src.root to ['.', '<project_name>'] if the directory exists (#18141) 2025-05-19 18:11:27 +02:00
Douglas Creager
97058e8093 [ty] Infer function call typevars in both directions (#18155)
This primarily comes up with annotated `self` parameters in
constructors:

```py
class C[T]:
    def __init__(self: C[int]): ...
```

Here, we want to infer a specialization of `{T = int}` for a call that hits
this overload.
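
A hedged sketch of the intended result for the example above:

```py
reveal_type(C())  # revealed: C[int]  (the `{T = int}` specialization is inferred from the `self` annotation)
```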

Normally when inferring a specialization of a function call, typevars
appear in the parameter annotations, and not in the argument types. In
this case, this is reversed: we need to verify that the `self` argument
(`C[T]`, as we have not yet completed specialization inference) is
assignable to the parameter type `C[int]`.

To do this, we simply look for a typevar/type in both directions when
performing inference, and apply the inferred specialization to argument
types as well as parameter types before verifying assignability.

As a wrinkle, this exposed that we were not checking
subtyping/assignability for function literals correctly. Our function
literal representation includes an optional specialization that should
be applied to the signature. Before, function literals were considered
subtypes of (assignable to) each other only if they were identical Salsa
objects. Two function literals with different specializations should
still be considered subtypes of (assignable to) each other if those
specializations result in the same function signature (typically because
the function doesn't use the typevars in the specialization).

Closes https://github.com/astral-sh/ty/issues/370
Closes https://github.com/astral-sh/ty/issues/100
Closes https://github.com/astral-sh/ty/issues/258

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-05-19 11:45:40 -04:00
Douglas Creager
569c94b71b Add rustfmt.toml file (#18197)
My editor runs `rustfmt` on save to format Rust code, not `cargo fmt`.

With our recent bump to the Rust 2024 edition, the formatting that
`rustfmt`/`cargo fmt` applies changed. Unfortunately, `rustfmt` and
`cargo fmt` have different behaviors for determining which edition to
use when formatting: `cargo fmt` looks for the Rust edition in
`Cargo.toml`, whereas `rustfmt` looks for it in `rustfmt.toml`. As a
result, whenever I save, I have to remember to manually run `cargo fmt`
before committing/pushing.

There is an open issue asking for `rustfmt` to also look at `Cargo.toml`
when it's present (https://github.com/rust-lang/rust.vim/issues/368),
but it seems like they "closed" that issue just by bumping the default
edition (six years ago, from 2015 to 2018).

In the meantime, this PR adds a `rustfmt.toml` file with our current
Rust edition so that both invocations have the same behavior. I don't
love that this duplicates information in `Cargo.toml`, but I've added a
reminder comment there to hopefully ensure that we bump the edition in
both places three years from now.
2025-05-19 11:40:58 -04:00
Micha Reiser
59d80aff9f [ty] Update mypy primer (#18196) 2025-05-19 17:35:48 +02:00
David Peter
b913f568c4 [ty] Mark generated files as such in .gitattributes (#18195)
## Summary

See comment here:
https://github.com/astral-sh/ruff/pull/18156#discussion_r2095850586
2025-05-19 16:50:48 +02:00
David Peter
4c889d5251 [ty] Support typing.TypeAliasType (#18156)
## Summary

Support direct uses of `typing.TypeAliasType`, as in:

```py
from typing import TypeAliasType

IntOrStr = TypeAliasType("IntOrStr", int | str)

def f(x: IntOrStr) -> None:
    reveal_type(x)  # revealed: int | str
```

closes https://github.com/astral-sh/ty/issues/392

## Ecosystem

The new false positive here:
```diff
+ error[invalid-type-form] altair/utils/core.py:49:53: The first argument to `Callable` must be either a list of types, ParamSpec, Concatenate, or `...`
```
comes from the fact that we infer the second argument as a type
expression now. We silence false positives for PEP695 `ParamSpec`s, but
not for `P = ParamSpec("P")` inside `Callable[P, ...]`.

## Test Plan

New Markdown tests
2025-05-19 16:36:49 +02:00
Micha Reiser
220137ca7b Cargo update (#18191) 2025-05-19 09:14:11 +02:00
renovate[bot]
34337fb8ba Update NPM Development dependencies (#18187)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 08:57:45 +02:00
renovate[bot]
38c332fe23 Update Rust crate bincode to v2 (#18188)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-05-19 08:57:09 +02:00
renovate[bot]
9f743d1b9f Update astral-sh/setup-uv action to v6 (#18184)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 08:46:40 +02:00
renovate[bot]
405544cc8f Update dependency react-resizable-panels to v3 (#18185)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 06:29:31 +00:00
renovate[bot]
ab96adbcd1 Update dependency ruff to v0.11.10 (#18171)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 08:29:03 +02:00
renovate[bot]
a761b8cfa2 Update pre-commit dependencies (#18172)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-05-19 08:28:43 +02:00
renovate[bot]
8c020cc2e9 Update docker/build-push-action action to v6.17.0 (#18174)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 08:28:31 +02:00
renovate[bot]
c67aa0cce2 Update uraimo/run-on-arch-action action to v3 (#18190)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 08:28:22 +02:00
renovate[bot]
b00e390f3a Update docker/metadata-action action to v5.7.0 (#18175)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 08:28:09 +02:00
renovate[bot]
1f9df0c8f0 Update docker/setup-buildx-action action to v3.10.0 (#18176)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 08:27:55 +02:00
renovate[bot]
9dd9227bca Update taiki-e/install-action action to v2.51.2 (#18183)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 08:27:44 +02:00
renovate[bot]
181a380ee0 Update extractions/setup-just action to v3 (#18186)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 08:26:34 +02:00
renovate[bot]
12f5e99389 Update Rust crate jod-thread to v1 (#18189)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 08:25:46 +02:00
renovate[bot]
d9cd6399e6 Update Rust crate insta to v1.43.1 (#18180)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 06:24:40 +00:00
renovate[bot]
c40a801002 Update dependency pyodide to v0.27.6 (#18170)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 08:23:21 +02:00
renovate[bot]
04168cf1ce Update react monorepo to v19.1.0 (#18178)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 08:13:33 +02:00
renovate[bot]
6d0703ae78 Update taiki-e/install-action digest to 941e8a4 (#18168)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 08:10:30 +02:00
renovate[bot]
f7691a79a0 Update peter-evans/find-comment action to v3.1.0 (#18177)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 08:10:14 +02:00
renovate[bot]
6e7340c68b Update cargo-bins/cargo-binstall action to v1.12.5 (#18169)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 08:09:54 +02:00
renovate[bot]
5095248b7e Update Rust crate bitflags to v2.9.1 (#18173)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 08:09:42 +02:00
renovate[bot]
b1e6c6edce Update Rust crate criterion to 0.6.0 (#18179)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 08:09:18 +02:00
renovate[bot]
8ee92c6c77 Update Rust crate tempfile to v3.20.0 (#18182)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 08:07:22 +02:00
renovate[bot]
d6709abd94 Update Rust crate quickcheck_macros to v1.1.0 (#18181)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-19 08:06:32 +02:00
Dragon
660375d429 T201/T203 Improve print/pprint docs (#18130)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-05-18 18:40:42 +02:00
Alex Waygood
dd04ca7f58 [ty] Add regression test for fixed pyvenv.cfg parsing bug (#18157) 2025-05-17 21:10:15 +00:00
Chandra Kiran G
b86960f18c [ty] Add rule link to server diagnostics (#18128)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-05-17 17:27:59 +00:00
Carl Meyer
2abcd86c57 Revert "[ty] Better control flow for boolean expressions that are inside if (#18010)" (#18150)
This reverts commit 9910ec700c.

## Summary

This change introduced a serious performance regression. Revert it while
we investigate.

Fixes https://github.com/astral-sh/ty/issues/431

## Test Plan

Timing on the snippet in https://github.com/astral-sh/ty/issues/431
again shows times similar to before the regression.
2025-05-17 08:27:32 -04:00
Matthew Mckee
c6e55f673c Remove pyvenv.cfg validation check for lines with multiple = (#18144) 2025-05-17 08:42:39 +02:00
Micha Reiser
3d55a16c91 [ty] Migrate the namespace package module resolver tests to mdtests (#18133)
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2025-05-16 19:56:33 +02:00
Micha Reiser
e21972a79b Fix test scripts CI job (#18140) 2025-05-16 17:49:27 +00:00
Alex Waygood
0adbb3d600 [ty] Fix assignability checks for invariant generics parameterized by gradual types (#18138) 2025-05-16 13:37:07 -04:00
Alex Waygood
28fb802467 [ty] Merge SemanticIndexBuilder impl blocks (#18135)
## Summary

just a minor nit followup to
https://github.com/astral-sh/ruff/pull/18010 -- put all the
non-`Visitor` methods of `SemanticIndexBuilder` in the same impl block
rather than having multiple impl blocks

## Test Plan

`cargo build`
2025-05-16 11:05:02 -04:00
Brent Westbrook
a1d007c37c Use insta settings instead of cfg (#18134)
Summary
--

I noticed these `cfg` directives while working on diagnostics. I think
it makes more sense to apply an `insta` filter in the test instead. I
copied this filter from a CLI test for the same rule.

Test Plan
--

Existing tests, especially Windows CI on this PR
2025-05-16 10:55:33 -04:00
Micha Reiser
1ba56b4bc6 [ty] Fix relative imports in stub packages (#18132) 2025-05-16 15:30:10 +02:00
David Peter
e677cabd69 [ty] Reduce size of the many-tuple-assignments benchmark (#18131)
## Summary

The previous version took several minutes to complete on codspeed.
2025-05-16 15:28:23 +02:00
TomerBin
9910ec700c [ty] Better control flow for boolean expressions that are inside if (#18010)
## Summary
With this PR we now detect that x is always defined in `use`:
```py
if flag and (x := number):
    use(x)
```

When outside if, it's still detected as possibly not defined
```py
flag and (x := number)
# error: [possibly-unresolved-reference]
use(x)
```
In order to achieve that, I had to find a way to get access to the
flow snapshots of the boolean expression when analyzing the flow of the
if statement. I did it by special-casing the visitor of boolean
expressions to return flow-control information, exporting two snapshots:
`maybe_short_circuit` and `no_short_circuit`. When indexing the
boolean expression itself we must assume all possible flows, but when
it's inside an if statement, we can be smarter than that.

## Test Plan
Fixed existing and added new mdtests.
I went through some of the mypy primer results and they look fine.

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-05-16 11:59:21 +00:00
Micha Reiser
9ae698fe30 Switch to Rust 2024 edition (#18129) 2025-05-16 13:25:28 +02:00
David Peter
e67b35743a [ty] NamedTuple 'fallback' attributes (#18127)
## Summary

Add various attributes to `NamedTuple` classes/instances that are
available at runtime.

closes https://github.com/astral-sh/ty/issues/417
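
For illustration, the kind of runtime attributes this refers to (a sketch; the exact set covered by the change is in the Markdown tests):

```py
from typing import NamedTuple

class Point(NamedTuple):
    x: int
    y: int

p = Point(1, 2)
p._asdict()            # available at runtime, now recognized by ty
p._replace(x=3)
Point._fields
Point._field_defaults
```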

## Test Plan

New Markdown tests
2025-05-16 12:56:43 +02:00
David Peter
8644c9da43 [ty] Regression test for relative import in stubs package (#18123)
## Summary

Regression test for https://github.com/astral-sh/ty/issues/408
2025-05-16 12:49:35 +02:00
Micha Reiser
196e4befba Update MSRV to 1.85 and toolchain to 1.87 (#18126) 2025-05-16 09:19:55 +02:00
David Peter
6e39250015 [ty] Allow unions including Any/Unknown as bases (#18094)
## Summary

Alternative fix for https://github.com/astral-sh/ty/issues/312

## Test Plan

New Markdown test
2025-05-16 06:57:26 +02:00
Matthew Mckee
7dc4fefb47 Remove ty property tests (#18124) 2025-05-15 20:57:00 -04:00
Vasco Schiavo
e5435eb106 [flake8-simplify] add fix safety section (SIM210) (#18100)
The PR adds the `fix safety` section for rule `SIM210` (#15584).

It is a bit of a cheat, as the fix safety section is copy/pasted from
#18086, since the problem is the same.

### Unsafe Fix Example

```python
class Foo():
    def __eq__(self, other):
        return 0

def foo():
    return True if Foo() == 0 else False

def foo_fix():
    return Foo() == 0

print(foo()) # False
print(foo_fix()) # 0
```
2025-05-15 16:26:10 -04:00
Victor Hugo Gomes
f53c580c53 [pylint] Fix PLW1514 not recognizing the encoding positional argument of codecs.open (#18109)

## Summary

Fixes #18107

## Test Plan

Snapshot tests
2025-05-15 16:17:07 -04:00
Wei Lee
2ceba6ae67 [airflow] Add autofixes for AIR302 and AIR312 (#17942)

## Summary

`ProviderReplacement::Name` was designed back when we only wanted to do
linting. Now we also want to fix the user code, so it is easier to
replace it with a better `AutoImport` struct.

## Test Plan

The test fixture has been updated as some cases can now be fixed
2025-05-15 16:03:02 -04:00
Felix Scherz
d3a7cb3fe4 [ty] support accessing __builtins__ global (#18118)
## Summary

The PR adds an explicit check for `"__builtins__"` during name lookup,
similar to how `"__file__"` is implemented. The inferred type is
`Any`.

closes https://github.com/astral-sh/ty/issues/393
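
A minimal illustration of the described behavior:

```py
reveal_type(__builtins__)  # revealed: Any
```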

## Test Plan

Added a markdown test for `__builtins__`.

---------

Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
2025-05-15 22:01:38 +02:00
Andrew Gallant
69393b2e6e [ty] Improve invalid method calls for unmatched overloads (#18122)
This makes an easy tweak to allow our diagnostics for unmatched
overloads to apply to method calls. Previously, they only worked for
function calls.

There is at least one other case worth addressing too, namely class
literals, e.g., `type()`. We had a diagnostic snapshot test case to
track it.

Closes astral-sh/ty#274
2025-05-15 11:39:14 -04:00
David Peter
c066bf0127 [ty] type[…] is always assignable to type (#18121)
## Summary

Model that `type[C]` is always assignable to `type`, even if `C` is not
fully static.

closes https://github.com/astral-sh/ty/issues/312
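
A hedged sketch of the pattern this models, using `type[Any]` as a non-fully-static example:

```py
from typing import Any

def f(cls: type[Any]) -> None:
    x: type = cls  # accepted: `type[C]` is assignable to `type` even when `C` is not fully static
```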

## Test Plan

* New Markdown tests
* Property tests
2025-05-15 17:13:47 +02:00
Max Mynter
a5ee1a3bb1 Bump py-fuzzer Dependencies (#18113) 2025-05-15 10:47:37 -04:00
Brent Westbrook
e2c5b83fe1 Inline DiagnosticKind into other diagnostic types (#18074)
## Summary

This PR deletes the `DiagnosticKind` type by inlining its three fields
(`name`, `body`, and `suggestion`) into three other diagnostic types:
`Diagnostic`, `DiagnosticMessage`, and `CacheMessage`.

Instead of deferring to an internal `DiagnosticKind`, both `Diagnostic`
and `DiagnosticMessage` now have their own macro-generated `AsRule`
implementations.

This should make both https://github.com/astral-sh/ruff/pull/18051 and
another follow-up PR changing the type of `name` on `CacheMessage`
easier since its type will be able to change separately from
`Diagnostic` and `DiagnosticMessage`.

## Test Plan

Existing tests
2025-05-15 10:27:21 -04:00
Brent Westbrook
b35bf8ae07 Bump 0.11.10 (#18120) 2025-05-15 09:54:08 -04:00
David Peter
279dac1c0e [ty] Make dataclass instances adhere to DataclassInstance (#18115)
## Summary

Make dataclass instances adhere to the `DataclassInstance` protocol.

fixes astral-sh/ty#400

## Test Plan

New Markdown tests
2025-05-15 14:27:23 +02:00
Micha Reiser
57617031de [ty] Enable optimizations for salsa in debug profile (#18117) 2025-05-15 12:31:46 +02:00
Micha Reiser
28b5a868d3 [ty] Enable 'ansi' feature to fix compile error (#18116) 2025-05-15 11:43:15 +02:00
Micha Reiser
b6b7caa023 [ty] Change layout of extra verbose output and respect --color for verbose output (#18089) 2025-05-15 09:57:59 +02:00
InSync
46be305ad2 [ty] Include synthesized arguments in displayed counts for too-many-positional-arguments (#18098)
## Summary

Resolves [#290](https://github.com/astral-sh/ty/issues/290).

All arguments, synthesized or not, are now accounted for in
`too-many-positional-arguments`'s error message.

For example, consider the following:

```python
class C:
	def foo(self): ...

C().foo(1)  # !!!
```

Previously, ty would say:

> Too many positional arguments to bound method foo: expected 0, got 1

After this change, it will say:

> Too many positional arguments to bound method foo: expected 1, got 2

This is what Python itself does too:

```text
Traceback (most recent call last):
  File "<python-input-0>", line 3, in <module>
    C().foo()
    ~~~~~~~^^
TypeError: C.foo() takes 0 positional arguments but 1 was given
```

## Test Plan

Markdown tests.
2025-05-14 22:51:23 -04:00
Alex Waygood
c3a4992ae9 [ty] Fix normalization of unions containing instances parameterized with unions (#18112) 2025-05-14 22:48:33 -04:00
Alex Waygood
9aa6330bb1 [ty] Fix redundant-cast false positives when casting to Unknown (#18111) 2025-05-14 22:38:53 -04:00
github-actions[bot]
b600ff106a Sync vendored typeshed stubs (#18110)
Close and reopen this PR to trigger CI

---------

Co-authored-by: typeshedbot <>
Co-authored-by: Carl Meyer <carl@astral.sh>
2025-05-14 22:14:52 -04:00
Vasco Schiavo
466021d5e1 [flake8-simplify] add fix safety section (SIM112) (#18099)
The PR adds the `fix safety` section for rule `SIM112` (#15584).
2025-05-14 17:16:20 -04:00
Simon Sawert
33e14c5963 Update Neovim setup docs (#18108)
## Summary

Nvim 0.11+ uses the built-in `vim.lsp.enable` and `vim.lsp.config` to
enable and configure LSP clients. This adds the new, non-legacy way of
configuring Nvim with `nvim-lspconfig` according to the upstream
documentation.

Update documentation for Nvim LSP configuration according to
`nvim-lspconfig` and Nvim 0.11+

## Test Plan

Tested locally on macOS with Nvim 0.11.1 and `nvim-lspconfig`
master/[ac1dfbe](ac1dfbe3b6).
2025-05-14 20:54:24 +00:00
David Peter
6800a9f6f3 [ty] Add type-expression syntax link to invalid-type-expression (#18104)
## Summary

Add a link to [this
page](https://typing.python.org/en/latest/spec/annotations.html#type-and-annotation-expressions)
when emitting `invalid-type-expression` diagnostics.
2025-05-14 18:56:44 +00:00
Vasco Schiavo
68559fc17d [flake8-simplify] add fix safety section (SIM103) (#18086)
The PR adds the `fix safety` section for rule `SIM103` (#15584).

### Unsafe Fix Example

```python
class Foo:
    def __eq__(self, other):
        return 1
    
def foo():
    if Foo() == 1:
        return True
    return False

def foo_fix():
    return Foo() == 1
    
print(foo()) # True
print(foo_fix()) # 1
```

### Note

I updated the code snippet example, because I thought it was cool to
have a correct example, i.e., one that I can paste into the playground
and it works :-)
2025-05-14 14:24:15 -04:00
David Peter
2a217e80ca [ty] mypy_primer: fix static-frame setup (#18103)
## Summary

Pull in https://github.com/hauntsaninja/mypy_primer/pull/169
2025-05-14 20:23:53 +02:00
Dan Parizher
030a16cb5f [flake8-simplify] Correct behavior for str.split/rsplit with maxsplit=0 (SIM905) (#18075)
Fixes #18069

## Summary

This PR addresses a bug in the `flake8-simplify` rule `SIM905`
(split-static-string) where `str.split(maxsplit=0)` and
`str.rsplit(maxsplit=0)` produced incorrect results for empty strings or
strings starting/ending with whitespace. The fix ensures that the
linting rule's suggested replacements now align with Python's native
behavior for these specific `maxsplit=0` scenarios.
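
For reference, the native behavior the replacements now mirror (illustrative, not taken from the fixture):

```python
print("".split(maxsplit=0))          # []
print("  a b ".split(maxsplit=0))    # ['a b ']   -- leading whitespace stripped, remainder kept
print("  a b  ".rsplit(maxsplit=0))  # ['  a b']  -- trailing whitespace stripped, remainder kept
```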

## Test Plan

1. Added new test cases to the existing
`crates/ruff_linter/resources/test/fixtures/flake8_simplify/SIM905.py`
fixture to cover the scenarios described in issue #18069.
2.  Ran `cargo test -p ruff_linter`.
3. Verified and accepted the updated snapshots for `SIM905.py` using
`cargo insta review`. The new snapshots confirm the corrected behavior
for `maxsplit=0`.
2025-05-14 14:20:18 -04:00
Alex Waygood
0590b38214 [ty] Fix more generics-related TODOs (#18062) 2025-05-14 12:26:52 -04:00
Usul-Dev
8104b1e83b [ty] fix missing '>' in HTML anchor tags in CLI reference (#18096)
Co-authored-by: Usul <Usul-Dev@users.noreply.github.com>
2025-05-14 15:50:35 +00:00
Dhruv Manilawala
cf70c7863c Remove symlinks from the fuzz directory (#18095)
## Summary

This PR does the following:
1. Remove the symlinks from the `fuzz/` directory
2. Update `init-fuzzer.sh` script to create those symlinks
3. Update `fuzz/.gitignore` to ignore those corpus directories

## Test Plan

Initialize the fuzzer:

```sh
./fuzz/init-fuzzer.sh
```

And, run a fuzz target:

```sh
cargo +nightly fuzz run ruff_parse_simple -- -timeout=1 -only_ascii=1
```
2025-05-14 21:05:52 +05:30
Andrew Gallant
faf54c0181 ty_python_semantic: improve failed overloaded function call
The diagnostic now includes a pointer to the implementation definition
along with each possible overload.

This doesn't include information about *why* each overload failed. But
given the emphasis on concise output (since there can be *many*
unmatched overloads), it's not totally clear how to include that
additional information.

Fixes #274
2025-05-14 11:13:41 -04:00
Andrew Gallant
451c5db7a3 ty_python_semantic: move some routines to FunctionType
These are, after all, specific to function types. The methods on `Type`
are more like conveniences that return something when the type *happens*
to be a function. But defining them on `FunctionType` itself makes it
easy to call them when you have a `FunctionType` instead of a `Type`.
2025-05-14 11:13:41 -04:00
Andrew Gallant
bd5b7f415f ty_python_semantic: rejigger handling of overload error conditions
I found the previous code somewhat harder to read. Namely, a `for`
loop was being used to encode "execute zero or one times, but not
more." Which is sometimes okay, but it seemed clearer to me to use
more explicit case analysis here.

This should have no behavioral changes.
2025-05-14 11:13:41 -04:00
Andrew Gallant
0230cbac2c ty_python_semantic: update "no matching overload" diagnostic test
It looks like support for `@overload` has been added since this test was
created, so we remove the TODO and add a snippet (from #274).
2025-05-14 11:13:41 -04:00
Wei Lee
2e94d37275 [airflow] Get rid of Replacement::Name and replace them with Replacement::AutoImport for enabling auto fixing (AIR301, AIR311) (#17941)
<!--
Thank you for contributing to Ruff! To help us out with reviewing,
please consider the following:

- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title?
- Does this pull request include references to any relevant issues?
-->

## Summary

Similar to https://github.com/astral-sh/ruff/pull/17941.

`Replacement::Name` was designed for linting only. Now, we also want to
fix the user code. It would be easier to replace it with a better
AutoImport struct whenever possible.

On the other hand, `AIR301` and `AIR311` contain attribute changes that
can still use a struct like `Replacement::Name`. To reduce confusion, I
also renamed it to `Replacement::AttrName`.

Some of the original `Replacement::Name` uses have been replaced with
`Replacement::Message`, as they do not map directly and the message has
now been moved to `help`.


## Test Plan

The test fixtures have been updated
2025-05-14 11:10:15 -04:00
Vasco Schiavo
1e4377c9c6 [ruff] add fix safety section (RUF007) (#17755)
The PR adds the `fix safety` section for rule `RUF007` (#15584).

It seems that the fix has always been marked as unsafe (#14401).

## Unsafety example

This first example is a little extreme. The class `Foo` overrides the
`__getitem__` method, but in a very special way. The difference lies in
the fact that `zip(letters, letters[1:])` evaluates the slice
`letters[1:]`, which behaves oddly in this case, while
`itertools.pairwise(letters)` just calls `__getitem__(0)`,
`__getitem__(1)`, and so on.

Note that the diagnostic is emitted: [playground](https://play.ruff.rs)

I don't know if we want to mention this problem, as there is a subtle
bug in the Python implementation of `Foo` which makes the fix unsafe.

```python
from dataclasses import dataclass
import itertools

@dataclass
class Foo:
    letters: str
    
    def __getitem__(self, index):
        return self.letters[index] + "_foo"


letters = Foo("ABCD")
zip_ = zip(letters, letters[1:])
for a, b in zip_:
    print(a, b) # A_foo B, B_foo C, C_foo D, D_foo _
    
pair = itertools.pairwise(letters)
for a, b in pair:
    print(a, b) # A_foo B_foo, B_foo C_foo, C_foo D_foo
```

This other example is much more probable: here, `itertools.pairwise` was
shadowed by a custom function
[(playground)](https://play.ruff.rs)

```python
from dataclasses import dataclass
from itertools import pairwise

def pairwise(a):
    return []
    
letters = "ABCD"
zip_ = zip(letters, letters[1:])
print([(a, b) for a, b in zip_]) # [('A', 'B'), ('B', 'C'), ('C', 'D')]

pair = pairwise(letters)
print(pair) # []
```
2025-05-14 11:07:11 -04:00
Dimitri Papadopoulos Orfanos
1b4f7de840 [pyupgrade] Add resource.error as deprecated alias of OSError (UP024) (#17933)
## Summary

Partially addresses #17935.


[`resource.error`](https://docs.python.org/3/library/resource.html#resource.error)
is a deprecated alias of
[`OSError`](https://docs.python.org/3/library/exceptions.html#OSError).
> _Changed in version 3.3:_ Following [**PEP
3151**](https://peps.python.org/pep-3151/), this class was made an alias
of
[`OSError`](https://docs.python.org/3/library/exceptions.html#OSError).

Add it to the list of `OSError` aliases found by [os-error-alias
(UP024)](https://docs.astral.sh/ruff/rules/os-error-alias/#os-error-alias-up024).
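
A hedged example of the alias the rule now catches:

```python
import resource

try:
    resource.getrlimit(resource.RLIMIT_NOFILE)
except resource.error:  # UP024: replace the deprecated alias with `OSError`
    pass
```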

## Test Plan

Sorry, I usually don't program in Rust. Could you at least point me to
the test I would need to modify?
2025-05-14 10:37:25 -04:00
Victor Hugo Gomes
9b52ae8991 [flake8-pytest-style] Don't recommend usefixtures for parametrize values in PT019 (#17650)

## Summary
Fixes #17599.
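
A hedged sketch of the case in question, inferred from the PR title: an underscore-prefixed parameter that is actually a parametrize value should not be flagged with a `usefixtures` suggestion:

```python
import pytest

@pytest.mark.parametrize("_value", [1, 2])
def test_example(_value):
    assert _value in (1, 2)
```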

## Test Plan

Snapshot tests.

---------

Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
2025-05-14 10:31:42 -04:00
David Peter
97d7b46936 [ty] Do not look up __init__ on instances (#18092)
## Summary

Dunder methods are never looked up on instances. We do this implicitly
in `try_call_dunder`, but the corresponding flag was missing in the
instance-construction code where we use `member_lookup_with_policy`
directly.
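
A hedged illustration of the Python semantics being modeled here (hypothetical snippet, not from the regression test):

```py
class C:
    def __init__(self) -> None: ...

c = C()
c.__init__ = None  # an instance attribute is ignored for implicit dunder dispatch
C()                # construction still resolves `__init__` on the class, and ty's lookup should match
```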

fixes https://github.com/astral-sh/ty/issues/322

## Test Plan

Added regression test.
2025-05-14 15:33:42 +02:00
Luke
1eab59e681 Remove double whitespace (#18090) 2025-05-14 11:20:19 +02:00
Micha Reiser
e7f97a3e4b [ty] Reduce log level of 'symbol .. (via star import) not found' log message (#18087) 2025-05-14 09:20:23 +02:00
Chandra Kiran G
d17557f0ae [ty] Fix Inconsistent casing in diagnostic (#18084) 2025-05-14 08:26:48 +02:00
Dhruv Manilawala
8cbd433a31 [ty] Add cycle handling for unpacking targets (#18078)
## Summary

This PR adds cycle handling for `infer_unpack_types` based on the
analysis in astral-sh/ty#364.

Fixes: astral-sh/ty#364

## Test Plan

Add a cycle handling test for unpacking in `cycle.md`
2025-05-13 21:27:48 +00:00
Alex Waygood
65e48cb439 [ty] Check assignments to implicit global symbols are assignable to the types declared on types.ModuleType (#18077) 2025-05-13 16:37:20 -04:00
David Peter
301d9985d8 [ty] Add benchmark for union of tuples (#18076)
## Summary

Add a micro-benchmark for the code pattern observed in
https://github.com/astral-sh/ty/issues/362.

This currently takes around 1 second on my machine.

## Test Plan

```bash
cargo bench -p ruff_benchmark -- 'ty_micro\[many_tuple' --sample-size 10
```
2025-05-13 22:14:30 +02:00
Micha Reiser
cfbb914100 Use https://ty.dev/rules when linking to the rules table (#18072) 2025-05-13 19:21:06 +02:00
Douglas Creager
fe653de3dd [ty] Infer parameter specializations of explicitly implemented generic protocols (#18054)
Follows on from (and depends on)
https://github.com/astral-sh/ruff/pull/18021.

This updates our function specialization inference to infer type
mappings from parameters that are generic protocols.

For now, this only works when the argument _explicitly_ implements the
protocol by listing it as a base class. (We end up using exactly the
same logic as for generic classes in #18021.) For this to work with
classes that _implicitly_ implement the protocol, we will have to check
the types of the protocol members (which we are not currently doing), so
that we can infer the specialization of the protocol that the class
implements.
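
A hedged sketch of the supported case, where the argument explicitly lists the generic protocol as a base (the names here are made up for illustration):

```py
from typing import Protocol

class HasValue[T](Protocol):
    value: T

class IntBox(HasValue[int]):
    value: int = 0

def get_value[T](obj: HasValue[T]) -> T:
    return obj.value

reveal_type(get_value(IntBox()))  # expected per the description: int
```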

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2025-05-13 13:13:00 -04:00
InSync
a9f7521944 [ty] Shorten snapshot names (#18039)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-05-13 18:43:19 +02:00
Carl Meyer
f8890b70c3 [ty] __file__ is always a string inside a Python module (#18071)
## Summary

Understand that `__file__` is always set and a `str` when looked up as
an implicit global from a Python file we are type checking.
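
A minimal illustration of the described inference:

```py
reveal_type(__file__)  # revealed: str
```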

## Test Plan

mdtests
2025-05-13 08:20:43 -07:00
David Peter
142c1bc760 [ty] Recognize submodules in self-referential imports (#18005)
## Summary

Fix the lookup of `submodule`s in cases where the `parent` module has a
self-referential import like `from parent import submodule`. This allows
us to infer proper types for many symbols where we previously inferred
`Never`. This leads to many new false (and true) positives across the
ecosystem because the fact that we previously inferred `Never` shadowed
a lot of problems. For example, we inferred `Never` for `os.path`, which
is why we now see a lot of new diagnostics related to `os.path.abspath`
and similar.

```py
import os

reveal_type(os.path)  # previously: Never, now: <module 'os.path'>
```

closes https://github.com/astral-sh/ty/issues/261
closes https://github.com/astral-sh/ty/issues/307

## Ecosystem analysis

```
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━┓
┃ Diagnostic ID                 ┃ Severity ┃ Removed ┃ Added ┃ Net Change ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━┩
│ call-non-callable             │ error    │       1 │     5 │         +4 │
│ call-possibly-unbound-method  │ warning  │       6 │    26 │        +20 │
│ invalid-argument-type         │ error    │      26 │    94 │        +68 │
│ invalid-assignment            │ error    │      18 │    46 │        +28 │
│ invalid-context-manager       │ error    │       9 │     4 │         -5 │
│ invalid-raise                 │ error    │       1 │     1 │          0 │
│ invalid-return-type           │ error    │       3 │    20 │        +17 │
│ invalid-super-argument        │ error    │       4 │     0 │         -4 │
│ invalid-type-form             │ error    │     573 │     0 │       -573 │
│ missing-argument              │ error    │       2 │    10 │         +8 │
│ no-matching-overload          │ error    │       0 │   715 │       +715 │
│ non-subscriptable             │ error    │       0 │    35 │        +35 │
│ not-iterable                  │ error    │       6 │     7 │         +1 │
│ possibly-unbound-attribute    │ warning  │      14 │    31 │        +17 │
│ possibly-unbound-import       │ warning  │      13 │     0 │        -13 │
│ possibly-unresolved-reference │ warning  │       0 │     8 │         +8 │
│ redundant-cast                │ warning  │       1 │     0 │         -1 │
│ too-many-positional-arguments │ error    │       2 │     0 │         -2 │
│ unknown-argument              │ error    │       2 │     0 │         -2 │
│ unresolved-attribute          │ error    │     583 │   304 │       -279 │
│ unresolved-import             │ error    │       0 │    96 │        +96 │
│ unsupported-operator          │ error    │       0 │    17 │        +17 │
│ unused-ignore-comment         │ warning  │      29 │     2 │        -27 │
├───────────────────────────────┼──────────┼─────────┼───────┼────────────┤
│ TOTAL                         │          │    1293 │  1421 │       +128 │
└───────────────────────────────┴──────────┴─────────┴───────┴────────────┘

Analysis complete. Found 23 unique diagnostic IDs.
Total diagnostics removed: 1293
Total diagnostics added: 1421
Net change: +128
```

* We see a lot of new errors (`no-matching-overload`) related to
`os.path.dirname` and other `os.path` operations because we infer `str |
None` for `__file__`, but many projects use something like
`os.path.dirname(__file__)`.
* We also see many new `unresolved-attribute` errors related to the fact
that we now infer proper module types for some imports (e.g. `import
kornia.augmentation as K`), but we don't allow implicit imports (e.g.
accessing `K.auto.operations` without also importing `K.auto`). See
https://github.com/astral-sh/ty/issues/133.
* Many false-positive `invalid-type-form` diagnostics are removed because we
now infer the correct type for some type expressions instead of `Never`,
which is not valid in a type annotation/expression context.

## Test Plan

Added new Markdown tests
2025-05-13 16:59:11 +02:00
Alex Waygood
c0f22928bd [ty] Add a note to the diagnostic if a new builtin is used on an old Python version (#18068)
## Summary

If the user tries to use a new builtin on an old Python version, tell
them what Python version the builtin was added on, what our inferred
Python version is for their project, and what configuration settings
they can tweak to fix the error.

## Test Plan

Snapshots and screenshots:


![image](https://github.com/user-attachments/assets/767d570e-7af1-4e1f-98cf-50e4311db511)
2025-05-13 10:08:04 -04:00
Alex Waygood
5bf5f3682a [ty] Add tests for else branches of hasattr() narrowing (#18067)
## Summary

This addresses @sharkdp's post-merge review in
https://github.com/astral-sh/ruff/pull/18053#discussion_r2086190617

## Test Plan

`cargo test -p ty_python_semantic`
2025-05-13 09:57:53 -04:00
Alex Waygood
5913997c72 [ty] Improve diagnostics for assert_type and assert_never (#18050) 2025-05-13 13:00:20 +00:00
Carl Meyer
00f672a83b [ty] contribution guide (#18061)
First take on a contributing guide for `ty`. Lots of it is copied from
the existing Ruff contribution guide.

I've put this in the Ruff repo, since I think a contributing guide belongs
where the code is. I also updated the Ruff contributing guide to link to
the `ty` one.

Once this is merged, we can also add a link from the `CONTRIBUTING.md`
in the ty repo (which focuses on making contributions to things that are
actually in the ty repo) to this guide.

I also updated the pull request template to mention that it might be a
ty PR, and mention the `[ty]` PR title prefix.

Feel free to update/modify/merge this PR before I'm awake tomorrow.

---------

Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
Co-authored-by: David Peter <mail@david-peter.de>
2025-05-13 10:55:01 +02:00
Abhijeet Prasad Bodas
68b0386007 [ty] Implement DataClassInstance protocol for dataclasses. (#18018)
Fixes: https://github.com/astral-sh/ty/issues/92

## Summary

We currently get an `invalid-argument-type` error when using
`dataclasses.fields` on a dataclass, because we do not synthesize the
`__dataclass_fields__` member.

This PR fixes this diagnostic.

Note that we do not yet model the `Field` type correctly. After that is
done, we can assign a more precise `tuple[Field, ...]` type to this new
member.
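
A hedged sketch of the call that previously produced the false positive:

```py
from dataclasses import dataclass, fields

@dataclass
class C:
    x: int

fields(C(1))  # previously: invalid-argument-type; accepted now that `__dataclass_fields__` is synthesized
```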

## Test Plan
New mdtest.

---------

Co-authored-by: David Peter <mail@david-peter.de>
2025-05-13 10:31:26 +02:00
ZGY
0ae07cdd1f [ruff_python_ast] Fix redundant visitation of test expressions in elif clause statements (#18064) 2025-05-13 07:10:23 +00:00
Douglas Creager
0fb94c052e [ty] Infer parameter specializations of generic aliases (#18021)
This updates our function specialization inference to infer type
mappings from parameters that are generic aliases, e.g.:

```py
def f[T](x: list[T]) -> T: ...

reveal_type(f(["a", "b"]))  # revealed: str
```

Though note that we're still inferring the type of list literals as
`list[Unknown]`, so for now we actually need something like the
following in our tests:

```py
def _(x: list[str]):
    reveal_type(f(x))  # revealed: str
```
2025-05-12 22:12:44 -04:00
Alex Waygood
55df9271ba [ty] Understand homogeneous tuple annotations (#17998) 2025-05-12 22:02:25 -04:00
Douglas Creager
f301931159 [ty] Induct into instances and subclasses when finding and applying generics (#18052)
We were not inducting into instance types and subclass-of types when
looking for legacy typevars, nor when apply specializations.

This addresses
https://github.com/astral-sh/ruff/pull/17832#discussion_r2081502056

```py
from __future__ import annotations
from typing import TypeVar, Any, reveal_type

S = TypeVar("S")

class Foo[T]:
    def method(self, other: Foo[S]) -> Foo[T | S]: ...  # type: ignore[invalid-return-type]

def f(x: Foo[Any], y: Foo[Any]):
    reveal_type(x.method(y))  # revealed: `Foo[Any | S]`, but should be `Foo[Any]`
```

We were not detecting that `S` made `method` generic, since we were not
finding it when searching the function signature for legacy typevars.
2025-05-12 21:53:11 -04:00
Alex Waygood
7e9b0df18a [ty] Allow classes to inherit from type[Any] or type[Unknown] (#18060) 2025-05-12 20:30:21 -04:00
Alex Waygood
41fa082414 [ty] Allow a class to inherit from an intersection if the intersection contains a dynamic type and the intersection is not disjoint from type (#18055) 2025-05-12 23:07:11 +00:00
Alex Waygood
c7b6108cb8 [ty] Narrowing for hasattr() (#18053) 2025-05-12 18:58:14 -04:00
Zanie Blue
a97e72fb5e Update reference documentation for --python-version (#18056)
Adding more detail here
2025-05-12 22:31:04 +00:00
Victor Hugo Gomes
0d6fafd0f9 [flake8-bugbear] Ignore B028 if skip_file_prefixes is present (#18047)
## Summary

Fixes #18011
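
A hedged illustration of the exemption (note that `skip_file_prefixes` requires Python 3.12+):

```python
import warnings

warnings.warn("deprecated", DeprecationWarning)  # B028: no explicit `stacklevel`
warnings.warn(
    "deprecated",
    DeprecationWarning,
    skip_file_prefixes=("/opt/",),  # presence of `skip_file_prefixes` now suppresses B028
)
```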
2025-05-12 17:06:51 -05:00
Wei Lee
2eb2d5359b [airflow] Apply try-catch guard to all AIR3 rules (AIR3) (#17887)

## Summary

If a try-catch block guards the names, we don't raise warnings. During
this change, I discovered that some of the replacement types were
missed, so I extended the fix to types other than `AutoImport` as well.
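
A hedged sketch of the guarded-import pattern that stays silent (module paths taken from the AIR3 replacement tables above):

```python
try:
    from airflow.sdk import BaseOperatorLink
except ImportError:
    # guarded fallback for older Airflow versions: AIR3 no longer warns here
    from airflow.models.baseoperatorlink import BaseOperatorLink
```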

## Test Plan

Test fixtures are added and updated.
2025-05-12 17:13:41 -04:00
Yunchi Pang
f549dfe39d [pylint] add fix safety section (PLW3301) (#17878)
parent: #15584 
issue: #16163

---------

Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-05-12 20:51:05 +00:00
Zanie Blue
6b64630635 Update --python to accept paths to executables in virtual environments (#17954)
## Summary

Updates the `--python` flag to accept Python executables in virtual
environments. Notably, we do not query the executable and it _must_ be
in a canonical location in a virtual environment. This is pretty naive,
but solves for the trivial case of `ty check --python .venv/bin/python3`
which will be a common mistake (and `ty check --python $(which python)`)

I explored this while trying to understand Python discovery in ty in
service of https://github.com/astral-sh/ty/issues/272, I'm not attached
to it, but figure it's worth sharing.

As an alternative, we can add more variants to the
`SearchPathValidationError` and just improve the _error_ message, i.e.,
by hinting that this looks like a virtual environment and suggesting the
concrete alternative path they should provide. We'll probably want to do
that for some other cases anyway (e.g., `3.13` as described in the
linked issue)

This functionality is also briefly mentioned in
https://github.com/astral-sh/ty/issues/193

Closes https://github.com/astral-sh/ty/issues/318

## Test Plan

e.g.,

```
uv run ty check --python .venv/bin/python3
```

needs test coverage still
2025-05-12 15:39:04 -05:00
Yunchi Pang
d545b5bfd2 [pylint] add fix safety section (PLE4703) (#17824)
This PR adds a fix safety section in a comment for rule PLE4703.

parent: #15584 
impl was introduced at #970 (couldn't find newer PRs sorry!)
2025-05-12 16:27:54 -04:00
Marcus Näslund
b2d9f59937 [ruff] Implement a recursive check for RUF060 (#17976)

## Summary

The existing implementation of RUF060 (`InEmptyCollection`) is not
recursive: although `set([])` results in an empty collection, the
existing code does not flag it because `set` takes an argument.

The updated implementation allows `set` and `frozenset` to take an empty
collection as a positional argument (which results in an empty
set/frozenset).
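
A hedged illustration of what the recursive check now recognizes:

```python
x = 1
if x in set([]):        # now treated as an empty collection by RUF060
    ...
if x in frozenset(()):  # likewise for frozenset with an empty positional argument
    ...
```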

## Test Plan

Added test cases for recursive cases + updated snapshot (see RUF060.py).

---------

Co-authored-by: Marcus Näslund <marcus.naslund@kognity.com>
2025-05-12 16:17:13 -04:00
Victor Hugo Gomes
d7ef01401c [flake8-use-pathlib] PTH* suppress diagnostic for all os.* functions that have the dir_fd parameter (#17968)

## Summary

Fixes #17776.

This PR also handles all other `PTH*` rules that don't support file
descriptors.
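
A hedged sketch of the suppression, assuming `os.remove` as the example API:

```python
import os

fd = os.open("/some/dir", os.O_RDONLY)
os.remove("data.txt", dir_fd=fd)  # not flagged: pathlib has no equivalent for `dir_fd`
os.remove("data.txt")             # still flagged in favor of `Path("data.txt").unlink()`
```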

## Test Plan

Update existing tests.
2025-05-12 16:11:56 -04:00
Victor Hugo Gomes
c9031ce59f [refurb] Mark autofix as safe only for number literals in FURB116 (#17692)

## Summary
We can only guarantee the safety of the autofix for number literals; all
other cases may change the runtime behaviour of the program or introduce
a syntax error. For the cases reported in the issue that would result in
a syntax error, I disabled the autofix.
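
A hedged illustration of the distinction:

```python
n = 255
print(hex(255)[2:])  # operand is a number literal: the f"{255:x}" fix is safe
print(hex(n)[2:])    # arbitrary expression: the fix may change behavior, so it is no longer marked safe
```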

Follow-up of #17661. 

Fixes #16472.

## Test Plan

Snapshot tests.
2025-05-12 16:08:12 -04:00
Victor Hugo Gomes
138ab91def [flake8-simplify] Fix SIM905 autofix for rsplit creating a reversed list literal (#18045)
## Summary

Fixes #18042
2025-05-12 14:53:08 -05:00
Ibraheem Ahmed
550b8be552 Avoid initializing progress bars early (#18049)
## Summary

Resolves https://github.com/astral-sh/ty/issues/324.
2025-05-12 15:07:55 -04:00
Douglas Creager
bdccb37b4a [ty] Apply function specialization to all overloads (#18020)
Function literals have an optional specialization, which is applied to
the parameter/return type annotations lazily when the function's
signature is requested. We were previously only applying this
specialization to the final overload of an overloaded function.

This manifested most visibly for `list.__add__`, which has an overloaded
definition in the typeshed:


b398b83631/crates/ty_vendored/vendor/typeshed/stdlib/builtins.pyi (L1069-L1072)
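
A hedged reproduction of the kind of call affected:

```py
def f(x: list[int], y: list[int]):
    reveal_type(x + y)  # expected: list[int], now that the specialization is applied to every overload
```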

Closes https://github.com/astral-sh/ty/issues/314
2025-05-12 13:48:54 -04:00
Charlie Marsh
3ccc0edfe4 Add comma to panic message (#18048)
## Summary

Consistent with other variants of this, separate the conditional clause.
2025-05-12 11:52:55 -04:00
Victor Hugo Gomes
6b3ff6f5b8 [flake8-pie] Mark autofix for PIE804 as unsafe if the dictionary contains comments (#18046)
## Summary

Fixes #18036
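
A hedged illustration: the dict literal carries a comment, so applying the fix would lose it, and the fix is now marked unsafe:

```python
def foo(**kwargs): ...

foo(**{
    # this comment would be dropped by rewriting to `foo(bar=1)`
    "bar": 1,
})
```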
2025-05-12 10:16:59 -05:00
Shunsuke Shibayama
6f8f7506b4 [ty] fix infinite recursion bug in is_disjoint_from (#18043)
## Summary

I found this bug while working on #18041. The following code leads to
infinite recursion.

```python
from ty_extensions import is_disjoint_from, static_assert, TypeOf

class C:
    @property
    def prop(self) -> int:
        return 1

static_assert(not is_disjoint_from(int, TypeOf[C.prop]))
```

The cause is a trivial missing binding in `is_disjoint_from`. This PR
fixes the bug and adds a test case (this is a simple fix and may not
require a new test case?).

## Test Plan

A new test case is added to
`mdtest/type_properties/is_disjoint_from.md`.
2025-05-12 09:44:00 -04:00
Micha Reiser
797eb70904 disable jemalloc on android (#18033) 2025-05-12 14:41:00 +02:00
Micha Reiser
be6ec613db [ty] Fix incorrect type of src.root in documentation (#18040) 2025-05-12 12:28:14 +00:00
Micha Reiser
fcd858e0c8 [ty] Refine message for why a rule is enabled (#18038) 2025-05-12 13:31:42 +02:00
Micha Reiser
d944a1397e [ty] Remove brackets around option names (#18037) 2025-05-12 11:16:03 +00:00
renovate[bot]
d3f3d92df3 Update pre-commit dependencies (#18025)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-12 08:31:23 +02:00
renovate[bot]
38c00dfad5 Update docker/build-push-action action to v6.16.0 (#18030)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-12 08:26:41 +02:00
renovate[bot]
d6280c5aea Update docker/login-action action to v3.4.0 (#18031)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-12 08:26:22 +02:00
renovate[bot]
a34240a3f0 Update taiki-e/install-action digest to 83254c5 (#18022)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-12 08:26:03 +02:00
renovate[bot]
b86c7bbf7c Update cargo-bins/cargo-binstall action to v1.12.4 (#18023)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-12 08:25:28 +02:00
renovate[bot]
d7c54ba8c4 Update Rust crate ctrlc to v3.4.7 (#18027)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-12 08:25:08 +02:00
renovate[bot]
c38d6e8045 Update Rust crate clap to v4.5.38 (#18026)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-12 08:24:40 +02:00
renovate[bot]
2bfd7b1816 Update Rust crate jiff to v0.2.13 (#18029)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-12 08:24:16 +02:00
renovate[bot]
c1cfb43bf0 Update Rust crate getrandom to v0.3.3 (#18028)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-12 08:23:49 +02:00
renovate[bot]
99555b775c Update dependency ruff to v0.11.9 (#18024)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-05-11 22:03:55 -04:00
Yunchi Pang
b398b83631 [pylint] add fix safety section (PLW1514) (#17932)
parent #15584 
fix was made unsafe at #8928
2025-05-11 12:25:07 -05:00
Rogdham
bc7b30364d python_stdlib: update for 3.14 (#18014)
## Summary

Added version 3.14 to the script generating the `known_stdlib.rs` file.

Rebuilt the known stdlibs with the latest version (2025.5.10) of the [stdlibs
Python lib](https://pypi.org/project/stdlibs/) (which added support for
3.14.0b1).

_Note: Python 3.14 is now in [feature
freeze](https://peps.python.org/pep-0745/) so the modules in stdlib
should be stable._

_See also: #15506_

## Test Plan

The following command has been run. The tests use the `compression`
module, which was introduced in Python 3.14.
```sh
ruff check --no-cache --select I001 --target-version py314 --fix
```

With ruff 0.11.9:
```python
import base64
import datetime

import compression

print(base64, compression, datetime)
```

With this PR:
```python
import base64
import compression
import datetime   

print(base64, compression, datetime)
```
2025-05-11 11:25:54 -05:00
Vasco Schiavo
5792ed15da [ruff] add fix safety section (RUF033) (#17760)
This PR adds the fix safety section for rule `RUF033`
(https://github.com/astral-sh/ruff/issues/15584 ).
2025-05-11 11:15:15 -05:00
Yunchi Pang
8845a13efb [pylint] add fix safety section (PLC0414) (#17802)
This PR adds a fix safety section in comment for rule `PLC0414`.

parent: #15584 
discussion: #6294
2025-05-11 11:01:26 -05:00
Alex Waygood
669855d2b5 [ty] Remove unused variants from various Known* enums (#18015)
## Summary

`KnownClass::Range`, `KnownInstanceType::Any` and `ClassBase::any()` are
no longer used or useful: all our tests pass with them removed.
`KnownModule::Abc` _is_ now used outside of tests, however, so I removed
the `#[allow(dead_code)]` branch above that variant.

## Test Plan

`cargo test -p ty_python_semantic`
2025-05-11 11:18:55 +01:00
Alex Waygood
ff7ebecf89 [ty] Remove generic types from the daily property test run (for now) (#18004) 2025-05-11 09:27:27 +00:00
Zanie Blue
7e8ba2b68e [ty] Remove vestigial pyvenv.cfg creation in mdtest (#18006)
Following #17991, this removes some of
https://github.com/astral-sh/ruff/pull/17222 which is no longer strictly
necessary. I don't actually think it's that ugly to have around; no
strong feelings on retaining it or not.
2025-05-10 20:52:49 +00:00
Zanie Blue
0bb8cbdf07 [ty] Do not allow invalid virtual environments from discovered .venv or VIRTUAL_ENV (#18003)
Follow-up to https://github.com/astral-sh/ruff/pull/17991 ensuring we do
not allow detection of system environments when the origin is
`VIRTUAL_ENV` or a discovered `.venv` directory — i.e., those always
require a `pyvenv.cfg` file.
2025-05-10 20:36:12 +00:00
Zanie Blue
2923c55698 [ty] Add test coverage for PythonEnvironment::System variants (#17996)
Adds test coverage for https://github.com/astral-sh/ruff/pull/17991,
which includes some minor refactoring of the virtual environment test
infrastructure.

I tried to minimize stylistic changes, but there are still a few because
I was a little confused by the setup. I could see this evolving more in
the future, as I don't think the existing model can capture all the test
coverage I'm looking for.
2025-05-10 20:28:15 +00:00
Zanie Blue
316e406ca4 [ty] Add basic support for non-virtual Python environments (#17991)
This adds basic support for non-virtual Python environments by accepting
a directory without a `pyvenv.cfg` which allows existing, subsequent
site-packages discovery logic to succeed. We can do better here in the
long-term, by adding more eager validation (for error messages) and
parsing the Python version from the discovered site-packages directory
(which isn't relevant yet, because we don't use the discovered Python
version from virtual environments as the default `--python-version` yet
either).

Related

- https://github.com/astral-sh/ty/issues/265
- https://github.com/astral-sh/ty/issues/193

You can review this commit-by-commit if it makes you happy.

I tested this manually; I think refactoring the test setup is going to
be a bit more invasive so I'll stack it on top (see
https://github.com/astral-sh/ruff/pull/17996).

```
❯ uv run ty check --python /Users/zb/.local/share/uv/python/cpython-3.10.17-macos-aarch64-none/ -vv example
2025-05-09 12:06:33.685911 DEBUG Version: 0.0.0-alpha.7 (f9c4c8999 2025-05-08)
2025-05-09 12:06:33.685987 DEBUG Architecture: aarch64, OS: macos, case-sensitive: case-insensitive
2025-05-09 12:06:33.686002 DEBUG Searching for a project in '/Users/zb/workspace/ty'
2025-05-09 12:06:33.686123 DEBUG Resolving requires-python constraint: `>=3.8`
2025-05-09 12:06:33.686129 DEBUG Resolved requires-python constraint to: 3.8
2025-05-09 12:06:33.686142 DEBUG Project without `tool.ty` section: '/Users/zb/workspace/ty'
2025-05-09 12:06:33.686147 DEBUG Searching for a user-level configuration at `/Users/zb/.config/ty/ty.toml`
2025-05-09 12:06:33.686156 INFO Defaulting to python-platform `darwin`
2025-05-09 12:06:33.68636 INFO Python version: Python 3.8, platform: darwin
2025-05-09 12:06:33.686375 DEBUG Adding first-party search path '/Users/zb/workspace/ty'
2025-05-09 12:06:33.68638 DEBUG Using vendored stdlib
2025-05-09 12:06:33.686634 DEBUG Discovering site-packages paths from sys-prefix `/Users/zb/.local/share/uv/python/cpython-3.10.17-macos-aarch64-none` (`--python` argument')
2025-05-09 12:06:33.686667 DEBUG Attempting to parse virtual environment metadata at '/Users/zb/.local/share/uv/python/cpython-3.10.17-macos-aarch64-none/pyvenv.cfg'
2025-05-09 12:06:33.686671 DEBUG Searching for site-packages directory in `sys.prefix` path `/Users/zb/.local/share/uv/python/cpython-3.10.17-macos-aarch64-none`
2025-05-09 12:06:33.686702 DEBUG Resolved site-packages directories for this environment are: ["/Users/zb/.local/share/uv/python/cpython-3.10.17-macos-aarch64-none/lib/python3.10/site-packages"]
2025-05-09 12:06:33.686706 DEBUG Adding site-packages search path '/Users/zb/.local/share/uv/python/cpython-3.10.17-macos-aarch64-none/lib/python3.10/site-packages'
...

❯ uv run ty check --python /tmp -vv example
2025-05-09 15:36:10.819416 DEBUG Version: 0.0.0-alpha.7 (f9c4c8999 2025-05-08)
2025-05-09 15:36:10.819708 DEBUG Architecture: aarch64, OS: macos, case-sensitive: case-insensitive
2025-05-09 15:36:10.820118 DEBUG Searching for a project in '/Users/zb/workspace/ty'
2025-05-09 15:36:10.821652 DEBUG Resolving requires-python constraint: `>=3.8`
2025-05-09 15:36:10.821667 DEBUG Resolved requires-python constraint to: 3.8
2025-05-09 15:36:10.8217 DEBUG Project without `tool.ty` section: '/Users/zb/workspace/ty'
2025-05-09 15:36:10.821888 DEBUG Searching for a user-level configuration at `/Users/zb/.config/ty/ty.toml`
2025-05-09 15:36:10.822072 INFO Defaulting to python-platform `darwin`
2025-05-09 15:36:10.822439 INFO Python version: Python 3.8, platform: darwin
2025-05-09 15:36:10.822773 DEBUG Adding first-party search path '/Users/zb/workspace/ty'
2025-05-09 15:36:10.822929 DEBUG Using vendored stdlib
2025-05-09 15:36:10.829872 DEBUG Discovering site-packages paths from sys-prefix `/tmp` (`--python` argument')
2025-05-09 15:36:10.829911 DEBUG Attempting to parse virtual environment metadata at '/private/tmp/pyvenv.cfg'
2025-05-09 15:36:10.829917 DEBUG Searching for site-packages directory in `sys.prefix` path `/private/tmp`
ty failed
  Cause: Invalid search path settings
  Cause: Failed to discover the site-packages directory: Failed to search the `lib` directory of the Python installation at `sys.prefix` path `/private/tmp` for `site-packages`
```
2025-05-10 20:17:47 +00:00
Micha Reiser
5ecd560c6f Link to the rules.md in the ty repository (#17979) 2025-05-10 11:40:40 +01:00
Max Mynter
b765dc48e9 Skip S608 for expressionless f-strings (#17999) 2025-05-10 11:37:58 +01:00
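
Roughly the distinction, as a sketch (S608 is the hardcoded-SQL check; the exact behaviour described here is my reading of the change):

```py
def fetch(cursor, table: str):
    cursor.execute(f"SELECT id, name FROM users")     # no expressions: S608 is now skipped
    cursor.execute(f"SELECT id, name FROM {table}")   # interpolation: still flagged
```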
David Peter
cd1d906ffa [ty] Silence false positives for PEP-695 ParamSpec annotations (#18001)
## Summary

Suppress false positives for uses of PEP-695 `ParamSpec` in `Callable`
annotations:
```py
from typing_extensions import Callable

def f[**P](c: Callable[P, int]):
    pass
```

addresses a comment here:
https://github.com/astral-sh/ty/issues/157#issuecomment-2859284721

## Test Plan

Adapted Markdown tests
2025-05-10 11:59:25 +02:00
Abhijeet Prasad Bodas
235b74a310 [ty] Add more tests for NamedTuples (#17975)
## Summary

Add more tests and TODOs for `NamedTuple` support, based on the typing
spec: https://typing.python.org/en/latest/spec/namedtuples.html
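
One flavour of the spec behaviour being exercised, as a sketch (not one of the tests added here):

```py
from typing import NamedTuple

class Point(NamedTuple):
    x: int
    y: int = 0

p = Point(1)           # fields with defaults may be omitted
bad = Point("1", 2)    # error: "1" is not assignable to `x: int`
x, y = p               # NamedTuples unpack like ordinary tuples
```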

## Test Plan

This PR adds new tests.
2025-05-10 10:46:08 +02:00
Brent Westbrook
40fd52dde0 Exclude broken symlinks from meson-python ecosystem check (#17993)
Summary
--

This should resolve the formatter ecosystem errors we've been seeing
lately. https://github.com/mesonbuild/meson-python/pull/728 added the
links, which I think are intentionally broken for testing purposes.

Test Plan
--

Ecosystem check on this PR
2025-05-09 16:59:00 -04:00
Carl Meyer
fd1eb3d801 add test for typing_extensions.Self (#17995)
Using `typing_extensions.Self` already worked, but we were lacking a
test for it.
2025-05-09 20:29:13 +00:00
omahs
882a1a702e Fix typos (#17988)
Fix typos

---------

Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
2025-05-09 14:57:14 -04:00
Max Mynter
b4a1ebdfe3 [semantic-syntax-tests] IrrefutableCasePattern, SingleStarredAssignment, WriteToDebug, InvalidExpression (#17748)
Re: #17526 

## Summary

Add integration test for semantic syntax for `IrrefutableCasePattern`,
`SingleStarredAssignment`, `WriteToDebug`, and `InvalidExpression`.
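
For reference, sketches of the code shapes these errors target (illustrative only, not the exact fixtures; `InvalidExpression` is omitted):

```py
match command:
    case _:            # an irrefutable pattern here ...
        pass
    case "quit":       # ... makes this later case unreachable (IrrefutableCasePattern)
        pass

*rest = [1, 2, 3]      # SingleStarredAssignment: starred target must be in a list or tuple

__debug__ = False      # WriteToDebug: cannot assign to __debug__
```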

## Notes
- Following @ntBre's suggestion, I will keep the test coming in batches
like this over the next few days in separate PRs to keep the review load
per PR manageable while also not spamming too many.

- I did not add a test for `del __debug__`, which is one of the examples
in `crates/ruff_python_parser/src/semantic_errors.rs:1051`.
For Python versions `<= 3.8` there is no error, and for `>= 3.9` the error
is not `WriteToDebug` but `SyntaxError: cannot delete __debug__ on
Python 3.9 (syntax was removed in 3.9)`.

- The `blacken-docs` bypass is necessary because otherwise the test does
not pass pre-commit checks; but we want to check for this faulty syntax.

<!-- What's the purpose of the change? What does it do, and why? -->

## Test Plan
This is a test.
2025-05-09 14:54:05 -04:00
Brent Westbrook
7a48477c67 [ty] Add a warning about pre-release status to the CLI (#17983)
Summary
--

This was suggested on Discord; I hope this is roughly what we had in
mind. I took the message from the ty README, but I'm more than happy to
update it. Otherwise I just tried to mimic the appearance of the `ruff
analyze graph` warning (although I'm realizing now the whole text is
bold for ruff).

Test Plan
--

New warnings in the CLI tests. I thought this might be undesirable but
it looks like uv did the same thing
(https://github.com/astral-sh/uv/pull/6166).


![image](https://github.com/user-attachments/assets/e5e56a49-02ab-4c5f-9c38-716e4008d6e6)
2025-05-09 13:42:36 -04:00
Andrew Gallant
346e82b572 ty_python_semantic: add union type context to function call type errors
This context gets added only when calling a function through a union
type.
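
A hedged sketch of a call where that context would show up (the exact diagnostic wording is assumed):

```py
def version_a(x: int) -> int:
    return x

def version_b(x: str) -> str:
    return x

def pick(flag: bool) -> None:
    f = version_a if flag else version_b   # a union of the two callables
    # the argument is fine for version_a but not for version_b; the diagnostic
    # should now note that the error comes from calling through a union member
    f(1)
```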
2025-05-09 13:40:51 -04:00
Andrew Gallant
5ea3a52c8a ty_python_semantic: report all union diagnostics
This makes one very simple change: we report all call binding
errors from each union variant.

This does result in duplicate-seeming diagnostics, for example when
two union variants are invalid for the same reason.
2025-05-09 13:40:51 -04:00
Andrew Gallant
90272ad85a ty_python_semantic: add snapshot tests for existing union function type diagnostics
This is just capturing the status quo so that we can better see the
changes. I took these tests from the (now defunct) PR #17959.
2025-05-09 13:40:51 -04:00
Ibraheem Ahmed
e9da1750a1 Add progress bar for ty check (#17965)
## Summary

Adds a simple progress bar for the `ty check` CLI command. The style is
taken from uv, and like uv the bar is always shown - for smaller
projects it is fast enough that it isn't noticeable. We could
alternatively hide it completely based on some heuristic for the number
of files, or only show it after some amount of time.

I also disabled it when `--watch` is passed, since cancelling inflight
checks was leading to zombie progress bars. I think we can fix this by using
[`MultiProgress`](https://docs.rs/indicatif/latest/indicatif/struct.MultiProgress.html)
and managing all the bars globally, but I left that out for now.

Resolves https://github.com/astral-sh/ty/issues/98.
2025-05-09 13:32:27 -04:00
Wei Lee
25e13debc0 [airflow] extend AIR311 rules (#17913)
<!--
Thank you for contributing to Ruff! To help us out with reviewing,
please consider the following:

- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title?
- Does this pull request include references to any relevant issues?
-->

## Summary

<!-- What's the purpose of the change? What does it do, and why? -->

* `airflow.models.Connection` → `airflow.sdk.Connection`
* `airflow.models.Variable` → `airflow.sdk.Variable`

## Test Plan

<!-- How was it tested? -->

The test fixtures have been updated (see the first commit for easier
review).
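
Roughly, the migration the new checks point at (a sketch; the fixture files are authoritative):

```py
# Flagged by AIR311:
from airflow.models import Connection, Variable

# Suggested replacement:
from airflow.sdk import Connection, Variable
```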
2025-05-09 13:08:37 -04:00
InSync
249a852a6e [ty] Document nearly all lints (#17981) 2025-05-09 18:06:56 +01:00
Andrew Gallant
861ef2504e ty: add more snapshot updates 2025-05-09 12:42:14 -04:00
Andrew Gallant
b71ef8a26e ruff_db: completely rip lint: prefix out
This does a deeper removal of the `lint:` prefix by removing the
`DiagnosticId::as_str` method and replacing it with `as_concise_str`. We
remove the associated error type and simplify the `Display` impl for
`DiagnosticId` as well.

This turned out to catch a `lint:` that was still in the diagnostic
output: the part that says why a lint is enabled.
2025-05-09 12:42:14 -04:00
Andrew Gallant
50c780fc8b ty: switch to use annotate-snippets ID functionality
We just set the ID on the `Message` and it does what we want in
this case. I think I didn't do this originally because I was trying to
preserve the existing rendering? I'm not sure; I might have just missed
this method.
2025-05-09 12:42:14 -04:00
Andrew Gallant
244ea27d5f ruff_db: a small tweak to remove empty message case
In a subsequent commit, we're going to start using `annotate-snippets`'s
functionality for diagnostic IDs in the rendering. As part of doing
that, I wanted to remove this special casing of an empty message. I did
that independently to see what, if anything, would change. (The changes
look fine to me. They'll be tweaked again in the next commit along with
a bunch of others.)
2025-05-09 12:42:14 -04:00
Andrew Gallant
2c4cbb6e29 ty: get rid of lint: prefix in ID for diagnostic rendering
In #289, we seem to have consensus that this prefix isn't really pulling
its weight.

Ref #289
2025-05-09 12:42:14 -04:00
Alex Waygood
d1bb10a66b [ty] Understand classes that inherit from subscripted Protocol[] as generic (#17832) 2025-05-09 17:39:15 +01:00
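
For illustration, the kind of class this is about (a sketch, assuming standard typing semantics):

```py
from typing import Protocol, TypeVar

T = TypeVar("T")

class Box(Protocol[T]):        # the subscripted Protocol[T] base makes Box generic
    def get(self) -> T: ...

def unwrap(box: Box[int]) -> int:
    return box.get()           # T is solved as int here
```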
Dylan
2370297cde Bump 0.11.9 (#17986) 2025-05-09 10:43:27 -05:00
Alex Waygood
a137cb18d4 [ty] Display "All checks passed!" message in green (#17982) 2025-05-09 14:29:43 +01:00
Alex Waygood
03a4d56624 [ty] Change range of revealed-type diagnostic to be the range of the argument passed in, not the whole call (#17980) 2025-05-09 14:15:39 +01:00
David Peter
642eac452d [ty] Recursive protocols (#17929)
## Summary

Use a self-reference "marker" ~~and fixpoint iteration~~ to solve the
stack overflow problems with recursive protocols. This is not pretty and
somewhat tedious, but seems to work fine. Much better than all my
fixpoint-iteration attempts anyway.
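
A sketch of the self-referential shape that used to overflow the stack (my minimal assumed reproducer, not the original):

```py
from typing import Protocol

class Node(Protocol):
    value: int
    next: "Node | None"        # the protocol refers to itself

def walk(n: Node | None) -> None:
    while n is not None:
        print(n.value)
        n = n.next
```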

closes https://github.com/astral-sh/ty/issues/93

## Test Plan

New Markdown tests.
2025-05-09 14:54:02 +02:00
Micha Reiser
c1b875799b [ty] CLI reference (#17978) 2025-05-09 14:23:24 +02:00
Micha Reiser
6cd8a49638 [ty] Update salsa (#17964) 2025-05-09 11:54:07 +02:00
Micha Reiser
12ce445ff7 [ty] Document configuration schema (#17950) 2025-05-09 10:47:45 +02:00
justin
f46ed8d410 [ty] Add --config CLI arg (#17697) 2025-05-09 08:38:37 +02:00
Carl Meyer
6c177e2bbe [ty] primer updates (#17903)
## Summary

Update ecosystem project lists in light of
https://github.com/astral-sh/ruff/pull/17758

## Test Plan

CI on this PR.
2025-05-08 20:43:31 -07:00
Carl Meyer
3d2485eb1b [ty] fix more ecosystem/fuzzer panics with fixpoint (#17758)
## Summary

Add cycle handling for `try_metaclass` and `pep695_generic_context`
queries, as well as adjusting the cycle handling for `try_mro` to ensure
that it short-circuits on cycles and won't grow MROs indefinitely.

This reduces the number of failing fuzzer seeds from 68 to 17. The
latter count includes fuzzer seeds 120, 160, and 335, all of which
previously panicked but now either hang or are very slow; I've
temporarily skipped those seeds in the fuzzer until I can dig into that
slowness further.

This also allows us to move some more ecosystem projects from `bad.txt`
to `good.txt`, which I've done in
https://github.com/astral-sh/ruff/pull/17903

## Test Plan

Added mdtests.
2025-05-08 20:36:20 -07:00
Douglas Creager
f78367979e [ty] Remove SliceLiteral type variant (#17958)
@AlexWaygood pointed out that the `SliceLiteral` type variant was
originally created to handle slices before we had generics.
https://github.com/astral-sh/ruff/pull/17927#discussion_r2078115787

Now that we _do_ have generics, we can use a specialization of the
`slice` builtin type for slice literals.
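
As a hedged sketch of what that means in practice, assuming the current typeshed where `slice` is generic over its start/stop/step types:

```py
s = slice(1, 3)
reveal_type(s)          # roughly: slice[int, int, None]

class Row:
    def __getitem__(self, key: slice) -> str:
        return f"{key.start}:{key.stop}"

Row()[1:3]              # the slice literal 1:3 gets the same kind of specialization
```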

This depends on https://github.com/astral-sh/ruff/pull/17956, since we
need to make sure that all typevar defaults are fully substituted when
specializing `slice`.
2025-05-08 20:16:41 -04:00
Douglas Creager
b705664d49 [ty] Handle typevars that have other typevars as a default (#17956)
It's possible for a typevar to list another typevar as its default
value:

```py
class C[T, U = T]: ...
```

When specializing this class, if a type isn't provided for `U`, we would
previously use the default as-is, leaving an unspecialized `T` typevar
in the specialization. Instead, we want to use what `T` is mapped to as
the type of `U`.

```py
reveal_type(C())  # revealed: C[Unknown, Unknown]
reveal_type(C[int]())  # revealed: C[int, int]
reveal_type(C[int, str]())  # revealed: C[int, str]
```

This is especially important for the `slice` built-in type.
2025-05-08 19:01:27 -04:00
Alex Waygood
f51f1f7153 [ty] Support extending __all__ from an imported module even when the module is not an ExprName node (#17947) 2025-05-08 23:54:19 +01:00
Alex Waygood
9b694ada82 [ty] Report duplicate Protocol or Generic base classes with [duplicate-base], not [inconsistent-mro] (#17971) 2025-05-08 23:41:22 +01:00
Alex Waygood
4d81a41107 [ty] Respect the gradual guarantee when reporting errors in resolving MROs (#17962) 2025-05-08 22:57:39 +01:00
Brent Westbrook
981bd70d39 Convert Message::SyntaxError to use Diagnostic internally (#17784)
## Summary

This PR is a first step toward integration of the new `Diagnostic` type
into ruff. There are two main changes:
- A new `UnifiedFile` enum wrapping `File` for red-knot and a
`SourceFile` for ruff
- ruff's `Message::SyntaxError` variant is now a `Diagnostic` instead of
a `SyntaxErrorMessage`

The second of these changes was mostly just a proof of concept for the
first, and it went pretty smoothly. Converting `DiagnosticMessage`s will
be most of the work in replacing `Message` entirely.

## Test Plan

Existing tests, which show no changes.

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-05-08 12:45:51 -04:00
Alex Waygood
0763331f7f [ty] Support extending __all__ with a literal tuple or set as well as a literal list (#17948) 2025-05-08 17:37:25 +01:00
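
Presumably this covers forms like the following (a sketch):

```py
__all__ = ["a"]
__all__ += ("b",)    # extending with a literal tuple
__all__ += {"c"}     # extending with a literal set
```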
Alex Waygood
da8540862d [ty] Make unused-ignore-comment disabled by default for now (#17955) 2025-05-08 17:21:34 +01:00
Micha Reiser
6a5533c44c [ty] Change default severity for unbound-reference to error (#17936) 2025-05-08 17:54:46 +02:00
Micha Reiser
d608eae126 [ty] Ignore possibly-unresolved-reference by default (#17934) 2025-05-08 17:44:56 +02:00
Micha Reiser
067a8ac574 [ty] Default to latest supported python version (#17938) 2025-05-08 16:58:35 +02:00
Micha Reiser
5eb215e8e5 [ty] Generate and add rules table (#17953) 2025-05-08 16:55:39 +02:00
Zanie Blue
91aa853b9c Update the schemastore script to match changes in ty (#17952)
See https://github.com/astral-sh/ty/pull/273
2025-05-08 09:31:52 -05:00
Brent Westbrook
57bf7dfbd9 [ty] Implement global handling and load-before-global-declaration syntax error (#17637)
Summary
--

This PR resolves both the typing-related and syntax error TODOs added in
#17563 by tracking a set of `global` bindings for each scope. As
discussed below, we avoid the additional AST traversal from ruff by
collecting `Name`s from `global` statements while building the semantic
index and emitting a syntax error if the `Name` is already bound in the
current scope at the point of the `global` statement. This has the
downside of separating the error from the `SemanticSyntaxChecker`, but I
plan to explore using this approach in the `SemanticSyntaxChecker`
itself as a follow-up. It seems like this may be a better approach for
ruff as well.
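
The shape of code the new syntax error covers (CPython rejects it at compile time with the same message):

```py
def f():
    x = 1
    global x   # SyntaxError: name 'x' is assigned to before global declaration
```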

Test Plan
--

Updated all of the related mdtests to remove the TODOs (and add quotes I
forgot on the messages).

There is one remaining TODO, but it requires `nonlocal` support, which
isn't even incorporated into the `SemanticSyntaxChecker` yet.

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Carl Meyer <carl@astral.sh>
2025-05-08 10:30:04 -04:00
Alex Waygood
67cd94ed64 [ty] Add missing bitwise-operator branches for boolean and integer arithmetic (#17949) 2025-05-08 14:10:35 +01:00
Wei Lee
aac862822f [airflow] Fix SQLTableCheckOperator typo (AIR302) (#17946) 2025-05-08 14:34:55 +02:00
Micha Reiser
3755ac9fac Update ty metadata (#17943) 2025-05-08 13:24:31 +02:00
David Peter
4f890b2867 [ty] Update salsa (#17937)
## Summary

* Update salsa to pull in https://github.com/salsa-rs/salsa/pull/850.
* Some refactoring of salsa event callbacks in various `Db`'s due to
https://github.com/salsa-rs/salsa/pull/849

closes https://github.com/astral-sh/ty/issues/108

## Test Plan

Ran `cargo run --bin ty -- -vvv` on a test file to make sure that salsa
Events are still logged.
2025-05-08 12:02:53 +02:00
1528 changed files with 26430 additions and 11950 deletions

1
.gitattributes vendored
View File

@@ -21,6 +21,7 @@ crates/ruff_linter/resources/test/fixtures/pyupgrade/UP018_LF.py text eol=lf
crates/ruff_python_parser/resources/inline linguist-generated=true
ruff.schema.json -diff linguist-generated=true text=auto eol=lf
ty.schema.json -diff linguist-generated=true text=auto eol=lf
crates/ruff_python_ast/src/generated.rs -diff linguist-generated=true text=auto eol=lf
crates/ruff_python_formatter/src/generated.rs -diff linguist-generated=true text=auto eol=lf
*.md.snap linguist-language=Markdown

View File

@@ -1,8 +1,9 @@
<!--
Thank you for contributing to Ruff! To help us out with reviewing, please consider the following:
Thank you for contributing to Ruff/ty! To help us out with reviewing, please consider the following:
- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title?
- Does this pull request include a descriptive title? (Please prefix with `[ty]` for ty pull
requests.)
- Does this pull request include references to any relevant issues?
-->

7
.github/mypy-primer-ty.toml vendored Normal file
View File

@@ -0,0 +1,7 @@
#:schema ../ty.schema.json
# Configuration overrides for the mypy primer run
# Enable off-by-default rules.
[rules]
possibly-unresolved-reference = "warn"
unused-ignore-comment = "warn"

View File

@@ -310,7 +310,7 @@ jobs:
manylinux: auto
docker-options: ${{ matrix.platform.maturin_docker_options }}
args: --release --locked --out dist
- uses: uraimo/run-on-arch-action@ac33288c3728ca72563c97b8b88dda5a65a84448 # v2
- uses: uraimo/run-on-arch-action@d94c13912ea685de38fccc1109385b83fd79427d # v3.0.1
if: ${{ matrix.platform.arch != 'ppc64' && matrix.platform.arch != 'ppc64le'}}
name: Test wheel
with:
@@ -441,7 +441,7 @@ jobs:
manylinux: musllinux_1_2
args: --release --locked --out dist
docker-options: ${{ matrix.platform.maturin_docker_options }}
- uses: uraimo/run-on-arch-action@ac33288c3728ca72563c97b8b88dda5a65a84448 # v2
- uses: uraimo/run-on-arch-action@d94c13912ea685de38fccc1109385b83fd79427d # v3.0.1
name: Test wheel
with:
arch: ${{ matrix.platform.arch }}

View File

@@ -38,9 +38,9 @@ jobs:
submodules: recursive
persist-credentials: false
- uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3
- uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3
- uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -63,7 +63,7 @@ jobs:
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
with:
images: ${{ env.RUFF_BASE_IMG }}
# Defining this makes sure the org.opencontainers.image.version OCI label becomes the actual release version and not the branch name
@@ -79,7 +79,7 @@ jobs:
# Adapted from https://docs.docker.com/build/ci/github-actions/multi-platform/
- name: Build and push by digest
id: build
uses: docker/build-push-action@14487ce63c7a62a4a324b0bfb37086795e31c6c1 # v6
uses: docker/build-push-action@1dc73863535b631f98b2378be8619f83b136f4a0 # v6.17.0
with:
context: .
platforms: ${{ matrix.platform }}
@@ -119,11 +119,11 @@ jobs:
pattern: digests-*
merge-multiple: true
- uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3
- uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
with:
images: ${{ env.RUFF_BASE_IMG }}
# Order is on purpose such that the label org.opencontainers.image.version has the first pattern with the full version
@@ -131,7 +131,7 @@ jobs:
type=pep440,pattern={{ version }},value=${{ fromJson(inputs.plan).announcement_tag }}
type=pep440,pattern={{ major }}.{{ minor }},value=${{ fromJson(inputs.plan).announcement_tag }}
- uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3
- uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -167,9 +167,9 @@ jobs:
- debian:bookworm-slim,bookworm-slim,debian-slim
- buildpack-deps:bookworm,bookworm,debian
steps:
- uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3
- uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3
- uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -219,7 +219,7 @@ jobs:
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
# ghcr.io prefers index level annotations
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: index
@@ -231,7 +231,7 @@ jobs:
${{ env.TAG_PATTERNS }}
- name: Build and push
uses: docker/build-push-action@14487ce63c7a62a4a324b0bfb37086795e31c6c1 # v6
uses: docker/build-push-action@1dc73863535b631f98b2378be8619f83b136f4a0 # v6.17.0
with:
context: .
platforms: linux/amd64,linux/arm64
@@ -262,11 +262,11 @@ jobs:
pattern: digests-*
merge-multiple: true
- uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3
- uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: index
with:
@@ -276,7 +276,7 @@ jobs:
type=pep440,pattern={{ version }},value=${{ fromJson(inputs.plan).announcement_tag }}
type=pep440,pattern={{ major }}.{{ minor }},value=${{ fromJson(inputs.plan).announcement_tag }}
- uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3
- uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}

View File

@@ -239,11 +239,11 @@ jobs:
- name: "Install mold"
uses: rui314/setup-mold@e16410e7f8d9e167b74ad5697a9089a35126eb50 # v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@86c23eed46c17b80677df6d8151545ce3e236c61 # v2
uses: taiki-e/install-action@941e8a4d9d7cdb696bd4f017cf54aca281f8ffff # v2.51.2
with:
tool: cargo-nextest
- name: "Install cargo insta"
uses: taiki-e/install-action@86c23eed46c17b80677df6d8151545ce3e236c61 # v2
uses: taiki-e/install-action@941e8a4d9d7cdb696bd4f017cf54aca281f8ffff # v2.51.2
with:
tool: cargo-insta
- name: ty mdtests (GitHub annotations)
@@ -297,11 +297,11 @@ jobs:
- name: "Install mold"
uses: rui314/setup-mold@e16410e7f8d9e167b74ad5697a9089a35126eb50 # v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@86c23eed46c17b80677df6d8151545ce3e236c61 # v2
uses: taiki-e/install-action@941e8a4d9d7cdb696bd4f017cf54aca281f8ffff # v2.51.2
with:
tool: cargo-nextest
- name: "Install cargo insta"
uses: taiki-e/install-action@86c23eed46c17b80677df6d8151545ce3e236c61 # v2
uses: taiki-e/install-action@941e8a4d9d7cdb696bd4f017cf54aca281f8ffff # v2.51.2
with:
tool: cargo-insta
- name: "Run tests"
@@ -324,7 +324,7 @@ jobs:
- name: "Install Rust toolchain"
run: rustup show
- name: "Install cargo nextest"
uses: taiki-e/install-action@86c23eed46c17b80677df6d8151545ce3e236c61 # v2
uses: taiki-e/install-action@941e8a4d9d7cdb696bd4f017cf54aca281f8ffff # v2.51.2
with:
tool: cargo-nextest
- name: "Run tests"
@@ -407,11 +407,11 @@ jobs:
- name: "Install mold"
uses: rui314/setup-mold@e16410e7f8d9e167b74ad5697a9089a35126eb50 # v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@86c23eed46c17b80677df6d8151545ce3e236c61 # v2
uses: taiki-e/install-action@941e8a4d9d7cdb696bd4f017cf54aca281f8ffff # v2.51.2
with:
tool: cargo-nextest
- name: "Install cargo insta"
uses: taiki-e/install-action@86c23eed46c17b80677df6d8151545ce3e236c61 # v2
uses: taiki-e/install-action@941e8a4d9d7cdb696bd4f017cf54aca281f8ffff # v2.51.2
with:
tool: cargo-insta
- name: "Run tests"
@@ -437,7 +437,7 @@ jobs:
- name: "Install Rust toolchain"
run: rustup show
- name: "Install cargo-binstall"
uses: cargo-bins/cargo-binstall@63aaa5c1932cebabc34eceda9d92a70215dcead6 # v1.12.3
uses: cargo-bins/cargo-binstall@5cbf019d8cb9b9d5b086218c41458ea35d817691 # v1.12.5
with:
tool: cargo-fuzz@0.11.2
- name: "Install cargo-fuzz"
@@ -459,7 +459,7 @@ jobs:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- uses: astral-sh/setup-uv@d4b2f3b6ecc6e67c4457f6d3e41ec42d3d0fcb86 # v5.4.2
- uses: astral-sh/setup-uv@6b9c6063abd6010835644d4c2e1bef4cf5cd0fca # v6.0.1
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
name: Download Ruff binary to test
id: download-cached-binary
@@ -504,12 +504,10 @@ jobs:
# Verify that adding a plugin or rule produces clean code.
- run: ./scripts/add_rule.py --name DoTheThing --prefix F --code 999 --linter pyflakes
- run: cargo check
- run: cargo fmt --all --check
- run: |
./scripts/add_plugin.py test --url https://pypi.org/project/-test/0.1.0/ --prefix TST
./scripts/add_rule.py --name FirstRule --prefix TST --code 001 --linter test
- run: cargo check
- run: cargo fmt --all --check
ecosystem:
name: "ecosystem"
@@ -662,7 +660,7 @@ jobs:
branch: ${{ github.event.pull_request.base.ref }}
workflow: "ci.yaml"
check_artifacts: true
- uses: astral-sh/setup-uv@d4b2f3b6ecc6e67c4457f6d3e41ec42d3d0fcb86 # v5.4.2
- uses: astral-sh/setup-uv@6b9c6063abd6010835644d4c2e1bef4cf5cd0fca # v6.0.1
- name: Fuzz
env:
FORCE_COLOR: 1
@@ -692,7 +690,7 @@ jobs:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- uses: cargo-bins/cargo-binstall@63aaa5c1932cebabc34eceda9d92a70215dcead6 # v1.12.3
- uses: cargo-bins/cargo-binstall@5cbf019d8cb9b9d5b086218c41458ea35d817691 # v1.12.5
- run: cargo binstall --no-confirm cargo-shear
- run: cargo shear
@@ -732,7 +730,11 @@ jobs:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- uses: astral-sh/setup-uv@d4b2f3b6ecc6e67c4457f6d3e41ec42d3d0fcb86 # v5.4.2
- uses: astral-sh/setup-uv@6b9c6063abd6010835644d4c2e1bef4cf5cd0fca # v6.0.1
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
with:
node-version: 22
- name: "Cache pre-commit"
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
@@ -771,7 +773,7 @@ jobs:
- name: "Install Rust toolchain"
run: rustup show
- name: Install uv
uses: astral-sh/setup-uv@d4b2f3b6ecc6e67c4457f6d3e41ec42d3d0fcb86 # v5.4.2
uses: astral-sh/setup-uv@6b9c6063abd6010835644d4c2e1bef4cf5cd0fca # v6.0.1
- name: "Install Insiders dependencies"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
run: uv pip install -r docs/requirements-insiders.txt --system
@@ -820,7 +822,7 @@ jobs:
- determine_changes
if: ${{ !contains(github.event.pull_request.labels.*.name, 'no-test') && (needs.determine_changes.outputs.code == 'true' || github.ref == 'refs/heads/main') }}
steps:
- uses: extractions/setup-just@dd310ad5a97d8e7b41793f8ef055398d51ad4de6 # v2
- uses: extractions/setup-just@e33e0265a09d6d736e2ee1e0eb685ef1de4669ff # v3.0.0
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -908,7 +910,7 @@ jobs:
run: rustup show
- name: "Install codspeed"
uses: taiki-e/install-action@86c23eed46c17b80677df6d8151545ce3e236c61 # v2
uses: taiki-e/install-action@941e8a4d9d7cdb696bd4f017cf54aca281f8ffff # v2.51.2
with:
tool: cargo-codspeed

View File

@@ -34,7 +34,7 @@ jobs:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- uses: astral-sh/setup-uv@d4b2f3b6ecc6e67c4457f6d3e41ec42d3d0fcb86 # v5.4.2
- uses: astral-sh/setup-uv@6b9c6063abd6010835644d4c2e1bef4cf5cd0fca # v6.0.1
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"

View File

@@ -1,72 +0,0 @@
name: Daily property test run
on:
workflow_dispatch:
schedule:
- cron: "0 12 * * *"
pull_request:
paths:
- ".github/workflows/daily_property_tests.yaml"
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
env:
CARGO_INCREMENTAL: 0
CARGO_NET_RETRY: 10
CARGO_TERM_COLOR: always
RUSTUP_MAX_RETRIES: 10
FORCE_COLOR: 1
jobs:
property_tests:
name: Property tests
runs-on: ubuntu-latest
timeout-minutes: 20
# Don't run the cron job on forks:
if: ${{ github.repository == 'astral-sh/ruff' || github.event_name != 'schedule' }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
uses: rui314/setup-mold@e16410e7f8d9e167b74ad5697a9089a35126eb50 # v1
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- name: Build ty
# A release build takes longer (2 min vs 1 min), but the property tests run much faster in release
# mode (1.5 min vs 14 min), so the overall time is shorter with a release build.
run: cargo build --locked --release --package ty_python_semantic --tests
- name: Run property tests
shell: bash
run: |
export QUICKCHECK_TESTS=100000
for _ in {1..5}; do
cargo test --locked --release --package ty_python_semantic -- --ignored list::property_tests
cargo test --locked --release --package ty_python_semantic -- --ignored types::property_tests::stable
done
create-issue-on-failure:
name: Create an issue if the daily property test run surfaced any bugs
runs-on: ubuntu-latest
needs: property_tests
if: ${{ github.repository == 'astral-sh/ruff' && always() && github.event_name == 'schedule' && needs.property_tests.result == 'failure' }}
permissions:
issues: write
steps:
- uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
await github.rest.issues.create({
owner: "astral-sh",
repo: "ruff",
title: `Daily property test run failed on ${new Date().toDateString()}`,
body: "Run listed here: https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}",
labels: ["bug", "ty", "testing"],
})

View File

@@ -36,7 +36,7 @@ jobs:
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@d4b2f3b6ecc6e67c4457f6d3e41ec42d3d0fcb86 # v5.4.2
uses: astral-sh/setup-uv@6b9c6063abd6010835644d4c2e1bef4cf5cd0fca # v6.0.1
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
with:
@@ -50,6 +50,10 @@ jobs:
run: |
cd ruff
echo "Enabling mypy primer specific configuration overloads (see .github/mypy-primer-ty.toml)"
mkdir -p ~/.config/ty
cp .github/mypy-primer-ty.toml ~/.config/ty/ty.toml
PRIMER_SELECTOR="$(paste -s -d'|' crates/ty_python_semantic/resources/primer/good.txt)"
echo "new commit"
@@ -65,7 +69,7 @@ jobs:
echo "Project selector: $PRIMER_SELECTOR"
# Allow the exit code to be 0 or 1, only fail for actual mypy_primer crashes/bugs
uvx \
--from="git+https://github.com/hauntsaninja/mypy_primer@4b15cf3b07db69db67bbfaebfffb2a8a28040933" \
--from="git+https://github.com/hauntsaninja/mypy_primer@01a7ca325f674433c58e02416a867178d1571128" \
mypy_primer \
--repo ruff \
--type-checker ty \

View File

@@ -79,7 +79,7 @@ jobs:
echo 'EOF' >> "$GITHUB_OUTPUT"
- name: Find existing comment
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
if: steps.generate-comment.outcome == 'success'
id: find-comment
with:

View File

@@ -70,7 +70,7 @@ jobs:
echo 'EOF' >> "$GITHUB_OUTPUT"
- name: Find existing comment
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
if: steps.generate-comment.outcome == 'success'
id: find-comment
with:

View File

@@ -22,7 +22,7 @@ jobs:
id-token: write
steps:
- name: "Install uv"
uses: astral-sh/setup-uv@d4b2f3b6ecc6e67c4457f6d3e41ec42d3d0fcb86 # v5.4.2
uses: astral-sh/setup-uv@6b9c6063abd6010835644d4c2e1bef4cf5cd0fca # v6.0.1
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
pattern: wheels-*

View File

@@ -29,3 +29,7 @@ MD024:
#
# Ref: https://github.com/astral-sh/ruff/pull/15011#issuecomment-2544790854
MD046: false
# Link text should be descriptive
# Disallows link text like *here* which is annoying.
MD059: false

View File

@@ -5,6 +5,7 @@ exclude: |
.github/workflows/release.yml|
crates/ty_vendored/vendor/.*|
crates/ty_project/resources/.*|
crates/ty/docs/(configuration|rules|cli).md|
crates/ruff_benchmark/resources/.*|
crates/ruff_linter/resources/.*|
crates/ruff_linter/src/rules/.*/snapshots/.*|
@@ -42,7 +43,7 @@ repos:
)$
- repo: https://github.com/igorshubovych/markdownlint-cli
rev: v0.44.0
rev: v0.45.0
hooks:
- id: markdownlint-fix
exclude: |
@@ -79,7 +80,7 @@ repos:
pass_filenames: false # This makes it a lot faster
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.11.8
rev: v0.11.10
hooks:
- id: ruff-format
- id: ruff
@@ -97,7 +98,7 @@ repos:
# zizmor detects security vulnerabilities in GitHub Actions workflows.
# Additional configuration for the tool is found in `.github/zizmor.yml`
- repo: https://github.com/woodruffw/zizmor-pre-commit
rev: v1.6.0
rev: v1.7.0
hooks:
- id: zizmor

View File

@@ -1,5 +1,77 @@
# Changelog
## 0.11.10
### Preview features
- \[`ruff`\] Implement a recursive check for `RUF060` ([#17976](https://github.com/astral-sh/ruff/pull/17976))
- \[`airflow`\] Enable autofixes for `AIR301` and `AIR311` ([#17941](https://github.com/astral-sh/ruff/pull/17941))
- \[`airflow`\] Apply try catch guard to all `AIR3` rules ([#17887](https://github.com/astral-sh/ruff/pull/17887))
- \[`airflow`\] Extend `AIR311` rules ([#17913](https://github.com/astral-sh/ruff/pull/17913))
### Bug fixes
- \[`flake8-bugbear`\] Ignore `B028` if `skip_file_prefixes` is present ([#18047](https://github.com/astral-sh/ruff/pull/18047))
- \[`flake8-pie`\] Mark autofix for `PIE804` as unsafe if the dictionary contains comments ([#18046](https://github.com/astral-sh/ruff/pull/18046))
- \[`flake8-simplify`\] Correct behavior for `str.split`/`rsplit` with `maxsplit=0` (`SIM905`) ([#18075](https://github.com/astral-sh/ruff/pull/18075))
- \[`flake8-simplify`\] Fix `SIM905` autofix for `rsplit` creating a reversed list literal ([#18045](https://github.com/astral-sh/ruff/pull/18045))
- \[`flake8-use-pathlib`\] Suppress diagnostics for all `os.*` functions that have the `dir_fd` parameter (`PTH`) ([#17968](https://github.com/astral-sh/ruff/pull/17968))
- \[`refurb`\] Mark autofix as safe only for number literals (`FURB116`) ([#17692](https://github.com/astral-sh/ruff/pull/17692))
### Rule changes
- \[`flake8-bandit`\] Skip `S608` for expressionless f-strings ([#17999](https://github.com/astral-sh/ruff/pull/17999))
- \[`flake8-pytest-style`\] Don't recommend `usefixtures` for `parametrize` values (`PT019`) ([#17650](https://github.com/astral-sh/ruff/pull/17650))
- \[`pyupgrade`\] Add `resource.error` as deprecated alias of `OSError` (`UP024`) ([#17933](https://github.com/astral-sh/ruff/pull/17933))
### CLI
- Disable jemalloc on Android ([#18033](https://github.com/astral-sh/ruff/pull/18033))
### Documentation
- Update Neovim setup docs ([#18108](https://github.com/astral-sh/ruff/pull/18108))
- \[`flake8-simplify`\] Add fix safety section (`SIM103`) ([#18086](https://github.com/astral-sh/ruff/pull/18086))
- \[`flake8-simplify`\] Add fix safety section (`SIM112`) ([#18099](https://github.com/astral-sh/ruff/pull/18099))
- \[`pylint`\] Add fix safety section (`PLC0414`) ([#17802](https://github.com/astral-sh/ruff/pull/17802))
- \[`pylint`\] Add fix safety section (`PLE4703`) ([#17824](https://github.com/astral-sh/ruff/pull/17824))
- \[`pylint`\] Add fix safety section (`PLW1514`) ([#17932](https://github.com/astral-sh/ruff/pull/17932))
- \[`pylint`\] Add fix safety section (`PLW3301`) ([#17878](https://github.com/astral-sh/ruff/pull/17878))
- \[`ruff`\] Add fix safety section (`RUF007`) ([#17755](https://github.com/astral-sh/ruff/pull/17755))
- \[`ruff`\] Add fix safety section (`RUF033`) ([#17760](https://github.com/astral-sh/ruff/pull/17760))
## 0.11.9
### Preview features
- Default to latest supported Python version for version-related syntax errors ([#17529](https://github.com/astral-sh/ruff/pull/17529))
- Implement deferred annotations for Python 3.14 ([#17658](https://github.com/astral-sh/ruff/pull/17658))
- \[`airflow`\] Fix `SQLTableCheckOperator` typo (`AIR302`) ([#17946](https://github.com/astral-sh/ruff/pull/17946))
- \[`airflow`\] Remove `airflow.utils.dag_parsing_context.get_parsing_context` (`AIR301`) ([#17852](https://github.com/astral-sh/ruff/pull/17852))
- \[`airflow`\] Skip attribute check in try catch block (`AIR301`) ([#17790](https://github.com/astral-sh/ruff/pull/17790))
- \[`flake8-bandit`\] Mark tuples of string literals as trusted input in `S603` ([#17801](https://github.com/astral-sh/ruff/pull/17801))
- \[`isort`\] Check full module path against project root(s) when categorizing first-party imports ([#16565](https://github.com/astral-sh/ruff/pull/16565))
- \[`ruff`\] Add new rule `in-empty-collection` (`RUF060`) ([#16480](https://github.com/astral-sh/ruff/pull/16480))
### Bug fixes
- Fix missing `combine` call for `lint.typing-extensions` setting ([#17823](https://github.com/astral-sh/ruff/pull/17823))
- \[`flake8-async`\] Fix module name in `ASYNC110`, `ASYNC115`, and `ASYNC116` fixes ([#17774](https://github.com/astral-sh/ruff/pull/17774))
- \[`pyupgrade`\] Add spaces between tokens as necessary to avoid syntax errors in `UP018` autofix ([#17648](https://github.com/astral-sh/ruff/pull/17648))
- \[`refurb`\] Fix false positive for float and complex numbers in `FURB116` ([#17661](https://github.com/astral-sh/ruff/pull/17661))
- [parser] Flag single unparenthesized generator expr with trailing comma in arguments. ([#17893](https://github.com/astral-sh/ruff/pull/17893))
### Documentation
- Add instructions on how to upgrade to a newer Rust version ([#17928](https://github.com/astral-sh/ruff/pull/17928))
- Update code of conduct email address ([#17875](https://github.com/astral-sh/ruff/pull/17875))
- Add fix safety sections to `PLC2801`, `PLR1722`, and `RUF013` ([#17825](https://github.com/astral-sh/ruff/pull/17825), [#17826](https://github.com/astral-sh/ruff/pull/17826), [#17759](https://github.com/astral-sh/ruff/pull/17759))
- Add link to `check-typed-exception` from `S110` and `S112` ([#17786](https://github.com/astral-sh/ruff/pull/17786))
### Other changes
- Allow passing a virtual environment to `ruff analyze graph` ([#17743](https://github.com/astral-sh/ruff/pull/17743))
## 0.11.8
### Preview features

View File

@@ -2,6 +2,11 @@
Welcome! We're happy to have you here. Thank you in advance for your contribution to Ruff.
> [!NOTE]
>
> This guide is for Ruff. If you're looking to contribute to ty, please see [the ty contributing
> guide](https://github.com/astral-sh/ruff/blob/main/crates/ty/CONTRIBUTING.md).
## The Basics
Ruff welcomes contributions in the form of pull requests.
@@ -406,7 +411,7 @@ cargo install hyperfine
To benchmark the release build:
```shell
cargo build --release && hyperfine --warmup 10 \
cargo build --release --bin ruff && hyperfine --warmup 10 \
"./target/release/ruff check ./crates/ruff_linter/resources/test/cpython/ --no-cache -e" \
"./target/release/ruff check ./crates/ruff_linter/resources/test/cpython/ -e"
@@ -605,8 +610,7 @@ Then convert the recorded profile
perf script -F +pid > /tmp/test.perf
```
You can now view the converted file with [firefox profiler](https://profiler.firefox.com/), with a
more in-depth guide [here](https://profiler.firefox.com/docs/#/./guide-perf-profiling)
You can now view the converted file with [firefox profiler](https://profiler.firefox.com/). To learn more about Firefox profiler, read the [Firefox profiler profiling-guide](https://profiler.firefox.com/docs/#/./guide-perf-profiling).
An alternative is to convert the perf data to `flamegraph.svg` using
[flamegraph](https://github.com/flamegraph-rs/flamegraph) (`cargo install flamegraph`):

833
Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -3,8 +3,9 @@ members = ["crates/*"]
resolver = "2"
[workspace.package]
edition = "2021"
rust-version = "1.84"
# Please update rustfmt.toml when bumping the Rust edition
edition = "2024"
rust-version = "1.85"
homepage = "https://docs.astral.sh/ruff"
documentation = "https://docs.astral.sh/ruff"
repository = "https://github.com/astral-sh/ruff"
@@ -23,6 +24,7 @@ ruff_index = { path = "crates/ruff_index" }
ruff_linter = { path = "crates/ruff_linter" }
ruff_macros = { path = "crates/ruff_macros" }
ruff_notebook = { path = "crates/ruff_notebook" }
ruff_options_metadata = { path = "crates/ruff_options_metadata" }
ruff_python_ast = { path = "crates/ruff_python_ast" }
ruff_python_codegen = { path = "crates/ruff_python_codegen" }
ruff_python_formatter = { path = "crates/ruff_python_formatter" }
@@ -37,6 +39,7 @@ ruff_source_file = { path = "crates/ruff_source_file" }
ruff_text_size = { path = "crates/ruff_text_size" }
ruff_workspace = { path = "crates/ruff_workspace" }
ty = { path = "crates/ty" }
ty_ide = { path = "crates/ty_ide" }
ty_project = { path = "crates/ty_project", default-features = false }
ty_python_semantic = { path = "crates/ty_python_semantic" }
@@ -50,7 +53,7 @@ anstyle = { version = "1.0.10" }
anyhow = { version = "1.0.80" }
assert_fs = { version = "1.1.0" }
argfile = { version = "0.2.0" }
bincode = { version = "1.3.3" }
bincode = { version = "2.0.0" }
bitflags = { version = "2.5.0" }
bstr = { version = "1.9.1" }
cachedir = { version = "0.3.1" }
@@ -64,7 +67,7 @@ console_error_panic_hook = { version = "0.1.7" }
console_log = { version = "1.0.0" }
countme = { version = "3.0.1" }
compact_str = "0.9.0"
criterion = { version = "0.5.1", default-features = false }
criterion = { version = "0.6.0", default-features = false }
crossbeam = { version = "0.8.4" }
dashmap = { version = "6.0.1" }
dir-test = { version = "0.4.0" }
@@ -83,6 +86,7 @@ hashbrown = { version = "0.15.0", default-features = false, features = [
"equivalent",
"inline-more",
] }
heck = "0.5.0"
ignore = { version = "0.4.22" }
imara-diff = { version = "0.1.5" }
imperative = { version = "1.0.4" }
@@ -96,7 +100,7 @@ is-wsl = { version = "0.4.0" }
itertools = { version = "0.14.0" }
jiff = { version = "0.2.0" }
js-sys = { version = "0.3.69" }
jod-thread = { version = "0.1.2" }
jod-thread = { version = "1.0.0" }
libc = { version = "0.2.153" }
libcst = { version = "1.1.0", default-features = false }
log = { version = "0.4.17" }
@@ -123,8 +127,9 @@ rand = { version = "0.9.0" }
rayon = { version = "1.10.0" }
regex = { version = "1.10.2" }
rustc-hash = { version = "2.0.0" }
rustc-stable-hash = { version = "0.1.2" }
# When updating salsa, make sure to also update the revision in `fuzz/Cargo.toml`
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "ef93d3830ffb8dc621fef1ba7b234ccef9dc3639" }
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "7edce6e248f35c8114b4b021cdb474a3fb2813b3" }
schemars = { version = "0.8.16" }
seahash = { version = "4.1.0" }
serde = { version = "1.0.197", features = ["derive"] }
@@ -159,8 +164,9 @@ tracing-log = { version = "0.2.0" }
tracing-subscriber = { version = "0.3.18", default-features = false, features = [
"env-filter",
"fmt",
"ansi",
"smallvec"
] }
tracing-tree = { version = "0.4.0" }
tryfn = { version = "0.2.1" }
typed-arena = { version = "2.0.2" }
unic-ucd-category = { version = "0.9" }
@@ -182,7 +188,7 @@ wild = { version = "2" }
zip = { version = "0.6.6", default-features = false }
[workspace.metadata.cargo-shear]
ignored = ["getrandom"]
ignored = ["getrandom", "ruff_options_metadata"]
[workspace.lints.rust]
@@ -210,6 +216,7 @@ similar_names = "allow"
single_match_else = "allow"
too_many_lines = "allow"
needless_continue = "allow" # An explicit continue can be more readable, especially if the alternative is an empty block.
unnecessary_debug_formatting = "allow" # too many instances, the display also doesn't quote the path which is often desired in logs where we use them the most often.
# Without the hashes we run into a `rustfmt` bug in some snapshot tests, see #13250
needless_raw_string_hashes = "allow"
# Disallowed restriction lints
@@ -254,6 +261,9 @@ opt-level = 3
[profile.dev.package.similar]
opt-level = 3
[profile.dev.package.salsa]
opt-level = 3
# Reduce complexity of a parser function that would trigger a locals limit in a wasm tool.
# https://github.com/bytecodealliance/wasm-tools/blob/b5c3d98e40590512a3b12470ef358d5c7b983b15/crates/wasmparser/src/limits.rs#L29
[profile.dev.package.ruff_python_parser]

View File

@@ -149,8 +149,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
# For a specific version.
curl -LsSf https://astral.sh/ruff/0.11.8/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.11.8/install.ps1 | iex"
curl -LsSf https://astral.sh/ruff/0.11.10/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.11.10/install.ps1 | iex"
```
You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
@@ -183,7 +183,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.11.8
rev: v0.11.10
hooks:
# Run the linter.
- id: ruff
@@ -255,7 +255,7 @@ indent-width = 4
target-version = "py39"
[lint]
# Enable Pyflakes (`F`) and a subset of the pycodestyle (`E`) codes by default.
# Enable Pyflakes (`F`) and a subset of the pycodestyle (`E`) codes by default.
select = ["E4", "E7", "E9", "F"]
ignore = []

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.11.8"
version = "0.11.10"
publish = true
authors = { workspace = true }
edition = { workspace = true }
@@ -20,6 +20,7 @@ ruff_graph = { workspace = true, features = ["serde", "clap"] }
ruff_linter = { workspace = true, features = ["clap"] }
ruff_macros = { workspace = true }
ruff_notebook = { workspace = true }
ruff_options_metadata = { workspace = true, features = ["serde"] }
ruff_python_ast = { workspace = true }
ruff_python_formatter = { workspace = true }
ruff_python_parser = { workspace = true }
@@ -30,7 +31,7 @@ ruff_workspace = { workspace = true }
anyhow = { workspace = true }
argfile = { workspace = true }
bincode = { workspace = true }
bincode = { workspace = true, features = ["serde"] }
bitflags = { workspace = true }
cachedir = { workspace = true }
clap = { workspace = true, features = ["derive", "env", "wrap_help"] }
@@ -83,7 +84,7 @@ dist = true
[target.'cfg(target_os = "windows")'.dependencies]
mimalloc = { workspace = true }
[target.'cfg(all(not(target_os = "windows"), not(target_os = "openbsd"), not(target_os = "aix"), any(target_arch = "x86_64", target_arch = "aarch64", target_arch = "powerpc64")))'.dependencies]
[target.'cfg(all(not(target_os = "windows"), not(target_os = "openbsd"), not(target_os = "aix"), not(target_os = "android"), any(target_arch = "x86_64", target_arch = "aarch64", target_arch = "powerpc64")))'.dependencies]
tikv-jemallocator = { workspace = true }
[lints]

View File

@@ -8,7 +8,7 @@ use std::sync::Arc;
use crate::commands::completions::config::{OptionString, OptionStringParser};
use anyhow::bail;
use clap::builder::{TypedValueParser, ValueParserFactory};
use clap::{command, Parser, Subcommand};
use clap::{Parser, Subcommand, command};
use colored::Colorize;
use itertools::Itertools;
use path_absolutize::path_dedot;
@@ -22,12 +22,12 @@ use ruff_linter::settings::types::{
PythonVersion, UnsafeFixes,
};
use ruff_linter::{RuleParser, RuleSelector, RuleSelectorParser};
use ruff_options_metadata::{OptionEntry, OptionsMetadata};
use ruff_python_ast as ast;
use ruff_source_file::{LineIndex, OneIndexed, PositionEncoding};
use ruff_text_size::TextRange;
use ruff_workspace::configuration::{Configuration, RuleSelection};
use ruff_workspace::options::{Options, PycodestyleOptions};
use ruff_workspace::options_base::{OptionEntry, OptionsMetadata};
use ruff_workspace::resolver::ConfigurationTransformer;
use rustc_hash::FxHashMap;
use toml;
@@ -1126,10 +1126,10 @@ impl std::fmt::Display for FormatRangeParseError {
write!(
f,
"the start position '{start_invalid}' is greater than the end position '{end_invalid}'.\n {tip} Try switching start and end: '{end}-{start}'",
start_invalid=start.to_string().bold().yellow(),
end_invalid=end.to_string().bold().yellow(),
start=start.to_string().green().bold(),
end=end.to_string().green().bold()
start_invalid = start.to_string().bold().yellow(),
end_invalid = end.to_string().bold().yellow(),
start = start.to_string().green().bold(),
end = end.to_string().green().bold()
)
}
FormatRangeParseError::InvalidStart(inner) => inner.write(f, true),
@@ -1230,30 +1230,36 @@ impl LineColumnParseError {
match self {
LineColumnParseError::ColumnParseError(inner) => {
write!(f, "the {range}s column is not a valid number ({inner})'\n {tip} The format is 'line:column'.")
write!(
f,
"the {range}s column is not a valid number ({inner})'\n {tip} The format is 'line:column'."
)
}
LineColumnParseError::LineParseError(inner) => {
write!(f, "the {range} line is not a valid number ({inner})\n {tip} The format is 'line:column'.")
write!(
f,
"the {range} line is not a valid number ({inner})\n {tip} The format is 'line:column'."
)
}
LineColumnParseError::ZeroColumnIndex { line } => {
write!(
f,
"the {range} column is 0, but it should be 1 or greater.\n {tip} The column numbers start at 1.\n {tip} Try {suggestion} instead.",
suggestion=format!("{line}:1").green().bold()
suggestion = format!("{line}:1").green().bold()
)
}
LineColumnParseError::ZeroLineIndex { column } => {
write!(
f,
"the {range} line is 0, but it should be 1 or greater.\n {tip} The line numbers start at 1.\n {tip} Try {suggestion} instead.",
suggestion=format!("1:{column}").green().bold()
suggestion = format!("1:{column}").green().bold()
)
}
LineColumnParseError::ZeroLineAndColumnIndex => {
write!(
f,
"the {range} line and column are both 0, but they should be 1 or greater.\n {tip} The line and column numbers start at 1.\n {tip} Try {suggestion} instead.",
suggestion="1:1".to_string().green().bold()
suggestion = "1:1".to_string().green().bold()
)
}
}

View File

@@ -3,8 +3,8 @@ use std::fs::{self, File};
use std::hash::Hasher;
use std::io::{self, BufReader, Write};
use std::path::{Path, PathBuf};
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Mutex;
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{Duration, SystemTime};
use anyhow::{Context, Result};
@@ -13,21 +13,21 @@ use itertools::Itertools;
use log::{debug, error};
use rayon::iter::ParallelIterator;
use rayon::iter::{IntoParallelIterator, ParallelBridge};
use ruff_linter::codes::Rule;
use rustc_hash::FxHashMap;
use serde::{Deserialize, Serialize};
use tempfile::NamedTempFile;
use ruff_cache::{CacheKey, CacheKeyHasher};
use ruff_diagnostics::{DiagnosticKind, Fix};
use ruff_linter::message::{DiagnosticMessage, Message};
use ruff_diagnostics::Fix;
use ruff_linter::message::Message;
use ruff_linter::package::PackageRoot;
use ruff_linter::{warn_user, VERSION};
use ruff_linter::{VERSION, warn_user};
use ruff_macros::CacheKey;
use ruff_notebook::NotebookIndex;
use ruff_source_file::SourceFileBuilder;
use ruff_text_size::{TextRange, TextSize};
use ruff_workspace::resolver::Resolver;
use ruff_text_size::{Ranged, TextRange, TextSize};
use ruff_workspace::Settings;
use ruff_workspace::resolver::Resolver;
use crate::diagnostics::Diagnostics;
@@ -117,13 +117,14 @@ impl Cache {
}
};
let mut package: PackageCache = match bincode::deserialize_from(BufReader::new(file)) {
Ok(package) => package,
Err(err) => {
warn_user!("Failed to parse cache file `{}`: {err}", path.display());
return Cache::empty(path, package_root);
}
};
let mut package: PackageCache =
match bincode::decode_from_reader(BufReader::new(file), bincode::config::standard()) {
Ok(package) => package,
Err(err) => {
warn_user!("Failed to parse cache file `{}`: {err}", path.display());
return Cache::empty(path, package_root);
}
};
// Sanity check.
if package.package_root != package_root {
@@ -175,8 +176,8 @@ impl Cache {
// Serialize to in-memory buffer because hyperfine benchmark showed that it's faster than
// using a `BufWriter` and our cache files are small enough that streaming isn't necessary.
let serialized =
bincode::serialize(&self.package).context("Failed to serialize cache data")?;
let serialized = bincode::encode_to_vec(&self.package, bincode::config::standard())
.context("Failed to serialize cache data")?;
temp_file
.write_all(&serialized)
.context("Failed to write serialized cache to temporary file.")?;
@@ -311,7 +312,7 @@ impl Cache {
}
/// On disk representation of a cache of a package.
#[derive(Deserialize, Debug, Serialize)]
#[derive(bincode::Encode, Debug, bincode::Decode)]
struct PackageCache {
/// Path to the root of the package.
///
@@ -323,7 +324,7 @@ struct PackageCache {
}
/// On disk representation of the cache per source file.
#[derive(Deserialize, Debug, Serialize)]
#[derive(bincode::Decode, Debug, bincode::Encode)]
pub(crate) struct FileCache {
/// Key that determines if the cached item is still valid.
key: u64,
@@ -347,14 +348,16 @@ impl FileCache {
lint.messages
.iter()
.map(|msg| {
Message::Diagnostic(DiagnosticMessage {
kind: msg.kind.clone(),
range: msg.range,
fix: msg.fix.clone(),
file: file.clone(),
noqa_offset: msg.noqa_offset,
parent: msg.parent,
})
Message::diagnostic(
msg.rule.into(),
msg.body.clone(),
msg.suggestion.clone(),
msg.range,
msg.fix.clone(),
msg.parent,
file.clone(),
msg.noqa_offset,
)
})
.collect()
};
@@ -368,7 +371,7 @@ impl FileCache {
}
}
#[derive(Debug, Default, Deserialize, Serialize)]
#[derive(Debug, Default, bincode::Decode, bincode::Encode)]
struct FileCacheData {
lint: Option<LintCacheData>,
formatted: bool,
@@ -406,7 +409,7 @@ pub(crate) fn init(path: &Path) -> Result<()> {
Ok(())
}
#[derive(Deserialize, Debug, Serialize, PartialEq)]
#[derive(bincode::Decode, Debug, bincode::Encode, PartialEq)]
pub(crate) struct LintCacheData {
/// Imports made.
// pub(super) imports: ImportMap,
@@ -419,6 +422,7 @@ pub(crate) struct LintCacheData {
/// This will be empty if `messages` is empty.
pub(super) source: String,
/// Notebook index if this file is a Jupyter Notebook.
#[bincode(with_serde)]
pub(super) notebook_index: Option<NotebookIndex>,
}
@@ -435,20 +439,22 @@ impl LintCacheData {
let messages = messages
.iter()
.filter_map(|message| message.as_diagnostic_message())
.map(|msg| {
.filter_map(|msg| msg.to_rule().map(|rule| (rule, msg)))
.map(|(rule, msg)| {
// Make sure that all messages use the same source file.
assert_eq!(
&msg.file,
msg.source_file(),
messages.first().unwrap().source_file(),
"message uses a different source file"
);
CacheMessage {
kind: msg.kind.clone(),
range: msg.range,
rule,
body: msg.body().to_string(),
suggestion: msg.suggestion().map(ToString::to_string),
range: msg.range(),
parent: msg.parent,
fix: msg.fix.clone(),
noqa_offset: msg.noqa_offset,
fix: msg.fix().cloned(),
noqa_offset: msg.noqa_offset(),
}
})
.collect();
@@ -462,14 +468,24 @@ impl LintCacheData {
}
/// On disk representation of a diagnostic message.
#[derive(Deserialize, Debug, Serialize, PartialEq)]
#[derive(bincode::Decode, Debug, bincode::Encode, PartialEq)]
pub(super) struct CacheMessage {
kind: DiagnosticKind,
/// The rule for the cached diagnostic.
#[bincode(with_serde)]
rule: Rule,
/// The message body to display to the user, to explain the diagnostic.
body: String,
/// The message to display to the user, to explain the suggested fix.
suggestion: Option<String>,
/// Range into the message's [`FileCache::source`].
#[bincode(with_serde)]
range: TextRange,
#[bincode(with_serde)]
parent: Option<TextSize>,
#[bincode(with_serde)]
fix: Option<Fix>,
noqa_offset: TextSize,
#[bincode(with_serde)]
noqa_offset: Option<TextSize>,
}
pub(crate) trait PackageCaches {
@@ -587,7 +603,7 @@ mod tests {
use std::time::SystemTime;
use anyhow::Result;
use filetime::{set_file_mtime, FileTime};
use filetime::{FileTime, set_file_mtime};
use itertools::Itertools;
use ruff_linter::settings::LinterSettings;
use test_case::test_case;
@@ -602,8 +618,8 @@ mod tests {
use crate::cache::{self, FileCache, FileCacheData, FileCacheKey};
use crate::cache::{Cache, RelativePathBuf};
use crate::commands::format::{format_path, FormatCommandError, FormatMode, FormatResult};
use crate::diagnostics::{lint_path, Diagnostics};
use crate::commands::format::{FormatCommandError, FormatMode, FormatResult, format_path};
use crate::diagnostics::{Diagnostics, lint_path};
#[test_case("../ruff_linter/resources/test/fixtures", "ruff_tests/cache_same_results_ruff_linter"; "ruff_linter_fixtures")]
#[test_case("../ruff_notebook/resources/test/fixtures", "ruff_tests/cache_same_results_ruff_notebook"; "ruff_notebook_fixtures")]

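The cache diff above moves from bincode 1's serde-based `serialize`/`deserialize_from` calls to bincode 2's own `Encode`/`Decode` derives, bridging fields whose types only implement the serde traits with `#[bincode(with_serde)]`. The following is a minimal, self-contained sketch of that pattern, not the actual cache code: the types (`CacheEntry`, `SerdeOnlyRange`) are hypothetical, and it assumes bincode 2 with the `derive` and `serde` features plus serde with `derive` enabled. It round-trips through `decode_from_slice` for simplicity, whereas the cache code above reads from a `BufReader` via `decode_from_reader`.

```rust
// A minimal sketch of the bincode 1 -> 2 migration pattern shown in the cache diff above.
// Hypothetical types; assumes bincode 2 (features: "derive", "serde") and serde ("derive").
use serde::{Deserialize, Serialize};

// A field type that only implements the serde traits.
#[derive(Debug, PartialEq, Serialize, Deserialize)]
struct SerdeOnlyRange {
    start: u32,
    end: u32,
}

// The container derives bincode's own traits; serde-only fields are bridged explicitly.
#[derive(Debug, PartialEq, bincode::Encode, bincode::Decode)]
struct CacheEntry {
    key: u64,
    #[bincode(with_serde)]
    range: SerdeOnlyRange,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let entry = CacheEntry {
        key: 42,
        range: SerdeOnlyRange { start: 3, end: 9 },
    };
    // bincode 1's `serialize`/`deserialize_from` become config-driven calls in bincode 2.
    let bytes = bincode::encode_to_vec(&entry, bincode::config::standard())?;
    let (decoded, _bytes_read): (CacheEntry, usize) =
        bincode::decode_from_slice(&bytes, bincode::config::standard())?;
    assert_eq!(entry, decoded);
    Ok(())
}
```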
View File

@@ -11,7 +11,7 @@ use ruff_linter::source_kind::SourceKind;
use ruff_linter::warn_user_once;
use ruff_python_ast::{PySourceType, SourceType};
use ruff_workspace::resolver::{
match_exclusion, python_files_in_path, PyprojectConfig, ResolvedFile,
PyprojectConfig, ResolvedFile, match_exclusion, python_files_in_path,
};
use crate::args::ConfigArguments;

View File

@@ -1,6 +1,6 @@
use crate::args::{AnalyzeGraphArgs, ConfigArguments};
use crate::resolve::resolve;
use crate::{resolve_default_files, ExitStatus};
use crate::{ExitStatus, resolve_default_files};
use anyhow::Result;
use log::{debug, warn};
use path_absolutize::CWD;
@@ -9,7 +9,7 @@ use ruff_graph::{Direction, ImportMap, ModuleDb, ModuleImports};
use ruff_linter::package::PackageRoot;
use ruff_linter::{warn_user, warn_user_once};
use ruff_python_ast::{PySourceType, SourceType};
use ruff_workspace::resolver::{match_exclusion, python_files_in_path, ResolvedFile};
use ruff_workspace::resolver::{ResolvedFile, match_exclusion, python_files_in_path};
use rustc_hash::FxHashMap;
use std::io::Write;
use std::path::{Path, PathBuf};

View File

@@ -17,12 +17,12 @@ use ruff_linter::message::Message;
use ruff_linter::package::PackageRoot;
use ruff_linter::registry::Rule;
use ruff_linter::settings::types::UnsafeFixes;
use ruff_linter::settings::{flags, LinterSettings};
use ruff_linter::{fs, warn_user_once, IOError};
use ruff_linter::settings::{LinterSettings, flags};
use ruff_linter::{IOError, fs, warn_user_once};
use ruff_source_file::SourceFileBuilder;
use ruff_text_size::{TextRange, TextSize};
use ruff_text_size::TextRange;
use ruff_workspace::resolver::{
match_exclusion, python_files_in_path, PyprojectConfig, ResolvedFile,
PyprojectConfig, ResolvedFile, match_exclusion, python_files_in_path,
};
use crate::args::ConfigArguments;
@@ -133,7 +133,7 @@ pub(crate) fn check(
vec![Message::from_diagnostic(
Diagnostic::new(IOError { message }, TextRange::default()),
dummy,
TextSize::default(),
None,
)],
FxHashMap::default(),
)
@@ -228,9 +228,9 @@ mod test {
use ruff_linter::message::{Emitter, EmitterContext, TextEmitter};
use ruff_linter::registry::Rule;
use ruff_linter::settings::types::UnsafeFixes;
use ruff_linter::settings::{flags, LinterSettings};
use ruff_workspace::resolver::{PyprojectConfig, PyprojectDiscoveryStrategy};
use ruff_linter::settings::{LinterSettings, flags};
use ruff_workspace::Settings;
use ruff_workspace::resolver::{PyprojectConfig, PyprojectDiscoveryStrategy};
use crate::args::ConfigArguments;

View File

@@ -4,10 +4,10 @@ use anyhow::Result;
use ruff_linter::package::PackageRoot;
use ruff_linter::packaging;
use ruff_linter::settings::flags;
use ruff_workspace::resolver::{match_exclusion, python_file_at_path, PyprojectConfig, Resolver};
use ruff_workspace::resolver::{PyprojectConfig, Resolver, match_exclusion, python_file_at_path};
use crate::args::ConfigArguments;
use crate::diagnostics::{lint_stdin, Diagnostics};
use crate::diagnostics::{Diagnostics, lint_stdin};
use crate::stdin::{parrot_stdin, read_from_stdin};
/// Run the linter over a single file, read from `stdin`.

View File

@@ -2,10 +2,8 @@ use clap::builder::{PossibleValue, TypedValueParser, ValueParserFactory};
use itertools::Itertools;
use std::str::FromStr;
use ruff_workspace::{
options::Options,
options_base::{OptionField, OptionSet, OptionsMetadata, Visit},
};
use ruff_options_metadata::{OptionField, OptionSet, OptionsMetadata, Visit};
use ruff_workspace::options::Options;
#[derive(Default)]
struct CollectOptionsVisitor {

View File

@@ -1,9 +1,9 @@
use anyhow::{anyhow, Result};
use anyhow::{Result, anyhow};
use crate::args::HelpFormat;
use ruff_options_metadata::OptionsMetadata;
use ruff_workspace::options::Options;
use ruff_workspace::options_base::OptionsMetadata;
#[expect(clippy::print_stdout)]
pub(crate) fn config(key: Option<&str>, format: HelpFormat) -> Result<()> {

View File

@@ -1,7 +1,7 @@
use std::fmt::{Display, Formatter};
use std::fs::File;
use std::io;
use std::io::{stderr, stdout, Write};
use std::io::{Write, stderr, stdout};
use std::path::{Path, PathBuf};
use std::time::Instant;
@@ -16,7 +16,7 @@ use rustc_hash::FxHashSet;
use thiserror::Error;
use tracing::debug;
use ruff_db::panic::{catch_unwind, PanicError};
use ruff_db::panic::{PanicError, catch_unwind};
use ruff_diagnostics::SourceMap;
use ruff_linter::fs;
use ruff_linter::logging::{DisplayParseError, LogLevel};
@@ -26,16 +26,16 @@ use ruff_linter::rules::flake8_quotes::settings::Quote;
use ruff_linter::source_kind::{SourceError, SourceKind};
use ruff_linter::warn_user_once;
use ruff_python_ast::{PySourceType, SourceType};
use ruff_python_formatter::{format_module_source, format_range, FormatModuleError, QuoteStyle};
use ruff_python_formatter::{FormatModuleError, QuoteStyle, format_module_source, format_range};
use ruff_source_file::LineIndex;
use ruff_text_size::{TextLen, TextRange, TextSize};
use ruff_workspace::resolver::{match_exclusion, python_files_in_path, ResolvedFile, Resolver};
use ruff_workspace::FormatterSettings;
use ruff_workspace::resolver::{ResolvedFile, Resolver, match_exclusion, python_files_in_path};
use crate::args::{ConfigArguments, FormatArguments, FormatRange};
use crate::cache::{Cache, FileCacheKey, PackageCacheMap, PackageCaches};
use crate::resolve::resolve;
use crate::{resolve_default_files, ExitStatus};
use crate::{ExitStatus, resolve_default_files};
#[derive(Debug, Copy, Clone, is_macro::Is)]
pub(crate) enum FormatMode {
@@ -821,9 +821,14 @@ pub(super) fn warn_incompatible_formatter_settings(resolver: &Resolver) {
.collect();
rule_names.sort();
if let [rule] = rule_names.as_slice() {
warn_user_once!("The following rule may cause conflicts when used with the formatter: {rule}. To avoid unexpected behavior, we recommend disabling this rule, either by removing it from the `select` or `extend-select` configuration, or adding it to the `ignore` configuration.");
warn_user_once!(
"The following rule may cause conflicts when used with the formatter: {rule}. To avoid unexpected behavior, we recommend disabling this rule, either by removing it from the `select` or `extend-select` configuration, or adding it to the `ignore` configuration."
);
} else {
warn_user_once!("The following rules may cause conflicts when used with the formatter: {}. To avoid unexpected behavior, we recommend disabling these rules, either by removing them from the `select` or `extend-select` configuration, or adding them to the `ignore` configuration.", rule_names.join(", "));
warn_user_once!(
"The following rules may cause conflicts when used with the formatter: {}. To avoid unexpected behavior, we recommend disabling these rules, either by removing them from the `select` or `extend-select` configuration, or adding them to the `ignore` configuration.",
rule_names.join(", ")
);
}
}
@@ -833,7 +838,9 @@ pub(super) fn warn_incompatible_formatter_settings(resolver: &Resolver) {
if setting.linter.rules.enabled(Rule::TabIndentation)
&& setting.formatter.indent_style.is_tab()
{
warn_user_once!("The `format.indent-style=\"tab\"` option is incompatible with `W191`, which lints against all uses of tabs. We recommend disabling these rules when using the formatter, which enforces a consistent indentation style. Alternatively, set the `format.indent-style` option to `\"space\"`.");
warn_user_once!(
"The `format.indent-style=\"tab\"` option is incompatible with `W191`, which lints against all uses of tabs. We recommend disabling these rules when using the formatter, which enforces a consistent indentation style. Alternatively, set the `format.indent-style` option to `\"space\"`."
);
}
if !setting
@@ -846,14 +853,18 @@ pub(super) fn warn_incompatible_formatter_settings(resolver: &Resolver) {
.enabled(Rule::MultiLineImplicitStringConcatenation)
&& !setting.linter.flake8_implicit_str_concat.allow_multiline
{
warn_user_once!("The `lint.flake8-implicit-str-concat.allow-multiline = false` option is incompatible with the formatter unless `ISC001` is enabled. We recommend enabling `ISC001` or setting `allow-multiline=true`.");
warn_user_once!(
"The `lint.flake8-implicit-str-concat.allow-multiline = false` option is incompatible with the formatter unless `ISC001` is enabled. We recommend enabling `ISC001` or setting `allow-multiline=true`."
);
}
// Validate all rules that rely on tab styles.
if setting.linter.rules.enabled(Rule::DocstringTabIndentation)
&& setting.formatter.indent_style.is_tab()
{
warn_user_once!("The `format.indent-style=\"tab\"` option is incompatible with `D206`, which requires space-based indentation. We recommend disabling these rules when using the formatter, which enforces a consistent indentation style. Alternatively, set the `format.indent-style` option to `\"space\"`.");
warn_user_once!(
"The `format.indent-style=\"tab\"` option is incompatible with `D206`, which requires space-based indentation. We recommend disabling these rules when using the formatter, which enforces a consistent indentation style. Alternatively, set the `format.indent-style` option to `\"space\"`."
);
}
// Validate all rules that rely on custom indent widths.
@@ -862,7 +873,9 @@ pub(super) fn warn_incompatible_formatter_settings(resolver: &Resolver) {
Rule::IndentationWithInvalidMultipleComment,
]) && setting.formatter.indent_width.value() != 4
{
warn_user_once!("The `format.indent-width` option with a value other than 4 is incompatible with `E111` and `E114`. We recommend disabling these rules when using the formatter, which enforces a consistent indentation width. Alternatively, set the `format.indent-width` option to `4`.");
warn_user_once!(
"The `format.indent-width` option with a value other than 4 is incompatible with `E111` and `E114`. We recommend disabling these rules when using the formatter, which enforces a consistent indentation width. Alternatively, set the `format.indent-width` option to `4`."
);
}
// Validate all rules that rely on quote styles.
@@ -876,10 +889,14 @@ pub(super) fn warn_incompatible_formatter_settings(resolver: &Resolver) {
setting.formatter.quote_style,
) {
(Quote::Double, QuoteStyle::Single) => {
warn_user_once!("The `flake8-quotes.inline-quotes=\"double\"` option is incompatible with the formatter's `format.quote-style=\"single\"`. We recommend disabling `Q000` and `Q003` when using the formatter, which enforces a consistent quote style. Alternatively, set both options to either `\"single\"` or `\"double\"`.");
warn_user_once!(
"The `flake8-quotes.inline-quotes=\"double\"` option is incompatible with the formatter's `format.quote-style=\"single\"`. We recommend disabling `Q000` and `Q003` when using the formatter, which enforces a consistent quote style. Alternatively, set both options to either `\"single\"` or `\"double\"`."
);
}
(Quote::Single, QuoteStyle::Double) => {
warn_user_once!("The `flake8-quotes.inline-quotes=\"single\"` option is incompatible with the formatter's `format.quote-style=\"double\"`. We recommend disabling `Q000` and `Q003` when using the formatter, which enforces a consistent quote style. Alternatively, set both options to either `\"single\"` or `\"double\"`.");
warn_user_once!(
"The `flake8-quotes.inline-quotes=\"single\"` option is incompatible with the formatter's `format.quote-style=\"double\"`. We recommend disabling `Q000` and `Q003` when using the formatter, which enforces a consistent quote style. Alternatively, set both options to either `\"single\"` or `\"double\"`."
);
}
_ => {}
}
@@ -892,7 +909,9 @@ pub(super) fn warn_incompatible_formatter_settings(resolver: &Resolver) {
QuoteStyle::Single | QuoteStyle::Double
)
{
warn_user_once!("The `flake8-quotes.multiline-quotes=\"single\"` option is incompatible with the formatter. We recommend disabling `Q001` when using the formatter, which enforces double quotes for multiline strings. Alternatively, set the `flake8-quotes.multiline-quotes` option to `\"double\"`.`");
warn_user_once!(
"The `flake8-quotes.multiline-quotes=\"single\"` option is incompatible with the formatter. We recommend disabling `Q001` when using the formatter, which enforces double quotes for multiline strings. Alternatively, set the `flake8-quotes.multiline-quotes` option to `\"double\"`.`"
);
}
if setting.linter.rules.enabled(Rule::BadQuotesDocstring)
@@ -902,7 +921,9 @@ pub(super) fn warn_incompatible_formatter_settings(resolver: &Resolver) {
QuoteStyle::Single | QuoteStyle::Double
)
{
warn_user_once!("The `flake8-quotes.docstring-quotes=\"single\"` option is incompatible with the formatter. We recommend disabling `Q002` when using the formatter, which enforces double quotes for docstrings. Alternatively, set the `flake8-quotes.docstring-quotes` option to `\"double\"`.`");
warn_user_once!(
"The `flake8-quotes.docstring-quotes=\"single\"` option is incompatible with the formatter. We recommend disabling `Q002` when using the formatter, which enforces double quotes for docstrings. Alternatively, set the `flake8-quotes.docstring-quotes` option to `\"double\"`.`"
);
}
// Validate all isort settings.
@@ -910,12 +931,16 @@ pub(super) fn warn_incompatible_formatter_settings(resolver: &Resolver) {
// The formatter removes empty lines if the value is larger than 2 but always inserts an empty line after imports.
// Two empty lines are okay because `isort` only uses this setting for top-level imports (not in nested blocks).
if !matches!(setting.linter.isort.lines_after_imports, 1 | 2 | -1) {
warn_user_once!("The isort option `isort.lines-after-imports` with a value other than `-1`, `1` or `2` is incompatible with the formatter. To avoid unexpected behavior, we recommend setting the option to one of: `2`, `1`, or `-1` (default).");
warn_user_once!(
"The isort option `isort.lines-after-imports` with a value other than `-1`, `1` or `2` is incompatible with the formatter. To avoid unexpected behavior, we recommend setting the option to one of: `2`, `1`, or `-1` (default)."
);
}
// Values larger than two get reduced to one line by the formatter if the import is in a nested block.
if setting.linter.isort.lines_between_types > 1 {
warn_user_once!("The isort option `isort.lines-between-types` with a value greater than 1 is incompatible with the formatter. To avoid unexpected behavior, we recommend setting the option to one of: `1` or `0` (default).");
warn_user_once!(
"The isort option `isort.lines-between-types` with a value greater than 1 is incompatible with the formatter. To avoid unexpected behavior, we recommend setting the option to one of: `1` or `0` (default)."
);
}
// isort inserts a trailing comma which the formatter preserves, but only if `skip-magic-trailing-comma` isn't false.
@@ -924,11 +949,15 @@ pub(super) fn warn_incompatible_formatter_settings(resolver: &Resolver) {
&& !setting.linter.isort.force_single_line
{
if setting.linter.isort.force_wrap_aliases {
warn_user_once!("The isort option `isort.force-wrap-aliases` is incompatible with the formatter `format.skip-magic-trailing-comma=true` option. To avoid unexpected behavior, we recommend either setting `isort.force-wrap-aliases=false` or `format.skip-magic-trailing-comma=false`.");
warn_user_once!(
"The isort option `isort.force-wrap-aliases` is incompatible with the formatter `format.skip-magic-trailing-comma=true` option. To avoid unexpected behavior, we recommend either setting `isort.force-wrap-aliases=false` or `format.skip-magic-trailing-comma=false`."
);
}
if setting.linter.isort.split_on_trailing_comma {
warn_user_once!("The isort option `isort.split-on-trailing-comma` is incompatible with the formatter `format.skip-magic-trailing-comma=true` option. To avoid unexpected behavior, we recommend either setting `isort.split-on-trailing-comma=false` or `format.skip-magic-trailing-comma=false`.");
warn_user_once!(
"The isort option `isort.split-on-trailing-comma` is incompatible with the formatter `format.skip-magic-trailing-comma=true` option. To avoid unexpected behavior, we recommend either setting `isort.split-on-trailing-comma=false` or `format.skip-magic-trailing-comma=false`."
);
}
}
}

View File

@@ -6,17 +6,17 @@ use log::error;
use ruff_linter::source_kind::SourceKind;
use ruff_python_ast::{PySourceType, SourceType};
use ruff_workspace::resolver::{match_exclusion, python_file_at_path, Resolver};
use ruff_workspace::FormatterSettings;
use ruff_workspace::resolver::{Resolver, match_exclusion, python_file_at_path};
use crate::ExitStatus;
use crate::args::{ConfigArguments, FormatArguments, FormatRange};
use crate::commands::format::{
format_source, warn_incompatible_formatter_settings, FormatCommandError, FormatMode,
FormatResult, FormattedSource,
FormatCommandError, FormatMode, FormatResult, FormattedSource, format_source,
warn_incompatible_formatter_settings,
};
use crate::resolve::resolve;
use crate::stdin::{parrot_stdin, read_from_stdin};
use crate::ExitStatus;
/// Run the formatter over a single file, read from `stdin`.
pub(crate) fn format_stdin(

View File

@@ -5,7 +5,7 @@ use anyhow::Result;
use itertools::Itertools;
use ruff_linter::warn_user_once;
use ruff_workspace::resolver::{python_files_in_path, PyprojectConfig, ResolvedFile};
use ruff_workspace::resolver::{PyprojectConfig, ResolvedFile, python_files_in_path};
use crate::args::ConfigArguments;

View File

@@ -1,10 +1,10 @@
use std::io::Write;
use std::path::PathBuf;
use anyhow::{bail, Result};
use anyhow::{Result, bail};
use itertools::Itertools;
use ruff_workspace::resolver::{python_files_in_path, PyprojectConfig, ResolvedFile};
use ruff_workspace::resolver::{PyprojectConfig, ResolvedFile, python_files_in_path};
use crate::args::ConfigArguments;

View File

@@ -14,18 +14,18 @@ use rustc_hash::FxHashMap;
use ruff_diagnostics::Diagnostic;
use ruff_linter::codes::Rule;
use ruff_linter::linter::{lint_fix, lint_only, FixTable, FixerResult, LinterResult, ParseSource};
use ruff_linter::message::{Message, SyntaxErrorMessage};
use ruff_linter::linter::{FixTable, FixerResult, LinterResult, ParseSource, lint_fix, lint_only};
use ruff_linter::message::Message;
use ruff_linter::package::PackageRoot;
use ruff_linter::pyproject_toml::lint_pyproject_toml;
use ruff_linter::settings::types::UnsafeFixes;
use ruff_linter::settings::{flags, LinterSettings};
use ruff_linter::settings::{LinterSettings, flags};
use ruff_linter::source_kind::{SourceError, SourceKind};
use ruff_linter::{fs, IOError};
use ruff_linter::{IOError, fs};
use ruff_notebook::{Notebook, NotebookError, NotebookIndex};
use ruff_python_ast::{PySourceType, SourceType, TomlSourceType};
use ruff_source_file::SourceFileBuilder;
use ruff_text_size::{TextRange, TextSize};
use ruff_text_size::TextRange;
use ruff_workspace::Settings;
use crate::cache::{Cache, FileCacheKey, LintCacheData};
@@ -71,7 +71,7 @@ impl Diagnostics {
TextRange::default(),
),
source_file,
TextSize::default(),
None,
)],
FxHashMap::default(),
)
@@ -102,11 +102,7 @@ impl Diagnostics {
let name = path.map_or_else(|| "-".into(), Path::to_string_lossy);
let dummy = SourceFileBuilder::new(name, "").finish();
Self::new(
vec![Message::SyntaxError(SyntaxErrorMessage {
message: err.to_string(),
range: TextRange::default(),
file: dummy,
})],
vec![Message::syntax_error(err, TextRange::default(), dummy)],
FxHashMap::default(),
)
}

View File

@@ -1,7 +1,7 @@
#![allow(clippy::print_stdout)]
use std::fs::File;
use std::io::{self, stdout, BufWriter, Write};
use std::io::{self, BufWriter, Write, stdout};
use std::num::NonZeroUsize;
use std::path::{Path, PathBuf};
use std::process::ExitCode;
@@ -11,10 +11,10 @@ use anyhow::Result;
use clap::CommandFactory;
use colored::Colorize;
use log::warn;
use notify::{recommended_watcher, RecursiveMode, Watcher};
use notify::{RecursiveMode, Watcher, recommended_watcher};
use args::{GlobalConfigArgs, ServerCommand};
use ruff_linter::logging::{set_up_logging, LogLevel};
use ruff_linter::logging::{LogLevel, set_up_logging};
use ruff_linter::settings::flags::FixMode;
use ruff_linter::settings::types::OutputFormat;
use ruff_linter::{fs, warn_user, warn_user_once};
@@ -488,7 +488,7 @@ pub fn check(args: CheckCommand, global_options: GlobalConfigArgs) -> Result<Exi
mod test_file_change_detector {
use std::path::PathBuf;
use crate::{change_detected, ChangeKind};
use crate::{ChangeKind, change_detected};
#[test]
fn detect_correct_file_change() {

View File

@@ -5,7 +5,7 @@ use clap::Parser;
use colored::Colorize;
use ruff::args::Args;
use ruff::{run, ExitStatus};
use ruff::{ExitStatus, run};
#[cfg(target_os = "windows")]
#[global_allocator]
@@ -15,6 +15,7 @@ static GLOBAL: mimalloc::MiMalloc = mimalloc::MiMalloc;
not(target_os = "windows"),
not(target_os = "openbsd"),
not(target_os = "aix"),
not(target_os = "android"),
any(
target_arch = "x86_64",
target_arch = "aarch64",

View File

@@ -1,23 +1,22 @@
use std::cmp::Reverse;
use std::fmt::Display;
use std::hash::Hash;
use std::io::Write;
use anyhow::Result;
use bitflags::bitflags;
use colored::Colorize;
use itertools::{iterate, Itertools};
use itertools::{Itertools, iterate};
use ruff_linter::codes::NoqaCode;
use serde::Serialize;
use ruff_linter::fs::relativize_path;
use ruff_linter::logging::LogLevel;
use ruff_linter::message::{
AzureEmitter, Emitter, EmitterContext, GithubEmitter, GitlabEmitter, GroupedEmitter,
JsonEmitter, JsonLinesEmitter, JunitEmitter, Message, MessageKind, PylintEmitter,
RdjsonEmitter, SarifEmitter, TextEmitter,
JsonEmitter, JsonLinesEmitter, JunitEmitter, Message, PylintEmitter, RdjsonEmitter,
SarifEmitter, TextEmitter,
};
use ruff_linter::notify_user;
use ruff_linter::registry::Rule;
use ruff_linter::settings::flags::{self};
use ruff_linter::settings::types::{OutputFormat, UnsafeFixes};
@@ -37,59 +36,12 @@ bitflags! {
#[derive(Serialize)]
struct ExpandedStatistics {
code: Option<SerializeRuleAsCode>,
name: SerializeMessageKindAsTitle,
code: Option<NoqaCode>,
name: &'static str,
count: usize,
fixable: bool,
}
#[derive(Copy, Clone)]
struct SerializeRuleAsCode(Rule);
impl Serialize for SerializeRuleAsCode {
fn serialize<S>(&self, serializer: S) -> std::result::Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
serializer.serialize_str(&self.0.noqa_code().to_string())
}
}
impl Display for SerializeRuleAsCode {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.0.noqa_code())
}
}
impl From<Rule> for SerializeRuleAsCode {
fn from(rule: Rule) -> Self {
Self(rule)
}
}
struct SerializeMessageKindAsTitle(MessageKind);
impl Serialize for SerializeMessageKindAsTitle {
fn serialize<S>(&self, serializer: S) -> std::result::Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
serializer.serialize_str(self.0.as_str())
}
}
impl Display for SerializeMessageKindAsTitle {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(self.0.as_str())
}
}
impl From<MessageKind> for SerializeMessageKindAsTitle {
fn from(kind: MessageKind) -> Self {
Self(kind)
}
}
pub(crate) struct Printer {
format: OutputFormat,
log_level: LogLevel,
@@ -157,7 +109,8 @@ impl Printer {
} else {
"es"
};
writeln!(writer,
writeln!(
writer,
"{fix_prefix} {} fixable with the `--fix` option ({} hidden fix{es} can be enabled with the `--unsafe-fixes` option).",
fixables.applicable, fixables.inapplicable_unsafe
)?;
@@ -175,7 +128,8 @@ impl Printer {
} else {
"es"
};
writeln!(writer,
writeln!(
writer,
"No fixes available ({} hidden fix{es} can be enabled with the `--unsafe-fixes` option).",
fixables.inapplicable_unsafe
)?;
@@ -205,15 +159,27 @@ impl Printer {
if fixed > 0 {
let s = if fixed == 1 { "" } else { "s" };
if self.fix_mode.is_apply() {
writeln!(writer, "Fixed {fixed} error{s} ({unapplied} additional fix{es} available with `--unsafe-fixes`).")?;
writeln!(
writer,
"Fixed {fixed} error{s} ({unapplied} additional fix{es} available with `--unsafe-fixes`)."
)?;
} else {
writeln!(writer, "Would fix {fixed} error{s} ({unapplied} additional fix{es} available with `--unsafe-fixes`).")?;
writeln!(
writer,
"Would fix {fixed} error{s} ({unapplied} additional fix{es} available with `--unsafe-fixes`)."
)?;
}
} else {
if self.fix_mode.is_apply() {
writeln!(writer, "No errors fixed ({unapplied} fix{es} available with `--unsafe-fixes`).")?;
writeln!(
writer,
"No errors fixed ({unapplied} fix{es} available with `--unsafe-fixes`)."
)?;
} else {
writeln!(writer, "No errors would be fixed ({unapplied} fix{es} available with `--unsafe-fixes`).")?;
writeln!(
writer,
"No errors would be fixed ({unapplied} fix{es} available with `--unsafe-fixes`)."
)?;
}
}
} else {
@@ -336,21 +302,25 @@ impl Printer {
let statistics: Vec<ExpandedStatistics> = diagnostics
.messages
.iter()
.sorted_by_key(|message| (message.rule(), message.fixable()))
.fold(vec![], |mut acc: Vec<(&Message, usize)>, message| {
if let Some((prev_message, count)) = acc.last_mut() {
if prev_message.rule() == message.rule() {
*count += 1;
return acc;
.map(|message| (message.to_noqa_code(), message))
.sorted_by_key(|(code, message)| (*code, message.fixable()))
.fold(
vec![],
|mut acc: Vec<((Option<NoqaCode>, &Message), usize)>, (code, message)| {
if let Some(((prev_code, _prev_message), count)) = acc.last_mut() {
if *prev_code == code {
*count += 1;
return acc;
}
}
}
acc.push((message, 1));
acc
})
acc.push(((code, message), 1));
acc
},
)
.iter()
.map(|&(message, count)| ExpandedStatistics {
code: message.rule().map(std::convert::Into::into),
name: message.kind().into(),
.map(|&((code, message), count)| ExpandedStatistics {
code,
name: message.name(),
count,
fixable: if let Some(fix) = message.fix() {
fix.applies(self.unsafe_fixes.required_applicability())

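The `Printer` statistics change above groups messages by their `NoqaCode` instead of by `Rule`, using the same sort-then-fold run counting. Here is a small stand-alone sketch of that counting pattern with made-up code strings rather than real `Message` values:

```rust
// A stand-alone sketch of the sort-then-fold counting used in the statistics diff above.
// The codes are hypothetical strings, not real `NoqaCode`/`Message` values.
fn count_by_code(mut codes: Vec<&str>) -> Vec<(&str, usize)> {
    codes.sort_unstable();
    codes.into_iter().fold(Vec::new(), |mut acc, code| {
        // If the previous entry has the same code, bump its count instead of pushing.
        if let Some((prev, count)) = acc.last_mut() {
            if *prev == code {
                *count += 1;
                return acc;
            }
        }
        acc.push((code, 1));
        acc
    })
}

fn main() {
    let counts = count_by_code(vec!["F401", "E501", "F401", "F401"]);
    assert_eq!(counts, vec![("E501", 1), ("F401", 3)]);
}
```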
View File

@@ -1,14 +1,14 @@
use std::path::Path;
use anyhow::{bail, Result};
use anyhow::{Result, bail};
use log::debug;
use path_absolutize::path_dedot;
use ruff_workspace::configuration::Configuration;
use ruff_workspace::pyproject::{self, find_fallback_target_version};
use ruff_workspace::resolver::{
resolve_root_settings, ConfigurationOrigin, ConfigurationTransformer, PyprojectConfig,
PyprojectDiscoveryStrategy,
ConfigurationOrigin, ConfigurationTransformer, PyprojectConfig, PyprojectDiscoveryStrategy,
resolve_root_settings,
};
use ruff_python_ast as ast;

View File

@@ -1157,18 +1157,20 @@ include = ["*.ipy"]
#[test]
fn warn_invalid_noqa_with_no_diagnostics() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args(["--isolated"])
.arg("--select")
.arg("F401")
.arg("-")
.pass_stdin(
r#"
assert_cmd_snapshot!(
Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args(["--isolated"])
.arg("--select")
.arg("F401")
.arg("-")
.pass_stdin(
r#"
# ruff: noqa: AAA101
print("Hello world!")
"#
));
)
);
}
#[test]
@@ -4997,30 +4999,34 @@ fn flake8_import_convention_invalid_aliases_config_module_name() -> Result<()> {
#[test]
fn flake8_import_convention_unused_aliased_import() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(r#"lint.isort.required-imports = ["import pandas"]"#)
.args(["--select", "I002,ICN001,F401"])
.args(["--stdin-filename", "test.py"])
.arg("--unsafe-fixes")
.arg("--fix")
.arg("-")
.pass_stdin("1"));
assert_cmd_snapshot!(
Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(r#"lint.isort.required-imports = ["import pandas"]"#)
.args(["--select", "I002,ICN001,F401"])
.args(["--stdin-filename", "test.py"])
.arg("--unsafe-fixes")
.arg("--fix")
.arg("-")
.pass_stdin("1")
);
}
#[test]
fn flake8_import_convention_unused_aliased_import_no_conflict() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(r#"lint.isort.required-imports = ["import pandas as pd"]"#)
.args(["--select", "I002,ICN001,F401"])
.args(["--stdin-filename", "test.py"])
.arg("--unsafe-fixes")
.arg("--fix")
.arg("-")
.pass_stdin("1"));
assert_cmd_snapshot!(
Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(r#"lint.isort.required-imports = ["import pandas as pd"]"#)
.args(["--select", "I002,ICN001,F401"])
.args(["--stdin-filename", "test.py"])
.arg("--unsafe-fixes")
.arg("--fix")
.arg("-")
.pass_stdin("1")
);
}
// See: https://github.com/astral-sh/ruff/issues/16177

View File

@@ -32,7 +32,7 @@
//!
//! The above snippet has been built out of the following structure:
use crate::snippet;
use std::cmp::{max, min, Reverse};
use std::cmp::{Reverse, max, min};
use std::collections::HashMap;
use std::fmt::Display;
use std::ops::Range;
@@ -41,7 +41,7 @@ use std::{cmp, fmt};
use unicode_width::UnicodeWidthStr;
use crate::renderer::styled_buffer::StyledBuffer;
use crate::renderer::{stylesheet::Stylesheet, Margin, Style, DEFAULT_TERM_WIDTH};
use crate::renderer::{DEFAULT_TERM_WIDTH, Margin, Style, stylesheet::Stylesheet};
const ANONYMIZED_LINE_NUM: &str = "LL";
const ERROR_TXT: &str = "error";
@@ -1273,10 +1273,7 @@ fn fold_body(body: Vec<DisplayLine<'_>>) -> Vec<DisplayLine<'_>> {
let inline_marks = lines
.last()
.and_then(|line| {
if let DisplayLine::Source {
ref inline_marks, ..
} = line
{
if let DisplayLine::Source { inline_marks, .. } = line {
let inline_marks = inline_marks.clone();
Some(inline_marks)
} else {

View File

@@ -2,8 +2,8 @@ mod deserialize;
use crate::deserialize::Fixture;
use ruff_annotate_snippets::{Message, Renderer};
use snapbox::data::DataFormat;
use snapbox::Data;
use snapbox::data::DataFormat;
use std::error::Error;
fn main() {

View File

@@ -1,14 +1,14 @@
use std::path::Path;
use ruff_benchmark::criterion::{
criterion_group, criterion_main, BenchmarkId, Criterion, Throughput,
BenchmarkId, Criterion, Throughput, criterion_group, criterion_main,
};
use ruff_benchmark::{
TestCase, LARGE_DATASET, NUMPY_CTYPESLIB, NUMPY_GLOBALS, PYDANTIC_TYPES, UNICODE_PYPINYIN,
LARGE_DATASET, NUMPY_CTYPESLIB, NUMPY_GLOBALS, PYDANTIC_TYPES, TestCase, UNICODE_PYPINYIN,
};
use ruff_python_formatter::{format_module_ast, PreviewMode, PyFormatOptions};
use ruff_python_parser::{parse, Mode, ParseOptions};
use ruff_python_formatter::{PreviewMode, PyFormatOptions, format_module_ast};
use ruff_python_parser::{Mode, ParseOptions, parse};
use ruff_python_trivia::CommentRanges;
#[cfg(target_os = "windows")]

View File

@@ -1,12 +1,12 @@
use ruff_benchmark::criterion;
use criterion::{
criterion_group, criterion_main, measurement::WallTime, BenchmarkId, Criterion, Throughput,
BenchmarkId, Criterion, Throughput, criterion_group, criterion_main, measurement::WallTime,
};
use ruff_benchmark::{
TestCase, LARGE_DATASET, NUMPY_CTYPESLIB, NUMPY_GLOBALS, PYDANTIC_TYPES, UNICODE_PYPINYIN,
LARGE_DATASET, NUMPY_CTYPESLIB, NUMPY_GLOBALS, PYDANTIC_TYPES, TestCase, UNICODE_PYPINYIN,
};
use ruff_python_parser::{lexer, Mode, TokenKind};
use ruff_python_parser::{Mode, TokenKind, lexer};
#[cfg(target_os = "windows")]
#[global_allocator]

View File

@@ -1,18 +1,18 @@
use ruff_benchmark::criterion;
use criterion::{
criterion_group, criterion_main, BenchmarkGroup, BenchmarkId, Criterion, Throughput,
BenchmarkGroup, BenchmarkId, Criterion, Throughput, criterion_group, criterion_main,
};
use ruff_benchmark::{
TestCase, LARGE_DATASET, NUMPY_CTYPESLIB, NUMPY_GLOBALS, PYDANTIC_TYPES, UNICODE_PYPINYIN,
LARGE_DATASET, NUMPY_CTYPESLIB, NUMPY_GLOBALS, PYDANTIC_TYPES, TestCase, UNICODE_PYPINYIN,
};
use ruff_linter::linter::{lint_only, ParseSource};
use ruff_linter::linter::{ParseSource, lint_only};
use ruff_linter::rule_selector::PreviewOptions;
use ruff_linter::settings::rule_table::RuleTable;
use ruff_linter::settings::types::PreviewMode;
use ruff_linter::settings::{flags, LinterSettings};
use ruff_linter::settings::{LinterSettings, flags};
use ruff_linter::source_kind::SourceKind;
use ruff_linter::{registry::Rule, RuleSelector};
use ruff_linter::{RuleSelector, registry::Rule};
use ruff_python_ast::PySourceType;
use ruff_python_parser::parse_module;
@@ -45,8 +45,8 @@ static GLOBAL: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;
target_arch = "powerpc64"
)
))]
#[unsafe(export_name = "_rjem_malloc_conf")]
#[expect(non_upper_case_globals)]
#[export_name = "_rjem_malloc_conf"]
#[expect(unsafe_code)]
pub static _rjem_malloc_conf: &[u8] = b"dirty_decay_ms:-1,muzzy_decay_ms:-1\0";

View File

@@ -1,13 +1,13 @@
use ruff_benchmark::criterion;
use criterion::{
criterion_group, criterion_main, measurement::WallTime, BenchmarkId, Criterion, Throughput,
BenchmarkId, Criterion, Throughput, criterion_group, criterion_main, measurement::WallTime,
};
use ruff_benchmark::{
TestCase, LARGE_DATASET, NUMPY_CTYPESLIB, NUMPY_GLOBALS, PYDANTIC_TYPES, UNICODE_PYPINYIN,
LARGE_DATASET, NUMPY_CTYPESLIB, NUMPY_GLOBALS, PYDANTIC_TYPES, TestCase, UNICODE_PYPINYIN,
};
use ruff_python_ast::statement_visitor::{walk_stmt, StatementVisitor};
use ruff_python_ast::Stmt;
use ruff_python_ast::statement_visitor::{StatementVisitor, walk_stmt};
use ruff_python_parser::parse_module;
#[cfg(target_os = "windows")]

View File

@@ -3,13 +3,13 @@ use ruff_benchmark::criterion;
use std::ops::Range;
use criterion::{criterion_group, criterion_main, BatchSize, Criterion};
use criterion::{BatchSize, Criterion, criterion_group, criterion_main};
use rayon::ThreadPoolBuilder;
use rustc_hash::FxHashSet;
use ruff_benchmark::TestFile;
use ruff_db::diagnostic::{Diagnostic, DiagnosticId, Severity};
use ruff_db::files::{system_path_to_file, File};
use ruff_db::files::{File, system_path_to_file};
use ruff_db::source::source_text;
use ruff_db::system::{MemoryFileSystem, SystemPath, SystemPathBuf, TestSystem};
use ruff_python_ast::PythonVersion;
@@ -59,13 +59,7 @@ type KeyDiagnosticFields = (
Severity,
);
static EXPECTED_TOMLLIB_DIAGNOSTICS: &[KeyDiagnosticFields] = &[(
DiagnosticId::lint("unused-ignore-comment"),
Some("/src/tomllib/_parser.py"),
Some(22299..22333),
"Unused blanket `type: ignore` directive",
Severity::Warning,
)];
static EXPECTED_TOMLLIB_DIAGNOSTICS: &[KeyDiagnosticFields] = &[];
fn tomllib_path(file: &TestFile) -> SystemPathBuf {
SystemPathBuf::from("src").join(file.name())
@@ -203,7 +197,7 @@ fn assert_diagnostics(db: &dyn Db, diagnostics: &[Diagnostic], expected: &[KeyDi
diagnostic.id(),
diagnostic
.primary_span()
.map(|span| span.file())
.map(|span| span.expect_ty_file())
.map(|file| file.path(db).as_str()),
diagnostic
.primary_span()
@@ -307,6 +301,56 @@ fn benchmark_many_string_assignments(criterion: &mut Criterion) {
});
}
fn benchmark_many_tuple_assignments(criterion: &mut Criterion) {
setup_rayon();
criterion.bench_function("ty_micro[many_tuple_assignments]", |b| {
b.iter_batched_ref(
|| {
// This is a micro benchmark, but it is effectively identical to a code sample
// observed in https://github.com/astral-sh/ty/issues/362
setup_micro_case(
r#"
def flag() -> bool:
return True
t = ()
if flag():
t += (1,)
if flag():
t += (2,)
if flag():
t += (3,)
if flag():
t += (4,)
if flag():
t += (5,)
if flag():
t += (6,)
if flag():
t += (7,)
if flag():
t += (8,)
# Perform some kind of operation on the union type
print(1 in t)
"#,
)
},
|case| {
let Case { db, .. } = case;
let result = db.check().unwrap();
assert_eq!(result.len(), 0);
},
BatchSize::SmallInput,
);
});
}
criterion_group!(check_file, benchmark_cold, benchmark_incremental);
criterion_group!(micro, benchmark_many_string_assignments);
criterion_group!(
micro,
benchmark_many_string_assignments,
benchmark_many_tuple_assignments
);
criterion_main!(check_file, micro);
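The benchmark diff above registers the new `benchmark_many_tuple_assignments` function alongside `benchmark_many_string_assignments` in the `micro` group. As a generic illustration of that criterion wiring (hypothetical benchmark names, using `criterion` directly rather than the `ruff_benchmark::criterion` re-export used in the repository):

```rust
// A generic sketch of criterion's group/main wiring, mirroring how the new micro
// benchmark is registered in the diff above. Benchmark names are hypothetical.
use criterion::{Criterion, criterion_group, criterion_main};

fn bench_small_sum(c: &mut Criterion) {
    c.bench_function("micro[small_sum]", |b| b.iter(|| (1..100u64).sum::<u64>()));
}

fn bench_string_repeat(c: &mut Criterion) {
    c.bench_function("micro[string_repeat]", |b| b.iter(|| "x".repeat(64)));
}

// Several benchmark functions can share one group; `criterion_main!` generates `main`.
criterion_group!(micro, bench_small_sum, bench_string_repeat);
criterion_main!(micro);
```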

View File

@@ -2,8 +2,8 @@ use std::borrow::Cow;
use std::collections::{BTreeMap, BTreeSet, HashMap, HashSet};
use std::hash::{Hash, Hasher};
use std::num::{
NonZeroI128, NonZeroI16, NonZeroI32, NonZeroI64, NonZeroI8, NonZeroU128, NonZeroU16,
NonZeroU32, NonZeroU64, NonZeroU8,
NonZeroI8, NonZeroI16, NonZeroI32, NonZeroI64, NonZeroI128, NonZeroU8, NonZeroU16, NonZeroU32,
NonZeroU64, NonZeroU128,
};
use std::path::{Path, PathBuf};

View File

@@ -36,7 +36,6 @@ path-slash = { workspace = true }
thiserror = { workspace = true }
tracing = { workspace = true }
tracing-subscriber = { workspace = true, optional = true }
tracing-tree = { workspace = true, optional = true }
rustc-hash = { workspace = true }
zip = { workspace = true }
@@ -55,4 +54,4 @@ cache = ["ruff_cache"]
os = ["ignore", "dep:etcetera"]
serde = ["dep:serde", "camino/serde1"]
# Exposes testing utilities.
testing = ["tracing-subscriber", "tracing-tree"]
testing = ["tracing-subscriber"]

View File

@@ -1,15 +1,14 @@
use std::{fmt::Formatter, sync::Arc};
use thiserror::Error;
use render::{FileResolver, Input};
use ruff_source_file::{SourceCode, SourceFile};
use ruff_annotate_snippets::Level as AnnotateLevel;
use ruff_text_size::{Ranged, TextRange};
pub use self::render::DisplayDiagnostic;
use crate::files::File;
use crate::Db;
use crate::{Db, files::File};
use self::render::FileResolver;
mod render;
mod stylesheet;
@@ -115,10 +114,9 @@ impl Diagnostic {
/// callers should prefer using this with `write!` instead of `writeln!`.
pub fn display<'a>(
&'a self,
db: &'a dyn Db,
resolver: &'a dyn FileResolver,
config: &'a DisplayDiagnosticConfig,
) -> DisplayDiagnostic<'a> {
let resolver = FileResolver::new(db);
DisplayDiagnostic::new(resolver, config, self)
}
@@ -233,6 +231,16 @@ impl Diagnostic {
self.primary_annotation().map(|ann| ann.tags.as_slice())
}
/// Returns the "primary" span of this diagnostic, panicking if it does not exist.
///
/// This should typically only be used when working with diagnostics in Ruff, where diagnostics
/// are currently required to have a primary span.
///
/// See [`Diagnostic::primary_span`] for more details.
pub fn expect_primary_span(&self) -> Span {
self.primary_span().expect("Expected a primary span")
}
/// Returns a key that can be used to sort two diagnostics into the canonical order
/// in which they should appear when rendered.
pub fn rendering_sort_key<'a>(&'a self, db: &'a dyn Db) -> impl Ord + 'a {
@@ -241,6 +249,25 @@ impl Diagnostic {
diagnostic: self,
}
}
/// Returns all annotations, skipping the first primary annotation.
pub fn secondary_annotations(&self) -> impl Iterator<Item = &Annotation> {
let mut seen_primary = false;
self.inner.annotations.iter().filter(move |ann| {
if seen_primary {
true
} else if ann.is_primary {
seen_primary = true;
false
} else {
true
}
})
}
pub fn sub_diagnostics(&self) -> &[SubDiagnostic] {
&self.inner.subs
}
}
#[derive(Debug, Clone, Eq, PartialEq)]
@@ -267,11 +294,7 @@ impl Ord for RenderingSortKey<'_> {
self.diagnostic.primary_span(),
other.diagnostic.primary_span(),
) {
let order = span1
.file()
.path(self.db)
.as_str()
.cmp(span2.file().path(self.db).as_str());
let order = span1.file().path(&self.db).cmp(span2.file().path(&self.db));
if order.is_ne() {
return order;
}
@@ -367,6 +390,57 @@ impl SubDiagnostic {
pub fn annotate(&mut self, ann: Annotation) {
self.inner.annotations.push(ann);
}
pub fn annotations(&self) -> &[Annotation] {
&self.inner.annotations
}
/// Returns a shared borrow of the "primary" annotation of this diagnostic
/// if one exists.
///
/// When there are multiple primary annotations, then the first one that
/// was added to this diagnostic is returned.
pub fn primary_annotation(&self) -> Option<&Annotation> {
self.inner.annotations.iter().find(|ann| ann.is_primary)
}
/// Introspects this diagnostic and returns what kind of "primary" message
/// it contains for concise formatting.
///
/// When we concisely format diagnostics, we likely want to not only
/// include the primary diagnostic message but also the message attached
/// to the primary annotation. In particular, the primary annotation often
/// contains *essential* information or context for understanding the
/// diagnostic.
///
/// The reason why we don't just always return both the main diagnostic
/// message and the primary annotation message is because this was written
/// in the midst of an incremental migration of ty over to the new
/// diagnostic data model. At time of writing, diagnostics were still
/// constructed in the old model where the main diagnostic message and the
/// primary annotation message were not distinguished from each other. So
/// for now, we carefully return what kind of messages this diagnostic
/// contains. In effect, if this diagnostic has a non-empty main message
/// *and* a non-empty primary annotation message, then the diagnostic is
/// 100% using the new diagnostic data model and we can format things
/// appropriately.
///
/// The type returned implements the `std::fmt::Display` trait. In most
/// cases, just converting it to a string (or printing it) will do what
/// you want.
pub fn concise_message(&self) -> ConciseMessage {
let main = self.inner.message.as_str();
let annotation = self
.primary_annotation()
.and_then(|ann| ann.get_message())
.unwrap_or_default();
match (main.is_empty(), annotation.is_empty()) {
(false, true) => ConciseMessage::MainDiagnostic(main),
(true, false) => ConciseMessage::PrimaryAnnotation(annotation),
(false, false) => ConciseMessage::Both { main, annotation },
(true, true) => ConciseMessage::Empty,
}
}
}
#[derive(Debug, Clone, Eq, PartialEq)]
@@ -489,6 +563,11 @@ impl Annotation {
&self.span
}
/// Sets the span on this annotation.
pub fn set_span(&mut self, span: Span) {
self.span = span;
}
/// Returns the tags associated with this annotation.
pub fn get_tags(&self) -> &[DiagnosticTag] {
&self.tags
@@ -608,62 +687,84 @@ impl DiagnosticId {
code.split_once(':').map(|(_, rest)| rest)
}
/// Returns `true` if this `DiagnosticId` matches the given name.
/// Returns a concise description of this diagnostic ID.
///
/// ## Examples
/// ```
/// use ruff_db::diagnostic::DiagnosticId;
///
/// assert!(DiagnosticId::Io.matches("io"));
/// assert!(DiagnosticId::lint("test").matches("lint:test"));
/// assert!(!DiagnosticId::lint("test").matches("test"));
/// ```
pub fn matches(&self, expected_name: &str) -> bool {
match self.as_str() {
Ok(id) => id == expected_name,
Err(DiagnosticAsStrError::Category { category, name }) => expected_name
.strip_prefix(category)
.and_then(|prefix| prefix.strip_prefix(":"))
.is_some_and(|rest| rest == name),
}
}
pub fn as_str(&self) -> Result<&str, DiagnosticAsStrError> {
Ok(match self {
/// Note that this doesn't include the lint's category. It
/// only includes the lint's name.
pub fn as_str(&self) -> &'static str {
match self {
DiagnosticId::Panic => "panic",
DiagnosticId::Io => "io",
DiagnosticId::InvalidSyntax => "invalid-syntax",
DiagnosticId::Lint(name) => {
return Err(DiagnosticAsStrError::Category {
category: "lint",
name: name.as_str(),
})
}
DiagnosticId::Lint(name) => name.as_str(),
DiagnosticId::RevealedType => "revealed-type",
DiagnosticId::UnknownRule => "unknown-rule",
})
}
}
}
#[derive(Copy, Clone, Debug, Eq, PartialEq, Error)]
pub enum DiagnosticAsStrError {
/// The id can't be converted to a string because it belongs to a sub-category.
#[error("id from a sub-category: {category}:{name}")]
Category {
/// The id's category.
category: &'static str,
/// The diagnostic id in this category.
name: &'static str,
},
pub fn is_invalid_syntax(&self) -> bool {
matches!(self, Self::InvalidSyntax)
}
}
impl std::fmt::Display for DiagnosticId {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self.as_str() {
Ok(name) => f.write_str(name),
Err(DiagnosticAsStrError::Category { category, name }) => {
write!(f, "{category}:{name}")
}
write!(f, "{}", self.as_str())
}
}
/// A unified file representation for both ruff and ty.
///
/// Such a representation is needed for rendering [`Diagnostic`]s that can optionally contain
/// [`Annotation`]s with [`Span`]s that need to refer to the text of a file. However, ty and ruff
/// use very different file types: a `Copy`-able salsa-interned [`File`], and a heavier-weight
/// [`SourceFile`], respectively.
///
/// This enum presents a unified interface to these two types for the sake of creating [`Span`]s and
/// emitting diagnostics from both ty and ruff.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum UnifiedFile {
Ty(File),
Ruff(SourceFile),
}
impl UnifiedFile {
pub fn path<'a>(&'a self, resolver: &'a dyn FileResolver) -> &'a str {
match self {
UnifiedFile::Ty(file) => resolver.path(*file),
UnifiedFile::Ruff(file) => file.name(),
}
}
fn diagnostic_source(&self, resolver: &dyn FileResolver) -> DiagnosticSource {
match self {
UnifiedFile::Ty(file) => DiagnosticSource::Ty(resolver.input(*file)),
UnifiedFile::Ruff(file) => DiagnosticSource::Ruff(file.clone()),
}
}
}
/// A unified wrapper for types that can be converted to a [`SourceCode`].
///
/// As with [`UnifiedFile`], ruff and ty use slightly different representations for source code.
/// [`DiagnosticSource`] wraps both of these and provides the single
/// [`DiagnosticSource::as_source_code`] method to produce a [`SourceCode`] with the appropriate
/// lifetimes.
///
/// See [`UnifiedFile::diagnostic_source`] for a way to obtain a [`DiagnosticSource`] from a file
/// and [`FileResolver`].
#[derive(Clone, Debug)]
enum DiagnosticSource {
Ty(Input),
Ruff(SourceFile),
}
impl DiagnosticSource {
/// Returns this input as a `SourceCode` for convenient querying.
fn as_source_code(&self) -> SourceCode {
match self {
DiagnosticSource::Ty(input) => SourceCode::new(input.text.as_str(), &input.line_index),
DiagnosticSource::Ruff(source) => SourceCode::new(source.source_text(), source.index()),
}
}
}
@@ -675,14 +776,14 @@ impl std::fmt::Display for DiagnosticId {
/// the entire file. For example, when the file should be executable but isn't.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Span {
file: File,
file: UnifiedFile,
range: Option<TextRange>,
}
impl Span {
/// Returns the `File` attached to this `Span`.
pub fn file(&self) -> File {
self.file
/// Returns the `UnifiedFile` attached to this `Span`.
pub fn file(&self) -> &UnifiedFile {
&self.file
}
/// Returns the range, if available, attached to this `Span`.
@@ -703,10 +804,38 @@ impl Span {
pub fn with_optional_range(self, range: Option<TextRange>) -> Span {
Span { range, ..self }
}
/// Returns the [`File`] attached to this [`Span`].
///
/// Panics if the file is a [`UnifiedFile::Ruff`] instead of a [`UnifiedFile::Ty`].
pub fn expect_ty_file(&self) -> File {
match self.file {
UnifiedFile::Ty(file) => file,
UnifiedFile::Ruff(_) => panic!("Expected a ty `File`, found a ruff `SourceFile`"),
}
}
/// Returns the [`SourceFile`] attached to this [`Span`].
///
/// Panics if the file is a [`UnifiedFile::Ty`] instead of a [`UnifiedFile::Ruff`].
pub fn expect_ruff_file(&self) -> &SourceFile {
match &self.file {
UnifiedFile::Ty(_) => panic!("Expected a ruff `SourceFile`, found a ty `File`"),
UnifiedFile::Ruff(file) => file,
}
}
}
impl From<File> for Span {
fn from(file: File) -> Span {
let file = UnifiedFile::Ty(file);
Span { file, range: None }
}
}
impl From<SourceFile> for Span {
fn from(file: SourceFile) -> Self {
let file = UnifiedFile::Ruff(file);
Span { file, range: None }
}
}
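The diagnostic diff above makes `Span` wrap a `UnifiedFile`, so a span can be backed either by ty's salsa-interned `File` or by Ruff's `SourceFile`. Below is a minimal sketch of the Ruff-side usage, assuming the `ruff_db`, `ruff_source_file`, and `ruff_text_size` crates as dependencies and that `Span` is exported from `ruff_db::diagnostic` as the other hunks suggest; it is not code taken from the repository.

```rust
// A minimal sketch of building a Ruff-backed `Span` via the new `UnifiedFile` plumbing.
// Crate paths are assumptions based on the imports shown in the diffs above.
use ruff_db::diagnostic::Span;
use ruff_source_file::SourceFileBuilder;
use ruff_text_size::TextRange;

fn main() {
    // Build an in-memory `SourceFile`, as the CLI does for stdin and error placeholders.
    let file = SourceFileBuilder::new("example.py", "print('hello')\n").finish();

    // `From<SourceFile> for Span` plus `with_optional_range` attach a location.
    let span = Span::from(file).with_optional_range(Some(TextRange::default()));

    // `expect_ruff_file` panics if the span wraps a ty `File` instead of a `SourceFile`.
    assert_eq!(span.expect_ruff_file().name(), "example.py");
}
```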

View File

@@ -7,16 +7,17 @@ use ruff_annotate_snippets::{
use ruff_source_file::{LineIndex, OneIndexed, SourceCode};
use ruff_text_size::{TextRange, TextSize};
use crate::diagnostic::stylesheet::{fmt_styled, DiagnosticStylesheet};
use crate::diagnostic::stylesheet::{DiagnosticStylesheet, fmt_styled};
use crate::{
files::File,
source::{line_index, source_text, SourceText},
system::SystemPath,
Db,
files::File,
source::{SourceText, line_index, source_text},
system::SystemPath,
};
use super::{
Annotation, Diagnostic, DiagnosticFormat, DisplayDiagnosticConfig, Severity, SubDiagnostic,
Annotation, Diagnostic, DiagnosticFormat, DiagnosticSource, DisplayDiagnosticConfig, Severity,
SubDiagnostic,
};
/// A type that implements `std::fmt::Display` for diagnostic rendering.
@@ -30,17 +31,16 @@ use super::{
/// values. When using Salsa, this most commonly corresponds to the lifetime
/// of a Salsa `Db`.
/// * The lifetime of the diagnostic being rendered.
#[derive(Debug)]
pub struct DisplayDiagnostic<'a> {
config: &'a DisplayDiagnosticConfig,
resolver: FileResolver<'a>,
resolver: &'a dyn FileResolver,
annotate_renderer: AnnotateRenderer,
diag: &'a Diagnostic,
}
impl<'a> DisplayDiagnostic<'a> {
pub(crate) fn new(
resolver: FileResolver<'a>,
resolver: &'a dyn FileResolver,
config: &'a DisplayDiagnosticConfig,
diag: &'a Diagnostic,
) -> DisplayDiagnostic<'a> {
@@ -86,11 +86,13 @@ impl std::fmt::Display for DisplayDiagnostic<'_> {
write!(
f,
" {path}",
path = fmt_styled(self.resolver.path(span.file()), stylesheet.emphasis)
path = fmt_styled(span.file().path(self.resolver), stylesheet.emphasis)
)?;
if let Some(range) = span.range() {
let input = self.resolver.input(span.file());
let start = input.as_source_code().line_column(range.start());
let diagnostic_source = span.file().diagnostic_source(self.resolver);
let start = diagnostic_source
.as_source_code()
.line_column(range.start());
write!(
f,
@@ -115,7 +117,7 @@ impl std::fmt::Display for DisplayDiagnostic<'_> {
.emphasis(stylesheet.emphasis)
.none(stylesheet.none);
let resolved = Resolved::new(&self.resolver, self.diag);
let resolved = Resolved::new(self.resolver, self.diag);
let renderable = resolved.to_renderable(self.config.context);
for diag in renderable.diagnostics.iter() {
writeln!(f, "{}", renderer.render(diag.to_annotate()))?;
@@ -138,26 +140,23 @@ impl std::fmt::Display for DisplayDiagnostic<'_> {
/// both.)
#[derive(Debug)]
struct Resolved<'a> {
id: String,
diagnostics: Vec<ResolvedDiagnostic<'a>>,
}
impl<'a> Resolved<'a> {
/// Creates a new resolved set of diagnostics.
fn new(resolver: &FileResolver<'a>, diag: &'a Diagnostic) -> Resolved<'a> {
fn new(resolver: &'a dyn FileResolver, diag: &'a Diagnostic) -> Resolved<'a> {
let mut diagnostics = vec![];
diagnostics.push(ResolvedDiagnostic::from_diagnostic(resolver, diag));
for sub in &diag.inner.subs {
diagnostics.push(ResolvedDiagnostic::from_sub_diagnostic(resolver, sub));
}
let id = diag.inner.id.to_string();
Resolved { id, diagnostics }
Resolved { diagnostics }
}
/// Creates a value that is amenable to rendering directly.
fn to_renderable(&self, context: usize) -> Renderable<'_> {
Renderable {
id: &self.id,
diagnostics: self
.diagnostics
.iter()
@@ -175,6 +174,7 @@ impl<'a> Resolved<'a> {
#[derive(Debug)]
struct ResolvedDiagnostic<'a> {
severity: Severity,
id: Option<String>,
message: String,
annotations: Vec<ResolvedAnnotation<'a>>,
}
@@ -182,7 +182,7 @@ struct ResolvedDiagnostic<'a> {
impl<'a> ResolvedDiagnostic<'a> {
/// Resolve a single diagnostic.
fn from_diagnostic(
resolver: &FileResolver<'a>,
resolver: &'a dyn FileResolver,
diag: &'a Diagnostic,
) -> ResolvedDiagnostic<'a> {
let annotations: Vec<_> = diag
@@ -190,25 +190,16 @@ impl<'a> ResolvedDiagnostic<'a> {
.annotations
.iter()
.filter_map(|ann| {
let path = resolver.path(ann.span.file);
let input = resolver.input(ann.span.file);
ResolvedAnnotation::new(path, &input, ann)
let path = ann.span.file.path(resolver);
let diagnostic_source = ann.span.file.diagnostic_source(resolver);
ResolvedAnnotation::new(path, &diagnostic_source, ann)
})
.collect();
let message = if diag.inner.message.as_str().is_empty() {
diag.inner.id.to_string()
} else {
// TODO: See the comment on `Renderable::id` for
// a plausible better idea than smushing the ID
// into the diagnostic message.
format!(
"{id}: {message}",
id = diag.inner.id,
message = diag.inner.message.as_str(),
)
};
let id = Some(diag.inner.id.to_string());
let message = diag.inner.message.as_str().to_string();
ResolvedDiagnostic {
severity: diag.inner.severity,
id,
message,
annotations,
}
@@ -216,7 +207,7 @@ impl<'a> ResolvedDiagnostic<'a> {
/// Resolve a single sub-diagnostic.
fn from_sub_diagnostic(
resolver: &FileResolver<'a>,
resolver: &'a dyn FileResolver,
diag: &'a SubDiagnostic,
) -> ResolvedDiagnostic<'a> {
let annotations: Vec<_> = diag
@@ -224,13 +215,14 @@ impl<'a> ResolvedDiagnostic<'a> {
.annotations
.iter()
.filter_map(|ann| {
let path = resolver.path(ann.span.file);
let input = resolver.input(ann.span.file);
ResolvedAnnotation::new(path, &input, ann)
let path = ann.span.file.path(resolver);
let diagnostic_source = ann.span.file.diagnostic_source(resolver);
ResolvedAnnotation::new(path, &diagnostic_source, ann)
})
.collect();
ResolvedDiagnostic {
severity: diag.inner.severity,
id: None,
message: diag.inner.message.as_str().to_string(),
annotations,
}
@@ -259,10 +251,18 @@ impl<'a> ResolvedDiagnostic<'a> {
continue;
};
let prev_context_ends =
context_after(&prev.input.as_source_code(), context, prev.line_end).get();
let this_context_begins =
context_before(&ann.input.as_source_code(), context, ann.line_start).get();
let prev_context_ends = context_after(
&prev.diagnostic_source.as_source_code(),
context,
prev.line_end,
)
.get();
let this_context_begins = context_before(
&ann.diagnostic_source.as_source_code(),
context,
ann.line_start,
)
.get();
// The boundary case here is when `prev_context_ends`
// is exactly one less than `this_context_begins`. In
// that case, the context windows are adjacent and we
@@ -289,6 +289,7 @@ impl<'a> ResolvedDiagnostic<'a> {
.sort_by(|snips1, snips2| snips1.has_primary.cmp(&snips2.has_primary).reverse());
RenderableDiagnostic {
severity: self.severity,
id: self.id.as_deref(),
message: &self.message,
snippets_by_input,
}
@@ -304,7 +305,7 @@ impl<'a> ResolvedDiagnostic<'a> {
#[derive(Debug)]
struct ResolvedAnnotation<'a> {
path: &'a str,
input: Input,
diagnostic_source: DiagnosticSource,
range: TextRange,
line_start: OneIndexed,
line_end: OneIndexed,
@@ -318,8 +319,12 @@ impl<'a> ResolvedAnnotation<'a> {
/// `path` is the path of the file that this annotation points to.
///
/// `diagnostic_source` is the contents of the file that this annotation points to.
fn new(path: &'a str, input: &Input, ann: &'a Annotation) -> Option<ResolvedAnnotation<'a>> {
let source = input.as_source_code();
fn new(
path: &'a str,
diagnostic_source: &DiagnosticSource,
ann: &'a Annotation,
) -> Option<ResolvedAnnotation<'a>> {
let source = diagnostic_source.as_source_code();
let (range, line_start, line_end) = match (ann.span.range(), ann.message.is_some()) {
// An annotation with no range AND no message is probably(?)
// meaningless, but we should try to render it anyway.
@@ -345,7 +350,7 @@ impl<'a> ResolvedAnnotation<'a> {
};
Some(ResolvedAnnotation {
path,
input: input.clone(),
diagnostic_source: diagnostic_source.clone(),
range,
line_start,
line_end,
@@ -364,20 +369,6 @@ impl<'a> ResolvedAnnotation<'a> {
/// renderable value. This is usually the lifetime of `Resolved`.
#[derive(Debug)]
struct Renderable<'r> {
// TODO: This is currently unused in the rendering logic below. I'm not
// 100% sure yet where I want to put it, but I like what `rustc` does:
//
// error[E0599]: no method named `sub_builder` <..snip..>
//
// I believe in order to do this, we'll need to patch it in to
// `ruff_annotate_snippets` though. We leave it here for now with that plan
// in mind.
//
// (At time of writing, 2025-03-13, we currently render the diagnostic
// ID into the main message of the parent diagnostic. We don't use this
// specific field to do that though.)
#[expect(dead_code)]
id: &'r str,
diagnostics: Vec<RenderableDiagnostic<'r>>,
}
@@ -386,6 +377,12 @@ struct Renderable<'r> {
struct RenderableDiagnostic<'r> {
/// The severity of the diagnostic.
severity: Severity,
/// The ID of the diagnostic. The ID can usually be used on the CLI or in a
/// config file to change the severity of a lint.
///
/// An ID is always present for top-level diagnostics and always absent for
/// sub-diagnostics.
id: Option<&'r str>,
/// The message emitted with the diagnostic, before any snippets are
/// rendered.
message: &'r str,
@@ -406,7 +403,11 @@ impl RenderableDiagnostic<'_> {
.iter()
.map(|snippet| snippet.to_annotate(path))
});
level.title(self.message).snippets(snippets)
let mut message = level.title(self.message);
if let Some(id) = self.id {
message = message.id(id);
}
message.snippets(snippets)
}
}
@@ -510,8 +511,8 @@ impl<'r> RenderableSnippet<'r> {
!anns.is_empty(),
"creating a renderable snippet requires a non-zero number of annotations",
);
let input = &anns[0].input;
let source = input.as_source_code();
let diagnostic_source = &anns[0].diagnostic_source;
let source = diagnostic_source.as_source_code();
let has_primary = anns.iter().any(|ann| ann.is_primary);
let line_start = context_before(
@@ -527,7 +528,7 @@ impl<'r> RenderableSnippet<'r> {
let snippet_start = source.line_start(line_start);
let snippet_end = source.line_end(line_end);
let snippet = input
let snippet = diagnostic_source
.as_source_code()
.slice(TextRange::new(snippet_start, snippet_end));
@@ -613,7 +614,7 @@ impl<'r> RenderableAnnotation<'r> {
}
}
/// A type that facilitates the retrieval of source code from a `Span`.
/// A trait that facilitates the retrieval of source code from a `Span`.
///
/// At present, this is tightly coupled with a Salsa database. In the future,
/// it is intended for this resolver to become an abstraction providing a
@@ -628,36 +629,24 @@ impl<'r> RenderableAnnotation<'r> {
/// callers will need to pass in a different "resolver" for turning `Span`s
/// into actual file paths/contents. The infrastructure for this isn't fully in
/// place, but this trait serves to demarcate the intended abstraction boundary.
pub(crate) struct FileResolver<'a> {
db: &'a dyn Db,
}
impl<'a> FileResolver<'a> {
/// Creates a new resolver from a Salsa database.
pub(crate) fn new(db: &'a dyn Db) -> FileResolver<'a> {
FileResolver { db }
}
pub trait FileResolver {
/// Returns the path associated with the file given.
fn path(&self, file: File) -> &'a str {
relativize_path(
self.db.system().current_directory(),
file.path(self.db).as_str(),
)
}
fn path(&self, file: File) -> &str;
/// Returns the input contents associated with the file given.
fn input(&self, file: File) -> Input {
Input {
text: source_text(self.db, file),
line_index: line_index(self.db, file),
}
}
fn input(&self, file: File) -> Input;
}
impl std::fmt::Debug for FileResolver<'_> {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(f, "<salsa based file resolver>")
impl FileResolver for &dyn Db {
fn path(&self, file: File) -> &str {
relativize_path(self.system().current_directory(), file.path(*self).as_str())
}
fn input(&self, file: File) -> Input {
Input {
text: source_text(*self, file),
line_index: line_index(*self, file),
}
}
}
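// Illustrative sketch (an assumption, not part of this change): the docs above
// anticipate non-Salsa callers providing their own resolver. A minimal one could
// serve paths and contents from preloaded entries; `StaticResolver` is
// hypothetical, and because `Input`'s fields are `pub(crate)`, populating it is
// only possible inside this crate.
struct StaticResolver {
    entries: Vec<(File, String, Input)>,
}
impl FileResolver for StaticResolver {
    fn path(&self, file: File) -> &str {
        self.entries
            .iter()
            .find(|(f, _, _)| *f == file)
            .map(|(_, path, _)| path.as_str())
            .unwrap_or("<unknown>")
    }
    fn input(&self, file: File) -> Input {
        self.entries
            .iter()
            .find(|(f, _, _)| *f == file)
            .map(|(_, _, input)| input.clone())
            .expect("file to be registered with the resolver")
    }
}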
@@ -667,16 +656,9 @@ impl std::fmt::Debug for FileResolver<'_> {
/// This contains the actual content of that input as well as a
/// line index for efficiently querying its contents.
#[derive(Clone, Debug)]
struct Input {
text: SourceText,
line_index: LineIndex,
}
impl Input {
/// Returns this input as a `SourceCode` for convenient querying.
fn as_source_code(&self) -> SourceCode<'_, '_> {
SourceCode::new(self.text.as_str(), &self.line_index)
}
pub struct Input {
pub(crate) text: SourceText,
pub(crate) line_index: LineIndex,
}
/// Returns the line number accounting for the given `len`
@@ -726,6 +708,7 @@ fn relativize_path<'p>(cwd: &SystemPath, path: &'p str) -> &'p str {
#[cfg(test)]
mod tests {
use crate::Upcast;
use crate::diagnostic::{Annotation, DiagnosticId, Severity, Span};
use crate::files::system_path_to_file;
use crate::system::{DbWithWritableSystem, SystemPath};
@@ -803,7 +786,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
3 | canary
@@ -827,7 +810,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
warning: lint:test-diagnostic: main diagnostic message
warning[test-diagnostic]: main diagnostic message
--> animals:5:1
|
3 | canary
@@ -847,7 +830,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
info: lint:test-diagnostic: main diagnostic message
info[test-diagnostic]: main diagnostic message
--> animals:5:1
|
3 | canary
@@ -874,7 +857,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
1 | aardvark
@@ -893,7 +876,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
1 | aardvark
@@ -914,7 +897,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> non-ascii:5:1
|
3 | ΔΔΔΔΔΔΔΔΔΔΔΔ
@@ -933,7 +916,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> non-ascii:2:2
|
1 | ☃☃☃☃☃☃☃☃☃☃☃☃
@@ -957,7 +940,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
4 | dog
@@ -974,7 +957,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
5 | elephant
@@ -989,7 +972,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
1 | aardvark
@@ -1006,7 +989,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:11:1
|
9 | inchworm
@@ -1023,7 +1006,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
1 | aardvark
@@ -1056,7 +1039,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
1 | aardvark
@@ -1100,7 +1083,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
1 | aardvark
@@ -1125,7 +1108,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
1 | aardvark
@@ -1153,7 +1136,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
1 | aardvark
@@ -1181,7 +1164,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
1 | aardvark
@@ -1206,7 +1189,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
1 | aardvark
@@ -1237,7 +1220,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:1:1
|
1 | aardvark
@@ -1275,7 +1258,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> spacey-animals:8:1
|
7 | dog
@@ -1292,7 +1275,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> spacey-animals:12:1
|
11 | gorilla
@@ -1310,7 +1293,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> spacey-animals:13:1
|
11 | gorilla
@@ -1350,7 +1333,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> spacey-animals:3:1
|
3 | beetle
@@ -1379,7 +1362,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:3:1
|
1 | aardvark
@@ -1416,7 +1399,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:3:1
|
1 | aardvark
@@ -1453,7 +1436,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:3:1
|
1 | aardvark
@@ -1481,7 +1464,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:3:1
|
1 | aardvark
@@ -1517,7 +1500,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:3:1
|
1 | aardvark
@@ -1556,7 +1539,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:3:1
|
1 | aardvark
@@ -1604,7 +1587,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:3:1
|
1 | aardvark
@@ -1640,7 +1623,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
3 | canary
@@ -1663,7 +1646,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
3 | canary
@@ -1683,7 +1666,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
3 | canary
@@ -1703,7 +1686,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:5:4
|
3 | canary
@@ -1725,7 +1708,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:5:4
|
3 | canary
@@ -1757,7 +1740,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:4:1
|
2 | beetle
@@ -1786,7 +1769,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:4:1
|
2 | beetle
@@ -1817,7 +1800,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
3 | canary
@@ -1852,7 +1835,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
3 | canary
@@ -1880,7 +1863,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
3 | canary
@@ -1912,7 +1895,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:5:3
|
3 | canary
@@ -1934,7 +1917,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:5:3
|
3 | canary
@@ -1967,7 +1950,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:8:1
|
6 | finch
@@ -2007,7 +1990,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:5:1
|
5 | elephant
@@ -2051,7 +2034,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> fruits:1:1
|
1 | apple
@@ -2086,7 +2069,7 @@ watermelon
insta::assert_snapshot!(
env.render(&diag),
@r"
error: lint:test-diagnostic: main diagnostic message
error[test-diagnostic]: main diagnostic message
--> animals:11:1
|
11 | kangaroo
@@ -2174,8 +2157,9 @@ watermelon
fn span(&self, path: &str, line_offset_start: &str, line_offset_end: &str) -> Span {
let span = self.path(path);
let text = source_text(&self.db, span.file());
let line_index = line_index(&self.db, span.file());
let file = span.expect_ty_file();
let text = source_text(&self.db, file);
let line_index = line_index(&self.db, file);
let source = SourceCode::new(text.as_str(), &line_index);
let (line_start, offset_start) = parse_line_offset(line_offset_start);
@@ -2237,7 +2221,7 @@ watermelon
///
/// (This will set the "printed" flag on `Diagnostic`.)
fn render(&self, diag: &Diagnostic) -> String {
diag.display(&self.db, &self.config).to_string()
diag.display(&self.db.upcast(), &self.config).to_string()
}
}

View File

@@ -11,12 +11,13 @@ use ruff_text_size::{Ranged, TextRange};
use salsa::plumbing::AsId;
use salsa::{Durability, Setter};
use crate::diagnostic::{Span, UnifiedFile};
use crate::file_revision::FileRevision;
use crate::files::file_root::FileRoots;
use crate::files::private::FileStatus;
use crate::system::{SystemPath, SystemPathBuf, SystemVirtualPath, SystemVirtualPathBuf};
use crate::vendored::{VendoredPath, VendoredPathBuf};
use crate::{vendored, Db, FxDashMap};
use crate::{Db, FxDashMap, vendored};
mod file_root;
mod path;
@@ -277,7 +278,7 @@ impl std::panic::RefUnwindSafe for Files {}
#[salsa::input]
pub struct File {
/// The path of the file (immutable).
#[return_ref]
#[returns(ref)]
pub path: FilePath,
/// The unix permissions of the file. Only supported on unix systems. Always `None` on Windows
@@ -549,10 +550,33 @@ impl Ranged for FileRange {
}
}
impl TryFrom<&Span> for FileRange {
type Error = ();
fn try_from(value: &Span) -> Result<Self, Self::Error> {
let UnifiedFile::Ty(file) = value.file() else {
return Err(());
};
Ok(Self {
file: *file,
range: value.range().ok_or(())?,
})
}
}
impl TryFrom<Span> for FileRange {
type Error = ();
fn try_from(value: Span) -> Result<Self, Self::Error> {
Self::try_from(&value)
}
}
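// Illustrative sketch (not part of this change): recovering a ty `FileRange`
// from a `Span`. `Err(())` covers both a ruff-backed span and a missing range,
// so `ok()` collapses the two failure cases. `file_range_of` is hypothetical.
fn file_range_of(span: &Span) -> Option<FileRange> {
    FileRange::try_from(span).ok()
}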
#[cfg(test)]
mod tests {
use crate::file_revision::FileRevision;
use crate::files::{system_path_to_file, vendored_path_to_file, FileError};
use crate::files::{FileError, system_path_to_file, vendored_path_to_file};
use crate::system::DbWithWritableSystem as _;
use crate::tests::TestDb;
use crate::vendored::VendoredFileSystemBuilder;

View File

@@ -3,9 +3,9 @@ use std::fmt::Formatter;
use path_slash::PathExt;
use salsa::Durability;
use crate::Db;
use crate::file_revision::FileRevision;
use crate::system::{SystemPath, SystemPathBuf};
use crate::Db;
/// A root path for files tracked by the database.
///
@@ -19,8 +19,8 @@ use crate::Db;
#[salsa::input(debug)]
pub struct FileRoot {
/// The path of a root is guaranteed to never change.
#[return_ref]
path_buf: SystemPathBuf,
#[returns(deref)]
pub path: SystemPathBuf,
/// The kind of the root at the time of its creation.
kind_at_time_of_creation: FileRootKind,
@@ -32,10 +32,6 @@ pub struct FileRoot {
}
impl FileRoot {
pub fn path(self, db: &dyn Db) -> &SystemPath {
self.path_buf(db)
}
pub fn durability(self, db: &dyn Db) -> salsa::Durability {
self.kind_at_time_of_creation(db).durability()
}

View File

@@ -1,7 +1,7 @@
use crate::files::{system_path_to_file, vendored_path_to_file, File};
use crate::Db;
use crate::files::{File, system_path_to_file, vendored_path_to_file};
use crate::system::{SystemPath, SystemPathBuf, SystemVirtualPath, SystemVirtualPathBuf};
use crate::vendored::{VendoredPath, VendoredPathBuf};
use crate::Db;
use std::fmt::{Display, Formatter};
/// Path to a file.

View File

@@ -67,7 +67,7 @@ mod tests {
use crate::system::TestSystem;
use crate::system::{DbWithTestSystem, System};
use crate::vendored::VendoredFileSystem;
use crate::Db;
use crate::{Db, Upcast};
type Events = Arc<Mutex<Vec<salsa::Event>>>;
@@ -136,7 +136,16 @@ mod tests {
}
fn python_version(&self) -> ruff_python_ast::PythonVersion {
ruff_python_ast::PythonVersion::latest()
ruff_python_ast::PythonVersion::latest_ty()
}
}
impl Upcast<dyn Db> for TestDb {
fn upcast(&self) -> &(dyn Db + 'static) {
self
}
fn upcast_mut(&mut self) -> &mut (dyn Db + 'static) {
self
}
}

View File

@@ -3,11 +3,11 @@ use std::ops::Deref;
use std::sync::Arc;
use ruff_python_ast::ModModule;
use ruff_python_parser::{parse_unchecked, ParseOptions, Parsed};
use ruff_python_parser::{ParseOptions, Parsed, parse_unchecked};
use crate::Db;
use crate::files::File;
use crate::source::source_text;
use crate::Db;
/// Returns the parsed AST of `file`, including its token stream.
///
@@ -20,7 +20,7 @@ use crate::Db;
/// reflected in the changed AST offsets.
/// The other reason is that Ruff's AST doesn't implement `Eq`, which Salsa requires
/// for determining if a query result is unchanged.
#[salsa::tracked(return_ref, no_eq)]
#[salsa::tracked(returns(ref), no_eq)]
pub fn parsed_module(db: &dyn Db, file: File) -> ParsedModule {
let _span = tracing::trace_span!("parsed_module", ?file).entered();
@@ -79,6 +79,7 @@ impl Eq for ParsedModule {}
#[cfg(test)]
mod tests {
use crate::Db;
use crate::files::{system_path_to_file, vendored_path_to_file};
use crate::parsed::parsed_module;
use crate::system::{
@@ -86,7 +87,6 @@ mod tests {
};
use crate::tests::TestDb;
use crate::vendored::{VendoredFileSystemBuilder, VendoredPath};
use crate::Db;
use zip::CompressionMethod;
#[test]

View File

@@ -7,8 +7,8 @@ use ruff_notebook::Notebook;
use ruff_python_ast::PySourceType;
use ruff_source_file::LineIndex;
use crate::files::{File, FilePath};
use crate::Db;
use crate::files::{File, FilePath};
/// Reads the source text of a python text file (must be valid UTF8) or notebook.
#[salsa::tracked]
@@ -133,7 +133,7 @@ struct SourceTextInner {
#[derive(Eq, PartialEq)]
enum SourceTextKind {
Text(String),
Notebook(Notebook),
Notebook(Box<Notebook>),
}
impl From<String> for SourceTextKind {
@@ -144,7 +144,7 @@ impl From<String> for SourceTextKind {
impl From<Notebook> for SourceTextKind {
fn from(notebook: Notebook) -> Self {
SourceTextKind::Notebook(notebook)
SourceTextKind::Notebook(Box::new(notebook))
}
}
@@ -216,9 +216,11 @@ mod tests {
let events = db.take_salsa_events();
assert!(!events
.iter()
.any(|event| matches!(event.kind, EventKind::WillExecute { .. })));
assert!(
!events
.iter()
.any(|event| matches!(event.kind, EventKind::WillExecute { .. }))
);
Ok(())
}

View File

@@ -19,8 +19,8 @@ use walk_directory::WalkDirectoryBuilder;
use crate::file_revision::FileRevision;
pub use self::path::{
deduplicate_nested_paths, DeduplicatedNestedPathsIter, SystemPath, SystemPathBuf,
SystemVirtualPath, SystemVirtualPathBuf,
DeduplicatedNestedPathsIter, SystemPath, SystemPathBuf, SystemVirtualPath,
SystemVirtualPathBuf, deduplicate_nested_paths,
};
mod memory_fs;
@@ -167,7 +167,7 @@ pub trait System: Debug {
&self,
pattern: &str,
) -> std::result::Result<
Box<dyn Iterator<Item = std::result::Result<SystemPathBuf, GlobError>>>,
Box<dyn Iterator<Item = std::result::Result<SystemPathBuf, GlobError>> + '_>,
PatternError,
>;

View File

@@ -7,8 +7,8 @@ use filetime::FileTime;
use rustc_hash::FxHashMap;
use crate::system::{
file_time_now, walk_directory, DirectoryEntry, FileType, GlobError, GlobErrorKind, Metadata,
Result, SystemPath, SystemPathBuf, SystemVirtualPath, SystemVirtualPathBuf,
DirectoryEntry, FileType, GlobError, GlobErrorKind, Metadata, Result, SystemPath,
SystemPathBuf, SystemVirtualPath, SystemVirtualPathBuf, file_time_now, walk_directory,
};
use super::walk_directory::{
@@ -236,7 +236,7 @@ impl MemoryFileSystem {
&self,
pattern: &str,
) -> std::result::Result<
impl Iterator<Item = std::result::Result<SystemPathBuf, GlobError>>,
impl Iterator<Item = std::result::Result<SystemPathBuf, GlobError>> + '_,
glob::PatternError,
> {
// Very naive implementation that iterates over all files and collects all that match the given pattern.
@@ -463,17 +463,17 @@ fn not_found() -> std::io::Error {
fn is_a_directory() -> std::io::Error {
// Note: Rust returns `ErrorKind::IsADirectory` for this error but this is a nightly only variant :(.
// So we have to use other for now.
std::io::Error::new(std::io::ErrorKind::Other, "Is a directory")
std::io::Error::other("Is a directory")
}
fn not_a_directory() -> std::io::Error {
// Note: Rust returns `ErrorKind::NotADirectory` for this error but this is a nightly only variant :(.
// So we have to use `Other` for now.
std::io::Error::new(std::io::ErrorKind::Other, "Not a directory")
std::io::Error::other("Not a directory")
}
fn directory_not_empty() -> std::io::Error {
std::io::Error::new(std::io::ErrorKind::Other, "directory not empty")
std::io::Error::other("directory not empty")
}
fn create_dir_all(
@@ -701,8 +701,8 @@ mod tests {
use std::time::Duration;
use crate::system::walk_directory::tests::DirectoryEntryToString;
use crate::system::walk_directory::WalkState;
use crate::system::walk_directory::tests::DirectoryEntryToString;
use crate::system::{
DirectoryEntry, FileType, MemoryFileSystem, Result, SystemPath, SystemPathBuf,
SystemVirtualPath,

View File

@@ -256,7 +256,9 @@ impl OsSystem {
let Ok(canonicalized) = SystemPathBuf::from_path_buf(canonicalized) else {
// The original path is valid UTF8 but the canonicalized path isn't. This definitely suggests
// that a symlink is involved. Fall back to the slow path.
tracing::debug!("Falling back to the slow case-sensitive path existence check because the canonicalized path of `{simplified}` is not valid UTF-8");
tracing::debug!(
"Falling back to the slow case-sensitive path existence check because the canonicalized path of `{simplified}` is not valid UTF-8"
);
return None;
};
@@ -266,7 +268,9 @@ impl OsSystem {
// `path` pointed to a symlink (or some other non-reversible path normalization happened).
// In this case, fall back to the slow path.
if simplified_canonicalized.as_str().to_lowercase() != simplified.as_str().to_lowercase() {
tracing::debug!("Falling back to the slow case-sensitive path existence check for `{simplified}` because the canonicalized path `{simplified_canonicalized}` differs not only by casing");
tracing::debug!(
"Falling back to the slow case-sensitive path existence check for `{simplified}` because the canonicalized path `{simplified_canonicalized}` differs not only by casing"
);
return None;
}
@@ -662,8 +666,8 @@ fn detect_case_sensitivity(path: &SystemPath) -> CaseSensitivity {
mod tests {
use tempfile::TempDir;
use crate::system::walk_directory::tests::DirectoryEntryToString;
use crate::system::DirectoryEntry;
use crate::system::walk_directory::tests::DirectoryEntryToString;
use super::*;

View File

@@ -3,15 +3,15 @@ use ruff_notebook::{Notebook, NotebookError};
use std::panic::RefUnwindSafe;
use std::sync::{Arc, Mutex};
use crate::Db;
use crate::files::File;
use crate::system::{
CaseSensitivity, DirectoryEntry, GlobError, MemoryFileSystem, Metadata, Result, System,
SystemPath, SystemPathBuf, SystemVirtualPath,
};
use crate::Db;
use super::walk_directory::WalkDirectoryBuilder;
use super::WritableSystem;
use super::walk_directory::WalkDirectoryBuilder;
/// System implementation intended for testing.
///
@@ -117,7 +117,7 @@ impl System for TestSystem {
&self,
pattern: &str,
) -> std::result::Result<
Box<dyn Iterator<Item = std::result::Result<SystemPathBuf, GlobError>>>,
Box<dyn Iterator<Item = std::result::Result<SystemPathBuf, GlobError>> + '_>,
PatternError,
> {
self.system().glob(pattern)
@@ -343,7 +343,7 @@ impl System for InMemorySystem {
&self,
pattern: &str,
) -> std::result::Result<
Box<dyn Iterator<Item = std::result::Result<SystemPathBuf, GlobError>>>,
Box<dyn Iterator<Item = std::result::Result<SystemPathBuf, GlobError>> + '_>,
PatternError,
> {
let iterator = self.memory_fs.glob(pattern)?;

View File

@@ -1,7 +1,7 @@
//! Test helpers for working with Salsa databases
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::EnvFilter;
use tracing_subscriber::layer::SubscriberExt;
pub fn assert_function_query_was_not_run<Db, Q, QDb, I, R>(
db: &Db,
@@ -141,7 +141,6 @@ pub fn setup_logging_with_filter(filter: &str) -> Option<LoggingGuard> {
#[derive(Debug)]
pub struct LoggingBuilder {
filter: EnvFilter,
hierarchical: bool,
}
impl LoggingBuilder {
@@ -154,50 +153,26 @@ impl LoggingBuilder {
.parse()
.expect("Hardcoded directive to be valid"),
),
hierarchical: false,
}
}
pub fn with_filter(filter: &str) -> Option<Self> {
let filter = EnvFilter::builder().parse(filter).ok()?;
Some(Self {
filter,
hierarchical: false,
})
}
pub fn with_hierarchical(mut self, hierarchical: bool) -> Self {
self.hierarchical = hierarchical;
self
Some(Self { filter })
}
pub fn build(self) -> LoggingGuard {
let registry = tracing_subscriber::registry().with(self.filter);
let guard = if self.hierarchical {
let subscriber = registry.with(
tracing_tree::HierarchicalLayer::default()
.with_indent_lines(true)
.with_indent_amount(2)
.with_bracketed_fields(true)
.with_thread_ids(true)
.with_targets(true)
.with_writer(std::io::stderr)
.with_timer(tracing_tree::time::Uptime::default()),
);
let subscriber = registry.with(
tracing_subscriber::fmt::layer()
.compact()
.with_writer(std::io::stderr)
.with_timer(tracing_subscriber::fmt::time()),
);
tracing::subscriber::set_default(subscriber)
} else {
let subscriber = registry.with(
tracing_subscriber::fmt::layer()
.compact()
.with_writer(std::io::stderr)
.with_timer(tracing_subscriber::fmt::time()),
);
tracing::subscriber::set_default(subscriber)
};
let guard = tracing::subscriber::set_default(subscriber);
LoggingGuard { _guard: guard }
}

View File

@@ -7,7 +7,7 @@ use std::sync::{Arc, Mutex, MutexGuard};
use crate::file_revision::FileRevision;
use zip::result::ZipResult;
use zip::write::FileOptions;
use zip::{read::ZipFile, CompressionMethod, ZipArchive, ZipWriter};
use zip::{CompressionMethod, ZipArchive, ZipWriter, read::ZipFile};
pub use self::path::{VendoredPath, VendoredPathBuf};
@@ -503,9 +503,11 @@ pub(crate) mod tests {
let path = VendoredPath::new(path);
assert!(!mock_typeshed.exists(path));
assert!(mock_typeshed.metadata(path).is_err());
assert!(mock_typeshed
.read_to_string(path)
.is_err_and(|err| err.to_string().contains("file not found")));
assert!(
mock_typeshed
.read_to_string(path)
.is_err_and(|err| err.to_string().contains("file not found"))
);
}
#[test]

View File

@@ -11,12 +11,14 @@ repository = { workspace = true }
license = { workspace = true }
[dependencies]
ty = { workspace = true }
ty_project = { workspace = true, features = ["schemars"] }
ruff = { workspace = true }
ruff_diagnostics = { workspace = true }
ruff_formatter = { workspace = true }
ruff_linter = { workspace = true, features = ["schemars"] }
ruff_notebook = { workspace = true }
ruff_options_metadata = { workspace = true }
ruff_python_ast = { workspace = true }
ruff_python_codegen = { workspace = true }
ruff_python_formatter = { workspace = true }
@@ -31,6 +33,7 @@ imara-diff = { workspace = true }
indicatif = { workspace = true }
itertools = { workspace = true }
libcst = { workspace = true }
markdown = { version = "1.0.0" }
pretty_assertions = { workspace = true }
rayon = { workspace = true }
regex = { workspace = true }
@@ -44,6 +47,7 @@ toml = { workspace = true, features = ["parse"] }
tracing = { workspace = true }
tracing-indicatif = { workspace = true }
tracing-subscriber = { workspace = true, features = ["env-filter"] }
url = { workspace = true }
[dev-dependencies]
indoc = { workspace = true }

View File

@@ -9,11 +9,11 @@ use std::process::ExitCode;
use std::time::{Duration, Instant};
use std::{fmt, fs, io, iter};
use anyhow::{bail, format_err, Context, Error};
use anyhow::{Context, Error, bail, format_err};
use clap::{CommandFactory, FromArgMatches};
use imara_diff::intern::InternedInput;
use imara_diff::sink::Counter;
use imara_diff::{diff, Algorithm};
use imara_diff::{Algorithm, diff};
use indicatif::ProgressStyle;
#[cfg_attr(feature = "singlethreaded", allow(unused_imports))]
use rayon::iter::{IntoParallelIterator, ParallelIterator};
@@ -21,11 +21,11 @@ use serde::Deserialize;
use similar::{ChangeTag, TextDiff};
use tempfile::NamedTempFile;
use tracing::{debug, error, info, info_span};
use tracing_indicatif::span_ext::IndicatifSpanExt;
use tracing_indicatif::IndicatifLayer;
use tracing_indicatif::span_ext::IndicatifSpanExt;
use tracing_subscriber::EnvFilter;
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;
use tracing_subscriber::EnvFilter;
use ruff::args::{ConfigArguments, FormatArguments, FormatCommand, GlobalConfigArgs, LogLevelArgs};
use ruff::resolve::resolve;
@@ -33,10 +33,10 @@ use ruff_formatter::{FormatError, LineWidth, PrintError};
use ruff_linter::logging::LogLevel;
use ruff_linter::settings::types::{FilePattern, FilePatternSet};
use ruff_python_formatter::{
format_module_source, FormatModuleError, MagicTrailingComma, PreviewMode, PyFormatOptions,
FormatModuleError, MagicTrailingComma, PreviewMode, PyFormatOptions, format_module_source,
};
use ruff_python_parser::ParseError;
use ruff_workspace::resolver::{python_files_in_path, PyprojectConfig, ResolvedFile, Resolver};
use ruff_workspace::resolver::{PyprojectConfig, ResolvedFile, Resolver, python_files_in_path};
fn parse_cli(dirs: &[PathBuf]) -> anyhow::Result<(FormatArguments, ConfigArguments)> {
let args_matches = FormatCommand::command()

View File

@@ -2,7 +2,10 @@
use anyhow::Result;
use crate::{generate_cli_help, generate_docs, generate_json_schema, generate_ty_schema};
use crate::{
generate_cli_help, generate_docs, generate_json_schema, generate_ty_cli_reference,
generate_ty_options, generate_ty_rules, generate_ty_schema,
};
pub(crate) const REGENERATE_ALL_COMMAND: &str = "cargo dev generate-all";
@@ -38,5 +41,8 @@ pub(crate) fn main(args: &Args) -> Result<()> {
generate_docs::main(&generate_docs::Args {
dry_run: args.mode.is_dry_run(),
})?;
generate_ty_options::main(&generate_ty_options::Args { mode: args.mode })?;
generate_ty_rules::main(&generate_ty_rules::Args { mode: args.mode })?;
generate_ty_cli_reference::main(&generate_ty_cli_reference::Args { mode: args.mode })?;
Ok(())
}

View File

@@ -1,17 +1,16 @@
//! Generate CLI help.
#![allow(clippy::print_stdout)]
use std::path::PathBuf;
use std::{fs, str};
use anyhow::{bail, Context, Result};
use anyhow::{Context, Result, bail};
use clap::CommandFactory;
use pretty_assertions::StrComparison;
use ruff::args;
use crate::generate_all::{Mode, REGENERATE_ALL_COMMAND};
use crate::ROOT_DIR;
use crate::generate_all::{Mode, REGENERATE_ALL_COMMAND};
const COMMAND_HELP_BEGIN_PRAGMA: &str = "<!-- Begin auto-generated command help. -->\n";
const COMMAND_HELP_END_PRAGMA: &str = "<!-- End auto-generated command help. -->";
@@ -141,7 +140,7 @@ mod tests {
use crate::generate_all::Mode;
use super::{main, Args};
use super::{Args, main};
#[test]
fn test_generate_json_schema() -> Result<()> {

View File

@@ -1,5 +1,4 @@
//! Generate Markdown documentation for applicable rules.
#![allow(clippy::print_stdout, clippy::print_stderr)]
use std::collections::HashSet;
use std::fmt::Write as _;
@@ -13,8 +12,8 @@ use strum::IntoEnumIterator;
use ruff_diagnostics::FixAvailability;
use ruff_linter::registry::{Linter, Rule, RuleNamespace};
use ruff_options_metadata::{OptionEntry, OptionsMetadata};
use ruff_workspace::options::Options;
use ruff_workspace::options_base::{OptionEntry, OptionsMetadata};
use crate::ROOT_DIR;

View File

@@ -1,14 +1,12 @@
#![allow(clippy::print_stdout, clippy::print_stderr)]
use std::fs;
use std::path::PathBuf;
use anyhow::{bail, Result};
use anyhow::{Result, bail};
use pretty_assertions::StrComparison;
use schemars::schema_for;
use crate::generate_all::{Mode, REGENERATE_ALL_COMMAND};
use crate::ROOT_DIR;
use crate::generate_all::{Mode, REGENERATE_ALL_COMMAND};
use ruff_workspace::options::Options;
#[derive(clap::Args)]
@@ -58,7 +56,7 @@ mod tests {
use crate::generate_all::Mode;
use super::{main, Args};
use super::{Args, main};
#[test]
fn test_generate_json_schema() -> Result<()> {

View File

@@ -4,9 +4,9 @@
use itertools::Itertools;
use std::fmt::Write;
use ruff_options_metadata::{OptionField, OptionSet, OptionsMetadata, Visit};
use ruff_python_trivia::textwrap;
use ruff_workspace::options::Options;
use ruff_workspace::options_base::{OptionField, OptionSet, OptionsMetadata, Visit};
pub(crate) fn generate() -> String {
let mut output = String::new();
@@ -100,8 +100,8 @@ fn emit_field(output: &mut String, name: &str, field: &OptionField, parents: &[S
if parents_anchor.is_empty() {
let _ = writeln!(output, "{header_level} [`{name}`](#{name}) {{: #{name} }}");
} else {
let _ =
writeln!(output,
let _ = writeln!(
output,
"{header_level} [`{name}`](#{parents_anchor}_{name}) {{: #{parents_anchor}_{name} }}"
);

View File

@@ -11,8 +11,8 @@ use strum::IntoEnumIterator;
use ruff_diagnostics::FixAvailability;
use ruff_linter::registry::{Linter, Rule, RuleNamespace};
use ruff_linter::upstream_categories::UpstreamCategoryAndPrefix;
use ruff_options_metadata::OptionsMetadata;
use ruff_workspace::options::Options;
use ruff_workspace::options_base::OptionsMetadata;
const FIX_SYMBOL: &str = "🛠️";
const PREVIEW_SYMBOL: &str = "🧪";
@@ -48,7 +48,9 @@ fn generate_table(table_out: &mut String, rules: impl IntoIterator<Item = Rule>,
format!("<span title='Automatic fix available'>{FIX_SYMBOL}</span>")
}
FixAvailability::None => {
format!("<span title='Automatic fix not available' style='opacity: 0.1' aria-hidden='true'>{FIX_SYMBOL}</span>")
format!(
"<span title='Automatic fix not available' style='opacity: 0.1' aria-hidden='true'>{FIX_SYMBOL}</span>"
)
}
};
@@ -108,22 +110,26 @@ pub(crate) fn generate() -> String {
);
table_out.push_str("<br />");
let _ = write!(&mut table_out,
let _ = write!(
&mut table_out,
"{SPACER}{PREVIEW_SYMBOL}{SPACER} The rule is unstable and is in [\"preview\"](faq.md#what-is-preview)."
);
table_out.push_str("<br />");
let _ = write!(&mut table_out,
let _ = write!(
&mut table_out,
"{SPACER}{WARNING_SYMBOL}{SPACER} The rule has been deprecated and will be removed in a future release."
);
table_out.push_str("<br />");
let _ = write!(&mut table_out,
let _ = write!(
&mut table_out,
"{SPACER}{REMOVED_SYMBOL}{SPACER} The rule has been removed only the documentation is available."
);
table_out.push_str("<br />");
let _ = write!(&mut table_out,
let _ = write!(
&mut table_out,
"{SPACER}{FIX_SYMBOL}{SPACER} The rule is automatically fixable by the `--fix` command-line option."
);
table_out.push_str("<br />");

View File

@@ -0,0 +1,334 @@
//! Generate a Markdown-compatible reference for the ty command-line interface.
use std::cmp::max;
use std::path::PathBuf;
use anyhow::{Result, bail};
use clap::{Command, CommandFactory};
use itertools::Itertools;
use pretty_assertions::StrComparison;
use crate::ROOT_DIR;
use crate::generate_all::{Mode, REGENERATE_ALL_COMMAND};
use ty::Cli;
const SHOW_HIDDEN_COMMANDS: &[&str] = &["generate-shell-completion"];
#[derive(clap::Args)]
pub(crate) struct Args {
#[arg(long, default_value_t, value_enum)]
pub(crate) mode: Mode,
}
pub(crate) fn main(args: &Args) -> Result<()> {
let reference_string = generate();
let filename = "crates/ty/docs/cli.md";
let reference_path = PathBuf::from(ROOT_DIR).join(filename);
match args.mode {
Mode::DryRun => {
println!("{reference_string}");
}
Mode::Check => match std::fs::read_to_string(reference_path) {
Ok(current) => {
if current == reference_string {
println!("Up-to-date: {filename}");
} else {
let comparison = StrComparison::new(&current, &reference_string);
bail!(
"{filename} changed, please run `{REGENERATE_ALL_COMMAND}`:\n{comparison}"
);
}
}
Err(err) if err.kind() == std::io::ErrorKind::NotFound => {
bail!("{filename} not found, please run `{REGENERATE_ALL_COMMAND}`");
}
Err(err) => {
bail!("{filename} changed, please run `{REGENERATE_ALL_COMMAND}`:\n{err}");
}
},
Mode::Write => match std::fs::read_to_string(&reference_path) {
Ok(current) => {
if current == reference_string {
println!("Up-to-date: {filename}");
} else {
println!("Updating: {filename}");
std::fs::write(reference_path, reference_string.as_bytes())?;
}
}
Err(err) if err.kind() == std::io::ErrorKind::NotFound => {
println!("Updating: {filename}");
std::fs::write(reference_path, reference_string.as_bytes())?;
}
Err(err) => {
bail!("{filename} changed, please run `cargo dev generate-cli-reference`:\n{err}");
}
},
}
Ok(())
}
fn generate() -> String {
let mut output = String::new();
let mut ty = Cli::command();
// It is very important to build the command before beginning inspection, or subcommands
// will be missing all of the propagated options.
ty.build();
let mut parents = Vec::new();
output.push_str("# CLI Reference\n\n");
generate_command(&mut output, &ty, &mut parents);
output
}
#[allow(clippy::format_push_string)]
fn generate_command<'a>(output: &mut String, command: &'a Command, parents: &mut Vec<&'a Command>) {
if command.is_hide_set() && !SHOW_HIDDEN_COMMANDS.contains(&command.get_name()) {
return;
}
// Generate the command header.
let name = if parents.is_empty() {
command.get_name().to_string()
} else {
format!(
"{} {}",
parents.iter().map(|cmd| cmd.get_name()).join(" "),
command.get_name()
)
};
// Display the top-level `ty` command at the same level as its children
let level = max(2, parents.len() + 1);
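// For example: the root command has no parents, so `max(2, 1)` = 2 and it is
// rendered as "##"; a command with one parent also gets `max(2, 2)` = 2, and
// only a command nested two levels deep would drop to "###".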
output.push_str(&format!("{} {name}\n\n", "#".repeat(level)));
// Display the command description.
if let Some(about) = command.get_long_about().or_else(|| command.get_about()) {
output.push_str(&about.to_string());
output.push_str("\n\n");
}
// Display the usage
{
// This appears to be the simplest way to get rendered usage from Clap;
// rendering it manually is complicated. It's annoying that this
// requires a mutable reference, but it doesn't really matter.
let mut command = command.clone();
output.push_str("<h3 class=\"cli-reference\">Usage</h3>\n\n");
output.push_str(&format!(
"```\n{}\n```",
command
.render_usage()
.to_string()
.trim_start_matches("Usage: "),
));
output.push_str("\n\n");
}
if command.get_name() == "help" {
return;
}
// Display a list of child commands
let mut subcommands = command.get_subcommands().peekable();
let has_subcommands = subcommands.peek().is_some();
if has_subcommands {
output.push_str("<h3 class=\"cli-reference\">Commands</h3>\n\n");
output.push_str("<dl class=\"cli-reference\">");
for subcommand in subcommands {
if subcommand.is_hide_set() {
continue;
}
let subcommand_name = format!("{name} {}", subcommand.get_name());
output.push_str(&format!(
"<dt><a href=\"#{}\"><code>{subcommand_name}</code></a></dt>",
subcommand_name.replace(' ', "-")
));
if let Some(about) = subcommand.get_about() {
output.push_str(&format!(
"<dd>{}</dd>\n",
markdown::to_html(&about.to_string())
));
}
}
output.push_str("</dl>\n\n");
}
// Do not display options for commands with children
if !has_subcommands {
let name_key = name.replace(' ', "-");
// Display positional arguments
let mut arguments = command
.get_positionals()
.filter(|arg| !arg.is_hide_set())
.peekable();
if arguments.peek().is_some() {
output.push_str("<h3 class=\"cli-reference\">Arguments</h3>\n\n");
output.push_str("<dl class=\"cli-reference\">");
for arg in arguments {
let id = format!("{name_key}--{}", arg.get_id());
output.push_str(&format!("<dt id=\"{id}\">"));
output.push_str(&format!(
"<a href=\"#{id}\"><code>{}</code></a>",
arg.get_id().to_string().to_uppercase(),
));
output.push_str("</dt>");
if let Some(help) = arg.get_long_help().or_else(|| arg.get_help()) {
output.push_str("<dd>");
output.push_str(&format!("{}\n", markdown::to_html(&help.to_string())));
output.push_str("</dd>");
}
}
output.push_str("</dl>\n\n");
}
// Display options and flags
let mut options = command
.get_arguments()
.filter(|arg| !arg.is_positional())
.filter(|arg| !arg.is_hide_set())
.sorted_by_key(|arg| arg.get_id())
.peekable();
if options.peek().is_some() {
output.push_str("<h3 class=\"cli-reference\">Options</h3>\n\n");
output.push_str("<dl class=\"cli-reference\">");
for opt in options {
let Some(long) = opt.get_long() else { continue };
let id = format!("{name_key}--{long}");
output.push_str(&format!("<dt id=\"{id}\">"));
output.push_str(&format!("<a href=\"#{id}\"><code>--{long}</code></a>"));
for long_alias in opt.get_all_aliases().into_iter().flatten() {
output.push_str(&format!(", <code>--{long_alias}</code>"));
}
if let Some(short) = opt.get_short() {
output.push_str(&format!(", <code>-{short}</code>"));
}
for short_alias in opt.get_all_short_aliases().into_iter().flatten() {
output.push_str(&format!(", <code>-{short_alias}</code>"));
}
// Re-implements private `Arg::is_takes_value_set` used in `Command::get_opts`
if opt
.get_num_args()
.unwrap_or_else(|| 1.into())
.takes_values()
{
if let Some(values) = opt.get_value_names() {
for value in values {
output.push_str(&format!(
" <i>{}</i>",
value.to_lowercase().replace('_', "-")
));
}
}
}
output.push_str("</dt>");
if let Some(help) = opt.get_long_help().or_else(|| opt.get_help()) {
output.push_str("<dd>");
output.push_str(&format!("{}\n", markdown::to_html(&help.to_string())));
emit_env_option(opt, output);
emit_default_option(opt, output);
emit_possible_options(opt, output);
output.push_str("</dd>");
}
}
output.push_str("</dl>");
}
output.push_str("\n\n");
}
parents.push(command);
// Recurse to all of the subcommands.
for subcommand in command.get_subcommands() {
generate_command(output, subcommand, parents);
}
parents.pop();
}
fn emit_env_option(opt: &clap::Arg, output: &mut String) {
if opt.is_hide_env_set() {
return;
}
if let Some(env) = opt.get_env() {
output.push_str(&markdown::to_html(&format!(
"May also be set with the `{}` environment variable.",
env.to_string_lossy()
)));
}
}
fn emit_default_option(opt: &clap::Arg, output: &mut String) {
if opt.is_hide_default_value_set() || !opt.get_num_args().expect("built").takes_values() {
return;
}
let values = opt.get_default_values();
if !values.is_empty() {
let value = format!(
"\n[default: {}]",
opt.get_default_values()
.iter()
.map(|s| s.to_string_lossy())
.join(",")
);
output.push_str(&markdown::to_html(&value));
}
}
fn emit_possible_options(opt: &clap::Arg, output: &mut String) {
if opt.is_hide_possible_values_set() {
return;
}
let values = opt.get_possible_values();
if !values.is_empty() {
let value = format!(
"\nPossible values:\n{}",
values
.into_iter()
.filter(|value| !value.is_hide_set())
.map(|value| {
let name = value.get_name();
value.get_help().map_or_else(
|| format!(" - `{name}`"),
|help| format!(" - `{name}`: {help}"),
)
})
.collect_vec()
.join("\n"),
);
output.push_str(&markdown::to_html(&value));
}
}
#[cfg(test)]
mod tests {
use anyhow::Result;
use crate::generate_all::Mode;
use super::{Args, main};
#[test]
fn ty_cli_reference_is_up_to_date() -> Result<()> {
main(&Args { mode: Mode::Check })
}
}

View File

@@ -0,0 +1,257 @@
//! Generate a Markdown-compatible listing of configuration options for `pyproject.toml`.
use anyhow::bail;
use itertools::Itertools;
use pretty_assertions::StrComparison;
use std::{fmt::Write, path::PathBuf};
use ruff_options_metadata::{OptionField, OptionSet, OptionsMetadata, Visit};
use ty_project::metadata::Options;
use crate::{
ROOT_DIR,
generate_all::{Mode, REGENERATE_ALL_COMMAND},
};
#[derive(clap::Args)]
pub(crate) struct Args {
/// Write the generated table to stdout (rather than to `crates/ty/docs/configuration.md`).
#[arg(long, default_value_t, value_enum)]
pub(crate) mode: Mode,
}
pub(crate) fn main(args: &Args) -> anyhow::Result<()> {
let mut output = String::new();
let file_name = "crates/ty/docs/configuration.md";
let markdown_path = PathBuf::from(ROOT_DIR).join(file_name);
generate_set(
&mut output,
Set::Toplevel(Options::metadata()),
&mut Vec::new(),
);
match args.mode {
Mode::DryRun => {
println!("{output}");
}
Mode::Check => {
let current = std::fs::read_to_string(&markdown_path)?;
if output == current {
println!("Up-to-date: {file_name}",);
} else {
let comparison = StrComparison::new(&current, &output);
bail!("{file_name} changed, please run `{REGENERATE_ALL_COMMAND}`:\n{comparison}",);
}
}
Mode::Write => {
let current = std::fs::read_to_string(&markdown_path)?;
if current == output {
println!("Up-to-date: {file_name}",);
} else {
println!("Updating: {file_name}",);
std::fs::write(markdown_path, output.as_bytes())?;
}
}
}
Ok(())
}
fn generate_set(output: &mut String, set: Set, parents: &mut Vec<Set>) {
match &set {
Set::Toplevel(_) => {
output.push_str("# Configuration\n");
}
Set::Named { name, .. } => {
let title = parents
.iter()
.filter_map(|set| set.name())
.chain(std::iter::once(name.as_str()))
.join(".");
writeln!(output, "## `{title}`\n",).unwrap();
}
}
if let Some(documentation) = set.metadata().documentation() {
output.push_str(documentation);
output.push('\n');
output.push('\n');
}
let mut visitor = CollectOptionsVisitor::default();
set.metadata().record(&mut visitor);
let (mut fields, mut sets) = (visitor.fields, visitor.groups);
fields.sort_unstable_by(|(name, _), (name2, _)| name.cmp(name2));
sets.sort_unstable_by(|(name, _), (name2, _)| name.cmp(name2));
parents.push(set);
// Generate the fields.
for (name, field) in &fields {
emit_field(output, name, field, parents.as_slice());
output.push_str("---\n\n");
}
// Generate all the sub-sets.
for (set_name, sub_set) in &sets {
generate_set(
output,
Set::Named {
name: set_name.to_string(),
set: *sub_set,
},
parents,
);
}
parents.pop();
}
enum Set {
Toplevel(OptionSet),
Named { name: String, set: OptionSet },
}
impl Set {
fn name(&self) -> Option<&str> {
match self {
Set::Toplevel(_) => None,
Set::Named { name, .. } => Some(name),
}
}
fn metadata(&self) -> &OptionSet {
match self {
Set::Toplevel(set) => set,
Set::Named { set, .. } => set,
}
}
}
fn emit_field(output: &mut String, name: &str, field: &OptionField, parents: &[Set]) {
let header_level = if parents.is_empty() { "###" } else { "####" };
let _ = writeln!(output, "{header_level} `{name}`");
output.push('\n');
if let Some(deprecated) = &field.deprecated {
output.push_str("> [!WARN] \"Deprecated\"\n");
output.push_str("> This option has been deprecated");
if let Some(since) = deprecated.since {
write!(output, " in {since}").unwrap();
}
output.push('.');
if let Some(message) = deprecated.message {
writeln!(output, " {message}").unwrap();
}
output.push('\n');
}
output.push_str(field.doc);
output.push_str("\n\n");
let _ = writeln!(output, "**Default value**: `{}`", field.default);
output.push('\n');
let _ = writeln!(output, "**Type**: `{}`", field.value_type);
output.push('\n');
output.push_str("**Example usage** (`pyproject.toml`):\n\n");
output.push_str(&format_example(
&format_header(
field.scope,
field.example,
parents,
ConfigurationFile::PyprojectToml,
),
field.example,
));
output.push('\n');
}
fn format_example(header: &str, content: &str) -> String {
if header.is_empty() {
format!("```toml\n{content}\n```\n",)
} else {
format!("```toml\n{header}\n{content}\n```\n",)
}
}
/// Format the TOML header for the example usage for a given option.
///
/// For example: `[tool.ruff.format]` or `[tool.ruff.lint.isort]`.
fn format_header(
scope: Option<&str>,
example: &str,
parents: &[Set],
configuration: ConfigurationFile,
) -> String {
let tool_parent = match configuration {
ConfigurationFile::PyprojectToml => Some("tool.ty"),
ConfigurationFile::TyToml => None,
};
let header = tool_parent
.into_iter()
.chain(parents.iter().filter_map(|parent| parent.name()))
.chain(scope)
.join(".");
// Ex) `[[tool.ty.xx]]`
if example.starts_with(&format!("[[{header}")) {
return String::new();
}
// Ex) `[tool.ty.rules]`
if example.starts_with(&format!("[{header}")) {
return String::new();
}
if header.is_empty() {
String::new()
} else {
format!("[{header}]")
}
}
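// Illustrative walk-through (assumed input, not taken from this diff): with
// `scope = Some("rules")`, no parents, and `ConfigurationFile::PyprojectToml`,
// `header` becomes "tool.ty.rules"; an example that already starts with
// "[tool.ty.rules" gets an empty header, otherwise "[tool.ty.rules]" is returned.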
#[derive(Default)]
struct CollectOptionsVisitor {
groups: Vec<(String, OptionSet)>,
fields: Vec<(String, OptionField)>,
}
impl Visit for CollectOptionsVisitor {
fn record_set(&mut self, name: &str, group: OptionSet) {
self.groups.push((name.to_owned(), group));
}
fn record_field(&mut self, name: &str, field: OptionField) {
self.fields.push((name.to_owned(), field));
}
}
#[derive(Debug, Copy, Clone)]
enum ConfigurationFile {
PyprojectToml,
#[expect(dead_code)]
TyToml,
}
#[cfg(test)]
mod tests {
use anyhow::Result;
use crate::generate_all::Mode;
use super::{Args, main};
#[test]
fn ty_configuration_markdown_up_to_date() -> Result<()> {
main(&Args { mode: Mode::Check })?;
Ok(())
}
}
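
Taken together, `format_header` and `format_example` above decide whether an option's example needs a generated `[tool.ty.…]` table header or already opens its own table. Below is a minimal standalone sketch of that decision with the `Set`/`OptionField` machinery stripped away; the helper name and the option values are illustrative, not the crate's API.

```rust
/// Sketch of the header logic in `format_header` above (illustrative only).
fn toml_header(parents: &[&str], scope: Option<&str>, example: &str) -> String {
    let header = std::iter::once("tool.ty")
        .chain(parents.iter().copied())
        .chain(scope)
        .collect::<Vec<_>>()
        .join(".");

    // If the example already opens its own table (`[tool.ty…]`) or array of
    // tables (`[[tool.ty…]]`), no additional header is emitted.
    if example.starts_with(&format!("[[{header}")) || example.starts_with(&format!("[{header}")) {
        return String::new();
    }
    format!("[{header}]")
}

fn main() {
    // A bare `key = value` example gets a header derived from its parent sets…
    assert_eq!(
        toml_header(&["environment"], None, r#"python-version = "3.12""#),
        "[tool.ty.environment]"
    );
    // …while an example that already spells out its table does not.
    assert_eq!(
        toml_header(&[], Some("rules"), "[tool.ty.rules]\nexample-rule = \"warn\""),
        ""
    );
    println!("ok");
}
```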


@@ -0,0 +1,143 @@
//! Generates the rules table for ty
use std::borrow::Cow;
use std::fmt::Write as _;
use std::fs;
use std::path::PathBuf;
use anyhow::{Result, bail};
use itertools::Itertools as _;
use pretty_assertions::StrComparison;
use crate::ROOT_DIR;
use crate::generate_all::{Mode, REGENERATE_ALL_COMMAND};
#[derive(clap::Args)]
pub(crate) struct Args {
/// Write the generated rules table to stdout (rather than to `crates/ty/docs/rules.md`).
#[arg(long, default_value_t, value_enum)]
pub(crate) mode: Mode,
}
pub(crate) fn main(args: &Args) -> Result<()> {
let markdown = generate_markdown();
let filename = "crates/ty/docs/rules.md";
let schema_path = PathBuf::from(ROOT_DIR).join(filename);
match args.mode {
Mode::DryRun => {
println!("{markdown}");
}
Mode::Check => {
let current = fs::read_to_string(schema_path)?;
if current == markdown {
println!("Up-to-date: {filename}");
} else {
let comparison = StrComparison::new(&current, &markdown);
bail!("{filename} changed, please run `{REGENERATE_ALL_COMMAND}`:\n{comparison}");
}
}
Mode::Write => {
let current = fs::read_to_string(&schema_path)?;
if current == markdown {
println!("Up-to-date: {filename}");
} else {
println!("Updating: {filename}");
fs::write(schema_path, markdown.as_bytes())?;
}
}
}
Ok(())
}
fn generate_markdown() -> String {
let registry = &*ty_project::DEFAULT_LINT_REGISTRY;
let mut output = String::new();
let _ = writeln!(&mut output, "# Rules\n");
let mut lints: Vec<_> = registry.lints().iter().collect();
lints.sort_by(|a, b| {
a.default_level()
.cmp(&b.default_level())
.reverse()
.then_with(|| a.name().cmp(&b.name()))
});
for lint in lints {
let _ = writeln!(&mut output, "## `{rule_name}`\n", rule_name = lint.name());
// Increase the header-level by one
let documentation = lint
.documentation_lines()
.map(|line| {
if line.starts_with('#') {
Cow::Owned(format!("#{line}"))
} else {
Cow::Borrowed(line)
}
})
.join("\n");
let _ = writeln!(
&mut output,
r#"**Default level**: {level}
<details>
<summary>{summary}</summary>
{documentation}
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20{encoded_name})
* [View source](https://github.com/astral-sh/ruff/blob/main/{file}#L{line})
</details>
"#,
level = lint.default_level(),
// GitHub doesn't support markdown in `summary` headers
summary = replace_inline_code(lint.summary()),
encoded_name = url::form_urlencoded::byte_serialize(lint.name().as_str().as_bytes())
.collect::<String>(),
file = url::form_urlencoded::byte_serialize(lint.file().replace('\\', "/").as_bytes())
.collect::<String>(),
line = lint.line(),
);
}
output
}
/// Replaces inline code blocks (`code`) with `<code>code</code>`
fn replace_inline_code(input: &str) -> String {
let mut output = String::new();
let mut parts = input.split('`');
while let Some(before) = parts.next() {
if let Some(between) = parts.next() {
output.push_str(before);
output.push_str("<code>");
output.push_str(between);
output.push_str("</code>");
} else {
output.push_str(before);
}
}
output
}
#[cfg(test)]
mod tests {
use anyhow::Result;
use crate::generate_all::Mode;
use super::{Args, main};
#[test]
fn ty_rules_up_to_date() -> Result<()> {
main(&Args { mode: Mode::Check })
}
}
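
Because GitHub doesn't render Markdown inside `<summary>` elements, the generator rewrites backtick spans into literal `<code>` tags via `replace_inline_code`. A quick standalone check of that behavior follows; the function body is copied from the listing above, and the summary strings are made up for illustration.

```rust
/// Copied from the listing above: rewrites `code` spans as <code>code</code>.
fn replace_inline_code(input: &str) -> String {
    let mut output = String::new();
    let mut parts = input.split('`');
    while let Some(before) = parts.next() {
        if let Some(between) = parts.next() {
            output.push_str(before);
            output.push_str("<code>");
            output.push_str(between);
            output.push_str("</code>");
        } else {
            output.push_str(before);
        }
    }
    output
}

fn main() {
    assert_eq!(
        replace_inline_code("Detects unreachable `match` arms"),
        "Detects unreachable <code>match</code> arms"
    );
    // Text without backticks passes through unchanged.
    assert_eq!(replace_inline_code("No inline code here"), "No inline code here");
    println!("ok");
}
```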


@@ -1,14 +1,12 @@
#![allow(clippy::print_stdout, clippy::print_stderr)]
use std::fs;
use std::path::PathBuf;
use anyhow::{bail, Result};
use anyhow::{Result, bail};
use pretty_assertions::StrComparison;
use schemars::schema_for;
use crate::generate_all::{Mode, REGENERATE_ALL_COMMAND};
use crate::ROOT_DIR;
use crate::generate_all::{Mode, REGENERATE_ALL_COMMAND};
use ty_project::metadata::options::Options;
#[derive(clap::Args)]
@@ -58,7 +56,7 @@ mod tests {
use crate::generate_all::Mode;
use super::{main, Args};
use super::{Args, main};
#[test]
fn test_generate_json_schema() -> Result<()> {


@@ -2,6 +2,8 @@
//!
//! Within the ruff repository you can run it with `cargo dev`.
#![allow(clippy::print_stdout, clippy::print_stderr)]
use anyhow::Result;
use clap::{Parser, Subcommand};
use ruff::{args::GlobalConfigArgs, check};
@@ -15,6 +17,9 @@ mod generate_docs;
mod generate_json_schema;
mod generate_options;
mod generate_rules_table;
mod generate_ty_cli_reference;
mod generate_ty_options;
mod generate_ty_rules;
mod generate_ty_schema;
mod print_ast;
mod print_cst;
@@ -44,8 +49,10 @@ enum Command {
GenerateTySchema(generate_ty_schema::Args),
/// Generate a Markdown-compatible table of supported lint rules.
GenerateRulesTable,
GenerateTyRules(generate_ty_rules::Args),
/// Generate a Markdown-compatible listing of configuration options.
GenerateOptions,
GenerateTyOptions(generate_ty_options::Args),
/// Generate CLI help.
GenerateCliHelp(generate_cli_help::Args),
/// Generate Markdown docs.
@@ -88,7 +95,9 @@ fn main() -> Result<ExitCode> {
Command::GenerateJSONSchema(args) => generate_json_schema::main(&args)?,
Command::GenerateTySchema(args) => generate_ty_schema::main(&args)?,
Command::GenerateRulesTable => println!("{}", generate_rules_table::generate()),
Command::GenerateTyRules(args) => generate_ty_rules::main(&args)?,
Command::GenerateOptions => println!("{}", generate_options::generate()),
Command::GenerateTyOptions(args) => generate_ty_options::main(&args)?,
Command::GenerateCliHelp(args) => generate_cli_help::main(&args)?,
Command::GenerateDocs(args) => generate_docs::main(&args)?,
Command::PrintAST(args) => print_ast::main(&args)?,


@@ -1,5 +1,4 @@
//! Print the AST for a given Python file.
#![allow(clippy::print_stdout, clippy::print_stderr)]
use std::path::PathBuf;
@@ -7,7 +6,7 @@ use anyhow::Result;
use ruff_linter::source_kind::SourceKind;
use ruff_python_ast::PySourceType;
use ruff_python_parser::{parse, ParseOptions};
use ruff_python_parser::{ParseOptions, parse};
#[derive(clap::Args)]
pub(crate) struct Args {


@@ -1,10 +1,9 @@
//! Print the `LibCST` CST for a given Python file.
#![allow(clippy::print_stdout, clippy::print_stderr)]
use std::fs;
use std::path::PathBuf;
use anyhow::{bail, Result};
use anyhow::{Result, bail};
#[derive(clap::Args)]
pub(crate) struct Args {


@@ -1,5 +1,4 @@
//! Print the token stream for a given Python file.
#![allow(clippy::print_stdout, clippy::print_stderr)]
use std::path::PathBuf;


@@ -1,5 +1,4 @@
//! Run round-trip source code generation on a given Python or Jupyter notebook file.
#![allow(clippy::print_stdout, clippy::print_stderr)]
use std::fs;
use std::path::PathBuf;


@@ -1,35 +1,29 @@
use anyhow::Result;
use log::debug;
#[cfg(feature = "serde")]
use serde::{Deserialize, Serialize};
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::Fix;
use crate::{Fix, Violation};
#[derive(Debug, PartialEq, Eq, Clone)]
#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
pub struct DiagnosticKind {
pub struct Diagnostic {
/// The identifier of the diagnostic, used to align the diagnostic with a rule.
pub name: String,
pub name: &'static str,
/// The message body to display to the user, to explain the diagnostic.
pub body: String,
/// The message to display to the user, to explain the suggested fix.
pub suggestion: Option<String>,
}
#[derive(Debug, PartialEq, Eq, Clone)]
pub struct Diagnostic {
pub kind: DiagnosticKind,
pub range: TextRange,
pub fix: Option<Fix>,
pub parent: Option<TextSize>,
}
impl Diagnostic {
pub fn new<T: Into<DiagnosticKind>>(kind: T, range: TextRange) -> Self {
pub fn new<T: Violation>(kind: T, range: TextRange) -> Self {
Self {
kind: kind.into(),
name: T::rule_name(),
body: Violation::message(&kind),
suggestion: Violation::fix_title(&kind),
range,
fix: None,
parent: None,
@@ -56,7 +50,7 @@ impl Diagnostic {
pub fn try_set_fix(&mut self, func: impl FnOnce() -> Result<Fix>) {
match func() {
Ok(fix) => self.fix = Some(fix),
Err(err) => debug!("Failed to create fix for {}: {}", self.kind.name, err),
Err(err) => debug!("Failed to create fix for {}: {}", self.name, err),
}
}
@@ -67,7 +61,7 @@ impl Diagnostic {
match func() {
Ok(None) => {}
Ok(Some(fix)) => self.fix = Some(fix),
Err(err) => debug!("Failed to create fix for {}: {}", self.kind.name, err),
Err(err) => debug!("Failed to create fix for {}: {}", self.name, err),
}
}


@@ -1,4 +1,4 @@
pub use diagnostic::{Diagnostic, DiagnosticKind};
pub use diagnostic::Diagnostic;
pub use edit::Edit;
pub use fix::{Applicability, Fix, IsolationLevel};
pub use source_map::{SourceMap, SourceMarker};


@@ -1,4 +1,3 @@
use crate::DiagnosticKind;
use std::fmt::{Debug, Display};
#[derive(Debug, Copy, Clone)]
@@ -79,16 +78,3 @@ impl<V: AlwaysFixableViolation> Violation for V {
<Self as AlwaysFixableViolation>::message_formats()
}
}
impl<T> From<T> for DiagnosticKind
where
T: Violation,
{
fn from(value: T) -> Self {
Self {
body: Violation::message(&value),
suggestion: Violation::fix_title(&value),
name: T::rule_name().to_string(),
}
}
}
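
The three hunks above fold `DiagnosticKind` into `Diagnostic` and let the constructor take the `Violation` directly, dropping the `From<T> for DiagnosticKind` detour. Here is a simplified standalone sketch of the new shape, using toy stand-ins rather than the actual `ruff_diagnostics` types, with the `range`/`fix`/`parent` bookkeeping elided.

```rust
// Toy stand-ins for illustration; not the ruff_diagnostics API.
trait Violation {
    fn rule_name() -> &'static str;
    fn message(&self) -> String;
    fn fix_title(&self) -> Option<String> {
        None
    }
}

struct Diagnostic {
    name: &'static str,
    body: String,
    suggestion: Option<String>,
}

impl Diagnostic {
    // Mirrors the new constructor above: the violation is consumed directly,
    // no intermediate `DiagnosticKind` value.
    fn new<T: Violation>(kind: T) -> Self {
        Self {
            name: T::rule_name(),
            body: kind.message(),
            suggestion: kind.fix_title(),
        }
    }
}

// A hypothetical rule used only for this sketch.
struct ExampleViolation;

impl Violation for ExampleViolation {
    fn rule_name() -> &'static str {
        "example-rule"
    }
    fn message(&self) -> String {
        "Example diagnostic message".to_string()
    }
    fn fix_title(&self) -> Option<String> {
        Some("Example fix title".to_string())
    }
}

fn main() {
    let diagnostic = Diagnostic::new(ExampleViolation);
    println!("{}: {} ({:?})", diagnostic.name, diagnostic.body, diagnostic.suggestion);
}
```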


@@ -17,7 +17,7 @@ impl<'fmt, Context> Argument<'fmt, Context> {
/// Called by the [ruff_formatter::format_args] macro.
#[doc(hidden)]
#[inline]
pub fn new<F: Format<Context>>(value: &'fmt F) -> Self {
pub const fn new<F: Format<Context>>(value: &'fmt F) -> Self {
Self { value }
}
@@ -55,7 +55,7 @@ pub struct Arguments<'fmt, Context>(pub &'fmt [Argument<'fmt, Context>]);
impl<'fmt, Context> Arguments<'fmt, Context> {
#[doc(hidden)]
#[inline]
pub fn new(arguments: &'fmt [Argument<'fmt, Context>]) -> Self {
pub const fn new(arguments: &'fmt [Argument<'fmt, Context>]) -> Self {
Self(arguments)
}
@@ -98,7 +98,7 @@ impl<'fmt, Context> From<&'fmt Argument<'fmt, Context>> for Arguments<'fmt, Cont
mod tests {
use crate::format_element::tag::Tag;
use crate::prelude::*;
use crate::{format_args, write, FormatState, VecBuffer};
use crate::{FormatState, VecBuffer, format_args, write};
#[test]
fn test_nesting() {

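In the hunk above, `Argument::new` and `Arguments::new` become `const fn`. A toy standalone illustration of what that enables (building values in `const`/`static` items) follows; the motivation in the actual change may simply be a general preference for `const` constructors.

```rust
// Toy types, not ruff_formatter's `Argument`/`Arguments`.
struct Argument<'a> {
    value: &'a str,
}

impl<'a> Argument<'a> {
    const fn new(value: &'a str) -> Self {
        Self { value }
    }
}

// Legal only because `Argument::new` is a `const fn`.
static KEYWORD: Argument<'static> = Argument::new("def");

fn main() {
    println!("static argument holds {:?}", KEYWORD.value);
}
```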

@@ -1,4 +1,4 @@
use super::{write, Arguments, FormatElement};
use super::{Arguments, FormatElement, write};
use crate::format_element::Interned;
use crate::prelude::{LineMode, Tag};
use crate::{FormatResult, FormatState};


@@ -2,14 +2,14 @@ use std::cell::Cell;
use std::marker::PhantomData;
use std::num::NonZeroU8;
use ruff_text_size::TextRange;
#[allow(clippy::enum_glob_use)]
use Tag::*;
use ruff_text_size::TextRange;
use crate::format_element::tag::{Condition, Tag};
use crate::prelude::tag::{DedentMode, GroupMode, LabelId};
use crate::prelude::*;
use crate::{write, Argument, Arguments, FormatContext, FormatOptions, GroupId, TextSize};
use crate::{Argument, Arguments, FormatContext, FormatOptions, GroupId, TextSize, write};
use crate::{Buffer, VecBuffer};
/// A line break that only gets printed if the enclosing `Group` doesn't fit on a single line.
@@ -402,7 +402,10 @@ where
}
fn debug_assert_no_newlines(text: &str) {
debug_assert!(!text.contains('\r'), "The content '{text}' contains an unsupported '\\r' line terminator character but text must only use line feeds '\\n' as line separator. Use '\\n' instead of '\\r' and '\\r\\n' to insert a line break in strings.");
debug_assert!(
!text.contains('\r'),
"The content '{text}' contains an unsupported '\\r' line terminator character but text must only use line feeds '\\n' as line separator. Use '\\n' instead of '\\r' and '\\r\\n' to insert a line break in strings."
);
}
/// Pushes some content to the end of the current line.
@@ -1388,7 +1391,7 @@ pub fn soft_space_or_block_indent<Context>(content: &impl Format<Context>) -> Bl
pub fn group<Context>(content: &impl Format<Context>) -> Group<Context> {
Group {
content: Argument::new(content),
group_id: None,
id: None,
should_expand: false,
}
}
@@ -1396,14 +1399,14 @@ pub fn group<Context>(content: &impl Format<Context>) -> Group<Context> {
#[derive(Copy, Clone)]
pub struct Group<'a, Context> {
content: Argument<'a, Context>,
group_id: Option<GroupId>,
id: Option<GroupId>,
should_expand: bool,
}
impl<Context> Group<'_, Context> {
#[must_use]
pub fn with_group_id(mut self, group_id: Option<GroupId>) -> Self {
self.group_id = group_id;
pub fn with_id(mut self, group_id: Option<GroupId>) -> Self {
self.id = group_id;
self
}
@@ -1429,7 +1432,7 @@ impl<Context> Format<Context> for Group<'_, Context> {
};
f.write_element(FormatElement::Tag(StartGroup(
tag::Group::new().with_id(self.group_id).with_mode(mode),
tag::Group::new().with_id(self.id).with_mode(mode),
)));
Arguments::from(&self.content).fmt(f)?;
@@ -1443,7 +1446,7 @@ impl<Context> Format<Context> for Group<'_, Context> {
impl<Context> std::fmt::Debug for Group<'_, Context> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("Group")
.field("group_id", &self.group_id)
.field("id", &self.id)
.field("should_expand", &self.should_expand)
.field("content", &"{{content}}")
.finish()
@@ -1642,7 +1645,7 @@ impl<Context> std::fmt::Debug for BestFitParenthesize<'_, Context> {
/// soft_line_break(),
/// if_group_breaks(&token(")"))
/// ])
/// .with_group_id(Some(parentheses_id))
/// .with_id(Some(parentheses_id))
/// .fmt(f)
/// });
///
@@ -1991,7 +1994,7 @@ impl<Context> IfGroupBreaks<'_, Context> {
/// })),
/// token("]")
/// ],
/// ).with_group_id(Some(group_id))
/// ).with_id(Some(group_id))
/// ])
/// })])?;
///
@@ -2046,7 +2049,7 @@ impl<Context> std::fmt::Debug for IfGroupBreaks<'_, Context> {
/// let id = f.group_id("head");
///
/// write!(f, [
/// group(&token("Head")).with_group_id(Some(id)),
/// group(&token("Head")).with_id(Some(id)),
/// if_group_breaks(&indent(&token("indented"))).with_group_id(Some(id)),
/// if_group_fits_on_line(&token("indented")).with_group_id(Some(id))
/// ])
@@ -2071,7 +2074,7 @@ impl<Context> std::fmt::Debug for IfGroupBreaks<'_, Context> {
/// let group_id = f.group_id("header");
///
/// write!(f, [
/// group(&token("(aLongHeaderThatBreaksForSomeReason) =>")).with_group_id(Some(group_id)),
/// group(&token("(aLongHeaderThatBreaksForSomeReason) =>")).with_id(Some(group_id)),
/// indent_if_group_breaks(&format_args![hard_line_break(), token("a => b")], group_id)
/// ])
/// });
@@ -2101,7 +2104,7 @@ impl<Context> std::fmt::Debug for IfGroupBreaks<'_, Context> {
/// let group_id = f.group_id("header");
///
/// write!(f, [
/// group(&token("(aLongHeaderThatBreaksForSomeReason) =>")).with_group_id(Some(group_id)),
/// group(&token("(aLongHeaderThatBreaksForSomeReason) =>")).with_id(Some(group_id)),
/// indent_if_group_breaks(&format_args![hard_line_break(), token("a => b")], group_id)
/// ])
/// });
@@ -2564,7 +2567,7 @@ impl<'a, Context> BestFitting<'a, Context> {
/// # Panics
///
/// When the slice contains less than two variants.
pub fn from_arguments_unchecked(variants: Arguments<'a, Context>) -> Self {
pub const fn from_arguments_unchecked(variants: Arguments<'a, Context>) -> Self {
assert!(
variants.0.len() >= 2,
"Requires at least the least expanded and most expanded variants"
@@ -2572,7 +2575,7 @@ impl<'a, Context> BestFitting<'a, Context> {
Self {
variants,
mode: BestFittingMode::default(),
mode: BestFittingMode::FirstLine,
}
}
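
The hunks above rename `Group::with_group_id` to `with_id`, while the elements that merely reference a group, such as `if_group_breaks(...).with_group_id(...)` and `if_group_fits_on_line(...).with_group_id(...)`, keep the old name in the doc examples shown here. Below is a usage sketch adapted from those doc examples; the surrounding scaffolding (`format!`, `format_with`, `SimpleFormatContext`, `print`) follows the crate's doctest conventions and is an assumption, and it would run inside the ruff workspace rather than against a published crate.

```rust
use ruff_formatter::prelude::*;
use ruff_formatter::{format, write, FormatResult, SimpleFormatContext};

fn main() -> FormatResult<()> {
    let formatted = format!(SimpleFormatContext::default(), [format_with(|f| {
        let id = f.group_id("head");
        write!(f, [
            // The group that owns the id uses the new `with_id` name…
            group(&token("Head")).with_id(Some(id)),
            // …while elements that reference it keep `with_group_id`.
            if_group_breaks(&token(" broke")).with_group_id(Some(id)),
            if_group_fits_on_line(&token(" fits")).with_group_id(Some(id))
        ])
    })])?;

    println!("{}", formatted.print().expect("valid document").as_code());
    Ok(())
}
```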


@@ -1,5 +1,5 @@
use crate::prelude::TagKind;
use crate::GroupId;
use crate::prelude::TagKind;
use ruff_text_size::TextRange;
use std::error::Error;
@@ -29,16 +29,22 @@ pub enum FormatError {
impl std::fmt::Display for FormatError {
fn fmt(&self, fmt: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
FormatError::SyntaxError {message} => {
FormatError::SyntaxError { message } => {
std::write!(fmt, "syntax error: {message}")
},
}
FormatError::RangeError { input, tree } => std::write!(
fmt,
"formatting range {input:?} is larger than syntax tree {tree:?}"
),
FormatError::InvalidDocument(error) => std::write!(fmt, "Invalid document: {error}\n\n This is an internal Rome error. Please report if necessary."),
FormatError::InvalidDocument(error) => std::write!(
fmt,
"Invalid document: {error}\n\n This is an internal Rome error. Please report if necessary."
),
FormatError::PoorLayout => {
std::write!(fmt, "Poor layout: The formatter wasn't able to pick a good layout for your document. This is an internal Rome error. Please report if necessary.")
std::write!(
fmt,
"Poor layout: The formatter wasn't able to pick a good layout for your document. This is an internal Rome error. Please report if necessary."
)
}
}
}
@@ -139,24 +145,37 @@ impl std::fmt::Display for InvalidDocumentError {
InvalidDocumentError::ExpectedStart {
expected_start,
actual,
} => {
match actual {
ActualStart::EndOfDocument => {
std::write!(f, "Expected start tag of kind {expected_start:?} but at the end of document.")
}
ActualStart::Start(start) => {
std::write!(f, "Expected start tag of kind {expected_start:?} but found start tag of kind {start:?}.")
}
ActualStart::End(end) => {
std::write!(f, "Expected start tag of kind {expected_start:?} but found end tag of kind {end:?}.")
}
ActualStart::Content => {
std::write!(f, "Expected start tag of kind {expected_start:?} but found non-tag element.")
}
} => match actual {
ActualStart::EndOfDocument => {
std::write!(
f,
"Expected start tag of kind {expected_start:?} but at the end of document."
)
}
}
ActualStart::Start(start) => {
std::write!(
f,
"Expected start tag of kind {expected_start:?} but found start tag of kind {start:?}."
)
}
ActualStart::End(end) => {
std::write!(
f,
"Expected start tag of kind {expected_start:?} but found end tag of kind {end:?}."
)
}
ActualStart::Content => {
std::write!(
f,
"Expected start tag of kind {expected_start:?} but found non-tag element."
)
}
},
InvalidDocumentError::UnknownGroupId { group_id } => {
std::write!(f, "Encountered unknown group id {group_id:?}. Ensure that the group with the id {group_id:?} exists and that the group is a parent of or comes before the element referring to it.")
std::write!(
f,
"Encountered unknown group id {group_id:?}. Ensure that the group with the id {group_id:?} exists and that the group is a parent of or comes before the element referring to it."
)
}
}
}


@@ -543,7 +543,7 @@ impl TextWidth {
#[cfg(test)]
mod tests {
use crate::format_element::{normalize_newlines, LINE_TERMINATORS};
use crate::format_element::{LINE_TERMINATORS, normalize_newlines};
#[test]
fn test_normalize_newlines() {


@@ -8,8 +8,8 @@ use crate::prelude::tag::GroupMode;
use crate::prelude::*;
use crate::source_code::SourceCode;
use crate::{
format, write, BufferExtensions, Format, FormatContext, FormatElement, FormatOptions,
FormatResult, Formatter, IndentStyle, IndentWidth, LineWidth, PrinterOptions,
BufferExtensions, Format, FormatContext, FormatElement, FormatOptions, FormatResult, Formatter,
IndentStyle, IndentWidth, LineWidth, PrinterOptions, format, write,
};
use super::tag::Tag;
@@ -811,8 +811,8 @@ mod tests {
use ruff_text_size::{TextRange, TextSize};
use crate::prelude::*;
use crate::{format, format_args, write};
use crate::{SimpleFormatContext, SourceCode};
use crate::{format, format_args, write};
#[test]
fn display_elements() {


@@ -370,7 +370,10 @@ impl PartialEq for LabelId {
#[cfg(debug_assertions)]
{
if is_equal {
assert_eq!(self.name, other.name, "Two `LabelId`s with different names have the same `value`. Are you mixing labels of two different `LabelDefinition` or are the values returned by the `LabelDefinition` not unique?");
assert_eq!(
self.name, other.name,
"Two `LabelId`s with different names have the same `value`. Are you mixing labels of two different `LabelDefinition` or are the values returned by the `LabelDefinition` not unique?"
);
}
}


@@ -38,7 +38,7 @@ use crate::prelude::TagKind;
use std::fmt;
use std::fmt::{Debug, Display};
use std::marker::PhantomData;
use std::num::{NonZeroU16, NonZeroU8, TryFromIntError};
use std::num::{NonZeroU8, NonZeroU16, TryFromIntError};
use crate::format_element::document::Document;
use crate::printer::{Printer, PrinterOptions};
@@ -50,7 +50,7 @@ pub use builders::BestFitting;
pub use source_code::{SourceCode, SourceCodeSlice};
pub use crate::diagnostics::{ActualStart, FormatError, InvalidDocumentError, PrintError};
pub use format_element::{normalize_newlines, FormatElement, LINE_TERMINATORS};
pub use format_element::{FormatElement, LINE_TERMINATORS, normalize_newlines};
pub use group_id::GroupId;
use ruff_macros::CacheKey;
use ruff_text_size::{TextLen, TextRange, TextSize};
@@ -92,7 +92,7 @@ impl std::fmt::Display for IndentStyle {
}
}
/// The visual width of a indentation.
/// The visual width of an indentation.
///
/// Determines the visual width of a tab character (`\t`) and the number of
/// spaces per indent when using [`IndentStyle::Space`].
@@ -207,7 +207,7 @@ pub trait FormatOptions {
/// What's the max width of a line. Defaults to 80.
fn line_width(&self) -> LineWidth;
/// Derives the print options from the these format options
/// Derives the print options from these format options
fn as_print_options(&self) -> PrinterOptions;
}


@@ -328,16 +328,16 @@ macro_rules! format {
/// [`MostExpanded`]: crate::format_element::BestFittingVariants::most_expanded
#[macro_export]
macro_rules! best_fitting {
($least_expanded:expr, $($tail:expr),+ $(,)?) => {{
($least_expanded:expr, $($tail:expr),+ $(,)?) => {
// OK because the macro syntax requires at least two variants.
$crate::BestFitting::from_arguments_unchecked($crate::format_args!($least_expanded, $($tail),+))
}}
}
}
#[cfg(test)]
mod tests {
use crate::prelude::*;
use crate::{write, FormatState, SimpleFormatOptions, VecBuffer};
use crate::{FormatState, SimpleFormatOptions, VecBuffer, write};
struct TestFormat;
@@ -386,7 +386,7 @@ mod tests {
#[test]
fn best_fitting_variants_print_as_lists() {
use crate::prelude::*;
use crate::{format, format_args, Formatted};
use crate::{Formatted, format, format_args};
// The second variant below should be selected when printing at a width of 30
let formatted_best_fitting = format!(
@@ -398,34 +398,36 @@ mod tests {
format_args![token(
"Something that will not fit on a line with 30 character print width."
)],
format_args![group(&format_args![
token("Start"),
soft_line_break(),
group(&soft_block_indent(&format_args![
token("1,"),
soft_line_break_or_space(),
token("2,"),
soft_line_break_or_space(),
token("3"),
])),
soft_line_break_or_space(),
soft_block_indent(&format_args![
token("1,"),
soft_line_break_or_space(),
token("2,"),
soft_line_break_or_space(),
group(&format_args!(
token("A,"),
format_args![
group(&format_args![
token("Start"),
soft_line_break(),
group(&soft_block_indent(&format_args![
token("1,"),
soft_line_break_or_space(),
token("B")
)),
token("2,"),
soft_line_break_or_space(),
token("3"),
])),
soft_line_break_or_space(),
token("3")
]),
soft_line_break_or_space(),
token("End")
])
.should_expand(true)],
soft_block_indent(&format_args![
token("1,"),
soft_line_break_or_space(),
token("2,"),
soft_line_break_or_space(),
group(&format_args!(
token("A,"),
soft_line_break_or_space(),
token("B")
)),
soft_line_break_or_space(),
token("3")
]),
soft_line_break_or_space(),
token("End")
])
.should_expand(true)
],
format_args!(token("Most"), hard_line_break(), token("Expanded"))
]
]
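
The `best_fitting!` change above also drops the doubled `{{ … }}` braces around the transcriber; since the macro expands to a single expression, the extra block added nothing. Toy macros (not ruff_formatter's) showing that both forms expand identically:

```rust
// With the extra block, as in the old transcriber.
macro_rules! sum_of_squares_block {
    ($first:expr, $($rest:expr),+ $(,)?) => {{
        $first * $first $(+ $rest * $rest)+
    }};
}

// Without it, as in the new transcriber; the expansion is the same expression.
macro_rules! sum_of_squares {
    ($first:expr, $($rest:expr),+ $(,)?) => {
        $first * $first $(+ $rest * $rest)+
    };
}

fn main() {
    assert_eq!(sum_of_squares_block!(1, 2, 3), 14);
    assert_eq!(sum_of_squares!(1, 2, 3), 14);
    println!("ok");
}
```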


@@ -7,6 +7,6 @@ pub use crate::formatter::Formatter;
pub use crate::printer::PrinterOptions;
pub use crate::{
best_fitting, dbg_write, format, format_args, write, Buffer as _, BufferExtensions, Format,
Format as _, FormatResult, FormatRule, FormatWithRule as _, SimpleFormatContext,
Buffer as _, BufferExtensions, Format, Format as _, FormatResult, FormatRule,
FormatWithRule as _, SimpleFormatContext, best_fitting, dbg_write, format, format_args, write,
};


@@ -1,5 +1,5 @@
use crate::format_element::tag::TagKind;
use crate::format_element::PrintMode;
use crate::format_element::tag::TagKind;
use crate::printer::stack::{Stack, StackedStack};
use crate::printer::{Indentation, MeasureMode};
use crate::{IndentStyle, InvalidDocumentError, PrintError, PrintResult};


@@ -1,5 +1,5 @@
use crate::printer::call_stack::PrintElementArgs;
use crate::FormatElement;
use crate::printer::call_stack::PrintElementArgs;
/// Stores the queued line suffixes.
#[derive(Debug, Default)]


@@ -10,7 +10,7 @@ use crate::format_element::document::Document;
use crate::format_element::tag::{Condition, GroupMode};
use crate::format_element::{BestFittingMode, BestFittingVariants, LineMode, PrintMode};
use crate::prelude::tag::{DedentMode, Tag, TagKind, VerbatimKind};
use crate::prelude::{tag, TextWidth};
use crate::prelude::{TextWidth, tag};
use crate::printer::call_stack::{
CallStack, FitsCallStack, PrintCallStack, PrintElementArgs, StackFrame,
};
@@ -1199,7 +1199,7 @@ impl<'a, 'print> FitsMeasurer<'a, 'print> {
text_width: *text_width,
},
args,
))
));
}
FormatElement::SourceCodeSlice { slice, text_width } => {
let text = slice.text(self.printer.source_code);
@@ -1597,11 +1597,7 @@ enum Fits {
impl From<bool> for Fits {
fn from(value: bool) -> Self {
if value {
Fits::Yes
} else {
Fits::No
}
if value { Fits::Yes } else { Fits::No }
}
}
@@ -1662,8 +1658,8 @@ mod tests {
use crate::printer::{LineEnding, Printer, PrinterOptions};
use crate::source_code::SourceCode;
use crate::{
format_args, write, Document, FormatState, IndentStyle, IndentWidth, LineWidth, Printed,
VecBuffer,
Document, FormatState, IndentStyle, IndentWidth, LineWidth, Printed, VecBuffer,
format_args, write,
};
fn format(root: &dyn Format<SimpleFormatContext>) -> Printed {
@@ -1985,10 +1981,21 @@ two lines`,
token("]")
]),
token(";"),
line_suffix(&format_args![space(), token("// Using reserved width causes this content to not fit even though it's a line suffix element")], 93)
line_suffix(
&format_args![
space(),
token(
"// Using reserved width causes this content to not fit even though it's a line suffix element"
)
],
93
)
]);
assert_eq!(printed.as_code(), "[\n 1, 2, 3\n]; // Using reserved width causes this content to not fit even though it's a line suffix element");
assert_eq!(
printed.as_code(),
"[\n 1, 2, 3\n]; // Using reserved width causes this content to not fit even though it's a line suffix element"
);
}
#[test]
@@ -2002,7 +2009,7 @@ two lines`,
token("The referenced group breaks."),
hard_line_break()
])
.with_group_id(Some(group_id)),
.with_id(Some(group_id)),
group(&format_args![
token("This group breaks because:"),
soft_line_break_or_space(),
@@ -2015,7 +2022,10 @@ two lines`,
let printed = format(&content);
assert_eq!(printed.as_code(), "The referenced group breaks.\nThis group breaks because:\nIt measures with the 'if_group_breaks' variant because the referenced group breaks and that's just way too much text.");
assert_eq!(
printed.as_code(),
"The referenced group breaks.\nThis group breaks because:\nIt measures with the 'if_group_breaks' variant because the referenced group breaks and that's just way too much text."
);
}
#[test]
@@ -2027,7 +2037,7 @@ two lines`,
write!(
f,
[
group(&token("Group with id-2")).with_group_id(Some(id_2)),
group(&token("Group with id-2")).with_id(Some(id_2)),
hard_line_break()
]
)?;
@@ -2035,7 +2045,7 @@ two lines`,
write!(
f,
[
group(&token("Group with id-1 does not fit on the line because it exceeds the line width of 80 characters by")).with_group_id(Some(id_1)),
group(&token("Group with id-1 does not fit on the line because it exceeds the line width of 80 characters by")).with_id(Some(id_1)),
hard_line_break()
]
)?;


@@ -325,10 +325,10 @@ impl FitsEndPredicate for SingleEntryPredicate {
#[cfg(test)]
mod tests {
use crate::FormatElement;
use crate::format_element::LineMode;
use crate::prelude::Tag;
use crate::printer::queue::{PrintQueue, Queue};
use crate::FormatElement;
#[test]
fn extend_back_pop_last() {


@@ -65,7 +65,10 @@ pub struct SourceCodeSlice {
impl SourceCodeSlice {
/// Returns the slice's text.
pub fn text<'a>(&self, code: SourceCode<'a>) -> &'a str {
assert!(usize::from(self.range.end()) <= code.text.len(), "The range of this slice is out of bounds. Did you provide the correct source code for this slice?");
assert!(
usize::from(self.range.end()) <= code.text.len(),
"The range of this slice is out of bounds. Did you provide the correct source code for this slice?"
);
&code.text[self.range]
}
}


@@ -1,5 +1,5 @@
use ruff_python_ast::visitor::source_order::{
walk_expr, walk_module, walk_stmt, SourceOrderVisitor,
SourceOrderVisitor, walk_expr, walk_module, walk_stmt,
};
use ruff_python_ast::{self as ast, Expr, Mod, Stmt};
use ty_python_semantic::ModuleName;


@@ -9,8 +9,8 @@ use ruff_db::{Db as SourceDb, Upcast};
use ruff_python_ast::PythonVersion;
use ty_python_semantic::lint::{LintRegistry, RuleSelection};
use ty_python_semantic::{
default_lint_registry, Db, Program, ProgramSettings, PythonPath, PythonPlatform,
SearchPathSettings,
Db, Program, ProgramSettings, PythonPath, PythonPlatform, SearchPathSettings,
default_lint_registry,
};
static EMPTY_VENDORED: std::sync::LazyLock<VendoredFileSystem> = std::sync::LazyLock::new(|| {
@@ -88,8 +88,8 @@ impl Db for ModuleDb {
!file.path(self).is_vendored_path()
}
fn rule_selection(&self) -> Arc<RuleSelection> {
self.rule_selection.clone()
fn rule_selection(&self) -> &RuleSelection {
&self.rule_selection
}
fn lint_registry(&self) -> &LintRegistry {

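In the hunk above, `rule_selection` now hands out `&RuleSelection` instead of cloning an `Arc` on every call. A standalone sketch of the shape of that change with toy types (not ty's `Db` trait) follows; note that `&self.rule_selection` works even if the field stays an `Arc`, via deref coercion.

```rust
use std::sync::Arc;

// Toy stand-ins for illustration only.
struct RuleSelection {
    enabled: Vec<&'static str>,
}

struct ModuleDb {
    rule_selection: Arc<RuleSelection>,
}

impl ModuleDb {
    // Before: `fn rule_selection(&self) -> Arc<RuleSelection>` cloned the Arc.
    // After: hand out a borrow; no refcount traffic on each call.
    fn rule_selection(&self) -> &RuleSelection {
        &self.rule_selection
    }
}

fn main() {
    let db = ModuleDb {
        rule_selection: Arc::new(RuleSelection {
            enabled: vec!["unresolved-import"],
        }),
    };
    println!("{:?}", db.rule_selection().enabled);
}
```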

@@ -4,7 +4,7 @@ use anyhow::Result;
use ruff_db::system::{SystemPath, SystemPathBuf};
use ruff_python_ast::helpers::to_module_path;
use ruff_python_parser::{parse, Mode, ParseOptions};
use ruff_python_parser::{Mode, ParseOptions, parse};
use crate::collector::Collector;
pub use crate::db::ModuleDb;


@@ -1,8 +1,8 @@
use ruff_db::files::FilePath;
use ty_python_semantic::resolve_module;
use crate::collector::CollectedImport;
use crate::ModuleDb;
use crate::collector::CollectedImport;
/// Collect all imports for a given Python file.
pub(crate) struct Resolver<'a> {

Some files were not shown because too many files have changed in this diff.