Compare commits

...

106 Commits

Author SHA1 Message Date
Dylan
c7ff9826d6 Bump 0.14.4 (#21306) 2025-11-06 15:47:29 -06:00
Dhruv Manilawala
35640dd853 Fix main by using infer_expression (#21299) 2025-11-06 20:10:43 +00:00
Dhruv Manilawala
cb2e277482 [ty] Understand legacy and PEP 695 ParamSpec (#21139)
## Summary

This PR adds support for understanding the legacy definition and PEP 695
definition for `ParamSpec`.

This is still very initial and doesn't really implement any of the
semantics.

Part of https://github.com/astral-sh/ty/issues/157
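
For reference, here is a minimal sketch of the two spellings this PR teaches ty to recognize (function names are illustrative):

```py
from typing import Callable, ParamSpec

# Legacy definition
P = ParamSpec("P")

def call_legacy(f: Callable[P, int], *args: P.args, **kwargs: P.kwargs) -> int:
    return f(*args, **kwargs)

# PEP 695 definition (Python 3.12+)
def call_pep695[**P](f: Callable[P, int], *args: P.args, **kwargs: P.kwargs) -> int:
    return f(*args, **kwargs)
```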

## Test Plan

Add mdtest cases.

## Ecosystem analysis

Most of the diagnostics in `starlette` arise because ty now understands
`ParamSpec` rather than treating it as a `Todo` type, so the
assignability check fails. The code looks something like:

```py
class _MiddlewareFactory(Protocol[P]):
    def __call__(self, app: ASGIApp, /, *args: P.args, **kwargs: P.kwargs) -> ASGIApp: ...  # pragma: no cover

class Middleware:
    def __init__(
        self,
        cls: _MiddlewareFactory[P],
        *args: P.args,
        **kwargs: P.kwargs,
    ) -> None:
        self.cls = cls
        self.args = args
        self.kwargs = kwargs

# ty complains that `ServerErrorMiddleware` is not assignable to `_MiddlewareFactory[P]`
Middleware(ServerErrorMiddleware, handler=error_handler, debug=debug)
```

There are multiple diagnostics where an attribute is accessed on the
`_Wrapped` object from `functools`, which Pyright also flags:
```py
from functools import wraps

def my_decorator(f):
    @wraps(f)
    def wrapper(*args, **kwds):
        return f(*args, **kwds)

    # Pyright: Cannot access attribute "__signature__" for class "_Wrapped[..., Unknown, ..., Unknown]"
    #          Attribute "__signature__" is unknown [reportAttributeAccessIssue]
    # ty: Object of type `_Wrapped[Unknown, Unknown, Unknown, Unknown]` has no attribute `__signature__` [unresolved-attribute]
    wrapper.__signature__
    return wrapper
```

There are additional diagnostics due to assignability checks failing
because ty now infers the `ParamSpec` instead of using the `Todo` type,
which would always succeed. This results in a few
`no-matching-overload` diagnostics.

There are a few diagnostics related to
https://github.com/astral-sh/ty/issues/491, where a variable is either a
bound method or is annotated with a `Callable` that doesn't include the
instance as the first parameter.

Another set of (valid) diagnostics occurs where the code hasn't provided
all the type variables. ty now raises diagnostics for these because we
include the `ParamSpec` type variable in the signature; for example,
`staticmethod[Any]`, which contains two type variables.
2025-11-06 11:14:40 -05:00
Zanie Blue
132d10fb6f [ty] Discover site-packages from the environment that ty is installed in (#21286)

## Summary

Closes https://github.com/astral-sh/ty/issues/989

There are various situations where users expect the Python packages
installed in the same environment as ty itself to be considered during
type checking. A minimal example would look like:

```
uv venv my-env
uv pip install my-env ty httpx
echo "import httpx" > foo.py
./my-env/bin/ty check foo.py
```

or

```
uv tool install ty --with httpx
echo "import httpx" > foo.py
ty check foo.py
```

While these are a bit contrived, there are real-world situations where a
user would expect a similar behavior to work. Notably, all of the other
type checkers consider their own environment when determining search
paths (though I'll admit that I have not verified when they choose not
to do this).

One common situation where users are encountering this today is with
`uvx --with-requirements script.py ty check script.py` — which is
currently our "best" recommendation for type checking a PEP 723 script,
but it doesn't work.

Of the options discussed in
https://github.com/astral-sh/ty/issues/989#issuecomment-3307417985, I've
chosen (2) as our criterion for including ty's environment in the search
paths.

- If no virtual environment is discovered, we will always include ty's
environment.
- If a `.venv` is discovered in the working directory, we will _prepend_
ty's environment to the search paths. The dependencies in ty's
environment (e.g., from `uvx --with`) will take precedence.
- If a virtual environment is active, i.e., `VIRTUAL_ENV` is set
(including conda prefixes), we will not include ty's environment.

The reason we need to special-case `.venv` is that we both:

1. Recommend `uvx ty` today as a way to check your project
2. Want to enable `uvx --with <...> ty`

And I don't want (2) to break when you _happen_ to be in a project
(i.e., if we only included ty's environment when _no_ environment is
found) and don't want to remove support for (1).

I think long-term, I want to make `uvx <cmd>` layer the environment on
_top_ of the project environment (in uv), which would obviate the need
for this change when you're using uv. However, that change is breaking
and I think users will expect this behavior in contexts where they're
not using uv, so I think we should handle it in ty regardless.

I've opted not to include the environment if it's non-virtual (i.e., a
system environment) for now. It seems better to start by being more
restrictive. I left a comment in the code.
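
As a rough sketch of the criteria above (the actual implementation is in Rust; the function and variable names below are hypothetical):

```py
import os
from pathlib import Path

def ty_environment_search_paths(ty_env: Path | None, project_venv: Path | None) -> list[Path]:
    """Hypothetical sketch of when ty's own environment contributes to the search paths."""
    if ty_env is None:
        return []
    if os.environ.get("VIRTUAL_ENV") or os.environ.get("CONDA_PREFIX"):
        # An explicitly activated environment wins; ty's environment is not included.
        return []
    if project_venv is not None:
        # A discovered `.venv`: ty's environment is *prepended*, so dependencies
        # from e.g. `uvx --with` take precedence over the project environment.
        return [ty_env, project_venv]
    # No environment discovered at all: fall back to ty's own environment.
    return [ty_env]
```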

## Test Plan

I did some manual testing with the initial commit, then subsequently
added some unit tests.

```
❯ echo "import httpx" > example.py
❯ uvx --with httpx ty check example.py
Installed 8 packages in 19ms
error[unresolved-import]: Cannot resolve imported module `httpx`
 --> foo/example.py:1:8
  |
1 | import httpx
  |        ^^^^^
  |
info: Searched in the following paths during module resolution:
info:   1. /Users/zb/workspace/ty/python (first-party code)
info:   2. /Users/zb/workspace/ty (first-party code)
info:   3. vendored://stdlib (stdlib typeshed stubs vendored by ty)
info: make sure your Python environment is properly configured: https://docs.astral.sh/ty/modules/#python-environment
info: rule `unresolved-import` is enabled by default

Found 1 diagnostic
❯ uvx --from . --with httpx ty check example.py
All checks passed!
```

```
❯ uv init --script foo.py
Initialized script at `foo.py`
❯ uv add --script foo.py httpx
warning: The Python request from `.python-version` resolved to Python 3.13.8, which is incompatible with the script's Python requirement: `>=3.14`
Updated `foo.py`
❯ echo "import httpx" >> foo.py
❯ uvx --with-requirements foo.py ty check foo.py
error[unresolved-import]: Cannot resolve imported module `httpx`
  --> foo.py:15:8
   |
13 | if __name__ == "__main__":
14 |     main()
15 | import httpx
   |        ^^^^^
   |
info: Searched in the following paths during module resolution:
info:   1. /Users/zb/workspace/ty/python (first-party code)
info:   2. /Users/zb/workspace/ty (first-party code)
info:   3. vendored://stdlib (stdlib typeshed stubs vendored by ty)
info: make sure your Python environment is properly configured: https://docs.astral.sh/ty/modules/#python-environment
info: rule `unresolved-import` is enabled by default

Found 1 diagnostic
❯ uvx --from . --with-requirements foo.py ty check foo.py
All checks passed!
```

Notice that we do not include ty's environment if `VIRTUAL_ENV` is set:

```
❯ VIRTUAL_ENV=.venv uvx --with httpx ty check foo/example.py
error[unresolved-import]: Cannot resolve imported module `httpx`
 --> foo/example.py:1:8
  |
1 | import httpx
  |        ^^^^^
  |
info: Searched in the following paths during module resolution:
info:   1. /Users/zb/workspace/ty/python (first-party code)
info:   2. /Users/zb/workspace/ty (first-party code)
info:   3. vendored://stdlib (stdlib typeshed stubs vendored by ty)
info:   4. /Users/zb/workspace/ty/.venv/lib/python3.13/site-packages (site-packages)
info: make sure your Python environment is properly configured: https://docs.astral.sh/ty/modules/#python-environment
info: rule `unresolved-import` is enabled by default

Found 1 diagnostic
```
2025-11-06 09:27:49 -05:00
Alex Waygood
f189aad6d2 [ty] Make special cases for UnionType slightly narrower (#21276)
Fixes https://github.com/astral-sh/ty/issues/1478
2025-11-06 09:00:43 -05:00
Ben Beasley
5517c9943a Require ignore 0.4.24 in Cargo.toml (#21292)

## Summary

Since 4c4ddc8c29, ruff uses the `WalkBuilder::current_dir` API
[introduced in `ignore` version
0.4.24](https://diff.rs/ignore/0.4.23/0.4.24/src%2Fwalk.rs), so it
should explicitly depend on this minimum version.

See also https://github.com/astral-sh/ruff/pull/20979.

## Test Plan

Source inspection verifies this version is necessary; no additional
testing is required since `Cargo.lock` already has (at least) this
version.
2025-11-06 07:43:32 -05:00
Matthew Mckee
b5ff96595d [ty] Favour imported symbols over builtin symbols (#21285)

## Summary

Raised by @AlexWaygood.

We previously did not favour imported symbols when we probably should
have.

## Test Plan

Add a test showing that we favour an imported symbol even if it comes
alphabetically after other symbols that are builtins.
2025-11-06 06:46:08 -05:00
Loïc Riegel
c6573b16ac docs: revise Ruff setup instructions for Zed editor (#20935)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-11-06 01:11:29 +00:00
Micha Reiser
76127e5fb5 [ty] Update salsa (#21281) 2025-11-05 22:15:01 +00:00
Bhuminjay Soni
cddc0fedc2 [syntax-error]: no binding for nonlocal PLE0117 as a semantic syntax error (#21032)

## Summary

This PR ports PLE0117 as a semantic syntax error.

## Test Plan

Tests previously written

---------

Signed-off-by: 11happy <soni5happy@gmail.com>
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2025-11-05 19:13:28 +00:00
Douglas Creager
eda85f3c64 [ty] Constraining a typevar with itself (possibly via union or intersection) (#21273)
This PR carries over some of the `has_relation_to` logic for comparing a
typevar with itself. A typevar will specialize to the same type if it's
mentioned multiple times, so it is always assignable to and a subtype of
itself. (Note that typevars can only specialize to fully static types.)
This is also true when the typevar appears in a union on the right-hand
side, or in an intersection on the left-hand side. Similarly, a typevar
is always disjoint from its negation, so when a negated typevar appears
on the left-hand side, the constraint set is never satisfiable.

(Eventually this will allow us to remove the corresponding clauses from
`has_relation_to`, but that can't happen until more of #20093 lands.)
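
A small illustration of the first two relationships using plain annotations (intersections and typevar negations have no surface syntax, so they are omitted; PEP 695 syntax used for brevity):

```py
def identity[T](x: T) -> T:
    # `T` is always assignable to (and a subtype of) itself...
    y: T = x
    # ...and to a union that mentions it on the right-hand side.
    z: T | int = x
    return y
```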
2025-11-05 12:31:53 -05:00
chiri
cef6600cf3 [ruff] Fix false positives on starred arguments (RUF057) (#21256)
## Summary

Fixes https://github.com/astral-sh/ruff/issues/21209

## Test Plan

`cargo nextest run ruf057`
2025-11-05 12:07:33 -05:00
Ibraheem Ahmed
5c69e00d1c [ty] Simplify unions containing multiple type variables during inference (#21275)
## Summary

Splitting this one out from https://github.com/astral-sh/ruff/pull/21210. This is also something that should be made obsolete by the new constraint solver, but it is easy enough to fix now.
2025-11-05 15:03:19 +00:00
Micha Reiser
7569b09bdd [ty] Add ty_server::Db trait (#21241) 2025-11-05 13:40:07 +00:00
Micha Reiser
7009d60260 [ty] Refactor Range to/from TextRange conversion as prep for notebook support (#21230) 2025-11-05 13:24:03 +00:00
David Peter
f79044478c [ty] Fix playground crash when file name includes path separator (#21151) 2025-11-04 18:36:36 +01:00
Dan Parizher
47e41ac6b6 [refurb] Fix false negative for underscores before sign in Decimal constructor (FURB157) (#21190)
## Summary

Fixes a FURB157 false negative where `Decimal("_-1")` was not flagged as
verbose when underscores precede the sign character. Closes #21186.

## Problem Analysis

The `verbose-decimal-constructor` (FURB157) rule failed to detect
verbose `Decimal` constructors when the sign character (`+` or `-`) was
preceded by underscores. For example, `Decimal("_-1")` was not flagged,
even though it can be simplified to `Decimal(-1)`.

The bug occurred because the rule checked for the sign character at the
start of the string before stripping leading underscores. According to
Python's `Decimal` parser behavior (as documented in CPython's
`_pydecimal.py`), underscores are removed before parsing the sign. The
rule's logic didn't match this behavior, causing a false negative for
cases like `"_-1"` where the underscore came before the sign.

This was a regression introduced in version 0.14.3, as these cases were
correctly flagged in version 0.14.2.

## Approach

The fix updates the sign extraction logic to:
1. Strip leading underscores first (matching Python's Decimal parser
behavior)
2. Extract the sign from the underscore-stripped string
3. Preserve the string after the sign for normalization purposes

This ensures that cases like `Decimal("_-1")`, `Decimal("_+1")`, and
`Decimal("_-1_000")` are correctly detected and flagged. The
normalization logic was also updated to use the string after the sign
(without underscores) to avoid double signs in the replacement output.
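
For instance, the following cases should now be flagged (replacement values shown as comments; the exact fix output may differ):

```py
from decimal import Decimal

# Underscores are stripped before the sign is parsed, so these are verbose:
Decimal("_-1")      # -> Decimal(-1)
Decimal("_+1")      # -> Decimal(1)
Decimal("_-1_000")  # -> Decimal(-1000)
```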
2025-11-04 11:02:50 -05:00
David Peter
2e7ab00d51 [ty] Allow values of type None in type expressions (#21263)
## Summary

Allow values of type `None` in type expressions. The [typing
spec](https://typing.python.org/en/latest/spec/annotations.html#type-and-annotation-expressions)
could be more explicit on whether this is actually allowed or not, but
it seems relatively harmless and does help in some use cases like:

```py
try:
    from module import MyClass
except ImportError:
    MyClass = None  # ty: ignore


def f(m: MyClass):
    pass
``` 

## Test Plan

Updated tests, ecosystem check.
2025-11-04 16:29:55 +01:00
Ibraheem Ahmed
d8106d38a0 Run codspeed benchmarks with profiling profile (#21261)
## Summary

This reduces the walltime benchmarks from 15m to 10m, and we should see an even bigger improvement once build caching kicks in, so I think it's worth the downsides.
2025-11-04 09:59:40 -05:00
Micha Reiser
4fd8d4b0ee [ty] Update expected diagnostic count in benchmarks (#21269) 2025-11-04 03:18:12 +00:00
Brent Westbrook
63b1c1ea8b Avoid extra parentheses for long match patterns with as captures (#21176)
Summary
--

This PR fixes #17796 by taking the approach mentioned in
https://github.com/astral-sh/ruff/issues/17796#issuecomment-2847943862
of simply recursing into the `MatchAs` patterns when checking if we need
parentheses. This allows us to reuse the parentheses in the inner
pattern before also breaking the `MatchAs` pattern itself:

```diff
 match class_pattern:
     case Class(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) as capture:
         pass
-    case (
-        Class(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) as capture
-    ):
+    case Class(
+        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+    ) as capture:
         pass
-    case (
-        Class(
-            xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-        ) as capture
-    ):
+    case Class(
+        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+    ) as capture:
         pass
     case (
         Class(
@@ -685,13 +683,11 @@
 match sequence_pattern_brackets:
     case [xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] as capture:
         pass
-    case (
-        [xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] as capture
-    ):
+    case [
+        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+    ] as capture:
         pass
-    case (
-        [
-            xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-        ] as capture
-    ):
+    case [
+        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+    ] as capture:
         pass
```

I haven't really resolved the question of whether it's always okay to
recurse, but I'm hoping the ecosystem check on this PR might shed some
light on that.

Test Plan
--

New tests based on the issue, plus a review of the ecosystem check here.
2025-11-03 17:06:52 -05:00
Micha Reiser
3c5e4e1477 [ty] Update salsa (#21265) 2025-11-03 22:00:30 +00:00
Ibraheem Ahmed
3c8fb68765 [ty] dict is not assignable to TypedDict (#21238)
## Summary

A lot of the bidirectional inference work relies on `dict` not being
assignable to `TypedDict`, so I think it makes sense to add this before
fully implementing https://github.com/astral-sh/ty/issues/1387.
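
A minimal sketch of the rule (illustrative names; the exact diagnostic wording may differ):

```py
from typing import TypedDict

class Movie(TypedDict):
    title: str
    year: int

def f(d: dict[str, object]) -> None:
    m: Movie = d  # error: a plain `dict` is not assignable to a TypedDict
```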
2025-11-03 16:57:49 -05:00
Alex Waygood
42adfd40ea Run py-fuzzer with --profile=profiling locally and in CI (#21266) 2025-11-03 16:53:42 -05:00
chiri
79a02711c1 [refurb] Expand fix safety for keyword arguments and Decimals (FURB164) (#21259)
## Summary

Fixes https://github.com/astral-sh/ruff/issues/21257

## Test Plan

`cargo nextest run furb164`
2025-11-03 16:09:02 -05:00
David Peter
d2fe6347fb [ty] Rename UnionType to types.UnionType (#21262) 2025-11-03 22:06:56 +01:00
David Peter
1fe958c694 [ty] Implicit type aliases: Support for PEP 604 unions (#21195)
## Summary

Add support for implicit type aliases that use PEP 604 unions:
```py
IntOrStr = int | str

reveal_type(IntOrStr)  # UnionType

def _(int_or_str: IntOrStr):
    reveal_type(int_or_str)  # int | str
```

## Typing conformance

The changes are either removed false positives, or new diagnostics due
to known limitations unrelated to this PR.

## Ecosystem impact

Spot checked, a mix of true positives and known limitations.

## Test Plan

New Markdown tests.
2025-11-03 21:50:25 +01:00
Carl Meyer
fe4ee81b97 [ty] prefer submodule over module __getattr__ in from-imports (#21260)
Fixes https://github.com/astral-sh/ty/issues/1053

## Summary

Other type checkers prioritize a submodule over a package `__getattr__`
in `from mod import sub`, even though the runtime precedence is the
other direction. In effect, this makes the implicit assumption that a
module `__getattr__` will not handle names that are also actual
submodules (that is, it will raise `AttributeError` for them) rather
than shadowing them. In practice this seems like a realistic assumption
in the ecosystem, or at least the ecosystem has adapted to it, and we
need to adopt this precedence as well for ecosystem compatibility.
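
A sketch of the pattern in question (hypothetical module names):

```py
# mod/__init__.py
def __getattr__(name: str):
    # Dynamic fallback for attribute access on the package.
    ...

# mod/sub.py also exists as a real submodule on disk.

# client code
from mod import sub  # resolved to the submodule `mod.sub`, not to `__getattr__`,
                     # matching other type checkers (runtime precedence is reversed)
```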

The implementation is a bit ugly, precisely because it departs from the
runtime semantics, and our implementation is oriented toward modeling
runtime semantics accurately. That is, `__getattr__` is modeled within
the member-lookup code, so it's hard to split "member lookup result from
module `__getattr__`" apart from other member lookup results. I did this
via a synthetic `TypeQualifier::FROM_MODULE_GETATTR` that we attach to a
type resulting from a member lookup, which isn't beautiful but it works
well and doesn't introduce inefficiency (e.g. redundant member lookups).

## Test Plan

Updated mdtests.

Also added a related mdtest formalizing our support for a module
`__getattr__` that is explicitly annotated to accept a limited set of
names. In principle this could be an alternative (more explicit) way to
handle the precedence problem without departing from runtime semantics,
if the ecosystem would adopt it.

### Ecosystem analysis

Lots of removed diagnostics which are an improvement because we now
infer the expected submodule.

Added diagnostics are mostly unrelated issues surfaced now because we
previously had an earlier attribute error resulting in `Unknown`; now we
correctly resolve the module so that earlier attribute error goes away,
we get an actual type instead of `Unknown`, and that triggers a new
error.

In scipy and sklearn, the module `__getattr__` that we were previously
respecting is un-annotated, so it returned a forgiving `Unknown`; now we
correctly see the actual module, which reveals some cases of
https://github.com/astral-sh/ty/issues/133 that were previously hidden
(`scipy/optimize/__init__.py` [imports `from
._tnc`](eff82ca575/scipy/optimize/__init__.py (L429))).

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2025-11-03 15:24:01 -05:00
Wei Lee
0433526897 [airflow] extend deprecated argument concurrency in airflow..DAG (AIR301) (#21220)

## Summary

* extend AIR301 to include deprecated argument `concurrency` in
`airflow....DAG`
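
A hedged example of the kind of code this extension would flag (the suggested replacement `max_active_tasks` is an assumption based on Airflow 2.x's rename, not taken from this PR):

```py
from airflow import DAG

# Flagged by AIR301 after this change: `concurrency` is deprecated on `DAG`.
dag = DAG(dag_id="example", concurrency=4)

# Presumed replacement (assumption): the renamed parameter.
dag = DAG(dag_id="example", max_active_tasks=4)
```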

## Test Plan

update the existing test fixture in the first commit and then reorganize
in the second one
2025-11-03 15:20:20 -05:00
Tom Kuson
78ee7ae925 [flake8-comprehensions] Fix typo in C416 documentation (#21184)
## Summary

Adds missing curly brace to the C416 documentation.

## Test Plan

Build the docs
2025-11-03 14:04:59 -05:00
renovate[bot]
21ec8aa7d4 Update Rust crate unicode-ident to v1.0.22 (#21228)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 16:10:49 +00:00
renovate[bot]
64a255df49 Update to Unicode 17 for line-width calculations (#21231)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 15:55:50 +00:00
Shunsuke Shibayama
b5305b5f32 [ty] Fix panic due to simplifying Divergent types out of intersections types (#21253) 2025-11-03 15:41:11 +00:00
Alex Waygood
39f105bc4a [ty] Use "cannot" consistently over "can not" (#21255) 2025-11-03 10:38:20 -05:00
Micha Reiser
e8c35b9704 [ty] Simplify semantic token tests (#21206) 2025-11-03 15:35:42 +00:00
Micha Reiser
1b2ed6a503 [ty] Fix caching of imported modules in playground (#21251) 2025-11-03 15:00:30 +00:00
Micha Reiser
de9df1b326 Revert "Update CodSpeedHQ/action action to v4.3.1" (#21252) 2025-11-03 14:53:04 +00:00
Matthew Mckee
e2e83acd2f [ty] Remove mentions of VS Code from server logs (#21155)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-11-03 14:49:58 +00:00
Dan Parizher
6ddfb51d71 [flake8-bugbear] Mark fix as unsafe for non-NFKC attribute names (B009, B010) (#21131) 2025-11-03 14:45:23 +00:00
Matthew Mckee
e017b039df [ty] Favor in scope completions (#21194)

## Summary

Resolves https://github.com/astral-sh/ty/issues/1464

We sort the completions before we add the unimported ones, meaning that
imported completions show up before unimported ones.

This is also discussed in
https://github.com/astral-sh/ty/issues/1274, and this is probably a
duplicate of that issue.

@AlexWaygood mentions this
[here](https://github.com/astral-sh/ty/issues/1274#issuecomment-3345942698)
too.

## Test Plan

Add a test showing that even if an unimported completion "should" come
first alphabetically, we favor the imported one.
2025-11-03 09:33:05 -05:00
Brent Westbrook
0dfd55babf Delete unused AsciiCharSet in FURB156 (#21181)
Summary
--

This code has been unused since #14233 but apparently went undetected by
clippy. This should help remove the temptation to use the set comparison
again, as I suggested in #21144. And we shouldn't do the set comparison
because of #13802, which #14233 fixed.

Test Plan
--

Existing tests
2025-11-03 08:38:34 -05:00
renovate[bot]
f947c23cd7 Update Rust crate tempfile to v3.23.0 (#21248)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 03:41:20 +00:00
renovate[bot]
02879fa377 Update actions/setup-node action to v6 (#21249)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 03:34:13 +00:00
renovate[bot]
7dbfb56c3d Update astral-sh/setup-uv action to v7 (#21250)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 03:33:16 +00:00
renovate[bot]
f97c38dd88 Update Rust crate matchit to 0.9.0 (#21245)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 03:32:49 +00:00
renovate[bot]
14fce1f788 Update Rust crate regex to v1.12.2 (#21246)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 03:32:04 +00:00
renovate[bot]
31194d048d Update Rust crate serde_with to v3.15.1 (#21247)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 03:31:24 +00:00
renovate[bot]
fe95ff6b06 Update dependency pyodide to ^0.29.0 (#21236)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 03:23:24 +00:00
renovate[bot]
61c1007137 Update Rust crate bitflags to v2.10.0 (#21242)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 03:21:41 +00:00
renovate[bot]
884c3b178e Update Rust crate indexmap to v2.12.0 (#21244)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 03:21:03 +00:00
renovate[bot]
ade727ce66 Update Rust crate csv to v1.4.0 (#21243)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 03:20:42 +00:00
renovate[bot]
3493c9b67a Update dependency tomli to v2.3.0 (#21240)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 03:17:38 +00:00
renovate[bot]
666dd5fef1 Update dependency ruff to v0.14.3 (#21239)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 03:13:38 +00:00
renovate[bot]
bb05527350 Update dependency monaco-editor to ^0.54.0 (#21235)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 03:06:16 +00:00
renovate[bot]
dc373e639e Update CodSpeedHQ/action action to v4.3.1 (#21234)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 03:04:20 +00:00
renovate[bot]
fc71c90de6 Update taiki-e/install-action action to v2.62.45 (#21233)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 03:02:16 +00:00
renovate[bot]
c11b00bea0 Update Rust crate wasm-bindgen-test to v0.3.55 (#21232)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 02:57:13 +00:00
renovate[bot]
770b4d12ab Update Rust crate tikv-jemallocator to v0.6.1 (#21226)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 02:35:50 +00:00
renovate[bot]
c596a78c08 Update Rust crate schemars to v1.0.5 (#21222)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 02:27:40 +00:00
renovate[bot]
cb98175a36 Update Rust crate syn to v2.0.108 (#21224)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 02:22:04 +00:00
renovate[bot]
3bef60f69a Update Rust crate toml to v0.9.8 (#21227)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 02:21:47 +00:00
renovate[bot]
f477e11d26 Update Rust crate thiserror to v2.0.17 (#21225)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 02:21:21 +00:00
renovate[bot]
c0bd092fa9 Update Rust crate snapbox to v0.6.23 (#21223)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 02:20:19 +00:00
renovate[bot]
73b9b8eb6b Update Rust crate proc-macro2 to v1.0.103 (#21221)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 02:15:56 +00:00
renovate[bot]
80eeb1d64f Update Rust crate clap to v4.5.51 (#21214)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 01:50:12 +00:00
renovate[bot]
50b75cfcc6 Update cargo-bins/cargo-binstall action to v1.15.10 (#21212)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 01:49:44 +00:00
renovate[bot]
1c4a9d6a06 Update Rust crate globset to v0.4.18 (#21216)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [globset](https://redirect.github.com/BurntSushi/ripgrep/tree/master/crates/globset) ([source](https://redirect.github.com/BurntSushi/ripgrep/tree/HEAD/crates/globset)) | workspace.dependencies | patch | `0.4.17` -> `0.4.18` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

---

### Release Notes

<details>
<summary>BurntSushi/ripgrep (globset)</summary>

###
[`v0.4.18`](https://redirect.github.com/BurntSushi/ripgrep/compare/globset-0.4.17...globset-0.4.18)

[Compare
Source](https://redirect.github.com/BurntSushi/ripgrep/compare/globset-0.4.17...globset-0.4.18)

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 02:48:22 +01:00
renovate[bot]
222c6fd496 Update Rust crate ctrlc to v3.5.1 (#21215)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 01:48:02 +00:00
renovate[bot]
b754abff1b Update Rust crate aho-corasick to v1.1.4 (#21213)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [aho-corasick](https://redirect.github.com/BurntSushi/aho-corasick) | workspace.dependencies | patch | `1.1.3` -> `1.1.4` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

---

### Release Notes

<details>
<summary>BurntSushi/aho-corasick (aho-corasick)</summary>

###
[`v1.1.4`](https://redirect.github.com/BurntSushi/aho-corasick/compare/1.1.3...1.1.4)

[Compare
Source](https://redirect.github.com/BurntSushi/aho-corasick/compare/1.1.3...1.1.4)

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 02:47:30 +01:00
renovate[bot]
41fe4d7f8c Update Rust crate ignore to v0.4.25 (#21217)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 01:47:12 +00:00
renovate[bot]
f14631e1cc Update Rust crate indicatif to v0.18.2 (#21218)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 01:46:05 +00:00
renovate[bot]
02f2dba28e Update Rust crate indoc to v2.0.7 (#21219)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-03 01:45:20 +00:00
Carl Meyer
0454a72674 [ty] don't union in default type for annotated parameters (#21208) 2025-11-02 18:21:54 -05:00
Carl Meyer
c32234cf0d [ty] support subscripting typing.Literal with a type alias (#21207)
Fixes https://github.com/astral-sh/ty/issues/1368

## Summary

Add support for patterns like this, where a type alias to a literal type
(or union of literal types) is used to subscript `typing.Literal`:

```py
type MyAlias = Literal[1]
def _(x: Literal[MyAlias]): ...
```

This shows up in the ecosystem report for PEP 613 type alias support.

One interesting case is an alias to `bool` or an enum type. `bool` is an
equivalent type to `Literal[True, False]`, which is a union of literal
types. Similarly an enum type `E` is also equivalent to a union of its
member literal types. Since (for explicit type aliases) we infer the RHS
directly as a type expression, this makes it difficult for us to
distinguish between `bool` and `Literal[True, False]`, so we allow
either one (or an alias to either one) to appear inside `Literal`,
where other type checkers allow only the latter.

I think for implicit type aliases it may be simpler to support only
types derived from actually subscripting `typing.Literal`, though, so I
didn't make a TODO-comment commitment here.

## Test Plan

Added mdtests, including TODO-filled tests for PEP 613 and implicit type
aliases.

### Conformance suite

All changes here are positive -- we now emit errors on lines that should
be errors. This is a side effect of the new implementation, not the
primary purpose of this PR, but it's still a positive change.

### Ecosystem

Eliminates one ecosystem false positive, where a PEP 695 type alias for
a union of literal types is used to subscript `typing.Literal`.
2025-11-02 12:39:55 -05:00
David Peter
566d1d6497 [ty] Update to the latest version of the conformance suite (#21205)
## Summary

There have been some larger-scale updates to the conformance suite since
we introduced our CI job, so it seems sensible to bump the version of
the conformance suite to the latest state.

## Test plan

This is a bit awkward to test. Here is the diff of running ty on the
conformance suite before and after this bump. I filtered out line/column
information (`sed -re 's/\.py:[0-9]+:[0-9]+:/.py/'`) to avoid spurious
changes from content that has simply been moved around.

```diff
1,2c1
< fatal[panic] Panicked at /home/shark/.cargo/git/checkouts/salsa-e6f3bb7c2a062968/cdd0b85/src/function/execute.rs:419:17 when checking `/home/shark/typing/conformance/tests/aliases_typealiastype.py`: `infer_definition_types(Id(1a99c)): execute: too many cycle iterations`
< src/type_checker.py error[unresolved-import] Cannot resolve imported module `tqdm`
---
> fatal[panic] Panicked at /home/shark/.cargo/git/checkouts/salsa-e6f3bb7c2a062968/cdd0b85/src/function/execute.rs:419:17 when checking `/home/shark/typing/conformance/tests/aliases_typealiastype.py`: `infer_definition_types(Id(6e4c)): execute: too many cycle iterations`
205,206d203
< tests/constructors_call_metaclass.py error[type-assertion-failure] Argument does not have asserted type `Never`
< tests/constructors_call_metaclass.py error[missing-argument] No argument provided for required parameter `x` of function `__new__`
268a266,273
> tests/dataclasses_match_args.py error[type-assertion-failure] Argument does not have asserted type `tuple[Literal["x"]]`
> tests/dataclasses_match_args.py error[unresolved-attribute] Class `DC1` has no attribute `__match_args__`
> tests/dataclasses_match_args.py error[type-assertion-failure] Argument does not have asserted type `tuple[Literal["x"]]`
> tests/dataclasses_match_args.py error[unresolved-attribute] Class `DC2` has no attribute `__match_args__`
> tests/dataclasses_match_args.py error[type-assertion-failure] Argument does not have asserted type `tuple[Literal["x"]]`
> tests/dataclasses_match_args.py error[unresolved-attribute] Class `DC3` has no attribute `__match_args__`
> tests/dataclasses_match_args.py error[unresolved-attribute] Class `DC4` has no attribute `__match_args__`
> tests/dataclasses_match_args.py error[type-assertion-failure] Argument does not have asserted type `tuple[()]`
339a345
> tests/directives_assert_type.py error[type-assertion-failure] Argument does not have asserted type `Any`
424a431
> tests/generics_defaults.py error[type-assertion-failure] Argument does not have asserted type `Any`
520a528,529
> tests/generics_syntax_infer_variance.py error[invalid-return-type] Function always implicitly returns `None`, which is not assignable to return type `T@ShouldBeCovariant2 | Sequence[T@ShouldBeCovariant2]`
> tests/generics_syntax_infer_variance.py error[invalid-return-type] Function always implicitly returns `None`, which is not assignable to return type `int`
711a721
> tests/namedtuples_define_class.py error[too-many-positional-arguments] Too many positional arguments: expected 3, got 4
795d804
< tests/protocols_explicit.py error[invalid-attribute-access] Cannot assign to ClassVar `cm1` from an instance of type `Self@__init__`
822,823d830
< tests/qualifiers_annotated.py error[invalid-syntax] named expression cannot be used within a type annotation
< tests/qualifiers_annotated.py error[invalid-syntax] await expression cannot be used within a type annotation
922a930,953
> tests/typeddicts_extra_items.py error[invalid-key] Invalid key for TypedDict `Movie`: Unknown key "novel_adaptation"
> tests/typeddicts_extra_items.py error[invalid-key] Invalid key for TypedDict `Movie`: Unknown key "year"
> tests/typeddicts_extra_items.py error[type-assertion-failure] Argument does not have asserted type `bool`
> tests/typeddicts_extra_items.py error[invalid-key] Invalid key for TypedDict `Movie`: Unknown key "novel_adaptation"
> tests/typeddicts_extra_items.py error[invalid-argument-type] Invalid argument to key "year" with declared type `int` on TypedDict `InheritedMovie`: value of type `None`
> tests/typeddicts_extra_items.py error[invalid-key] Invalid key for TypedDict `InheritedMovie`: Unknown key "other_extra_key"
> tests/typeddicts_extra_items.py error[invalid-key] Invalid key for TypedDict `MovieEI`: Unknown key "year"
> tests/typeddicts_extra_items.py error[invalid-key] Invalid key for TypedDict `MovieExtraInt`: Unknown key "year"
> tests/typeddicts_extra_items.py error[invalid-key] Invalid key for TypedDict `MovieExtraStr`: Unknown key "description"
> tests/typeddicts_extra_items.py error[invalid-key] Invalid key for TypedDict `MovieExtraInt`: Unknown key "year"
> tests/typeddicts_extra_items.py error[invalid-key] Invalid key for TypedDict `NonClosedMovie`: Unknown key "year"
> tests/typeddicts_extra_items.py error[invalid-key] Invalid key for TypedDict `ExtraMovie`: Unknown key "year"
> tests/typeddicts_extra_items.py error[invalid-key] Invalid key for TypedDict `ExtraMovie`: Unknown key "language"
> tests/typeddicts_extra_items.py error[invalid-key] Invalid key for TypedDict `ClosedMovie`: Unknown key "year"
> tests/typeddicts_extra_items.py error[invalid-key] Invalid key for TypedDict `MovieExtraStr`: Unknown key "summary"
> tests/typeddicts_extra_items.py error[invalid-key] Invalid key for TypedDict `MovieExtraInt`: Unknown key "year"
> tests/typeddicts_extra_items.py error[invalid-assignment] Object of type `dict[Unknown | str, Unknown | str | int]` is not assignable to `Mapping[str, int]`
> tests/typeddicts_extra_items.py error[type-assertion-failure] Argument does not have asserted type `list[tuple[str, int | str]]`
> tests/typeddicts_extra_items.py error[type-assertion-failure] Argument does not have asserted type `list[int | str]`
> tests/typeddicts_extra_items.py error[unresolved-attribute] Object of type `IntDict` has no attribute `clear`
> tests/typeddicts_extra_items.py error[invalid-key] Invalid key for TypedDict `IntDictWithNum`: Unknown key "bar" - did you mean "num"?
> tests/typeddicts_extra_items.py error[type-assertion-failure] Argument does not have asserted type `tuple[str, int]`
> tests/typeddicts_extra_items.py error[invalid-key] Cannot access `IntDictWithNum` with a key of type `str`. Only string literals are allowed as keys on TypedDicts.
> tests/typeddicts_extra_items.py error[invalid-key] Invalid key for TypedDict `IntDictWithNum` of type `str`
950c981
< Found 949 diagnostics
---
> Found 980 diagnostics
```
2025-11-02 17:33:31 +01:00
Micha Reiser
6c3d6124c8 [ty] Fix range filtering for tokens starting at the end of the requested range (#21193)
Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
2025-11-02 14:58:36 +00:00
David Peter
73107a083c [ty] Type inference for comprehensions (#20962)
## Summary

Adds type inference for list/dict/set comprehensions, including
bidirectional inference:

```py
reveal_type({k: v for k, v in [("a", 1), ("b", 2)]})  # dict[Unknown | str, Unknown | int]

squares: list[int | None] = [x for x in range(10)]
reveal_type(squares)  # list[int | None]
```

## Ecosystem impact

I did spot check the changes and most of them seem like known
limitations or true positives. Without proper bidirectional inference,
we saw a lot of false positives.

## Test Plan

New Markdown tests
2025-11-02 14:35:33 +01:00
Matthew Mckee
de1a6fb8ad Clean up definition completions docs and tests (#21183)

## Summary

@BurntSushi provided some feedback in #21146, so I address it here.
2025-11-02 08:01:06 -05:00
Alex Waygood
bff32a41dc [ty] Increase timeout-minutes to 10 for py-fuzzer job (#21196) 2025-11-01 22:06:03 -04:00
Micha Reiser
17c7b3cde1 Bump MSRV to Rust 1.89 (#21180) 2025-11-01 02:26:38 +00:00
Micha Reiser
921f409ee8 Update Rust toolchain to 1.91 (#21179) 2025-11-01 01:50:58 +00:00
github-actions[bot]
a151f9746d [ty] Sync vendored typeshed stubs (#21178)
Close and reopen this PR to trigger CI

---------

Co-authored-by: typeshedbot <>
2025-10-31 21:03:40 -04:00
Gautham Venkataraman
521217bb90 [ruff]: Make ruff analyze graph work with jupyter notebooks (#21161)
Co-authored-by: Gautham Venkataraman <gautham@dexterenergy.ai>
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-10-31 21:47:01 +00:00
Alex Waygood
a32d5b8dc4 [ty] Improve exhaustiveness analysis for type variables with bounds or constraints (#21172) 2025-10-31 16:51:11 -04:00
Micha Reiser
6337e22f0c [ty] Smaller refactors to server API in prep for notebook support (#21095) 2025-10-31 20:00:04 +00:00
Brent Westbrook
827d8ae5d4 Allow newlines after function headers without docstrings (#21110)
Summary
--

This is a first step toward fixing #9745. After reviewing our open
issues and several Black issues and PRs, I personally found the function
case the most compelling, especially with very long argument lists:

```py
def func(
	self,
	arg1: int,
	arg2: bool,
	arg3: bool,
	arg4: float,
	arg5: bool,
) -> tuple[...]:
	if arg2 and arg3:
		raise ValueError
```

or many annotations:

```py
def function(
    self, data: torch.Tensor | tuple[torch.Tensor, ...], other_argument: int
) -> torch.Tensor | tuple[torch.Tensor, ...]:
    do_something(data)
    return something
```

I think docstrings help the situation substantially both because syntax
highlighting will usually give a very clear separation between the
annotations and the docstring and because we already allow a blank line
_after_ the docstring:

```py
def function(
    self, data: torch.Tensor | tuple[torch.Tensor, ...], other_argument: int
) -> torch.Tensor | tuple[torch.Tensor, ...]:
    """
	A function doing something.

	And a longer description of the things it does.
	"""

    do_something(data)
    return something
```

There are still other comments on #9745, such as [this one] with 9
upvotes, where users specifically request blank lines in all block
types, or at least including conditionals and loops. I'm sympathetic to
that case as well, even if personally I don't find an [example] like
this:

```py
if blah:

    # Do some stuff that is logically related
    data = get_data()

    # Do some different stuff that is logically related
    results = calculate_results()

    return results
```

to be much more readable than:

```py
if blah:
    # Do some stuff that is logically related
    data = get_data()

    # Do some different stuff that is logically related
    results = calculate_results()

    return results
```

I'm probably just used to the latter from the formatters I've used, but
I do prefer it. I also think that functions are the least susceptible to
the accidental introduction of a newline after refactoring described in
Micha's [comment] on #8893.

I actually considered further restricting this change to functions with
multiline headers. I don't think very short functions like:

```py
def foo():

    return 1
```

benefit nearly as much from the allowed newline, but I just went with
any function without a docstring for now. I guess a marginal case like:

```py
def foo(a_long_parameter: ALongType, b_long_parameter: BLongType) -> CLongType:

    return 1
```

might be a good argument for not restricting it.

I caused a couple of syntax errors before adding special handling for
the ellipsis-only case, so I suspect that there are some other
interesting edge cases that may need to be handled better.

Test Plan
--

Existing tests, plus a few simple new ones. As noted above, I suspect
that we may need a few more for edge cases I haven't considered.

[this one]:
https://github.com/astral-sh/ruff/issues/9745#issuecomment-2876771400
[example]:
https://github.com/psf/black/issues/902#issuecomment-1562154809
[comment]:
https://github.com/astral-sh/ruff/issues/8893#issuecomment-1867259744
2025-10-31 14:53:40 -04:00
David Peter
1734ddfb3e [ty] Do not promote literals in contravariant positions of generic specializations (#21171)
## Summary

closes https://github.com/astral-sh/ty/issues/1284

supersedes https://github.com/astral-sh/ruff/pull/20950 by @ibraheemdev 

## Test Plan

New regression test
2025-10-31 17:48:34 +01:00
Ibraheem Ahmed
ff3a6a8fbd [ty] Support type context of union attribute assignments (#21170)
## Summary

Turns out this is easy to implement. Resolves
https://github.com/astral-sh/ty/issues/1375.
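
A small example of the union case this enables (illustrative, mirroring the single-class example from #21143):

```py
class A:
    x: list[int | str]

class B:
    x: list[int | str]

def f(obj: A | B) -> None:
    # The declared type of `x` on both union members now provides type context,
    # so the list literal is inferred as `list[int | str]` and the assignment checks.
    obj.x = [1]
```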
2025-10-31 12:41:14 -04:00
Carl Meyer
9664474c51 [ty] rollback preferring declared type on invalid TypedDict creation (#21169)
## Summary

Discussion with @ibraheemdev clarified that
https://github.com/astral-sh/ruff/pull/21168 was incorrect. In a case of
failed inference of a dict literal as a `TypedDict`, we should store the
context-less inferred type of the dict literal as the type of the dict
literal expression itself; the fallback to declared type should happen
at the level of the overall assignment definition.

The reason the latter isn't working yet is that currently we
(wrongly) consider a homogeneous dict type as assignable to a
`TypedDict`, so we don't actually consider the assignment itself as
failed. So the "bug" I observed (and tried to fix) will naturally be
fixed by implementing TypedDict assignability rules.

Rollback https://github.com/astral-sh/ruff/pull/21168 except for the
tests, and modify the tests to include TODOs as needed.

## Test Plan

Updated mdtests.
2025-10-31 12:06:47 -04:00
Luca Chiodini
69b4c29924 Consistently wrap tokens in parser diagnostics in backticks instead of 'quotes' (#21163)
The parser currently uses single quotes to wrap tokens. This is
inconsistent with the rest of ruff/ty, which use backticks.

For example, see the inconsistent diagnostics produced in this simple
example: https://play.ty.dev/0a9d6eab-6599-4a1d-8e40-032091f7f50f

Consistently wrapping tokens in backticks produces uniform diagnostics.
Following the style decision of #723, in #2889 some quotes were already
switched into backticks.

This is also in line with Rust's guide on diagnostics
(https://rustc-dev-guide.rust-lang.org/diagnostics.html#diagnostic-structure):

> When code or an identifier must appear in a message or label, it
should be surrounded with backticks
2025-10-31 11:59:11 -04:00
Ibraheem Ahmed
bb40c34361 [ty] Use declared attribute types as type context (#21143)
## Summary

For example:
```py
class X:
    x: list[int | str]

def _(x: X):
    x.x = [1]
```

Resolves https://github.com/astral-sh/ty/issues/1375.
2025-10-31 15:48:28 +00:00
chiri
b93d8f2b9f [refurb] Preserve argument ordering in autofix (FURB103) (#20790)
Fixes https://github.com/astral-sh/ruff/issues/20785
2025-10-31 11:16:09 -04:00
Carl Meyer
1d111c8780 [ty] prefer declared type on invalid TypedDict creation (#21168)
## Summary

In general, when we have an invalid assignment (inferred assigned type
is not assignable to declared type), we fall back to inferring the
declared type, since the declared type is a more explicit declaration of
the programmer's intent. This also maintains the invariant that our
inferred type for a name is always assignable to the declared type for
that same name. For example:

```py
x: str = 1
reveal_type(x)  # revealed: str
```

We weren't following this pattern for dictionary literals inferred (via
type context) as a typed dictionary; if the literal was not valid for
the annotated TypedDict type, we would just fall back to the normal
inferred type of the dict literal, effectively ignoring the annotation,
and resulting in inferred type not assignable to declared type.

## Test Plan

Added mdtest assertions.
2025-10-31 11:12:06 -04:00
chiri
9d7da914b9 Improve extend docs (#21135)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-10-31 15:10:14 +00:00
David Peter
0c2cf75869 [ty] Do not promote literals in contravariant position (#21164)
## Summary

closes https://github.com/astral-sh/ty/issues/1463

## Test Plan

Regression tests
2025-10-31 16:00:30 +01:00
Ibraheem Ahmed
1d6ae8596a [ty] Prefer exact matches when solving constrained type variables (#21165)
## Summary

The solver is currently order-dependent, and will choose a supertype
over the exact type if it appears earlier in the list of constraints. We
could be smarter and try to choose the most precise subtype, but I
imagine this is something the new constraint solver will fix anyways,
and this fixes the issue showing up on
https://github.com/astral-sh/ruff/pull/21070.
2025-10-31 10:58:09 -04:00
Douglas Creager
cf4e82d4b0 [ty] Add and test when constraint sets are satisfied by their typevars (#21129)
This PR adds a new `satisfied_by_all_typevar` method, which implements
one of the final steps of actually using these dang constraint sets.
Constraint sets exist to help us check assignability and subtyping of
types in the presence of typevars. We construct a constraint set
describing the conditions under which assignability holds between the
two types. Then we check whether that constraint set is satisfied for
the valid specializations of the relevant typevars (which is this new
method).

We also add a new `ty_extensions.ConstraintSet` method so that we can
test this method's behavior in mdtests, before hooking it up to the rest
of the specialization inference machinery.
2025-10-31 10:53:37 -04:00
Ibraheem Ahmed
1baf98aab3 [ty] Fix is_disjoint_from with @final classes (#21167)
## Summary

We currently perform a subtyping check instead of the intended subclass
check (and the subtyping check is confusingly named `is_subclass_of`).
This showed up in https://github.com/astral-sh/ruff/pull/21070.
2025-10-31 14:50:54 +00:00
Carl Meyer
3179b05221 [ty] don't assume in diagnostic messages that a TypedDict key error is about subscript access (#21166)
## Summary

Before this PR, we would emit diagnostics like "Invalid key access" for
a TypedDict literal with invalid key, which doesn't make sense since
there's no "access" in that case. This PR just adjusts the wording to be
more general, and adjusts the documentation of the lint rule too.

I noticed this in the playground and thought it would be a quick fix. As
usual, it turned out to be a bit more subtle than I expected, but for
now I chose to punt on the complexity. We may ultimately want to have
different rules for invalid subscript vs invalid TypedDict literal,
because an invalid key in a TypedDict literal is low severity: it's a
typo detector, but not actually a type error. But then there's another
wrinkle there: if the TypedDict is `closed=True`, then it _is_ a type
error. So would we want to separate the open and closed cases into
separate rules, too? I decided to leave this as a question for future.

If we wanted to use separate rules, or use specific wording for each
case instead of the generalized wording I chose here, that would also
involve a bit of extra work to distinguish the cases, since we use a
generic set of functions for reporting these errors.

## Test Plan

Added and updated mdtests.
2025-10-31 10:49:59 -04:00
Aria Desires
172e8d4ae0 [ty] Support implicit imports of submodules in __init__.pyi (#20855)
This is a second take at the implicit imports approach, allowing `from .
import submodule` in an `__init__.pyi` to create the
`mypackage.submodule` attribute everywhere.

This implementation operates inside the
`available_submodule_attributes` subsystem instead of as a re-export rule.

The upside of this is that we are no longer purely syntactic, and
absolute from-imports that happen to target submodules work (an
intentional, discussed deviation from pyright, which demands a relative
from-import). Also, we don't re-export functions or classes.

The downside(?) of this is that star imports no longer see these
attributes (this may be either good or bad; I believe it's not a huge
lift to make it work with star imports, but it requires some non-trivial
reworking).

I've also intentionally made `import mypackage.submodule` not trigger
this rule although it's trivial to change that.

I've tried to cover as many relevant cases as possible for discussion in
the new test file I've added (there are some random overlaps with
existing tests but trying to add them piecemeal felt confusing and
weird, so I just made a dedicated file for this extension to the rules).

Fixes https://github.com/astral-sh/ty/issues/133
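
A rough sketch of the behavior (hypothetical package layout):

```py
# mypackage/__init__.pyi
from . import submodule  # makes `mypackage.submodule` available as an attribute

# user code
import mypackage

mypackage.submodule  # now resolves without an explicit `import mypackage.submodule`
```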

2025-10-31 14:29:24 +00:00
Mahmoud Saada
735ec0c1f9 [ty] Fix generic inference for non-dataclass inheriting from generic dataclass (#21159)
## Summary

Fixes https://github.com/astral-sh/ty/issues/1427

This PR fixes a regression introduced in alpha.24 where non-dataclass
children of generic dataclasses lost generic type parameter information
during `__init__` synthesis.

The issue occurred because when looking up inherited members in the MRO,
the child class's `inherited_generic_context` was correctly passed down,
but `own_synthesized_member()` (which synthesizes dataclass `__init__`
methods) didn't accept this parameter. It only used
`self.inherited_generic_context(db)`, which returned the parent's
context instead of the child's.

The fix threads the child's generic context through to the synthesis
logic, allowing proper generic type inference for inherited dataclass
constructors.
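
A minimal sketch of the affected pattern (names are illustrative; the exact repro is in the linked issue):

```py
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")

@dataclass
class Box(Generic[T]):
    value: T

class IntBox(Box[int]):  # non-dataclass child of a generic dataclass
    pass

# The `__init__` synthesized for `Box` should specialize `T` to `int` when
# called via `IntBox`, so `IntBox(1).value` is inferred as `int`.
b = IntBox(1)
```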

## Test Plan

- Added regression test for non-dataclass inheriting from generic
dataclass
- Verified the exact repro case from the issue now works
- All 277 mdtest tests passing
- Clippy clean
- Manually verified with Python runtime, mypy, and pyright - all accept
this code pattern

## Verification

Tested against multiple type checkers:
- Python runtime: Code works correctly
- mypy: No issues found
- pyright: 0 errors, 0 warnings
- ty alpha.23: Worked (before regression)
- ty alpha.24: Regression
- ty with this fix: Works correctly

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: David Peter <mail@david-peter.de>
2025-10-31 13:55:17 +01:00
Ben Beasley
3585c96ea5 Update etcetera to 0.11.0 (#21160) 2025-10-31 12:53:18 +00:00
Micha Reiser
4b026c2a55 Fix missing diagnostics for notebooks (#21156) 2025-10-31 01:16:43 +00:00
Matthew Mckee
4b758b3746 [ty] Fix tests for definition completions (#21153) 2025-10-31 00:43:50 +00:00
Amethyst Reese
8737a2d5f5 Bump v0.14.3 (#21152)
- **Upgrade to rooster==0.1.1**
- **Changelog for v0.14.3**
- **Bump v0.14.3**
2025-10-30 17:06:29 -07:00
Matthew Mckee
3be3a10a2f [ty] Don't provide completions when in class or function definition (#21146) 2025-10-30 23:19:59 +00:00
354 changed files with 9087 additions and 3211 deletions

View File

@@ -256,15 +256,15 @@ jobs:
- name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@522492a8c115f1b6d4d318581f09638e9442547b # v2.62.21
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
with:
tool: cargo-nextest
- name: "Install cargo insta"
uses: taiki-e/install-action@522492a8c115f1b6d4d318581f09638e9442547b # v2.62.21
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
with:
tool: cargo-insta
- name: "Install uv"
uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
with:
enable-cache: "true"
- name: ty mdtests (GitHub annotations)
@@ -277,8 +277,8 @@ jobs:
run: cargo test -p ty_python_semantic --test mdtest || true
- name: "Run tests"
run: cargo insta test --all-features --unreferenced reject --test-runner nextest
# Dogfood ty on py-fuzzer
- run: uv run --project=./python/py-fuzzer cargo run -p ty check --project=./python/py-fuzzer
- name: Dogfood ty on py-fuzzer
run: uv run --project=./python/py-fuzzer cargo run -p ty check --project=./python/py-fuzzer
# Check for broken links in the documentation.
- run: cargo doc --all --no-deps
env:
@@ -320,15 +320,15 @@ jobs:
- name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@522492a8c115f1b6d4d318581f09638e9442547b # v2.62.21
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
with:
tool: cargo-nextest
- name: "Install cargo insta"
uses: taiki-e/install-action@522492a8c115f1b6d4d318581f09638e9442547b # v2.62.21
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
with:
tool: cargo-insta
- name: "Install uv"
uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
with:
enable-cache: "true"
- name: "Run tests"
@@ -353,11 +353,11 @@ jobs:
- name: "Install Rust toolchain"
run: rustup show
- name: "Install cargo nextest"
uses: taiki-e/install-action@522492a8c115f1b6d4d318581f09638e9442547b # v2.62.21
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
with:
tool: cargo-nextest
- name: "Install uv"
uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
with:
enable-cache: "true"
- name: "Run tests"
@@ -378,7 +378,7 @@ jobs:
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
- name: "Install Rust toolchain"
run: rustup target add wasm32-unknown-unknown
- uses: actions/setup-node@a0853c24544627f65ddf259abe73b1d18a591444 # v5.0.0
- uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
with:
node-version: 22
cache: "npm"
@@ -437,8 +437,10 @@ jobs:
workspaces: "fuzz -> target"
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Install cargo-binstall"
uses: cargo-bins/cargo-binstall@afcf9780305558bcc9e4bc94b7589ab2bb8b6106 # v1.15.9
uses: cargo-bins/cargo-binstall@b3f755e95653da9a2d25b99154edfdbd5b356d0a # v1.15.10
- name: "Install cargo-fuzz"
# Download the latest version from quick install and not the github releases because github releases only has MUSL targets.
run: cargo binstall cargo-fuzz --force --disable-strategies crate-meta-data --no-confirm
@@ -458,7 +460,7 @@ jobs:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
- uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
name: Download Ruff binary to test
id: download-cached-binary
@@ -494,7 +496,7 @@ jobs:
with:
persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
- uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- name: "Install Rust toolchain"
run: rustup component add rustfmt
# Run all code generation scripts, and verify that the current output is
@@ -529,7 +531,7 @@ jobs:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
- uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
with:
python-version: ${{ env.PYTHON_VERSION }}
activate-environment: true
@@ -645,36 +647,36 @@ jobs:
name: "Fuzz for new ty panics"
runs-on: ${{ github.repository == 'astral-sh/ruff' && 'depot-ubuntu-22.04-16' || 'ubuntu-latest' }}
needs:
- cargo-test-linux
- determine_changes
# Only runs on pull requests, since that is the only way we can find the base version for comparison.
if: ${{ !contains(github.event.pull_request.labels.*.name, 'no-test') && github.event_name == 'pull_request' && (needs.determine_changes.outputs.ty == 'true' || needs.determine_changes.outputs.py-fuzzer == 'true') }}
timeout-minutes: ${{ github.repository == 'astral-sh/ruff' && 5 || 20 }}
timeout-minutes: ${{ github.repository == 'astral-sh/ruff' && 10 || 20 }}
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
- uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
name: Download new ty binary
id: ty-new
with:
name: ty
path: target/debug
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
name: Download baseline ty binary
with:
name: ty
branch: ${{ github.event.pull_request.base.ref }}
workflow: "ci.yaml"
check_artifacts: true
- uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: Fuzz
env:
FORCE_COLOR: 1
NEW_TY: ${{ steps.ty-new.outputs.download-path }}
run: |
# Make executable, since artifact download doesn't preserve this
chmod +x "${PWD}/ty" "${NEW_TY}/ty"
echo "new commit"
git rev-list --format=%s --max-count=1 "$GITHUB_SHA"
cargo build --profile=profiling --bin=ty
mv target/profiling/ty ty-new
MERGE_BASE="$(git merge-base "$GITHUB_SHA" "origin/$GITHUB_BASE_REF")"
git checkout -b old_commit "$MERGE_BASE"
echo "old commit (merge base)"
git rev-list --format=%s --max-count=1 old_commit
cargo build --profile=profiling --bin=ty
mv target/profiling/ty ty-old
(
uv run \
@@ -682,8 +684,8 @@ jobs:
--project=./python/py-fuzzer \
--locked \
fuzz \
--test-executable="${NEW_TY}/ty" \
--baseline-executable="${PWD}/ty" \
--test-executable=ty-new \
--baseline-executable=ty-old \
--only-new-bugs \
--bin=ty \
0-1000
@@ -698,7 +700,7 @@ jobs:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
- uses: cargo-bins/cargo-binstall@afcf9780305558bcc9e4bc94b7589ab2bb8b6106 # v1.15.9
- uses: cargo-bins/cargo-binstall@b3f755e95653da9a2d25b99154edfdbd5b356d0a # v1.15.10
- run: cargo binstall --no-confirm cargo-shear
- run: cargo shear
@@ -711,10 +713,12 @@ jobs:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
- uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Run ty completion evaluation"
run: cargo run --release --package ty_completion_eval -- all --threshold 0.4 --tasks /tmp/completion-evaluation-tasks.csv
- name: "Ensure there are no changes"
@@ -756,9 +760,9 @@ jobs:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
- uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
- uses: actions/setup-node@a0853c24544627f65ddf259abe73b1d18a591444 # v5.0.0
- uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
with:
node-version: 22
- name: "Cache pre-commit"
@@ -796,7 +800,7 @@ jobs:
- name: "Install Rust toolchain"
run: rustup show
- name: Install uv
uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
with:
python-version: 3.13
activate-environment: true
@@ -900,7 +904,7 @@ jobs:
- name: "Install Rust toolchain"
run: rustup target add wasm32-unknown-unknown
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
- uses: actions/setup-node@a0853c24544627f65ddf259abe73b1d18a591444 # v5.0.0
- uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
with:
node-version: 22
cache: "npm"
@@ -938,18 +942,18 @@ jobs:
persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
- uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- name: "Install Rust toolchain"
run: rustup show
- name: "Install codspeed"
uses: taiki-e/install-action@522492a8c115f1b6d4d318581f09638e9442547b # v2.62.21
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
with:
tool: cargo-codspeed
- name: "Build benchmarks"
run: cargo codspeed build --features "codspeed,instrumented" --no-default-features -p ruff_benchmark --bench formatter --bench lexer --bench linter --bench parser
run: cargo codspeed build --features "codspeed,instrumented" --profile profiling --no-default-features -p ruff_benchmark --bench formatter --bench lexer --bench linter --bench parser
- name: "Run benchmarks"
uses: CodSpeedHQ/action@6b43a0cd438f6ca5ad26f9ed03ed159ed2df7da9 # v4.1.1
@@ -976,18 +980,18 @@ jobs:
persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
- uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- name: "Install Rust toolchain"
run: rustup show
- name: "Install codspeed"
uses: taiki-e/install-action@522492a8c115f1b6d4d318581f09638e9442547b # v2.62.21
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
with:
tool: cargo-codspeed
- name: "Build benchmarks"
run: cargo codspeed build --features "codspeed,instrumented" --no-default-features -p ruff_benchmark --bench ty
run: cargo codspeed build --features "codspeed,instrumented" --profile profiling --no-default-features -p ruff_benchmark --bench ty
- name: "Run benchmarks"
uses: CodSpeedHQ/action@6b43a0cd438f6ca5ad26f9ed03ed159ed2df7da9 # v4.1.1
@@ -1014,18 +1018,18 @@ jobs:
persist-credentials: false
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
- uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- name: "Install Rust toolchain"
run: rustup show
- name: "Install codspeed"
uses: taiki-e/install-action@522492a8c115f1b6d4d318581f09638e9442547b # v2.62.21
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
with:
tool: cargo-codspeed
- name: "Build benchmarks"
run: cargo codspeed build --features "codspeed,walltime" --no-default-features -p ruff_benchmark
run: cargo codspeed build --features "codspeed,walltime" --profile profiling --no-default-features -p ruff_benchmark
- name: "Run benchmarks"
uses: CodSpeedHQ/action@6b43a0cd438f6ca5ad26f9ed03ed159ed2df7da9 # v4.1.1

View File

@@ -34,7 +34,7 @@ jobs:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
- uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"

View File

@@ -43,7 +43,7 @@ jobs:
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:
@@ -85,7 +85,7 @@ jobs:
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
with:

View File

@@ -31,7 +31,7 @@ jobs:
persist-credentials: false
- name: "Install Rust toolchain"
run: rustup target add wasm32-unknown-unknown
- uses: actions/setup-node@a0853c24544627f65ddf259abe73b1d18a591444 # v5.0.0
- uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
with:
node-version: 22
package-manager-cache: false

View File

@@ -22,7 +22,7 @@ jobs:
id-token: write
steps:
- name: "Install uv"
uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
with:
pattern: wheels-*

View File

@@ -35,7 +35,7 @@ jobs:
persist-credentials: false
- name: "Install Rust toolchain"
run: rustup target add wasm32-unknown-unknown
- uses: actions/setup-node@a0853c24544627f65ddf259abe73b1d18a591444 # v5.0.0
- uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
with:
node-version: 22
package-manager-cache: false

View File

@@ -45,7 +45,7 @@ jobs:
jq '.name="@astral-sh/ruff-wasm-${{ matrix.target }}"' crates/ruff_wasm/pkg/package.json > /tmp/package.json
mv /tmp/package.json crates/ruff_wasm/pkg
- run: cp LICENSE crates/ruff_wasm/pkg # wasm-pack does not put the LICENSE file in the pkg
- uses: actions/setup-node@a0853c24544627f65ddf259abe73b1d18a591444 # v5.0.0
- uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
with:
node-version: 22
registry-url: "https://registry.npmjs.org"

View File

@@ -77,7 +77,7 @@ jobs:
run: |
git config --global user.name typeshedbot
git config --global user.email '<>'
- uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- name: Sync typeshed stubs
run: |
rm -rf "ruff/${VENDORED_TYPESHED}"
@@ -131,7 +131,7 @@ jobs:
with:
persist-credentials: true
ref: ${{ env.UPSTREAM_BRANCH}}
- uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- name: Setup git
run: |
git config --global user.name typeshedbot
@@ -170,7 +170,7 @@ jobs:
with:
persist-credentials: true
ref: ${{ env.UPSTREAM_BRANCH}}
- uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
- uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
- name: Setup git
run: |
git config --global user.name typeshedbot
@@ -207,12 +207,12 @@ jobs:
uses: rui314/setup-mold@725a8794d15fc7563f59595bd9556495c0564878 # v1
- name: "Install cargo nextest"
if: ${{ success() }}
uses: taiki-e/install-action@522492a8c115f1b6d4d318581f09638e9442547b # v2.62.21
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
with:
tool: cargo-nextest
- name: "Install cargo insta"
if: ${{ success() }}
uses: taiki-e/install-action@522492a8c115f1b6d4d318581f09638e9442547b # v2.62.21
uses: taiki-e/install-action@81ee1d48d9194cdcab880cbdc7d36e87d39874cb # v2.62.45
with:
tool: cargo-insta
- name: Update snapshots

View File

@@ -33,7 +33,7 @@ jobs:
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
with:
enable-cache: true # zizmor: ignore[cache-poisoning] acceptable risk for CloudFlare pages artifact

View File

@@ -29,7 +29,7 @@ jobs:
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@d0cc045d04ccac9d8b7881df0226f9e82c39688e # v6.8.0
uses: astral-sh/setup-uv@85856786d1ce8acfbcc2f13a5f3fbd6b938f9f41 # v7.1.2
with:
enable-cache: true # zizmor: ignore[cache-poisoning] acceptable risk for CloudFlare pages artifact

View File

@@ -24,7 +24,7 @@ env:
CARGO_TERM_COLOR: always
RUSTUP_MAX_RETRIES: 10
RUST_BACKTRACE: 1
CONFORMANCE_SUITE_COMMIT: d4f39b27a4a47aac8b6d4019e1b0b5b3156fabdc
CONFORMANCE_SUITE_COMMIT: 9f6d8ced7cd1c8d92687a4e9c96d7716452e471e
jobs:
typing_conformance:

View File

@@ -1,5 +1,105 @@
# Changelog
## 0.14.4
Released on 2025-11-06.
### Preview features
- [formatter] Allow newlines after function headers without docstrings ([#21110](https://github.com/astral-sh/ruff/pull/21110))
- [formatter] Avoid extra parentheses for long `match` patterns with `as` captures ([#21176](https://github.com/astral-sh/ruff/pull/21176))
- \[`refurb`\] Expand fix safety for keyword arguments and `Decimal`s (`FURB164`) ([#21259](https://github.com/astral-sh/ruff/pull/21259))
- \[`refurb`\] Preserve argument ordering in autofix (`FURB103`) ([#20790](https://github.com/astral-sh/ruff/pull/20790))
### Bug fixes
- [server] Fix missing diagnostics for notebooks ([#21156](https://github.com/astral-sh/ruff/pull/21156))
- \[`flake8-bugbear`\] Ignore non-NFKC attribute names in `B009` and `B010` ([#21131](https://github.com/astral-sh/ruff/pull/21131))
- \[`refurb`\] Fix false negative for underscores before sign in `Decimal` constructor (`FURB157`) ([#21190](https://github.com/astral-sh/ruff/pull/21190))
- \[`ruff`\] Fix false positives on starred arguments (`RUF057`) ([#21256](https://github.com/astral-sh/ruff/pull/21256))
### Rule changes
- \[`airflow`\] extend deprecated argument `concurrency` in `airflow..DAG` (`AIR301`) ([#21220](https://github.com/astral-sh/ruff/pull/21220))
### Documentation
- Improve `extend` docs ([#21135](https://github.com/astral-sh/ruff/pull/21135))
- \[`flake8-comprehensions`\] Fix typo in `C416` documentation ([#21184](https://github.com/astral-sh/ruff/pull/21184))
- Revise Ruff setup instructions for Zed editor ([#20935](https://github.com/astral-sh/ruff/pull/20935))
### Other changes
- Make `ruff analyze graph` work with jupyter notebooks ([#21161](https://github.com/astral-sh/ruff/pull/21161))
### Contributors
- [@chirizxc](https://github.com/chirizxc)
- [@Lee-W](https://github.com/Lee-W)
- [@musicinmybrain](https://github.com/musicinmybrain)
- [@MichaReiser](https://github.com/MichaReiser)
- [@tjkuson](https://github.com/tjkuson)
- [@danparizher](https://github.com/danparizher)
- [@renovate](https://github.com/renovate)
- [@ntBre](https://github.com/ntBre)
- [@gauthsvenkat](https://github.com/gauthsvenkat)
- [@LoicRiegel](https://github.com/LoicRiegel)
## 0.14.3
Released on 2025-10-30.
### Preview features
- Respect `--output-format` with `--watch` ([#21097](https://github.com/astral-sh/ruff/pull/21097))
- \[`pydoclint`\] Fix false positive on explicit exception re-raising (`DOC501`, `DOC502`) ([#21011](https://github.com/astral-sh/ruff/pull/21011))
- \[`pyflakes`\] Revert to stable behavior if imports for module lie in alternate branches for `F401` ([#20878](https://github.com/astral-sh/ruff/pull/20878))
- \[`pylint`\] Implement `stop-iteration-return` (`PLR1708`) ([#20733](https://github.com/astral-sh/ruff/pull/20733))
- \[`ruff`\] Add support for additional eager conversion patterns (`RUF065`) ([#20657](https://github.com/astral-sh/ruff/pull/20657))
### Bug fixes
- Fix finding keyword range for clause header after statement ending with semicolon ([#21067](https://github.com/astral-sh/ruff/pull/21067))
- Fix syntax error false positive on nested alternative patterns ([#21104](https://github.com/astral-sh/ruff/pull/21104))
- \[`ISC001`\] Fix panic when string literals are unclosed ([#21034](https://github.com/astral-sh/ruff/pull/21034))
- \[`flake8-django`\] Apply `DJ001` to annotated fields ([#20907](https://github.com/astral-sh/ruff/pull/20907))
- \[`flake8-pyi`\] Fix `PYI034` to not trigger on metaclasses (`PYI034`) ([#20881](https://github.com/astral-sh/ruff/pull/20881))
- \[`flake8-type-checking`\] Fix `TC003` false positive with `future-annotations` ([#21125](https://github.com/astral-sh/ruff/pull/21125))
- \[`pyflakes`\] Fix false positive for `__class__` in lambda expressions within class definitions (`F821`) ([#20564](https://github.com/astral-sh/ruff/pull/20564))
- \[`pyupgrade`\] Fix false positive for `TypeVar` with default on Python \<3.13 (`UP046`,`UP047`) ([#21045](https://github.com/astral-sh/ruff/pull/21045))
### Rule changes
- Add missing docstring sections to the numpy list ([#20931](https://github.com/astral-sh/ruff/pull/20931))
- \[`airflow`\] Extend `airflow.models..Param` check (`AIR311`) ([#21043](https://github.com/astral-sh/ruff/pull/21043))
- \[`airflow`\] Warn that `airflow....DAG.create_dagrun` has been removed (`AIR301`) ([#21093](https://github.com/astral-sh/ruff/pull/21093))
- \[`refurb`\] Preserve digit separators in `Decimal` constructor (`FURB157`) ([#20588](https://github.com/astral-sh/ruff/pull/20588))
### Server
- Avoid sending an unnecessary "clear diagnostics" message for clients supporting pull diagnostics ([#21105](https://github.com/astral-sh/ruff/pull/21105))
### Documentation
- \[`flake8-bandit`\] Fix correct example for `S308` ([#21128](https://github.com/astral-sh/ruff/pull/21128))
### Other changes
- Clearer error message when `line-length` goes beyond threshold ([#21072](https://github.com/astral-sh/ruff/pull/21072))
### Contributors
- [@danparizher](https://github.com/danparizher)
- [@jvacek](https://github.com/jvacek)
- [@ntBre](https://github.com/ntBre)
- [@augustelalande](https://github.com/augustelalande)
- [@prakhar1144](https://github.com/prakhar1144)
- [@TaKO8Ki](https://github.com/TaKO8Ki)
- [@dylwil3](https://github.com/dylwil3)
- [@fatelei](https://github.com/fatelei)
- [@ShaharNaveh](https://github.com/ShaharNaveh)
- [@Lee-W](https://github.com/Lee-W)
## 0.14.2
Released on 2025-10-23.

402
Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -5,7 +5,7 @@ resolver = "2"
[workspace.package]
# Please update rustfmt.toml when bumping the Rust edition
edition = "2024"
rust-version = "1.88"
rust-version = "1.89"
homepage = "https://docs.astral.sh/ruff"
documentation = "https://docs.astral.sh/ruff"
repository = "https://github.com/astral-sh/ruff"
@@ -84,7 +84,7 @@ dashmap = { version = "6.0.1" }
dir-test = { version = "0.4.0" }
dunce = { version = "1.0.5" }
drop_bomb = { version = "0.1.5" }
etcetera = { version = "0.10.0" }
etcetera = { version = "0.11.0" }
fern = { version = "0.7.0" }
filetime = { version = "0.2.23" }
getrandom = { version = "0.3.1" }
@@ -103,7 +103,7 @@ hashbrown = { version = "0.16.0", default-features = false, features = [
"inline-more",
] }
heck = "0.5.0"
ignore = { version = "0.4.22" }
ignore = { version = "0.4.24" }
imara-diff = { version = "0.1.5" }
imperative = { version = "1.0.4" }
indexmap = { version = "2.6.0" }
@@ -124,7 +124,7 @@ lsp-server = { version = "0.7.6" }
lsp-types = { git = "https://github.com/astral-sh/lsp-types.git", rev = "3512a9f", features = [
"proposed",
] }
matchit = { version = "0.8.1" }
matchit = { version = "0.9.0" }
memchr = { version = "2.7.1" }
mimalloc = { version = "0.1.39" }
natord = { version = "1.0.9" }
@@ -146,7 +146,7 @@ regex-automata = { version = "0.4.9" }
rustc-hash = { version = "2.0.0" }
rustc-stable-hash = { version = "0.1.2" }
# When updating salsa, make sure to also update the revision in `fuzz/Cargo.toml`
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "cdd0b85516a52c18b8a6d17a2279a96ed6c3e198", default-features = false, features = [
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "05a9af7f554b64b8aadc2eeb6f2caf73d0408d09", default-features = false, features = [
"compact_str",
"macros",
"salsa_unstable",

View File

@@ -147,8 +147,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
# For a specific version.
curl -LsSf https://astral.sh/ruff/0.14.2/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.14.2/install.ps1 | iex"
curl -LsSf https://astral.sh/ruff/0.14.4/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.14.4/install.ps1 | iex"
```
You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
@@ -181,7 +181,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.14.2
rev: v0.14.4
hooks:
# Run the linter.
- id: ruff-check

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.14.2"
version = "0.14.4"
publish = true
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -7,6 +7,7 @@ use path_absolutize::CWD;
use ruff_db::system::{SystemPath, SystemPathBuf};
use ruff_graph::{Direction, ImportMap, ModuleDb, ModuleImports};
use ruff_linter::package::PackageRoot;
use ruff_linter::source_kind::SourceKind;
use ruff_linter::{warn_user, warn_user_once};
use ruff_python_ast::{PySourceType, SourceType};
use ruff_workspace::resolver::{ResolvedFile, match_exclusion, python_files_in_path};
@@ -127,10 +128,6 @@ pub(crate) fn analyze_graph(
},
Some(language) => PySourceType::from(language),
};
if matches!(source_type, PySourceType::Ipynb) {
debug!("Ignoring Jupyter notebook: {}", path.display());
continue;
}
// Convert to system paths.
let Ok(package) = package.map(SystemPathBuf::from_path_buf).transpose() else {
@@ -147,13 +144,34 @@ pub(crate) fn analyze_graph(
let root = root.clone();
let result = inner_result.clone();
scope.spawn(move |_| {
// Extract source code (handles both .py and .ipynb files)
let source_kind = match SourceKind::from_path(path.as_std_path(), source_type) {
Ok(Some(source_kind)) => source_kind,
Ok(None) => {
debug!("Skipping non-Python notebook: {path}");
return;
}
Err(err) => {
warn!("Failed to read source for {path}: {err}");
return;
}
};
let source_code = source_kind.source_code();
// Identify any imports via static analysis.
let mut imports =
ModuleImports::detect(&db, &path, package.as_deref(), string_imports)
.unwrap_or_else(|err| {
warn!("Failed to generate import map for {path}: {err}");
ModuleImports::default()
});
let mut imports = ModuleImports::detect(
&db,
source_code,
source_type,
&path,
package.as_deref(),
string_imports,
)
.unwrap_or_else(|err| {
warn!("Failed to generate import map for {path}: {err}");
ModuleImports::default()
});
debug!("Discovered {} imports for {}", imports.len(), path);

View File

@@ -370,7 +370,7 @@ pub(crate) fn format_source(
let line_index = LineIndex::from_source_text(unformatted);
let byte_range = range.to_text_range(unformatted, &line_index);
format_range(unformatted, byte_range, options).map(|formatted_range| {
let mut formatted = unformatted.to_string();
let mut formatted = unformatted.clone();
formatted.replace_range(
std::ops::Range::<usize>::from(formatted_range.source_range()),
formatted_range.as_code(),

View File

@@ -653,3 +653,133 @@ fn venv() -> Result<()> {
Ok(())
}
#[test]
fn notebook_basic() -> Result<()> {
let tempdir = TempDir::new()?;
let root = ChildPath::new(tempdir.path());
root.child("ruff").child("__init__.py").write_str("")?;
root.child("ruff")
.child("a.py")
.write_str(indoc::indoc! {r#"
def helper():
pass
"#})?;
// Create a basic notebook with a simple import
root.child("notebook.ipynb").write_str(indoc::indoc! {r#"
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from ruff.a import helper"
]
}
],
"metadata": {
"language_info": {
"name": "python",
"version": "3.12.0"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
"#})?;
insta::with_settings!({
filters => INSTA_FILTERS.to_vec(),
}, {
assert_cmd_snapshot!(command().current_dir(&root), @r###"
success: true
exit_code: 0
----- stdout -----
{
"notebook.ipynb": [
"ruff/a.py"
],
"ruff/__init__.py": [],
"ruff/a.py": []
}
----- stderr -----
"###);
});
Ok(())
}
#[test]
fn notebook_with_magic() -> Result<()> {
let tempdir = TempDir::new()?;
let root = ChildPath::new(tempdir.path());
root.child("ruff").child("__init__.py").write_str("")?;
root.child("ruff")
.child("a.py")
.write_str(indoc::indoc! {r#"
def helper():
pass
"#})?;
// Create a notebook with IPython magic commands and imports
root.child("notebook.ipynb").write_str(indoc::indoc! {r#"
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%load_ext autoreload\n",
"%autoreload 2"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from ruff.a import helper"
]
}
],
"metadata": {
"language_info": {
"name": "python",
"version": "3.12.0"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
"#})?;
insta::with_settings!({
filters => INSTA_FILTERS.to_vec(),
}, {
assert_cmd_snapshot!(command().current_dir(&root), @r###"
success: true
exit_code: 0
----- stdout -----
{
"notebook.ipynb": [
"ruff/a.py"
],
"ruff/__init__.py": [],
"ruff/a.py": []
}
----- stderr -----
"###);
});
Ok(())
}

View File

@@ -146,7 +146,7 @@ static FREQTRADE: Benchmark = Benchmark::new(
max_dep_date: "2025-06-17",
python_version: PythonVersion::PY312,
},
400,
525,
);
static PANDAS: Benchmark = Benchmark::new(

View File

@@ -470,6 +470,11 @@ impl File {
self.source_type(db).is_stub()
}
/// Returns `true` if the file is an `__init__.pyi`
pub fn is_package_stub(self, db: &dyn Db) -> bool {
self.path(db).as_str().ends_with("__init__.pyi")
}
pub fn source_type(self, db: &dyn Db) -> PySourceType {
match self.path(db) {
FilePath::System(path) => path

View File

@@ -723,10 +723,11 @@ impl ruff_cache::CacheKey for SystemPathBuf {
/// A slice of a virtual path on [`System`](super::System) (akin to [`str`]).
#[repr(transparent)]
#[derive(Eq, PartialEq, Hash, PartialOrd, Ord)]
pub struct SystemVirtualPath(str);
impl SystemVirtualPath {
pub fn new(path: &str) -> &SystemVirtualPath {
pub const fn new(path: &str) -> &SystemVirtualPath {
// SAFETY: SystemVirtualPath is marked as #[repr(transparent)] so the conversion from a
// *const str to a *const SystemVirtualPath is valid.
unsafe { &*(path as *const str as *const SystemVirtualPath) }
@@ -767,8 +768,8 @@ pub struct SystemVirtualPathBuf(String);
impl SystemVirtualPathBuf {
#[inline]
pub fn as_path(&self) -> &SystemVirtualPath {
SystemVirtualPath::new(&self.0)
pub const fn as_path(&self) -> &SystemVirtualPath {
SystemVirtualPath::new(self.0.as_str())
}
}
@@ -852,6 +853,12 @@ impl ruff_cache::CacheKey for SystemVirtualPathBuf {
}
}
impl Borrow<SystemVirtualPath> for SystemVirtualPathBuf {
fn borrow(&self) -> &SystemVirtualPath {
self.as_path()
}
}
/// Deduplicates identical paths and removes nested paths.
///
/// # Examples

View File

@@ -62,7 +62,7 @@ fn generate_set(output: &mut String, set: Set, parents: &mut Vec<Set>) {
generate_set(
output,
Set::Named {
name: set_name.to_string(),
name: set_name.clone(),
set: *sub_set,
},
parents,

View File

@@ -104,7 +104,7 @@ fn generate_set(output: &mut String, set: Set, parents: &mut Vec<Set>) {
generate_set(
output,
Set::Named {
name: set_name.to_string(),
name: set_name.clone(),
set: *sub_set,
},
parents,

View File

@@ -1006,7 +1006,7 @@ impl<Context> std::fmt::Debug for Align<'_, Context> {
/// Block indents indent a block of code, such as in a function body, and therefore insert a line
/// break before and after the content.
///
/// Doesn't create an indentation if the passed in content is [`FormatElement.is_empty`].
/// Doesn't create an indentation if the passed in content is empty.
///
/// # Examples
///

View File

@@ -487,7 +487,7 @@ pub trait FormatElements {
/// Represents the width by adding 1 to the actual width so that the width can be represented by a [`NonZeroU32`],
/// allowing [`TextWidth`] or [`Option<Width>`] fit in 4 bytes rather than 8.
///
/// This means that 2^32 can not be precisely represented and instead has the same value as 2^32-1.
/// This means that 2^32 cannot be precisely represented and instead has the same value as 2^32-1.
/// This imprecision shouldn't matter in practice because either text are longer than any configured line width
/// and thus, the text should break.
#[derive(Copy, Clone, Debug, Eq, PartialEq)]

View File

@@ -3,8 +3,9 @@ use std::collections::{BTreeMap, BTreeSet};
use anyhow::Result;
use ruff_db::system::{SystemPath, SystemPathBuf};
use ruff_python_ast::PySourceType;
use ruff_python_ast::helpers::to_module_path;
use ruff_python_parser::{Mode, ParseOptions, parse};
use ruff_python_parser::{ParseOptions, parse};
use crate::collector::Collector;
pub use crate::db::ModuleDb;
@@ -24,13 +25,14 @@ impl ModuleImports {
/// Detect the [`ModuleImports`] for a given Python file.
pub fn detect(
db: &ModuleDb,
source: &str,
source_type: PySourceType,
path: &SystemPath,
package: Option<&SystemPath>,
string_imports: StringImports,
) -> Result<Self> {
// Read and parse the source code.
let source = std::fs::read_to_string(path)?;
let parsed = parse(&source, ParseOptions::from(Mode::Module))?;
// Parse the source code.
let parsed = parse(source, ParseOptions::from(source_type))?;
let module_path =
package.and_then(|package| to_module_path(package.as_std_path(), path.as_std_path()));

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.14.2"
version = "0.14.4"
publish = false
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -22,6 +22,7 @@ DAG(dag_id="class_schedule_interval", schedule_interval="@hourly")
DAG(dag_id="class_timetable", timetable=NullTimetable())
DAG(dag_id="class_concurrency", concurrency=12)
DAG(dag_id="class_fail_stop", fail_stop=True)

View File

@@ -70,3 +70,12 @@ builtins.getattr(foo, "bar")
# Regression test for: https://github.com/astral-sh/ruff/issues/18353
setattr(foo, "__debug__", 0)
# Regression test for: https://github.com/astral-sh/ruff/issues/21126
# Non-NFKC attribute names should be marked as unsafe. Python normalizes identifiers in
# attribute access (obj.attr) using NFKC, but does not normalize string
# arguments passed to getattr/setattr. Rewriting `getattr(ns, "ſ")` to
# `ns.ſ` would be interpreted as `ns.s` at runtime, changing behavior.
# Example: the long s character "ſ" normalizes to "s" under NFKC.
getattr(foo, "ſ")
setattr(foo, "ſ", 1)

View File

@@ -145,3 +145,11 @@ with open("file.txt", "w") as f:
with open("file.txt", "w") as f:
for line in text:
f.write(line)
# See: https://github.com/astral-sh/ruff/issues/20785
import json
data = {"price": 100}
with open("test.json", "wb") as f:
f.write(json.dumps(data, indent=4).encode("utf-8"))

View File

@@ -85,3 +85,9 @@ Decimal("1234_5678") # Safe fix: preserves non-thousands separators
Decimal("0001_2345")
Decimal("000_1_2345")
Decimal("000_000")
# Test cases for underscores before sign
# https://github.com/astral-sh/ruff/issues/21186
Decimal("_-1") # Should flag as verbose
Decimal("_+1") # Should flag as verbose
Decimal("_-1_000") # Should flag as verbose

View File

@@ -64,3 +64,8 @@ _ = Decimal.from_float(True)
_ = Decimal.from_float(float("-nan"))
_ = Decimal.from_float(float("\x2dnan"))
_ = Decimal.from_float(float("\N{HYPHEN-MINUS}nan"))
# See: https://github.com/astral-sh/ruff/issues/21257
# fixes must be safe
_ = Fraction.from_float(f=4.2)
_ = Fraction.from_decimal(dec=4)

View File

@@ -81,3 +81,7 @@ round(# a comment
round(
17 # a comment
)
# See: https://github.com/astral-sh/ruff/issues/21209
print(round(125, **{"ndigits": -2}))
print(round(125, *[-2]))

View File

@@ -43,9 +43,6 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
pycodestyle::rules::ambiguous_variable_name(checker, name, name.range());
}
}
if checker.is_rule_enabled(Rule::NonlocalWithoutBinding) {
pylint::rules::nonlocal_without_binding(checker, nonlocal);
}
if checker.is_rule_enabled(Rule::NonlocalAndGlobal) {
pylint::rules::nonlocal_and_global(checker, nonlocal);
}

View File

@@ -73,7 +73,8 @@ use crate::rules::pyflakes::rules::{
UndefinedLocalWithNestedImportStarUsage, YieldOutsideFunction,
};
use crate::rules::pylint::rules::{
AwaitOutsideAsync, LoadBeforeGlobalDeclaration, YieldFromInAsyncFunction,
AwaitOutsideAsync, LoadBeforeGlobalDeclaration, NonlocalWithoutBinding,
YieldFromInAsyncFunction,
};
use crate::rules::{flake8_pyi, flake8_type_checking, pyflakes, pyupgrade};
use crate::settings::rule_table::RuleTable;
@@ -641,6 +642,10 @@ impl SemanticSyntaxContext for Checker<'_> {
self.semantic.global(name)
}
fn has_nonlocal_binding(&self, name: &str) -> bool {
self.semantic.nonlocal(name).is_some()
}
fn report_semantic_error(&self, error: SemanticSyntaxError) {
match error.kind {
SemanticSyntaxErrorKind::LateFutureImport => {
@@ -717,6 +722,12 @@ impl SemanticSyntaxContext for Checker<'_> {
self.report_diagnostic(pyflakes::rules::ContinueOutsideLoop, error.range);
}
}
SemanticSyntaxErrorKind::NonlocalWithoutBinding(name) => {
// PLE0117
if self.is_rule_enabled(Rule::NonlocalWithoutBinding) {
self.report_diagnostic(NonlocalWithoutBinding { name }, error.range);
}
}
SemanticSyntaxErrorKind::ReboundComprehensionVariable
| SemanticSyntaxErrorKind::DuplicateTypeParameter
| SemanticSyntaxErrorKind::MultipleCaseAssignment(_)

View File

@@ -4,4 +4,4 @@ expression: content
---
syntax_errors.py:
1:15 invalid-syntax: Expected one or more symbol names after import
3:12 invalid-syntax: Expected ')', found newline
3:12 invalid-syntax: Expected `)`, found newline

View File

@@ -196,6 +196,7 @@ fn check_call_arguments(checker: &Checker, qualified_name: &QualifiedName, argum
match qualified_name.segments() {
["airflow", .., "DAG" | "dag"] => {
// with replacement
diagnostic_for_argument(checker, arguments, "concurrency", Some("max_active_tasks"));
diagnostic_for_argument(checker, arguments, "fail_stop", Some("fail_fast"));
diagnostic_for_argument(checker, arguments, "schedule_interval", Some("schedule"));
diagnostic_for_argument(checker, arguments, "timetable", Some("schedule"));

View File

@@ -28,6 +28,8 @@ AIR301 [*] `timetable` is removed in Airflow 3.0
22 |
23 | DAG(dag_id="class_timetable", timetable=NullTimetable())
| ^^^^^^^^^
24 |
25 | DAG(dag_id="class_concurrency", concurrency=12)
|
help: Use `schedule` instead
20 |
@@ -36,249 +38,271 @@ help: Use `schedule` instead
- DAG(dag_id="class_timetable", timetable=NullTimetable())
23 + DAG(dag_id="class_timetable", schedule=NullTimetable())
24 |
25 |
26 | DAG(dag_id="class_fail_stop", fail_stop=True)
25 | DAG(dag_id="class_concurrency", concurrency=12)
26 |
AIR301 [*] `fail_stop` is removed in Airflow 3.0
--> AIR301_args.py:26:31
AIR301 [*] `concurrency` is removed in Airflow 3.0
--> AIR301_args.py:25:33
|
26 | DAG(dag_id="class_fail_stop", fail_stop=True)
| ^^^^^^^^^
27 |
28 | DAG(dag_id="class_default_view", default_view="dag_default_view")
23 | DAG(dag_id="class_timetable", timetable=NullTimetable())
24 |
25 | DAG(dag_id="class_concurrency", concurrency=12)
| ^^^^^^^^^^^
26 |
27 | DAG(dag_id="class_fail_stop", fail_stop=True)
|
help: Use `fail_fast` instead
help: Use `max_active_tasks` instead
22 |
23 | DAG(dag_id="class_timetable", timetable=NullTimetable())
24 |
25 |
- DAG(dag_id="class_concurrency", concurrency=12)
25 + DAG(dag_id="class_concurrency", max_active_tasks=12)
26 |
27 | DAG(dag_id="class_fail_stop", fail_stop=True)
28 |
AIR301 [*] `fail_stop` is removed in Airflow 3.0
--> AIR301_args.py:27:31
|
25 | DAG(dag_id="class_concurrency", concurrency=12)
26 |
27 | DAG(dag_id="class_fail_stop", fail_stop=True)
| ^^^^^^^^^
28 |
29 | DAG(dag_id="class_default_view", default_view="dag_default_view")
|
help: Use `fail_fast` instead
24 |
25 | DAG(dag_id="class_concurrency", concurrency=12)
26 |
- DAG(dag_id="class_fail_stop", fail_stop=True)
26 + DAG(dag_id="class_fail_stop", fail_fast=True)
27 |
28 | DAG(dag_id="class_default_view", default_view="dag_default_view")
29 |
27 + DAG(dag_id="class_fail_stop", fail_fast=True)
28 |
29 | DAG(dag_id="class_default_view", default_view="dag_default_view")
30 |
AIR301 `default_view` is removed in Airflow 3.0
--> AIR301_args.py:28:34
--> AIR301_args.py:29:34
|
26 | DAG(dag_id="class_fail_stop", fail_stop=True)
27 |
28 | DAG(dag_id="class_default_view", default_view="dag_default_view")
27 | DAG(dag_id="class_fail_stop", fail_stop=True)
28 |
29 | DAG(dag_id="class_default_view", default_view="dag_default_view")
| ^^^^^^^^^^^^
29 |
30 | DAG(dag_id="class_orientation", orientation="BT")
30 |
31 | DAG(dag_id="class_orientation", orientation="BT")
|
AIR301 `orientation` is removed in Airflow 3.0
--> AIR301_args.py:30:33
--> AIR301_args.py:31:33
|
28 | DAG(dag_id="class_default_view", default_view="dag_default_view")
29 |
30 | DAG(dag_id="class_orientation", orientation="BT")
29 | DAG(dag_id="class_default_view", default_view="dag_default_view")
30 |
31 | DAG(dag_id="class_orientation", orientation="BT")
| ^^^^^^^^^^^
31 |
32 | allow_future_exec_dates_dag = DAG(dag_id="class_allow_future_exec_dates")
32 |
33 | allow_future_exec_dates_dag = DAG(dag_id="class_allow_future_exec_dates")
|
AIR301 [*] `schedule_interval` is removed in Airflow 3.0
--> AIR301_args.py:41:6
--> AIR301_args.py:42:6
|
41 | @dag(schedule_interval="0 * * * *")
42 | @dag(schedule_interval="0 * * * *")
| ^^^^^^^^^^^^^^^^^
42 | def decorator_schedule_interval():
43 | pass
43 | def decorator_schedule_interval():
44 | pass
|
help: Use `schedule` instead
38 | pass
39 |
39 | pass
40 |
41 |
- @dag(schedule_interval="0 * * * *")
41 + @dag(schedule="0 * * * *")
42 | def decorator_schedule_interval():
43 | pass
44 |
42 + @dag(schedule="0 * * * *")
43 | def decorator_schedule_interval():
44 | pass
45 |
AIR301 [*] `timetable` is removed in Airflow 3.0
--> AIR301_args.py:46:6
--> AIR301_args.py:47:6
|
46 | @dag(timetable=NullTimetable())
47 | @dag(timetable=NullTimetable())
| ^^^^^^^^^
47 | def decorator_timetable():
48 | pass
48 | def decorator_timetable():
49 | pass
|
help: Use `schedule` instead
43 | pass
44 |
44 | pass
45 |
46 |
- @dag(timetable=NullTimetable())
46 + @dag(schedule=NullTimetable())
47 | def decorator_timetable():
48 | pass
49 |
47 + @dag(schedule=NullTimetable())
48 | def decorator_timetable():
49 | pass
50 |
AIR301 [*] `execution_date` is removed in Airflow 3.0
--> AIR301_args.py:54:62
--> AIR301_args.py:55:62
|
52 | def decorator_deprecated_operator_args():
53 | trigger_dagrun_op = trigger_dagrun.TriggerDagRunOperator(
54 | task_id="trigger_dagrun_op1", trigger_dag_id="test", execution_date="2024-12-04"
53 | def decorator_deprecated_operator_args():
54 | trigger_dagrun_op = trigger_dagrun.TriggerDagRunOperator(
55 | task_id="trigger_dagrun_op1", trigger_dag_id="test", execution_date="2024-12-04"
| ^^^^^^^^^^^^^^
55 | )
56 | trigger_dagrun_op2 = TriggerDagRunOperator(
56 | )
57 | trigger_dagrun_op2 = TriggerDagRunOperator(
|
help: Use `logical_date` instead
51 | @dag()
52 | def decorator_deprecated_operator_args():
53 | trigger_dagrun_op = trigger_dagrun.TriggerDagRunOperator(
52 | @dag()
53 | def decorator_deprecated_operator_args():
54 | trigger_dagrun_op = trigger_dagrun.TriggerDagRunOperator(
- task_id="trigger_dagrun_op1", trigger_dag_id="test", execution_date="2024-12-04"
54 + task_id="trigger_dagrun_op1", trigger_dag_id="test", logical_date="2024-12-04"
55 | )
56 | trigger_dagrun_op2 = TriggerDagRunOperator(
57 | task_id="trigger_dagrun_op2", trigger_dag_id="test", execution_date="2024-12-04"
55 + task_id="trigger_dagrun_op1", trigger_dag_id="test", logical_date="2024-12-04"
56 | )
57 | trigger_dagrun_op2 = TriggerDagRunOperator(
58 | task_id="trigger_dagrun_op2", trigger_dag_id="test", execution_date="2024-12-04"
AIR301 [*] `execution_date` is removed in Airflow 3.0
--> AIR301_args.py:57:62
--> AIR301_args.py:58:62
|
55 | )
56 | trigger_dagrun_op2 = TriggerDagRunOperator(
57 | task_id="trigger_dagrun_op2", trigger_dag_id="test", execution_date="2024-12-04"
56 | )
57 | trigger_dagrun_op2 = TriggerDagRunOperator(
58 | task_id="trigger_dagrun_op2", trigger_dag_id="test", execution_date="2024-12-04"
| ^^^^^^^^^^^^^^
58 | )
59 | )
|
help: Use `logical_date` instead
54 | task_id="trigger_dagrun_op1", trigger_dag_id="test", execution_date="2024-12-04"
55 | )
56 | trigger_dagrun_op2 = TriggerDagRunOperator(
55 | task_id="trigger_dagrun_op1", trigger_dag_id="test", execution_date="2024-12-04"
56 | )
57 | trigger_dagrun_op2 = TriggerDagRunOperator(
- task_id="trigger_dagrun_op2", trigger_dag_id="test", execution_date="2024-12-04"
57 + task_id="trigger_dagrun_op2", trigger_dag_id="test", logical_date="2024-12-04"
58 | )
59 |
60 | branch_dt_op = datetime.BranchDateTimeOperator(
58 + task_id="trigger_dagrun_op2", trigger_dag_id="test", logical_date="2024-12-04"
59 | )
60 |
61 | branch_dt_op = datetime.BranchDateTimeOperator(
AIR301 [*] `use_task_execution_day` is removed in Airflow 3.0
--> AIR301_args.py:61:33
--> AIR301_args.py:62:33
|
60 | branch_dt_op = datetime.BranchDateTimeOperator(
61 | task_id="branch_dt_op", use_task_execution_day=True, task_concurrency=5
61 | branch_dt_op = datetime.BranchDateTimeOperator(
62 | task_id="branch_dt_op", use_task_execution_day=True, task_concurrency=5
| ^^^^^^^^^^^^^^^^^^^^^^
62 | )
63 | branch_dt_op2 = BranchDateTimeOperator(
63 | )
64 | branch_dt_op2 = BranchDateTimeOperator(
|
help: Use `use_task_logical_date` instead
58 | )
59 |
60 | branch_dt_op = datetime.BranchDateTimeOperator(
59 | )
60 |
61 | branch_dt_op = datetime.BranchDateTimeOperator(
- task_id="branch_dt_op", use_task_execution_day=True, task_concurrency=5
61 + task_id="branch_dt_op", use_task_logical_date=True, task_concurrency=5
62 | )
63 | branch_dt_op2 = BranchDateTimeOperator(
64 | task_id="branch_dt_op2",
62 + task_id="branch_dt_op", use_task_logical_date=True, task_concurrency=5
63 | )
64 | branch_dt_op2 = BranchDateTimeOperator(
65 | task_id="branch_dt_op2",
AIR301 [*] `task_concurrency` is removed in Airflow 3.0
--> AIR301_args.py:61:62
--> AIR301_args.py:62:62
|
60 | branch_dt_op = datetime.BranchDateTimeOperator(
61 | task_id="branch_dt_op", use_task_execution_day=True, task_concurrency=5
61 | branch_dt_op = datetime.BranchDateTimeOperator(
62 | task_id="branch_dt_op", use_task_execution_day=True, task_concurrency=5
| ^^^^^^^^^^^^^^^^
62 | )
63 | branch_dt_op2 = BranchDateTimeOperator(
63 | )
64 | branch_dt_op2 = BranchDateTimeOperator(
|
help: Use `max_active_tis_per_dag` instead
58 | )
59 |
60 | branch_dt_op = datetime.BranchDateTimeOperator(
59 | )
60 |
61 | branch_dt_op = datetime.BranchDateTimeOperator(
- task_id="branch_dt_op", use_task_execution_day=True, task_concurrency=5
61 + task_id="branch_dt_op", use_task_execution_day=True, max_active_tis_per_dag=5
62 | )
63 | branch_dt_op2 = BranchDateTimeOperator(
64 | task_id="branch_dt_op2",
62 + task_id="branch_dt_op", use_task_execution_day=True, max_active_tis_per_dag=5
63 | )
64 | branch_dt_op2 = BranchDateTimeOperator(
65 | task_id="branch_dt_op2",
AIR301 [*] `use_task_execution_day` is removed in Airflow 3.0
--> AIR301_args.py:65:9
--> AIR301_args.py:66:9
|
63 | branch_dt_op2 = BranchDateTimeOperator(
64 | task_id="branch_dt_op2",
65 | use_task_execution_day=True,
64 | branch_dt_op2 = BranchDateTimeOperator(
65 | task_id="branch_dt_op2",
66 | use_task_execution_day=True,
| ^^^^^^^^^^^^^^^^^^^^^^
66 | sla=timedelta(seconds=10),
67 | )
67 | sla=timedelta(seconds=10),
68 | )
|
help: Use `use_task_logical_date` instead
62 | )
63 | branch_dt_op2 = BranchDateTimeOperator(
64 | task_id="branch_dt_op2",
63 | )
64 | branch_dt_op2 = BranchDateTimeOperator(
65 | task_id="branch_dt_op2",
- use_task_execution_day=True,
65 + use_task_logical_date=True,
66 | sla=timedelta(seconds=10),
67 | )
68 |
66 + use_task_logical_date=True,
67 | sla=timedelta(seconds=10),
68 | )
69 |
AIR301 [*] `use_task_execution_day` is removed in Airflow 3.0
--> AIR301_args.py:92:9
--> AIR301_args.py:93:9
|
90 | follow_task_ids_if_true=None,
91 | week_day=1,
92 | use_task_execution_day=True,
91 | follow_task_ids_if_true=None,
92 | week_day=1,
93 | use_task_execution_day=True,
| ^^^^^^^^^^^^^^^^^^^^^^
93 | )
94 | )
|
help: Use `use_task_logical_date` instead
89 | follow_task_ids_if_false=None,
90 | follow_task_ids_if_true=None,
91 | week_day=1,
90 | follow_task_ids_if_false=None,
91 | follow_task_ids_if_true=None,
92 | week_day=1,
- use_task_execution_day=True,
92 + use_task_logical_date=True,
93 | )
94 |
95 | trigger_dagrun_op >> trigger_dagrun_op2
93 + use_task_logical_date=True,
94 | )
95 |
96 | trigger_dagrun_op >> trigger_dagrun_op2
AIR301 `filename_template` is removed in Airflow 3.0
--> AIR301_args.py:102:15
--> AIR301_args.py:103:15
|
101 | # deprecated filename_template argument in FileTaskHandler
102 | S3TaskHandler(filename_template="/tmp/test")
102 | # deprecated filename_template argument in FileTaskHandler
103 | S3TaskHandler(filename_template="/tmp/test")
| ^^^^^^^^^^^^^^^^^
103 | HdfsTaskHandler(filename_template="/tmp/test")
104 | ElasticsearchTaskHandler(filename_template="/tmp/test")
104 | HdfsTaskHandler(filename_template="/tmp/test")
105 | ElasticsearchTaskHandler(filename_template="/tmp/test")
|
AIR301 `filename_template` is removed in Airflow 3.0
--> AIR301_args.py:103:17
--> AIR301_args.py:104:17
|
101 | # deprecated filename_template argument in FileTaskHandler
102 | S3TaskHandler(filename_template="/tmp/test")
103 | HdfsTaskHandler(filename_template="/tmp/test")
102 | # deprecated filename_template argument in FileTaskHandler
103 | S3TaskHandler(filename_template="/tmp/test")
104 | HdfsTaskHandler(filename_template="/tmp/test")
| ^^^^^^^^^^^^^^^^^
104 | ElasticsearchTaskHandler(filename_template="/tmp/test")
105 | GCSTaskHandler(filename_template="/tmp/test")
105 | ElasticsearchTaskHandler(filename_template="/tmp/test")
106 | GCSTaskHandler(filename_template="/tmp/test")
|
AIR301 `filename_template` is removed in Airflow 3.0
--> AIR301_args.py:104:26
--> AIR301_args.py:105:26
|
102 | S3TaskHandler(filename_template="/tmp/test")
103 | HdfsTaskHandler(filename_template="/tmp/test")
104 | ElasticsearchTaskHandler(filename_template="/tmp/test")
103 | S3TaskHandler(filename_template="/tmp/test")
104 | HdfsTaskHandler(filename_template="/tmp/test")
105 | ElasticsearchTaskHandler(filename_template="/tmp/test")
| ^^^^^^^^^^^^^^^^^
105 | GCSTaskHandler(filename_template="/tmp/test")
106 | GCSTaskHandler(filename_template="/tmp/test")
|
AIR301 `filename_template` is removed in Airflow 3.0
--> AIR301_args.py:105:16
--> AIR301_args.py:106:16
|
103 | HdfsTaskHandler(filename_template="/tmp/test")
104 | ElasticsearchTaskHandler(filename_template="/tmp/test")
105 | GCSTaskHandler(filename_template="/tmp/test")
104 | HdfsTaskHandler(filename_template="/tmp/test")
105 | ElasticsearchTaskHandler(filename_template="/tmp/test")
106 | GCSTaskHandler(filename_template="/tmp/test")
| ^^^^^^^^^^^^^^^^^
106 |
107 | FabAuthManager(None)
107 |
108 | FabAuthManager(None)
|
AIR301 `appbuilder` is removed in Airflow 3.0
--> AIR301_args.py:107:15
--> AIR301_args.py:108:15
|
105 | GCSTaskHandler(filename_template="/tmp/test")
106 |
107 | FabAuthManager(None)
106 | GCSTaskHandler(filename_template="/tmp/test")
107 |
108 | FabAuthManager(None)
| ^^^^^^
|
help: The constructor takes no parameter now

View File

@@ -3,6 +3,7 @@ use ruff_python_ast::{self as ast, Expr};
use ruff_python_stdlib::identifiers::{is_identifier, is_mangled_private};
use ruff_source_file::LineRanges;
use ruff_text_size::Ranged;
use unicode_normalization::UnicodeNormalization;
use crate::checkers::ast::Checker;
use crate::fix::edits::pad;
@@ -29,6 +30,21 @@ use crate::{AlwaysFixableViolation, Edit, Fix};
/// obj.foo
/// ```
///
/// ## Fix safety
/// The fix is marked as unsafe for attribute names that are not in NFKC (Normalization Form KC)
/// normalization. Python normalizes identifiers using NFKC when using attribute access syntax
/// (e.g., `obj.attr`), but does not normalize string arguments passed to `getattr`. Rewriting
/// `getattr(obj, "ſ")` to `obj.ſ` would be interpreted as `obj.s` at runtime, changing behavior.
///
/// For example, the long s character `"ſ"` normalizes to `"s"` under NFKC, so:
/// ```python
/// # This accesses an attribute with the exact name "ſ" (if it exists)
/// value = getattr(obj, "ſ")
///
/// # But this would normalize to "s" and access a different attribute
/// obj.ſ # This is interpreted as obj.s, not obj.ſ
/// ```
///
/// ## References
/// - [Python documentation: `getattr`](https://docs.python.org/3/library/functions.html#getattr)
#[derive(ViolationMetadata)]
@@ -69,8 +85,14 @@ pub(crate) fn getattr_with_constant(checker: &Checker, expr: &Expr, func: &Expr,
return;
}
// Mark fixes as unsafe for non-NFKC attribute names. Python normalizes identifiers using NFKC, so using
// attribute syntax (e.g., `obj.attr`) would normalize the name and potentially change
// program behavior.
let attr_name = value.to_str();
let is_unsafe = attr_name.nfkc().collect::<String>() != attr_name;
let mut diagnostic = checker.report_diagnostic(GetAttrWithConstant, expr.range());
diagnostic.set_fix(Fix::safe_edit(Edit::range_replacement(
let edit = Edit::range_replacement(
pad(
if matches!(
obj,
@@ -88,5 +110,11 @@ pub(crate) fn getattr_with_constant(checker: &Checker, expr: &Expr, func: &Expr,
checker.locator(),
),
expr.range(),
)));
);
let fix = if is_unsafe {
Fix::unsafe_edit(edit)
} else {
Fix::safe_edit(edit)
};
diagnostic.set_fix(fix);
}

View File

@@ -4,6 +4,7 @@ use ruff_text_size::{Ranged, TextRange};
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_codegen::Generator;
use ruff_python_stdlib::identifiers::{is_identifier, is_mangled_private};
use unicode_normalization::UnicodeNormalization;
use crate::checkers::ast::Checker;
use crate::{AlwaysFixableViolation, Edit, Fix};
@@ -28,6 +29,23 @@ use crate::{AlwaysFixableViolation, Edit, Fix};
/// obj.foo = 42
/// ```
///
/// ## Fix safety
/// The fix is marked as unsafe for attribute names that are not in NFKC (Normalization Form KC)
/// normalization. Python normalizes identifiers using NFKC when using attribute access syntax
/// (e.g., `obj.attr = value`), but does not normalize string arguments passed to `setattr`.
/// Rewriting `setattr(obj, "ſ", 1)` to `obj.ſ = 1` would be interpreted as `obj.s = 1` at
/// runtime, changing behavior.
///
/// For example, the long s character `"ſ"` normalizes to `"s"` under NFKC, so:
/// ```python
/// # This creates an attribute with the exact name "ſ"
/// setattr(obj, "ſ", 1)
/// getattr(obj, "ſ") # Returns 1
///
/// # But this would normalize to "s" and set a different attribute
/// obj.ſ = 1 # This is interpreted as obj.s = 1, not obj.ſ = 1
/// ```
///
/// ## References
/// - [Python documentation: `setattr`](https://docs.python.org/3/library/functions.html#setattr)
#[derive(ViolationMetadata)]
@@ -89,6 +107,12 @@ pub(crate) fn setattr_with_constant(checker: &Checker, expr: &Expr, func: &Expr,
return;
}
// Mark fixes as unsafe for non-NFKC attribute names. Python normalizes identifiers using NFKC, so using
// attribute syntax (e.g., `obj.attr = value`) would normalize the name and potentially change
// program behavior.
let attr_name = name.to_str();
let is_unsafe = attr_name.nfkc().collect::<String>() != attr_name;
// We can only replace a `setattr` call (which is an `Expr`) with an assignment
// (which is a `Stmt`) if the `Expr` is already being used as a `Stmt`
// (i.e., it's directly within an `Stmt::Expr`).
@@ -100,10 +124,16 @@ pub(crate) fn setattr_with_constant(checker: &Checker, expr: &Expr, func: &Expr,
{
if expr == child.as_ref() {
let mut diagnostic = checker.report_diagnostic(SetAttrWithConstant, expr.range());
diagnostic.set_fix(Fix::safe_edit(Edit::range_replacement(
let edit = Edit::range_replacement(
assignment(obj, name.to_str(), value, checker.generator()),
expr.range(),
)));
);
let fix = if is_unsafe {
Fix::unsafe_edit(edit)
} else {
Fix::safe_edit(edit)
};
diagnostic.set_fix(fix);
}
}
}
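
A small runnable demonstration of why the rewrite can change behavior, assuming a plain object whose attributes live in `__dict__`:

```python
class Namespace:
    pass

obj = Namespace()

# `setattr` stores the attribute under the exact, un-normalized name.
setattr(obj, "ſ", 1)
assert getattr(obj, "ſ") == 1

# Attribute syntax is NFKC-normalized by the parser, so this assigns `s`.
obj.ſ = 2
assert obj.s == 2
assert getattr(obj, "ſ") == 1  # still 1 - a different attribute
```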

View File

@@ -360,3 +360,21 @@ help: Replace `getattr` with attribute access
70 |
71 | # Regression test for: https://github.com/astral-sh/ruff/issues/18353
72 | setattr(foo, "__debug__", 0)
B009 [*] Do not call `getattr` with a constant attribute value. It is not any safer than normal property access.
--> B009_B010.py:80:1
|
78 | # `ns.ſ` would be interpreted as `ns.s` at runtime, changing behavior.
79 | # Example: the long s character "ſ" normalizes to "s" under NFKC.
80 | getattr(foo, "ſ")
| ^^^^^^^^^^^^^^^^^
81 | setattr(foo, "ſ", 1)
|
help: Replace `getattr` with attribute access
77 | # arguments passed to getattr/setattr. Rewriting `getattr(ns, "ſ")` to
78 | # `ns.ſ` would be interpreted as `ns.s` at runtime, changing behavior.
79 | # Example: the long s character "ſ" normalizes to "s" under NFKC.
- getattr(foo, "ſ")
80 + foo.ſ
81 | setattr(foo, "ſ", 1)
note: This is an unsafe fix and may change runtime behavior

View File

@@ -118,3 +118,19 @@ help: Replace `setattr` with assignment
56 |
57 | # Regression test for: https://github.com/astral-sh/ruff/issues/7455#issuecomment-1722458885
58 | assert getattr(func, '_rpc')is True
B010 [*] Do not call `setattr` with a constant attribute value. It is not any safer than normal property access.
--> B009_B010.py:81:1
|
79 | # Example: the long s character "ſ" normalizes to "s" under NFKC.
80 | getattr(foo, "ſ")
81 | setattr(foo, "ſ", 1)
| ^^^^^^^^^^^^^^^^^^^^
|
help: Replace `setattr` with assignment
78 | # `ns.ſ` would be interpreted as `ns.s` at runtime, changing behavior.
79 | # Example: the long s character "ſ" normalizes to "s" under NFKC.
80 | getattr(foo, "ſ")
- setattr(foo, "ſ", 1)
81 + foo.ſ = 1
note: This is an unsafe fix and may change runtime behavior

View File

@@ -43,7 +43,7 @@ use crate::rules::flake8_comprehensions::fixes;
/// >>> {x: y for x, y in d1} # Iterates over the keys of a mapping
/// {1: 2, 4: 5}
/// >>> dict(d1) # Ruff's incorrect suggested fix
/// (1, 2): 3, (4, 5): 6}
/// {(1, 2): 3, (4, 5): 6}
/// >>> dict(d1.keys()) # Correct fix
/// {1: 2, 4: 5}
/// ```
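
A concrete version of the corrected doctest, assuming a mapping whose keys are 2-tuples:

```python
d1 = {(1, 2): 3, (4, 5): 6}

# Iterating over a dict yields its keys, so the comprehension unpacks them.
assert {x: y for x, y in d1} == {1: 2, 4: 5}

# `dict(d1)` merely copies the mapping - not an equivalent rewrite.
assert dict(d1) == {(1, 2): 3, (4, 5): 6}

# `dict(d1.keys())` builds the dict from the key pairs - the correct fix.
assert dict(d1.keys()) == {1: 2, 4: 5}
```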

View File

@@ -78,7 +78,7 @@ pub(crate) fn unconventional_import_alias(
let mut diagnostic = checker.report_diagnostic(
UnconventionalImportAlias {
name: qualified_name,
asname: expected_alias.to_string(),
asname: expected_alias.clone(),
},
binding.range(),
);

View File

@@ -6,21 +6,17 @@ use ruff_macros::CacheKey;
#[derive(Clone, Copy, Debug, CacheKey, PartialEq, Eq, Serialize, Deserialize)]
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
#[derive(Default)]
pub enum ParametrizeNameType {
#[serde(rename = "csv")]
Csv,
#[serde(rename = "tuple")]
#[default]
Tuple,
#[serde(rename = "list")]
List,
}
impl Default for ParametrizeNameType {
fn default() -> Self {
Self::Tuple
}
}
impl Display for ParametrizeNameType {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self {
@@ -33,19 +29,15 @@ impl Display for ParametrizeNameType {
#[derive(Clone, Copy, Debug, CacheKey, PartialEq, Eq, Serialize, Deserialize)]
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
#[derive(Default)]
pub enum ParametrizeValuesType {
#[serde(rename = "tuple")]
Tuple,
#[serde(rename = "list")]
#[default]
List,
}
impl Default for ParametrizeValuesType {
fn default() -> Self {
Self::List
}
}
impl Display for ParametrizeValuesType {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self {
@@ -57,19 +49,15 @@ impl Display for ParametrizeValuesType {
#[derive(Clone, Copy, Debug, CacheKey, PartialEq, Eq, Serialize, Deserialize)]
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
#[derive(Default)]
pub enum ParametrizeValuesRowType {
#[serde(rename = "tuple")]
#[default]
Tuple,
#[serde(rename = "list")]
List,
}
impl Default for ParametrizeValuesRowType {
fn default() -> Self {
Self::Tuple
}
}
impl Display for ParametrizeValuesRowType {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self {

View File

@@ -9,19 +9,15 @@ use ruff_macros::CacheKey;
#[derive(Debug, Copy, Clone, PartialEq, Eq, Serialize, Deserialize, CacheKey)]
#[serde(deny_unknown_fields, rename_all = "kebab-case")]
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
#[derive(Default)]
pub enum Quote {
/// Use double quotes.
#[default]
Double,
/// Use single quotes.
Single,
}
impl Default for Quote {
fn default() -> Self {
Self::Double
}
}
impl From<ruff_python_ast::str::Quote> for Quote {
fn from(value: ruff_python_ast::str::Quote) -> Self {
match value {

View File

@@ -116,7 +116,7 @@ pub(crate) fn convert_for_loop_to_any_all(checker: &Checker, stmt: &Stmt) {
let mut diagnostic = checker.report_diagnostic(
ReimplementedBuiltin {
replacement: contents.to_string(),
replacement: contents.clone(),
},
TextRange::new(stmt.start(), terminal.stmt.end()),
);
@@ -212,7 +212,7 @@ pub(crate) fn convert_for_loop_to_any_all(checker: &Checker, stmt: &Stmt) {
let mut diagnostic = checker.report_diagnostic(
ReimplementedBuiltin {
replacement: contents.to_string(),
replacement: contents.clone(),
},
TextRange::new(stmt.start(), terminal.stmt.end()),
);

View File

@@ -47,7 +47,7 @@ pub(crate) fn banned_api<T: Ranged>(checker: &Checker, policy: &NameMatchPolicy,
checker.report_diagnostic(
BannedApi {
name: banned_module,
message: reason.msg.to_string(),
message: reason.msg.clone(),
},
node.range(),
);
@@ -74,8 +74,8 @@ pub(crate) fn banned_attribute_access(checker: &Checker, expr: &Expr) {
{
checker.report_diagnostic(
BannedApi {
name: banned_path.to_string(),
message: ban.msg.to_string(),
name: banned_path.clone(),
message: ban.msg.clone(),
},
expr.range(),
);

View File

@@ -20,21 +20,17 @@ use super::categorize::ImportSection;
#[derive(Debug, Copy, Clone, PartialEq, Eq, Serialize, Deserialize, CacheKey)]
#[serde(deny_unknown_fields, rename_all = "kebab-case")]
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
#[derive(Default)]
pub enum RelativeImportsOrder {
/// Place "closer" imports (fewer `.` characters, most local) before
/// "further" imports (more `.` characters, least local).
ClosestToFurthest,
/// Place "further" imports (more `.` characters, least local) imports
/// before "closer" imports (fewer `.` characters, most local).
#[default]
FurthestToClosest,
}
impl Default for RelativeImportsOrder {
fn default() -> Self {
Self::FurthestToClosest
}
}
impl Display for RelativeImportsOrder {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
match self {

View File

@@ -427,7 +427,7 @@ pub(crate) fn literal_comparisons(checker: &Checker, compare: &ast::ExprCompare)
for diagnostic in &mut diagnostics {
diagnostic.set_fix(Fix::unsafe_edit(Edit::range_replacement(
content.to_string(),
content.clone(),
compare.range(),
)));
}

View File

@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
---
E231 [*] Missing whitespace after ','
E231 [*] Missing whitespace after `,`
--> E23.py:2:7
|
1 | #: E231
@@ -18,7 +18,7 @@ help: Add missing whitespace
4 | a[b1,:]
5 | #: E231
E231 [*] Missing whitespace after ','
E231 [*] Missing whitespace after `,`
--> E23.py:4:5
|
2 | a = (1,2)
@@ -38,7 +38,7 @@ help: Add missing whitespace
6 | a = [{'a':''}]
7 | #: Okay
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:6:10
|
4 | a[b1,:]
@@ -58,7 +58,7 @@ help: Add missing whitespace
8 | a = (4,)
9 | b = (5, )
E231 [*] Missing whitespace after ','
E231 [*] Missing whitespace after `,`
--> E23.py:19:10
|
17 | def foo() -> None:
@@ -77,7 +77,7 @@ help: Add missing whitespace
21 |
22 | #: Okay
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:29:20
|
27 | mdtypes_template = {
@@ -96,7 +96,7 @@ help: Add missing whitespace
31 |
32 | # E231
E231 [*] Missing whitespace after ','
E231 [*] Missing whitespace after `,`
--> E23.py:33:6
|
32 | # E231
@@ -115,7 +115,7 @@ help: Add missing whitespace
35 | # Okay because it's hard to differentiate between the usages of a colon in a f-string
36 | f"{a:=1}"
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:47:37
|
46 | #: E231
@@ -134,7 +134,7 @@ help: Add missing whitespace
49 | #: Okay
50 | a = (1,)
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:60:13
|
58 | results = {
@@ -154,7 +154,7 @@ help: Add missing whitespace
62 | results_in_tuple = (
63 | {
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:65:17
|
63 | {
@@ -174,7 +174,7 @@ help: Add missing whitespace
67 | )
68 | results_in_list = [
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:71:17
|
69 | {
@@ -194,7 +194,7 @@ help: Add missing whitespace
73 | ]
74 | results_in_list_first = [
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:76:17
|
74 | results_in_list_first = [
@@ -214,7 +214,7 @@ help: Add missing whitespace
78 | ]
79 |
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:82:13
|
80 | x = [
@@ -234,7 +234,7 @@ help: Add missing whitespace
84 | "k3":[2], # E231
85 | "k4": [2],
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:84:13
|
82 | "k1":[2], # E231
@@ -254,7 +254,7 @@ help: Add missing whitespace
86 | "k5": [2],
87 | "k6": [1, 2, 3, 4,5,6,7] # E231
E231 [*] Missing whitespace after ','
E231 [*] Missing whitespace after `,`
--> E23.py:87:26
|
85 | "k4": [2],
@@ -274,7 +274,7 @@ help: Add missing whitespace
89 | {
90 | "k1": [
E231 [*] Missing whitespace after ','
E231 [*] Missing whitespace after `,`
--> E23.py:87:28
|
85 | "k4": [2],
@@ -294,7 +294,7 @@ help: Add missing whitespace
89 | {
90 | "k1": [
E231 [*] Missing whitespace after ','
E231 [*] Missing whitespace after `,`
--> E23.py:87:30
|
85 | "k4": [2],
@@ -314,7 +314,7 @@ help: Add missing whitespace
89 | {
90 | "k1": [
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:92:21
|
90 | "k1": [
@@ -334,7 +334,7 @@ help: Add missing whitespace
94 | {
95 | "kb": [2,3], # E231
E231 [*] Missing whitespace after ','
E231 [*] Missing whitespace after `,`
--> E23.py:92:24
|
90 | "k1": [
@@ -354,7 +354,7 @@ help: Add missing whitespace
94 | {
95 | "kb": [2,3], # E231
E231 [*] Missing whitespace after ','
E231 [*] Missing whitespace after `,`
--> E23.py:95:25
|
93 | },
@@ -374,7 +374,7 @@ help: Add missing whitespace
97 | {
98 | "ka":[2, 3], # E231
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:98:21
|
96 | },
@@ -394,7 +394,7 @@ help: Add missing whitespace
100 | "kc": [2, 3], # Ok
101 | "kd": [2,3], # E231
E231 [*] Missing whitespace after ','
E231 [*] Missing whitespace after `,`
--> E23.py:101:25
|
99 | "kb": [2, 3], # Ok
@@ -414,7 +414,7 @@ help: Add missing whitespace
103 | },
104 | ]
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:102:21
|
100 | "kc": [2, 3], # Ok
@@ -434,7 +434,7 @@ help: Add missing whitespace
104 | ]
105 | }
E231 [*] Missing whitespace after ','
E231 [*] Missing whitespace after `,`
--> E23.py:102:24
|
100 | "kc": [2, 3], # Ok
@@ -454,7 +454,7 @@ help: Add missing whitespace
104 | ]
105 | }
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:109:18
|
108 | # Should be E231 errors on all of these type parameters and function parameters, but not on their (strange) defaults
@@ -473,7 +473,7 @@ help: Add missing whitespace
111 | y:B = [[["foo", "bar"]]],
112 | z:object = "fooo",
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:109:40
|
108 | # Should be E231 errors on all of these type parameters and function parameters, but not on their (strange) defaults
@@ -492,7 +492,7 @@ help: Add missing whitespace
111 | y:B = [[["foo", "bar"]]],
112 | z:object = "fooo",
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:109:70
|
108 | # Should be E231 errors on all of these type parameters and function parameters, but not on their (strange) defaults
@@ -511,7 +511,7 @@ help: Add missing whitespace
111 | y:B = [[["foo", "bar"]]],
112 | z:object = "fooo",
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:110:6
|
108 | # Should be E231 errors on all of these type parameters and function parameters, but not on their (strange) defaults
@@ -531,7 +531,7 @@ help: Add missing whitespace
112 | z:object = "fooo",
113 | ):
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:111:6
|
109 | def pep_696_bad[A:object="foo"[::-1], B:object =[[["foo", "bar"]]], C:object= bytes](
@@ -551,7 +551,7 @@ help: Add missing whitespace
113 | ):
114 | pass
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:112:6
|
110 | x:A = "foo"[::-1],
@@ -571,7 +571,7 @@ help: Add missing whitespace
114 | pass
115 |
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:116:18
|
114 | pass
@@ -591,7 +591,7 @@ help: Add missing whitespace
118 | self,
119 | x:A = "foo"[::-1],
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:116:40
|
114 | pass
@@ -611,7 +611,7 @@ help: Add missing whitespace
118 | self,
119 | x:A = "foo"[::-1],
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:116:70
|
114 | pass
@@ -631,7 +631,7 @@ help: Add missing whitespace
118 | self,
119 | x:A = "foo"[::-1],
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:117:29
|
116 | class PEP696Bad[A:object="foo"[::-1], B:object =[[["foo", "bar"]]], C:object= bytes]:
@@ -650,7 +650,7 @@ help: Add missing whitespace
119 | x:A = "foo"[::-1],
120 | y:B = [[["foo", "bar"]]],
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:117:51
|
116 | class PEP696Bad[A:object="foo"[::-1], B:object =[[["foo", "bar"]]], C:object= bytes]:
@@ -669,7 +669,7 @@ help: Add missing whitespace
119 | x:A = "foo"[::-1],
120 | y:B = [[["foo", "bar"]]],
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:117:81
|
116 | class PEP696Bad[A:object="foo"[::-1], B:object =[[["foo", "bar"]]], C:object= bytes]:
@@ -688,7 +688,7 @@ help: Add missing whitespace
119 | x:A = "foo"[::-1],
120 | y:B = [[["foo", "bar"]]],
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:119:10
|
117 | def pep_696_bad_method[A:object="foo"[::-1], B:object =[[["foo", "bar"]]], C:object= bytes](
@@ -708,7 +708,7 @@ help: Add missing whitespace
121 | z:object = "fooo",
122 | ):
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:120:10
|
118 | self,
@@ -728,7 +728,7 @@ help: Add missing whitespace
122 | ):
123 | pass
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:121:10
|
119 | x:A = "foo"[::-1],
@@ -748,7 +748,7 @@ help: Add missing whitespace
123 | pass
124 |
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:125:32
|
123 | pass
@@ -768,7 +768,7 @@ help: Add missing whitespace
127 | pass
128 |
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:125:54
|
123 | pass
@@ -788,7 +788,7 @@ help: Add missing whitespace
127 | pass
128 |
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:125:84
|
123 | pass
@@ -808,7 +808,7 @@ help: Add missing whitespace
127 | pass
128 |
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:126:47
|
125 | class PEP696BadWithEmptyBases[A:object="foo"[::-1], B:object =[[["foo", "bar"]]], C:object= bytes]():
@@ -826,7 +826,7 @@ help: Add missing whitespace
128 |
129 | # Should be no E231 errors on any of these:
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:126:69
|
125 | class PEP696BadWithEmptyBases[A:object="foo"[::-1], B:object =[[["foo", "bar"]]], C:object= bytes]():
@@ -844,7 +844,7 @@ help: Add missing whitespace
128 |
129 | # Should be no E231 errors on any of these:
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:126:99
|
125 | class PEP696BadWithEmptyBases[A:object="foo"[::-1], B:object =[[["foo", "bar"]]], C:object= bytes]():
@@ -862,7 +862,7 @@ help: Add missing whitespace
128 |
129 | # Should be no E231 errors on any of these:
E231 [*] Missing whitespace after ','
E231 [*] Missing whitespace after `,`
--> E23.py:147:6
|
146 | # E231
@@ -881,7 +881,7 @@ help: Add missing whitespace
149 | # Okay because it's hard to differentiate between the usages of a colon in a t-string
150 | t"{a:=1}"
E231 [*] Missing whitespace after ':'
E231 [*] Missing whitespace after `:`
--> E23.py:161:37
|
160 | #: E231
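
For reference, the kind of code these snapshots exercise (taken from the fixture; only the diagnostic message changed, from quoting `','`/`':'` to backticks):

```python
a = (1,2)      # E231: missing whitespace after `,`
a = (1, 2)     # fixed
d = {'a':''}   # E231: missing whitespace after `:`
d = {'a': ''}  # fixed
x = 3
s = f"{x:=1}"  # not flagged: the `:` introduces an f-string format spec
```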

View File

@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
---
invalid-syntax: Expected ']', found '('
invalid-syntax: Expected `]`, found `(`
--> E30_syntax_error.py:4:15
|
2 | # parenthesis.
@@ -11,7 +11,7 @@ invalid-syntax: Expected ']', found '('
5 | pass
|
invalid-syntax: Expected ')', found newline
invalid-syntax: Expected `)`, found newline
--> E30_syntax_error.py:13:18
|
12 | class Foo:
@@ -32,7 +32,7 @@ E301 Expected 1 blank line, found 0
|
help: Add missing blank line
invalid-syntax: Expected ')', found newline
invalid-syntax: Expected `)`, found newline
--> E30_syntax_error.py:18:11
|
16 | pass
@@ -41,7 +41,7 @@ invalid-syntax: Expected ')', found newline
| ^
|
invalid-syntax: Expected ')', found newline
invalid-syntax: Expected `)`, found newline
--> E30_syntax_error.py:21:9
|
21 | def top(

View File

@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
---
invalid-syntax: Expected ']', found '('
invalid-syntax: Expected `]`, found `(`
--> E30_syntax_error.py:4:15
|
2 | # parenthesis.
@@ -22,7 +22,7 @@ E302 Expected 2 blank lines, found 1
|
help: Add missing blank line(s)
invalid-syntax: Expected ')', found newline
invalid-syntax: Expected `)`, found newline
--> E30_syntax_error.py:13:18
|
12 | class Foo:
@@ -32,7 +32,7 @@ invalid-syntax: Expected ')', found newline
15 | def method():
|
invalid-syntax: Expected ')', found newline
invalid-syntax: Expected `)`, found newline
--> E30_syntax_error.py:18:11
|
16 | pass
@@ -41,7 +41,7 @@ invalid-syntax: Expected ')', found newline
| ^
|
invalid-syntax: Expected ')', found newline
invalid-syntax: Expected `)`, found newline
--> E30_syntax_error.py:21:9
|
21 | def top(

View File

@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
---
invalid-syntax: Expected ']', found '('
invalid-syntax: Expected `]`, found `(`
--> E30_syntax_error.py:4:15
|
2 | # parenthesis.
@@ -21,7 +21,7 @@ E303 Too many blank lines (3)
|
help: Remove extraneous blank line(s)
invalid-syntax: Expected ')', found newline
invalid-syntax: Expected `)`, found newline
--> E30_syntax_error.py:13:18
|
12 | class Foo:
@@ -31,7 +31,7 @@ invalid-syntax: Expected ')', found newline
15 | def method():
|
invalid-syntax: Expected ')', found newline
invalid-syntax: Expected `)`, found newline
--> E30_syntax_error.py:18:11
|
16 | pass
@@ -40,7 +40,7 @@ invalid-syntax: Expected ')', found newline
| ^
|
invalid-syntax: Expected ')', found newline
invalid-syntax: Expected `)`, found newline
--> E30_syntax_error.py:21:9
|
21 | def top(

View File

@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
---
invalid-syntax: Expected ']', found '('
invalid-syntax: Expected `]`, found `(`
--> E30_syntax_error.py:4:15
|
2 | # parenthesis.
@@ -11,7 +11,7 @@ invalid-syntax: Expected ']', found '('
5 | pass
|
invalid-syntax: Expected ')', found newline
invalid-syntax: Expected `)`, found newline
--> E30_syntax_error.py:13:18
|
12 | class Foo:
@@ -31,7 +31,7 @@ E305 Expected 2 blank lines after class or function definition, found (1)
|
help: Add missing blank line(s)
invalid-syntax: Expected ')', found newline
invalid-syntax: Expected `)`, found newline
--> E30_syntax_error.py:18:11
|
16 | pass
@@ -40,7 +40,7 @@ invalid-syntax: Expected ')', found newline
| ^
|
invalid-syntax: Expected ')', found newline
invalid-syntax: Expected `)`, found newline
--> E30_syntax_error.py:21:9
|
21 | def top(

View File

@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
---
invalid-syntax: Expected ']', found '('
invalid-syntax: Expected `]`, found `(`
--> E30_syntax_error.py:4:15
|
2 | # parenthesis.
@@ -11,7 +11,7 @@ invalid-syntax: Expected ']', found '('
5 | pass
|
invalid-syntax: Expected ')', found newline
invalid-syntax: Expected `)`, found newline
--> E30_syntax_error.py:13:18
|
12 | class Foo:
@@ -21,7 +21,7 @@ invalid-syntax: Expected ')', found newline
15 | def method():
|
invalid-syntax: Expected ')', found newline
invalid-syntax: Expected `)`, found newline
--> E30_syntax_error.py:18:11
|
16 | pass
@@ -30,7 +30,7 @@ invalid-syntax: Expected ')', found newline
| ^
|
invalid-syntax: Expected ')', found newline
invalid-syntax: Expected `)`, found newline
--> E30_syntax_error.py:21:9
|
21 | def top(

View File

@@ -94,7 +94,7 @@ pub(crate) fn capitalized(checker: &Checker, docstring: &Docstring) {
let mut diagnostic = checker.report_diagnostic(
FirstWordUncapitalized {
first_word: first_word.to_string(),
capitalized_word: capitalized_word.to_string(),
capitalized_word: capitalized_word.clone(),
},
docstring.range(),
);

View File

@@ -1,9 +1,6 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast as ast;
use ruff_text_size::Ranged;
use crate::Violation;
use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for `nonlocal` names without bindings.
@@ -46,19 +43,3 @@ impl Violation for NonlocalWithoutBinding {
format!("Nonlocal name `{name}` found without binding")
}
}
/// PLE0117
pub(crate) fn nonlocal_without_binding(checker: &Checker, nonlocal: &ast::StmtNonlocal) {
if !checker.semantic().scope_id.is_global() {
for name in &nonlocal.names {
if checker.semantic().nonlocal(name).is_none() {
checker.report_diagnostic(
NonlocalWithoutBinding {
name: name.to_string(),
},
name.range(),
);
}
}
}
}
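
For context, the situation PLE0117 describes is one CPython itself rejects at compile time; a quick way to see it (the exact message wording may vary by version):

```python
src = """
def outer():
    def inner():
        nonlocal x
        x = 1
"""
try:
    compile(src, "<example>", "exec")
except SyntaxError as exc:
    print(exc.msg)  # e.g. "no binding for nonlocal 'x' found"
```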

View File

@@ -188,7 +188,7 @@ pub(crate) fn bit_count(checker: &Checker, call: &ExprCall) {
let mut diagnostic = checker.report_diagnostic(
BitCount {
existing: SourceCodeSnippet::from_str(literal_text),
replacement: SourceCodeSnippet::new(replacement.to_string()),
replacement: SourceCodeSnippet::new(replacement.clone()),
},
call.range(),
);

View File

@@ -62,40 +62,11 @@ pub(crate) fn hardcoded_string_charset_literal(checker: &Checker, expr: &ExprStr
struct NamedCharset {
name: &'static str,
bytes: &'static [u8],
ascii_char_set: AsciiCharSet,
}
/// Represents the set of ascii characters in form of a bitset.
#[derive(Debug, Copy, Clone, Eq, PartialEq)]
struct AsciiCharSet(u128);
impl AsciiCharSet {
/// Creates the set of ascii characters from `bytes`.
/// Returns None if there is non-ascii byte.
const fn from_bytes(bytes: &[u8]) -> Option<Self> {
// TODO: simplify implementation, when const-traits are supported
// https://github.com/rust-lang/rust-project-goals/issues/106
let mut bitset = 0;
let mut i = 0;
while i < bytes.len() {
if !bytes[i].is_ascii() {
return None;
}
bitset |= 1 << bytes[i];
i += 1;
}
Some(Self(bitset))
}
}
impl NamedCharset {
const fn new(name: &'static str, bytes: &'static [u8]) -> Self {
Self {
name,
bytes,
// SAFETY: The named charset is guaranteed to have only ascii bytes.
ascii_char_set: AsciiCharSet::from_bytes(bytes).unwrap(),
}
Self { name, bytes }
}
}
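
The removed `AsciiCharSet` represented a charset as a 128-bit bitset; a Python sketch of the same idea (function name is illustrative):

```python
import string

def ascii_char_set(charset: bytes) -> int | None:
    """One bit per ASCII byte; None signals a non-ASCII byte."""
    bitset = 0
    for b in charset:
        if b >= 0x80:
            return None
        bitset |= 1 << b
    return bitset

assert ascii_char_set(string.ascii_lowercase.encode()) is not None
assert ascii_char_set("é".encode()) is None               # non-ASCII input
assert ascii_char_set(b"abc") == ascii_char_set(b"cba")   # order-insensitive
```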

View File

@@ -149,10 +149,9 @@ pub(crate) fn unnecessary_from_float(checker: &Checker, call: &ExprCall) {
// Check if we should suppress the fix due to type validation concerns
let is_type_safe = is_valid_argument_type(arg_value, method_name, constructor, checker);
let has_keywords = !call.arguments.keywords.is_empty();
// Determine fix safety
let applicability = if is_type_safe && !has_keywords {
let applicability = if is_type_safe {
Applicability::Safe
} else {
Applicability::Unsafe
@@ -210,21 +209,27 @@ fn is_valid_argument_type(
_ => false,
},
// Fraction.from_decimal accepts int, bool, Decimal
(MethodName::FromDecimal, Constructor::Fraction) => match resolved_type {
ResolvedPythonType::Atom(PythonType::Number(
NumberLike::Integer | NumberLike::Bool,
)) => true,
ResolvedPythonType::Unknown => is_int,
_ => {
// Check if it's a Decimal instance
arg_expr
.as_call_expr()
.and_then(|call| semantic.resolve_qualified_name(&call.func))
.is_some_and(|qualified_name| {
matches!(qualified_name.segments(), ["decimal", "Decimal"])
})
(MethodName::FromDecimal, Constructor::Fraction) => {
// First check if it's a Decimal constructor call
let is_decimal_call = arg_expr
.as_call_expr()
.and_then(|call| semantic.resolve_qualified_name(&call.func))
.is_some_and(|qualified_name| {
matches!(qualified_name.segments(), ["decimal", "Decimal"])
});
if is_decimal_call {
return true;
}
},
match resolved_type {
ResolvedPythonType::Atom(PythonType::Number(
NumberLike::Integer | NumberLike::Bool,
)) => true,
ResolvedPythonType::Unknown => is_int,
_ => false,
}
}
_ => false,
}
}
@@ -274,7 +279,7 @@ fn handle_non_finite_float_special_case(
return None;
}
let Expr::Call(ast::ExprCall {
let Expr::Call(ExprCall {
func, arguments, ..
}) = arg_value
else {
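
The rewrites FURB164 suggests rely on the plain constructors accepting the same inputs as the verbose classmethods; a quick sanity check on Python 3.2+:

```python
from decimal import Decimal
from fractions import Fraction

assert Fraction.from_float(4.2) == Fraction(4.2)
assert Fraction.from_decimal(Decimal("4.2")) == Fraction(Decimal("4.2"))
assert Fraction.from_decimal(4) == Fraction(4)   # ints are accepted too
assert Decimal.from_float(0.1) == Decimal(0.1)
```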

View File

@@ -93,16 +93,21 @@ pub(crate) fn verbose_decimal_constructor(checker: &Checker, call: &ast::ExprCal
// https://github.com/python/cpython/blob/ac556a2ad1213b8bb81372fe6fb762f5fcb076de/Lib/_pydecimal.py#L6060-L6077
// _after_ trimming whitespace from the string and removing all occurrences of "_".
let original_str = str_literal.to_str().trim_whitespace();
// Strip leading underscores before extracting the sign, as Python's Decimal parser
// removes underscores before parsing the sign.
let sign_check_str = original_str.trim_start_matches('_');
// Extract the unary sign, if any.
let (unary, original_str) = if let Some(trimmed) = original_str.strip_prefix('+') {
let (unary, sign_check_str) = if let Some(trimmed) = sign_check_str.strip_prefix('+') {
("+", trimmed)
} else if let Some(trimmed) = original_str.strip_prefix('-') {
} else if let Some(trimmed) = sign_check_str.strip_prefix('-') {
("-", trimmed)
} else {
("", original_str)
("", sign_check_str)
};
let mut rest = Cow::from(original_str);
let has_digit_separators = memchr::memchr(b'_', rest.as_bytes()).is_some();
// Save the string after the sign for normalization (before removing underscores)
let str_after_sign_for_normalization = sign_check_str;
let mut rest = Cow::from(sign_check_str);
let has_digit_separators = memchr::memchr(b'_', original_str.as_bytes()).is_some();
if has_digit_separators {
rest = Cow::from(rest.replace('_', ""));
}
@@ -123,7 +128,7 @@ pub(crate) fn verbose_decimal_constructor(checker: &Checker, call: &ast::ExprCal
// If the original string had digit separators, normalize them
let rest = if has_digit_separators {
Cow::from(normalize_digit_separators(original_str))
Cow::from(normalize_digit_separators(str_after_sign_for_normalization))
} else {
Cow::from(rest)
};
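
The behavior the new comments describe is easy to confirm: Python's `Decimal` trims whitespace and removes every underscore before it looks at the sign, so the literals in the new test cases are all valid:

```python
from decimal import Decimal

assert Decimal("_-1") == Decimal("-1")
assert Decimal("_+1_000") == Decimal("1000")
assert Decimal("000_000") == Decimal("0")
```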

View File

@@ -5,7 +5,6 @@ use ruff_python_ast::{
relocate::relocate_expr,
visitor::{self, Visitor},
};
use ruff_python_codegen::Generator;
use ruff_text_size::{Ranged, TextRange};

View File

@@ -134,3 +134,14 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(fo
75 | f.write(foobar)
|
help: Replace with `Path("file.txt").write_text(foobar, newline="\r\n")`
FURB103 `open` and `write` should be replaced by `Path("test.json")....`
--> FURB103.py:154:6
|
152 | data = {"price": 100}
153 |
154 | with open("test.json", "wb") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
155 | f.write(json.dumps(data, indent=4).encode("utf-8"))
|
help: Replace with `Path("test.json")....`

View File

@@ -669,6 +669,7 @@ help: Replace with `1_2345`
85 + Decimal(1_2345)
86 | Decimal("000_1_2345")
87 | Decimal("000_000")
88 |
FURB157 [*] Verbose expression in `Decimal` constructor
--> FURB157.py:86:9
@@ -686,6 +687,8 @@ help: Replace with `1_2345`
- Decimal("000_1_2345")
86 + Decimal(1_2345)
87 | Decimal("000_000")
88 |
89 | # Test cases for underscores before sign
FURB157 [*] Verbose expression in `Decimal` constructor
--> FURB157.py:87:9
@@ -694,6 +697,8 @@ FURB157 [*] Verbose expression in `Decimal` constructor
86 | Decimal("000_1_2345")
87 | Decimal("000_000")
| ^^^^^^^^^
88 |
89 | # Test cases for underscores before sign
|
help: Replace with `0`
84 | # Separators _and_ leading zeros
@@ -701,3 +706,57 @@ help: Replace with `0`
86 | Decimal("000_1_2345")
- Decimal("000_000")
87 + Decimal(0)
88 |
89 | # Test cases for underscores before sign
90 | # https://github.com/astral-sh/ruff/issues/21186
FURB157 [*] Verbose expression in `Decimal` constructor
--> FURB157.py:91:9
|
89 | # Test cases for underscores before sign
90 | # https://github.com/astral-sh/ruff/issues/21186
91 | Decimal("_-1") # Should flag as verbose
| ^^^^^
92 | Decimal("_+1") # Should flag as verbose
93 | Decimal("_-1_000") # Should flag as verbose
|
help: Replace with `-1`
88 |
89 | # Test cases for underscores before sign
90 | # https://github.com/astral-sh/ruff/issues/21186
- Decimal("_-1") # Should flag as verbose
91 + Decimal(-1) # Should flag as verbose
92 | Decimal("_+1") # Should flag as verbose
93 | Decimal("_-1_000") # Should flag as verbose
FURB157 [*] Verbose expression in `Decimal` constructor
--> FURB157.py:92:9
|
90 | # https://github.com/astral-sh/ruff/issues/21186
91 | Decimal("_-1") # Should flag as verbose
92 | Decimal("_+1") # Should flag as verbose
| ^^^^^
93 | Decimal("_-1_000") # Should flag as verbose
|
help: Replace with `+1`
89 | # Test cases for underscores before sign
90 | # https://github.com/astral-sh/ruff/issues/21186
91 | Decimal("_-1") # Should flag as verbose
- Decimal("_+1") # Should flag as verbose
92 + Decimal(+1) # Should flag as verbose
93 | Decimal("_-1_000") # Should flag as verbose
FURB157 [*] Verbose expression in `Decimal` constructor
--> FURB157.py:93:9
|
91 | Decimal("_-1") # Should flag as verbose
92 | Decimal("_+1") # Should flag as verbose
93 | Decimal("_-1_000") # Should flag as verbose
| ^^^^^^^^^
|
help: Replace with `-1_000`
90 | # https://github.com/astral-sh/ruff/issues/21186
91 | Decimal("_-1") # Should flag as verbose
92 | Decimal("_+1") # Should flag as verbose
- Decimal("_-1_000") # Should flag as verbose
93 + Decimal(-1_000) # Should flag as verbose

View File

@@ -99,7 +99,6 @@ help: Replace with `Fraction` constructor
12 | _ = Fraction.from_decimal(Decimal("-4.2"))
13 | _ = Fraction.from_decimal(Decimal.from_float(4.2))
14 | _ = Decimal.from_float(0.1)
note: This is an unsafe fix and may change runtime behavior
FURB164 [*] Verbose method `from_decimal` in `Fraction` construction
--> FURB164.py:12:5
@@ -120,7 +119,6 @@ help: Replace with `Fraction` constructor
13 | _ = Fraction.from_decimal(Decimal.from_float(4.2))
14 | _ = Decimal.from_float(0.1)
15 | _ = Decimal.from_float(-0.5)
note: This is an unsafe fix and may change runtime behavior
FURB164 [*] Verbose method `from_decimal` in `Fraction` construction
--> FURB164.py:13:5
@@ -484,7 +482,6 @@ help: Replace with `Fraction` constructor
32 | _ = Decimal.from_float(f=4.2)
33 |
34 | # Cases with invalid argument counts - should not get fixes
note: This is an unsafe fix and may change runtime behavior
FURB164 Verbose method `from_float` in `Decimal` construction
--> FURB164.py:32:5
@@ -658,6 +655,7 @@ help: Replace with `Decimal` constructor
64 + _ = Decimal("nan")
65 | _ = Decimal.from_float(float("\x2dnan"))
66 | _ = Decimal.from_float(float("\N{HYPHEN-MINUS}nan"))
67 |
FURB164 [*] Verbose method `from_float` in `Decimal` construction
--> FURB164.py:65:5
@@ -675,6 +673,8 @@ help: Replace with `Decimal` constructor
- _ = Decimal.from_float(float("\x2dnan"))
65 + _ = Decimal("nan")
66 | _ = Decimal.from_float(float("\N{HYPHEN-MINUS}nan"))
67 |
68 | # See: https://github.com/astral-sh/ruff/issues/21257
FURB164 [*] Verbose method `from_float` in `Decimal` construction
--> FURB164.py:66:5
@@ -683,6 +683,8 @@ FURB164 [*] Verbose method `from_float` in `Decimal` construction
65 | _ = Decimal.from_float(float("\x2dnan"))
66 | _ = Decimal.from_float(float("\N{HYPHEN-MINUS}nan"))
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
67 |
68 | # See: https://github.com/astral-sh/ruff/issues/21257
|
help: Replace with `Decimal` constructor
63 | # Cases with non-finite floats - should produce safe fixes
@@ -690,3 +692,38 @@ help: Replace with `Decimal` constructor
65 | _ = Decimal.from_float(float("\x2dnan"))
- _ = Decimal.from_float(float("\N{HYPHEN-MINUS}nan"))
66 + _ = Decimal("nan")
67 |
68 | # See: https://github.com/astral-sh/ruff/issues/21257
69 | # fixes must be safe
FURB164 [*] Verbose method `from_float` in `Fraction` construction
--> FURB164.py:70:5
|
68 | # See: https://github.com/astral-sh/ruff/issues/21257
69 | # fixes must be safe
70 | _ = Fraction.from_float(f=4.2)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
71 | _ = Fraction.from_decimal(dec=4)
|
help: Replace with `Fraction` constructor
67 |
68 | # See: https://github.com/astral-sh/ruff/issues/21257
69 | # fixes must be safe
- _ = Fraction.from_float(f=4.2)
70 + _ = Fraction(4.2)
71 | _ = Fraction.from_decimal(dec=4)
FURB164 [*] Verbose method `from_decimal` in `Fraction` construction
--> FURB164.py:71:5
|
69 | # fixes must be safe
70 | _ = Fraction.from_float(f=4.2)
71 | _ = Fraction.from_decimal(dec=4)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Replace with `Fraction` constructor
68 | # See: https://github.com/astral-sh/ruff/issues/21257
69 | # fixes must be safe
70 | _ = Fraction.from_float(f=4.2)
- _ = Fraction.from_decimal(dec=4)
71 + _ = Fraction(4)

View File

@@ -257,4 +257,25 @@ help: Replace with `Path("file.txt").write_text(foobar, newline="\r\n")`
75 + pathlib.Path("file.txt").write_text(foobar, newline="\r\n")
76 |
77 | # Non-errors.
78 |
78 |
FURB103 [*] `open` and `write` should be replaced by `Path("test.json")....`
--> FURB103.py:154:6
|
152 | data = {"price": 100}
153 |
154 | with open("test.json", "wb") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
155 | f.write(json.dumps(data, indent=4).encode("utf-8"))
|
help: Replace with `Path("test.json")....`
148 |
149 | # See: https://github.com/astral-sh/ruff/issues/20785
150 | import json
151 + import pathlib
152 |
153 | data = {"price": 100}
154 |
- with open("test.json", "wb") as f:
- f.write(json.dumps(data, indent=4).encode("utf-8"))
155 + pathlib.Path("test.json").write_bytes(json.dumps(data, indent=4).encode("utf-8"))
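
Putting the new FURB103 case together as a standalone before/after sketch (file name taken from the fixture):

```python
import json
import pathlib

data = {"price": 100}

# Before: flagged, because the `with` block only performs a single write.
with open("test.json", "wb") as f:
    f.write(json.dumps(data, indent=4).encode("utf-8"))

# After: the suggested fix uses `write_bytes` for the binary-mode case.
pathlib.Path("test.json").write_bytes(json.dumps(data, indent=4).encode("utf-8"))
```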

View File

@@ -104,3 +104,14 @@ FURB103 `open` and `write` should be replaced by `Path("file.txt").write_text(ba
51 | # writes a single time to file and that bit they can replace.
|
help: Replace with `Path("file.txt").write_text(bar(bar(a + x)))`
FURB103 `open` and `write` should be replaced by `Path("test.json")....`
--> FURB103.py:154:6
|
152 | data = {"price": 100}
153 |
154 | with open("test.json", "wb") as f:
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
155 | f.write(json.dumps(data, indent=4).encode("utf-8"))
|
help: Replace with `Path("test.json")....`

View File

@@ -143,6 +143,15 @@ pub(super) fn rounded_and_ndigits<'a>(
return None;
}
// *args
if arguments.args.iter().any(Expr::is_starred_expr) {
return None;
}
// **kwargs
if arguments.keywords.iter().any(|kw| kw.arg.is_none()) {
return None;
}
let rounded = arguments.find_argument_value("number", 0)?;
let ndigits = arguments.find_argument_value("ndigits", 1);
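
The new guards bail out when arguments arrive via `*`/`**` unpacking, since their values are not statically visible; removing the call would change the result, as in the regression case in the snapshot below:

```python
assert round(125, **{"ndigits": -2}) == 100  # rounds to the nearest hundred
assert round(125) == 125                     # only this form is a no-op
```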

View File

@@ -253,6 +253,8 @@ RUF057 [*] Value being rounded is already an integer
82 | | 17 # a comment
83 | | )
| |_^
84 |
85 | # See: https://github.com/astral-sh/ruff/issues/21209
|
help: Remove unnecessary `round` call
78 | round(# a comment
@@ -262,4 +264,7 @@ help: Remove unnecessary `round` call
- 17 # a comment
- )
81 + 17
82 |
83 | # See: https://github.com/astral-sh/ruff/issues/21209
84 | print(round(125, **{"ndigits": -2}))
note: This is an unsafe fix and may change runtime behavior

View File

@@ -3372,7 +3372,7 @@ impl Arguments {
pub fn arguments_source_order(&self) -> impl Iterator<Item = ArgOrKeyword<'_>> {
let args = self.args.iter().map(ArgOrKeyword::Arg);
let keywords = self.keywords.iter().map(ArgOrKeyword::Keyword);
args.merge_by(keywords, |left, right| left.start() < right.start())
args.merge_by(keywords, |left, right| left.start() <= right.start())
}
pub fn inner_range(&self) -> TextRange {
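
The `<` to `<=` change only matters on ties: when a positional argument and a keyword start at the same offset, `<=` keeps the positional argument first. A toy Python stand-in (not itertools' implementation; the offsets are hypothetical):

```python
def merge_by(left, right, keep_left):
    left, right, out = list(left), list(right), []
    while left and right:
        out.append(left.pop(0) if keep_left(left[0], right[0]) else right.pop(0))
    return out + left + right

args = [(0, "arg@0")]
keywords = [(0, "kw@0"), (10, "kw@10")]

# With `<`, the tie at offset 0 emits the keyword first ...
assert merge_by(args, keywords, lambda l, r: l[0] < r[0])[0] == (0, "kw@0")
# ... with `<=`, the positional argument wins the tie, matching source order.
assert merge_by(args, keywords, lambda l, r: l[0] <= r[0])[0] == (0, "arg@0")
```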

View File

@@ -335,3 +335,96 @@ def overload4():
# trailing comment
def overload4(a: int): ...
# In preview, we preserve these newlines at the start of functions:
def preserved1():
return 1
def preserved2():
pass
def preserved3():
def inner(): ...
def preserved4():
def inner():
print("with a body")
return 1
return 2
def preserved5():
...
# trailing comment prevents collapsing the stub
def preserved6():
# Comment
return 1
def preserved7():
# comment
# another line
# and a third
return 0
def preserved8(): # this also prevents collapsing the stub
...
# But we still discard these newlines:
def removed1():
"Docstring"
return 1
def removed2():
...
def removed3():
... # trailing same-line comment does not prevent collapsing the stub
# And we discard empty lines after the first:
def partially_preserved1():
return 1
# We only preserve blank lines, not add new ones
def untouched1():
# comment
return 0
def untouched2():
# comment
return 0
def untouched3():
# comment
# another line
# and a third
return 0

View File

@@ -61,3 +61,9 @@ def test6 ():
print("Format" )
print(3 + 4)<RANGE_END>
print("Format to fix indentation" )
def test7 ():
<RANGE_START>print("Format" )
print(3 + 4)<RANGE_END>
print("Format to fix indentation" )

View File

@@ -613,3 +613,58 @@ match guard_comments:
):
pass
# regression tests from https://github.com/astral-sh/ruff/issues/17796
match class_pattern:
case Class(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) as capture:
pass
case Class(
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
) as capture:
pass
case Class(
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
) as capture:
pass
case Class(
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
) as really_really_really_really_really_really_really_really_really_really_really_really_long_capture:
pass
match sequence_pattern_brackets:
case [xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] as capture:
pass
case [
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
] as capture:
pass
case [
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
] as capture:
pass
match class_pattern:
# 1
case Class(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) as capture: # 2
# 3
pass # 4
# 5
case Class( # 6
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # 7
) as capture: # 8
pass
case Class( # 9
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # 10
) as capture: # 11
pass
case Class( # 12
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # 13
) as really_really_really_really_really_really_really_really_really_really_really_really_long_capture: # 14
pass
case Class( # 0
# 1
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # 2
# 3
) as capture:
pass

View File

@@ -334,7 +334,7 @@ class A: ...
let options = PyFormatOptions::from_source_type(source_type);
let printed = format_range(&source, TextRange::new(start, end), options).unwrap();
let mut formatted = source.to_string();
let mut formatted = source.clone();
formatted.replace_range(
std::ops::Range::<usize>::from(printed.source_range()),
printed.as_code(),

View File

@@ -1,5 +1,5 @@
use ruff_formatter::{FormatOwnedWithRule, FormatRefWithRule, FormatRule, FormatRuleWithOptions};
use ruff_python_ast::{AnyNodeRef, Expr};
use ruff_python_ast::{AnyNodeRef, Expr, PatternMatchAs};
use ruff_python_ast::{MatchCase, Pattern};
use ruff_python_trivia::CommentRanges;
use ruff_python_trivia::{
@@ -14,6 +14,7 @@ use crate::expression::parentheses::{
NeedsParentheses, OptionalParentheses, Parentheses, optional_parentheses, parenthesized,
};
use crate::prelude::*;
use crate::preview::is_avoid_parens_for_long_as_captures_enabled;
pub(crate) mod pattern_arguments;
pub(crate) mod pattern_keyword;
@@ -242,8 +243,14 @@ pub(crate) fn can_pattern_omit_optional_parentheses(
Pattern::MatchValue(_)
| Pattern::MatchSingleton(_)
| Pattern::MatchStar(_)
| Pattern::MatchAs(_)
| Pattern::MatchOr(_) => false,
Pattern::MatchAs(PatternMatchAs { pattern, .. }) => match pattern {
Some(pattern) => {
is_avoid_parens_for_long_as_captures_enabled(context)
&& has_parentheses_and_is_non_empty(pattern, context)
}
None => false,
},
Pattern::MatchSequence(sequence) => {
!sequence.patterns.is_empty() || context.comments().has_dangling(pattern)
}
@@ -299,7 +306,7 @@ impl<'a> CanOmitOptionalParenthesesVisitor<'a> {
}
// `case 4+3j:` or `case 4-3j:
// Can not contain arbitrary expressions. Limited to complex numbers.
// Cannot contain arbitrary expressions. Limited to complex numbers.
Expr::BinOp(_) => {
self.update_max_precedence(OperatorPrecedence::Additive, 1);
}
@@ -318,7 +325,14 @@ impl<'a> CanOmitOptionalParenthesesVisitor<'a> {
// The pattern doesn't start with a parentheses pattern, but with the class's identifier.
self.first.set_if_none(First::Token);
}
Pattern::MatchStar(_) | Pattern::MatchSingleton(_) | Pattern::MatchAs(_) => {}
Pattern::MatchAs(PatternMatchAs { pattern, .. }) => {
if let Some(pattern) = pattern
&& is_avoid_parens_for_long_as_captures_enabled(context)
{
self.visit_sub_pattern(pattern, context);
}
}
Pattern::MatchStar(_) | Pattern::MatchSingleton(_) => {}
Pattern::MatchOr(or_pattern) => {
self.update_max_precedence(
OperatorPrecedence::Or,

View File

@@ -36,3 +36,19 @@ pub(crate) const fn is_remove_parens_around_except_types_enabled(
) -> bool {
context.is_preview()
}
/// Returns `true` if the
/// [`allow_newline_after_block_open`](https://github.com/astral-sh/ruff/pull/21110) preview style
/// is enabled.
pub(crate) const fn is_allow_newline_after_block_open_enabled(context: &PyFormatContext) -> bool {
context.is_preview()
}
/// Returns `true` if the
/// [`avoid_parens_for_long_as_captures`](https://github.com/astral-sh/ruff/pull/21176) preview
/// style is enabled.
pub(crate) const fn is_avoid_parens_for_long_as_captures_enabled(
context: &PyFormatContext,
) -> bool {
context.is_preview()
}

View File

@@ -8,7 +8,7 @@ use ruff_python_trivia::{SimpleToken, SimpleTokenKind, SimpleTokenizer};
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::comments::{SourceComment, leading_alternate_branch_comments, trailing_comments};
use crate::statement::suite::{SuiteKind, contains_only_an_ellipsis};
use crate::statement::suite::{SuiteKind, as_only_an_ellipsis};
use crate::verbatim::write_suppressed_clause_header;
use crate::{has_skip_comment, prelude::*};
@@ -449,17 +449,10 @@ impl Format<PyFormatContext<'_>> for FormatClauseBody<'_> {
|| matches!(self.kind, SuiteKind::Function | SuiteKind::Class);
if should_collapse_stub
&& contains_only_an_ellipsis(self.body, f.context().comments())
&& let Some(ellipsis) = as_only_an_ellipsis(self.body, f.context().comments())
&& self.trailing_comments.is_empty()
{
write!(
f,
[
space(),
self.body.format().with_options(self.kind),
hard_line_break()
]
)
write!(f, [space(), ellipsis.format(), hard_line_break()])
} else {
write!(
f,

View File

@@ -13,7 +13,9 @@ use crate::comments::{
use crate::context::{NodeLevel, TopLevelStatementPosition, WithIndentLevel, WithNodeLevel};
use crate::other::string_literal::StringLiteralKind;
use crate::prelude::*;
use crate::preview::is_blank_line_before_decorated_class_in_stub_enabled;
use crate::preview::{
is_allow_newline_after_block_open_enabled, is_blank_line_before_decorated_class_in_stub_enabled,
};
use crate::statement::stmt_expr::FormatStmtExpr;
use crate::verbatim::{
suppressed_node, write_suppressed_statements_starting_with_leading_comment,
@@ -169,6 +171,22 @@ impl FormatRule<Suite, PyFormatContext<'_>> for FormatSuite {
false,
)
} else {
// Allow an empty line after a function header in preview, if the function has no
// docstring and no initial comment.
let allow_newline_after_block_open =
is_allow_newline_after_block_open_enabled(f.context())
&& matches!(self.kind, SuiteKind::Function)
&& matches!(first, SuiteChildStatement::Other(_));
let start = comments
.leading(first)
.first()
.map_or_else(|| first.start(), Ranged::start);
if allow_newline_after_block_open && lines_before(start, f.context().source()) > 1 {
empty_line().fmt(f)?;
}
first.fmt(f)?;
let empty_line_after_docstring = if matches!(first, SuiteChildStatement::Docstring(_))
@@ -218,7 +236,7 @@ impl FormatRule<Suite, PyFormatContext<'_>> for FormatSuite {
)?;
} else {
// Preserve empty lines after a stub implementation but don't insert a new one if there isn't any present in the source.
// This is useful when having multiple function overloads that should be grouped to getter by omitting new lines between them.
// This is useful when having multiple function overloads that should be grouped together by omitting new lines between them.
let is_preceding_stub_function_without_empty_line = following
.is_function_def_stmt()
&& preceding
@@ -728,17 +746,21 @@ fn stub_suite_can_omit_empty_line(preceding: &Stmt, following: &Stmt, f: &PyForm
/// Returns `true` if a function or class body contains only an ellipsis with no comments.
pub(crate) fn contains_only_an_ellipsis(body: &[Stmt], comments: &Comments) -> bool {
match body {
[Stmt::Expr(ast::StmtExpr { value, .. })] => {
let [node] = body else {
return false;
};
value.is_ellipsis_literal_expr()
&& !comments.has_leading(node)
&& !comments.has_trailing_own_line(node)
}
_ => false,
as_only_an_ellipsis(body, comments).is_some()
}
/// Returns `Some(Stmt::Ellipsis)` if a function or class body contains only an ellipsis with no
/// comments.
pub(crate) fn as_only_an_ellipsis<'a>(body: &'a [Stmt], comments: &Comments) -> Option<&'a Stmt> {
if let [node @ Stmt::Expr(ast::StmtExpr { value, .. })] = body
&& value.is_ellipsis_literal_expr()
&& !comments.has_leading(node)
&& !comments.has_trailing_own_line(node)
{
return Some(node);
}
None
}
/// Returns `true` if a [`Stmt`] is a class or function definition.

View File

@@ -1,7 +1,6 @@
---
source: crates/ruff_python_formatter/tests/fixtures.rs
input_file: crates/ruff_python_formatter/resources/test/fixtures/ruff/newlines.py
snapshot_kind: text
---
## Input
```python
@@ -342,6 +341,99 @@ def overload4():
# trailing comment
def overload4(a: int): ...
# In preview, we preserve these newlines at the start of functions:
def preserved1():
return 1
def preserved2():
pass
def preserved3():
def inner(): ...
def preserved4():
def inner():
print("with a body")
return 1
return 2
def preserved5():
...
# trailing comment prevents collapsing the stub
def preserved6():
# Comment
return 1
def preserved7():
# comment
# another line
# and a third
return 0
def preserved8(): # this also prevents collapsing the stub
...
# But we still discard these newlines:
def removed1():
"Docstring"
return 1
def removed2():
...
def removed3():
... # trailing same-line comment does not prevent collapsing the stub
# And we discard empty lines after the first:
def partially_preserved1():
return 1
# We only preserve blank lines, not add new ones
def untouched1():
# comment
return 0
def untouched2():
# comment
return 0
def untouched3():
# comment
# another line
# and a third
return 0
```
## Output
@@ -732,6 +824,88 @@ def overload4():
def overload4(a: int): ...
# In preview, we preserve these newlines at the start of functions:
def preserved1():
return 1
def preserved2():
pass
def preserved3():
def inner(): ...
def preserved4():
def inner():
print("with a body")
return 1
return 2
def preserved5():
...
# trailing comment prevents collapsing the stub
def preserved6():
# Comment
return 1
def preserved7():
# comment
# another line
# and a third
return 0
def preserved8(): # this also prevents collapsing the stub
...
# But we still discard these newlines:
def removed1():
"Docstring"
return 1
def removed2(): ...
def removed3(): ... # trailing same-line comment does not prevent collapsing the stub
# And we discard empty lines after the first:
def partially_preserved1():
return 1
# We only preserve blank lines, not add new ones
def untouched1():
# comment
return 0
def untouched2():
# comment
return 0
def untouched3():
# comment
# another line
# and a third
return 0
```
@@ -739,7 +913,15 @@ def overload4(a: int): ...
```diff
--- Stable
+++ Preview
@@ -277,6 +277,7 @@
@@ -253,6 +253,7 @@
def fakehttp():
+
class FakeHTTPConnection:
if mock_close:
@@ -277,6 +278,7 @@
def a():
return 1
@@ -747,7 +929,7 @@ def overload4(a: int): ...
else:
pass
@@ -293,6 +294,7 @@
@@ -293,6 +295,7 @@
def a():
return 1
@@ -755,7 +937,7 @@ def overload4(a: int): ...
case 1:
def a():
@@ -303,6 +305,7 @@
@@ -303,6 +306,7 @@
def a():
return 1
@@ -763,7 +945,7 @@ def overload4(a: int): ...
except RuntimeError:
def a():
@@ -313,6 +316,7 @@
@@ -313,6 +317,7 @@
def a():
return 1
@@ -771,7 +953,7 @@ def overload4(a: int): ...
finally:
def a():
@@ -323,18 +327,22 @@
@@ -323,18 +328,22 @@
def a():
return 1
@@ -794,4 +976,64 @@ def overload4(a: int): ...
finally:
def a():
@@ -388,18 +397,22 @@
# In preview, we preserve these newlines at the start of functions:
def preserved1():
+
return 1
def preserved2():
+
pass
def preserved3():
+
def inner(): ...
def preserved4():
+
def inner():
print("with a body")
return 1
@@ -408,17 +421,20 @@
def preserved5():
+
...
# trailing comment prevents collapsing the stub
def preserved6():
+
# Comment
return 1
def preserved7():
+
# comment
# another line
# and a third
@@ -427,6 +443,7 @@
def preserved8(): # this also prevents collapsing the stub
+
...
@@ -445,6 +462,7 @@
# And we discard empty lines after the first:
def partially_preserved1():
+
return 1
```

View File

@@ -67,6 +67,12 @@ def test6 ():
print("Format" )
print(3 + 4)<RANGE_END>
print("Format to fix indentation" )
def test7 ():
<RANGE_START>print("Format" )
print(3 + 4)<RANGE_END>
print("Format to fix indentation" )
```
## Outputs
@@ -146,6 +152,27 @@ def test6 ():
print("Format")
print(3 + 4)
print("Format to fix indentation" )
def test7 ():
print("Format")
print(3 + 4)
print("Format to fix indentation" )
```
#### Preview changes
```diff
--- Stable
+++ Preview
@@ -55,6 +55,7 @@
def test6 ():
+
print("Format")
print(3 + 4)
print("Format to fix indentation" )
```
@@ -225,6 +252,27 @@ def test6 ():
print("Format")
print(3 + 4)
print("Format to fix indentation")
def test7 ():
print("Format")
print(3 + 4)
print("Format to fix indentation")
```
#### Preview changes
```diff
--- Stable
+++ Preview
@@ -55,6 +55,7 @@
def test6 ():
+
print("Format")
print(3 + 4)
print("Format to fix indentation")
```
@@ -304,4 +352,25 @@ def test6 ():
print("Format")
print(3 + 4)
print("Format to fix indentation")
def test7 ():
print("Format")
print(3 + 4)
print("Format to fix indentation")
```
#### Preview changes
```diff
--- Stable
+++ Preview
@@ -55,6 +55,7 @@
def test6 ():
+
print("Format")
print(3 + 4)
print("Format to fix indentation")
```

View File

@@ -1,7 +1,6 @@
---
source: crates/ruff_python_formatter/tests/fixtures.rs
input_file: crates/ruff_python_formatter/resources/test/fixtures/ruff/statement/match.py
snapshot_kind: text
---
## Input
```python
@@ -620,6 +619,61 @@ match guard_comments:
):
pass
# regression tests from https://github.com/astral-sh/ruff/issues/17796
match class_pattern:
case Class(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) as capture:
pass
case Class(
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
) as capture:
pass
case Class(
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
) as capture:
pass
case Class(
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
) as really_really_really_really_really_really_really_really_really_really_really_really_long_capture:
pass
match sequence_pattern_brackets:
case [xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] as capture:
pass
case [
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
] as capture:
pass
case [
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
] as capture:
pass
match class_pattern:
# 1
case Class(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) as capture: # 2
# 3
pass # 4
# 5
case Class( # 6
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # 7
) as capture: # 8
pass
case Class( # 9
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # 10
) as capture: # 11
pass
case Class( # 12
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # 13
) as really_really_really_really_really_really_really_really_really_really_really_really_long_capture: # 14
pass
case Class( # 0
# 1
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # 2
# 3
) as capture:
pass
```
## Output
@@ -1285,4 +1339,175 @@ match guard_comments:
# trailing own line comment
):
pass
# regression tests from https://github.com/astral-sh/ruff/issues/17796
match class_pattern:
case Class(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) as capture:
pass
case (
Class(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) as capture
):
pass
case (
Class(
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
) as capture
):
pass
case (
Class(
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
) as really_really_really_really_really_really_really_really_really_really_really_really_long_capture
):
pass
match sequence_pattern_brackets:
case [xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] as capture:
pass
case (
[xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] as capture
):
pass
case (
[
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
] as capture
):
pass
match class_pattern:
# 1
case (
Class(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) as capture
): # 2
# 3
pass # 4
# 5
case (
Class( # 6
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # 7
) as capture
): # 8
pass
case (
Class( # 9
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # 10
) as capture
): # 11
pass
case (
Class( # 12
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # 13
) as really_really_really_really_really_really_really_really_really_really_really_really_long_capture
): # 14
pass
case (
Class( # 0
# 1
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # 2
# 3
) as capture
):
pass
```
## Preview changes
```diff
--- Stable
+++ Preview
@@ -665,15 +665,13 @@
 match class_pattern:
     case Class(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) as capture:
         pass
-    case (
-        Class(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) as capture
-    ):
+    case Class(
+        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+    ) as capture:
         pass
-    case (
-        Class(
-            xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-        ) as capture
-    ):
+    case Class(
+        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+    ) as capture:
         pass
     case (
         Class(
@@ -685,37 +683,31 @@
 match sequence_pattern_brackets:
     case [xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] as capture:
         pass
-    case (
-        [xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] as capture
-    ):
+    case [
+        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+    ] as capture:
         pass
-    case (
-        [
-            xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-        ] as capture
-    ):
+    case [
+        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+    ] as capture:
         pass

 match class_pattern:
     # 1
-    case (
-        Class(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) as capture
-    ): # 2
+    case Class(
+        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+    ) as capture: # 2
         # 3
         pass # 4
     # 5
-    case (
-        Class( # 6
-            xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # 7
-        ) as capture
-    ): # 8
+    case Class( # 6
+        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # 7
+    ) as capture: # 8
         pass
-    case (
-        Class( # 9
-            xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # 10
-        ) as capture
-    ): # 11
+    case Class( # 9
+        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # 10
+    ) as capture: # 11
         pass
     case (
         Class( # 12
@@ -723,11 +715,9 @@
         ) as really_really_really_really_really_really_really_really_really_really_really_really_long_capture
     ): # 14
         pass
-    case (
-        Class( # 0
-            # 1
-            xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # 2
-            # 3
-        ) as capture
-    ):
+    case Class( # 0
+        # 1
+        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # 2
+        # 3
+    ) as capture:
         pass
```


@@ -78,9 +78,9 @@ pub enum InterpolatedStringErrorType {
 impl std::fmt::Display for InterpolatedStringErrorType {
     fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
         match self {
-            Self::UnclosedLbrace => write!(f, "expecting '}}'"),
+            Self::UnclosedLbrace => write!(f, "expecting `}}`"),
             Self::InvalidConversionFlag => write!(f, "invalid conversion character"),
-            Self::SingleRbrace => write!(f, "single '}}' is not allowed"),
+            Self::SingleRbrace => write!(f, "single `}}` is not allowed"),
             Self::UnterminatedString => write!(f, "unterminated string"),
             Self::UnterminatedTripleQuotedString => write!(f, "unterminated triple-quoted string"),
             Self::LambdaWithoutParentheses => {
@@ -232,7 +232,7 @@ impl std::fmt::Display for ParseErrorType {
             ParseErrorType::UnexpectedTokenAfterAsync(kind) => {
                 write!(
                     f,
-                    "Expected 'def', 'with' or 'for' to follow 'async', found {kind}",
+                    "Expected `def`, `with` or `for` to follow `async`, found {kind}",
                 )
             }
             ParseErrorType::InvalidArgumentUnpackingOrder => {
@@ -286,10 +286,10 @@ impl std::fmt::Display for ParseErrorType {
                 f.write_str("Parameter without a default cannot follow a parameter with a default")
             }
             ParseErrorType::ExpectedKeywordParam => {
-                f.write_str("Expected one or more keyword parameter after '*' separator")
+                f.write_str("Expected one or more keyword parameter after `*` separator")
             }
             ParseErrorType::VarParameterWithDefault => {
-                f.write_str("Parameter with '*' or '**' cannot have default value")
+                f.write_str("Parameter with `*` or `**` cannot have default value")
             }
             ParseErrorType::InvalidStarPatternUsage => {
                 f.write_str("Star pattern cannot be used here")

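These messages surface when the parser rejects inputs like the snippets below, which are taken from the parser test snapshots further down in this diff. As a small, hedged sketch, the same class of errors can be reproduced with CPython's `compile()`, whose own wording differs from ruff's reworded messages:

```python
# Inputs rejected by the parser; the comments show ruff's reworded messages.
snippets = [
    "call(**x := 1)",        # Expected `,`, found `:=`
    "async class Foo: ...",  # Expected `def`, `with` or `for` to follow `async`, found `class`
]

for source in snippets:
    try:
        compile(source, "<snippet>", "exec")
    except SyntaxError as error:
        # CPython raises a SyntaxError for both, with its own message text.
        print(f"{source!r}: {error.msg}")
```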

@@ -219,7 +219,7 @@ impl SemanticSyntaxChecker {
                     AwaitOutsideAsyncFunctionKind::AsyncWith,
                 );
             }
-            Stmt::Nonlocal(ast::StmtNonlocal { range, .. }) => {
+            Stmt::Nonlocal(ast::StmtNonlocal { names, range, .. }) => {
                 // test_ok nonlocal_declaration_at_module_level
                 // def _():
                 //     nonlocal x
@@ -234,6 +234,18 @@ impl SemanticSyntaxChecker {
                         *range,
                     );
                 }
+                if !ctx.in_module_scope() {
+                    for name in names {
+                        if !ctx.has_nonlocal_binding(name) {
+                            Self::add_error(
+                                ctx,
+                                SemanticSyntaxErrorKind::NonlocalWithoutBinding(name.to_string()),
+                                name.range,
+                            );
+                        }
+                    }
+                }
             }
             Stmt::Break(ast::StmtBreak { range, .. }) => {
                 if !ctx.in_loop_context() {
@@ -1154,6 +1166,9 @@ impl Display for SemanticSyntaxError {
             SemanticSyntaxErrorKind::DifferentMatchPatternBindings => {
                 write!(f, "alternative patterns bind different names")
             }
+            SemanticSyntaxErrorKind::NonlocalWithoutBinding(name) => {
+                write!(f, "no binding for nonlocal `{name}` found")
+            }
         }
     }
 }
@@ -1554,6 +1569,9 @@ pub enum SemanticSyntaxErrorKind {
     /// ...
     /// ```
     DifferentMatchPatternBindings,
+    /// Represents a nonlocal statement for a name that has no binding in an enclosing scope.
+    NonlocalWithoutBinding(String),
 }

 #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, get_size2::GetSize)]
@@ -2004,6 +2022,9 @@ pub trait SemanticSyntaxContext {
     /// Return the [`TextRange`] at which a name is declared as `global` in the current scope.
     fn global(&self, name: &str) -> Option<TextRange>;

+    /// Returns `true` if `name` has a binding in an enclosing scope.
+    fn has_nonlocal_binding(&self, name: &str) -> bool;
+
     /// Returns `true` if the visitor is currently in an async context, i.e. an async function.
     fn in_async_context(&self) -> bool;

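For context, this is the class of code the new `NonlocalWithoutBinding` check targets: a `nonlocal` declaration for a name that no enclosing function scope binds. CPython rejects the same pattern at compile time, which the minimal sketch below uses to demonstrate it:

```python
# `x` is not bound in any enclosing function scope, so the `nonlocal`
# declaration has nothing to refer to.
source = """
def outer():
    def inner():
        nonlocal x
        x = 1
"""

try:
    compile(source, "<snippet>", "exec")
except SyntaxError as error:
    print(error.msg)  # CPython: no binding for nonlocal 'x' found
```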

@@ -635,93 +635,93 @@ impl fmt::Display for TokenKind {
             TokenKind::TStringEnd => "TStringEnd",
             TokenKind::IpyEscapeCommand => "IPython escape command",
             TokenKind::Comment => "comment",
-            TokenKind::Question => "'?'",
-            TokenKind::Exclamation => "'!'",
-            TokenKind::Lpar => "'('",
-            TokenKind::Rpar => "')'",
-            TokenKind::Lsqb => "'['",
-            TokenKind::Rsqb => "']'",
-            TokenKind::Lbrace => "'{'",
-            TokenKind::Rbrace => "'}'",
-            TokenKind::Equal => "'='",
-            TokenKind::ColonEqual => "':='",
-            TokenKind::Dot => "'.'",
-            TokenKind::Colon => "':'",
-            TokenKind::Semi => "';'",
-            TokenKind::Comma => "','",
-            TokenKind::Rarrow => "'->'",
-            TokenKind::Plus => "'+'",
-            TokenKind::Minus => "'-'",
-            TokenKind::Star => "'*'",
-            TokenKind::DoubleStar => "'**'",
-            TokenKind::Slash => "'/'",
-            TokenKind::DoubleSlash => "'//'",
-            TokenKind::Percent => "'%'",
-            TokenKind::Vbar => "'|'",
-            TokenKind::Amper => "'&'",
-            TokenKind::CircumFlex => "'^'",
-            TokenKind::LeftShift => "'<<'",
-            TokenKind::RightShift => "'>>'",
-            TokenKind::Tilde => "'~'",
-            TokenKind::At => "'@'",
-            TokenKind::Less => "'<'",
-            TokenKind::Greater => "'>'",
-            TokenKind::EqEqual => "'=='",
-            TokenKind::NotEqual => "'!='",
-            TokenKind::LessEqual => "'<='",
-            TokenKind::GreaterEqual => "'>='",
-            TokenKind::PlusEqual => "'+='",
-            TokenKind::MinusEqual => "'-='",
-            TokenKind::StarEqual => "'*='",
-            TokenKind::DoubleStarEqual => "'**='",
-            TokenKind::SlashEqual => "'/='",
-            TokenKind::DoubleSlashEqual => "'//='",
-            TokenKind::PercentEqual => "'%='",
-            TokenKind::VbarEqual => "'|='",
-            TokenKind::AmperEqual => "'&='",
-            TokenKind::CircumflexEqual => "'^='",
-            TokenKind::LeftShiftEqual => "'<<='",
-            TokenKind::RightShiftEqual => "'>>='",
-            TokenKind::AtEqual => "'@='",
-            TokenKind::Ellipsis => "'...'",
-            TokenKind::False => "'False'",
-            TokenKind::None => "'None'",
-            TokenKind::True => "'True'",
-            TokenKind::And => "'and'",
-            TokenKind::As => "'as'",
-            TokenKind::Assert => "'assert'",
-            TokenKind::Async => "'async'",
-            TokenKind::Await => "'await'",
-            TokenKind::Break => "'break'",
-            TokenKind::Class => "'class'",
-            TokenKind::Continue => "'continue'",
-            TokenKind::Def => "'def'",
-            TokenKind::Del => "'del'",
-            TokenKind::Elif => "'elif'",
-            TokenKind::Else => "'else'",
-            TokenKind::Except => "'except'",
-            TokenKind::Finally => "'finally'",
-            TokenKind::For => "'for'",
-            TokenKind::From => "'from'",
-            TokenKind::Global => "'global'",
-            TokenKind::If => "'if'",
-            TokenKind::Import => "'import'",
-            TokenKind::In => "'in'",
-            TokenKind::Is => "'is'",
-            TokenKind::Lambda => "'lambda'",
-            TokenKind::Nonlocal => "'nonlocal'",
-            TokenKind::Not => "'not'",
-            TokenKind::Or => "'or'",
-            TokenKind::Pass => "'pass'",
-            TokenKind::Raise => "'raise'",
-            TokenKind::Return => "'return'",
-            TokenKind::Try => "'try'",
-            TokenKind::While => "'while'",
-            TokenKind::Match => "'match'",
-            TokenKind::Type => "'type'",
-            TokenKind::Case => "'case'",
-            TokenKind::With => "'with'",
-            TokenKind::Yield => "'yield'",
+            TokenKind::Question => "`?`",
+            TokenKind::Exclamation => "`!`",
+            TokenKind::Lpar => "`(`",
+            TokenKind::Rpar => "`)`",
+            TokenKind::Lsqb => "`[`",
+            TokenKind::Rsqb => "`]`",
+            TokenKind::Lbrace => "`{`",
+            TokenKind::Rbrace => "`}`",
+            TokenKind::Equal => "`=`",
+            TokenKind::ColonEqual => "`:=`",
+            TokenKind::Dot => "`.`",
+            TokenKind::Colon => "`:`",
+            TokenKind::Semi => "`;`",
+            TokenKind::Comma => "`,`",
+            TokenKind::Rarrow => "`->`",
+            TokenKind::Plus => "`+`",
+            TokenKind::Minus => "`-`",
+            TokenKind::Star => "`*`",
+            TokenKind::DoubleStar => "`**`",
+            TokenKind::Slash => "`/`",
+            TokenKind::DoubleSlash => "`//`",
+            TokenKind::Percent => "`%`",
+            TokenKind::Vbar => "`|`",
+            TokenKind::Amper => "`&`",
+            TokenKind::CircumFlex => "`^`",
+            TokenKind::LeftShift => "`<<`",
+            TokenKind::RightShift => "`>>`",
+            TokenKind::Tilde => "`~`",
+            TokenKind::At => "`@`",
+            TokenKind::Less => "`<`",
+            TokenKind::Greater => "`>`",
+            TokenKind::EqEqual => "`==`",
+            TokenKind::NotEqual => "`!=`",
+            TokenKind::LessEqual => "`<=`",
+            TokenKind::GreaterEqual => "`>=`",
+            TokenKind::PlusEqual => "`+=`",
+            TokenKind::MinusEqual => "`-=`",
+            TokenKind::StarEqual => "`*=`",
+            TokenKind::DoubleStarEqual => "`**=`",
+            TokenKind::SlashEqual => "`/=`",
+            TokenKind::DoubleSlashEqual => "`//=`",
+            TokenKind::PercentEqual => "`%=`",
+            TokenKind::VbarEqual => "`|=`",
+            TokenKind::AmperEqual => "`&=`",
+            TokenKind::CircumflexEqual => "`^=`",
+            TokenKind::LeftShiftEqual => "`<<=`",
+            TokenKind::RightShiftEqual => "`>>=`",
+            TokenKind::AtEqual => "`@=`",
+            TokenKind::Ellipsis => "`...`",
+            TokenKind::False => "`False`",
+            TokenKind::None => "`None`",
+            TokenKind::True => "`True`",
+            TokenKind::And => "`and`",
+            TokenKind::As => "`as`",
+            TokenKind::Assert => "`assert`",
+            TokenKind::Async => "`async`",
+            TokenKind::Await => "`await`",
+            TokenKind::Break => "`break`",
+            TokenKind::Class => "`class`",
+            TokenKind::Continue => "`continue`",
+            TokenKind::Def => "`def`",
+            TokenKind::Del => "`del`",
+            TokenKind::Elif => "`elif`",
+            TokenKind::Else => "`else`",
+            TokenKind::Except => "`except`",
+            TokenKind::Finally => "`finally`",
+            TokenKind::For => "`for`",
+            TokenKind::From => "`from`",
+            TokenKind::Global => "`global`",
+            TokenKind::If => "`if`",
+            TokenKind::Import => "`import`",
+            TokenKind::In => "`in`",
+            TokenKind::Is => "`is`",
+            TokenKind::Lambda => "`lambda`",
+            TokenKind::Nonlocal => "`nonlocal`",
+            TokenKind::Not => "`not`",
+            TokenKind::Or => "`or`",
+            TokenKind::Pass => "`pass`",
+            TokenKind::Raise => "`raise`",
+            TokenKind::Return => "`return`",
+            TokenKind::Try => "`try`",
+            TokenKind::While => "`while`",
+            TokenKind::Match => "`match`",
+            TokenKind::Type => "`type`",
+            TokenKind::Case => "`case`",
+            TokenKind::With => "`with`",
+            TokenKind::Yield => "`yield`",
         };
         f.write_str(value)
     }


@@ -527,6 +527,10 @@ impl SemanticSyntaxContext for SemanticSyntaxCheckerVisitor<'_> {
         None
     }

+    fn has_nonlocal_binding(&self, _name: &str) -> bool {
+        true
+    }
+
     fn in_async_context(&self) -> bool {
         if let Some(scope) = self.scopes.iter().next_back() {
             match scope {


@@ -131,7 +131,7 @@ Module(
|
1 | assert *x
2 | assert assert x
- | ^^^^^^ Syntax Error: Expected an identifier, but found a keyword 'assert' that cannot be used here
+ | ^^^^^^ Syntax Error: Expected an identifier, but found a keyword `assert` that cannot be used here
3 | assert yield x
4 | assert x := 1
|


@@ -148,7 +148,7 @@ Module(
|
1 | a = pass = c
- | ^^^^ Syntax Error: Expected an identifier, but found a keyword 'pass' that cannot be used here
+ | ^^^^ Syntax Error: Expected an identifier, but found a keyword `pass` that cannot be used here
2 | a + b
3 | a = b = pass = c
|
@@ -158,6 +158,6 @@ Module(
1 | a = pass = c
2 | a + b
3 | a = b = pass = c
- | ^^^^ Syntax Error: Expected an identifier, but found a keyword 'pass' that cannot be used here
+ | ^^^^ Syntax Error: Expected an identifier, but found a keyword `pass` that cannot be used here
4 | a + b
|


@@ -181,7 +181,7 @@ Module(
|
1 | async class Foo: ...
- | ^^^^^ Syntax Error: Expected 'def', 'with' or 'for' to follow 'async', found 'class'
+ | ^^^^^ Syntax Error: Expected `def`, `with` or `for` to follow `async`, found `class`
2 | async while test: ...
3 | async x = 1
|
@@ -190,7 +190,7 @@ Module(
|
1 | async class Foo: ...
2 | async while test: ...
- | ^^^^^ Syntax Error: Expected 'def', 'with' or 'for' to follow 'async', found 'while'
+ | ^^^^^ Syntax Error: Expected `def`, `with` or `for` to follow `async`, found `while`
3 | async x = 1
4 | async async def foo(): ...
|
@@ -200,7 +200,7 @@ Module(
1 | async class Foo: ...
2 | async while test: ...
3 | async x = 1
- | ^ Syntax Error: Expected 'def', 'with' or 'for' to follow 'async', found name
+ | ^ Syntax Error: Expected `def`, `with` or `for` to follow `async`, found name
4 | async async def foo(): ...
5 | async match test:
|
@@ -210,7 +210,7 @@ Module(
2 | async while test: ...
3 | async x = 1
4 | async async def foo(): ...
- | ^^^^^ Syntax Error: Expected 'def', 'with' or 'for' to follow 'async', found 'async'
+ | ^^^^^ Syntax Error: Expected `def`, `with` or `for` to follow `async`, found `async`
5 | async match test:
6 | case _: ...
|
@@ -220,6 +220,6 @@ Module(
3 | async x = 1
4 | async async def foo(): ...
5 | async match test:
- | ^^^^^ Syntax Error: Expected 'def', 'with' or 'for' to follow 'async', found 'match'
+ | ^^^^^ Syntax Error: Expected `def`, `with` or `for` to follow `async`, found `match`
6 | case _: ...
|


@@ -245,7 +245,7 @@ Module(
3 | *x += 1
4 | pass += 1
5 | x += pass
- | ^^^^ Syntax Error: Expected an identifier, but found a keyword 'pass' that cannot be used here
+ | ^^^^ Syntax Error: Expected an identifier, but found a keyword `pass` that cannot be used here
6 | (x + y) += 1
|


@@ -121,7 +121,7 @@ Module(
|
1 | class Foo[T1, *T2(a, b):
- | ^ Syntax Error: Expected ']', found '('
+ | ^ Syntax Error: Expected `]`, found `(`
2 | pass
3 | x = 10
|


@@ -68,7 +68,7 @@ Module(
|
1 | call(**x := 1)
- | ^^ Syntax Error: Expected ',', found ':='
+ | ^^ Syntax Error: Expected `,`, found `:=`
|


@@ -61,5 +61,5 @@ Module(
|
1 | # The comma between the first two elements is expected in `parse_list_expression`.
2 | [0, 1 2]
- | ^ Syntax Error: Expected ',', found int
+ | ^ Syntax Error: Expected `,`, found int
|


@@ -77,7 +77,7 @@ Module(
|
1 | (async)
- | ^^^^^ Syntax Error: Expected an identifier, but found a keyword 'async' that cannot be used here
+ | ^^^^^ Syntax Error: Expected an identifier, but found a keyword `async` that cannot be used here
2 | (x async x in iter)
|
@@ -85,5 +85,5 @@ Module(
|
1 | (async)
2 | (x async x in iter)
- | ^ Syntax Error: Expected 'for', found name
+ | ^ Syntax Error: Expected `for`, found name
|


@@ -169,7 +169,7 @@ Module(
|
1 | @def foo(): ...
- | ^^^ Syntax Error: Expected an identifier, but found a keyword 'def' that cannot be used here
+ | ^^^ Syntax Error: Expected an identifier, but found a keyword `def` that cannot be used here
2 | @
3 | def foo(): ...
|


@@ -161,7 +161,7 @@ Module(
|
1 | @x def foo(): ...
- | ^^^ Syntax Error: Expected newline, found 'def'
+ | ^^^ Syntax Error: Expected newline, found `def`
2 | @x async def foo(): ...
3 | @x class Foo: ...
|
@@ -170,7 +170,7 @@ Module(
|
1 | @x def foo(): ...
2 | @x async def foo(): ...
- | ^^^^^ Syntax Error: Expected newline, found 'async'
+ | ^^^^^ Syntax Error: Expected newline, found `async`
3 | @x class Foo: ...
|
@@ -179,5 +179,5 @@ Module(
1 | @x def foo(): ...
2 | @x async def foo(): ...
3 | @x class Foo: ...
- | ^^^^^ Syntax Error: Expected newline, found 'class'
+ | ^^^^^ Syntax Error: Expected newline, found `class`
|


@@ -238,7 +238,7 @@ Module(
3 | call(***x)
4 |
5 | call(**x := 1)
- | ^^ Syntax Error: Expected ',', found ':='
+ | ^^ Syntax Error: Expected `,`, found `:=`
|


@@ -61,5 +61,5 @@ Module(
|
1 | call(x y)
- | ^ Syntax Error: Expected ',', found name
+ | ^ Syntax Error: Expected `,`, found name
|


@@ -76,7 +76,7 @@ Module(
|
1 | call(
- | ^ Syntax Error: Expected ')', found newline
+ | ^ Syntax Error: Expected `)`, found newline
2 |
3 | def foo():
4 | pass

Some files were not shown because too many files have changed in this diff.