Compare commits

...

85 Commits

Author SHA1 Message Date
David Peter
352628e986 [ty] Temporary SQLAlchemy special-case 2025-12-09 10:34:09 +01:00
Amethyst Reese
4e67a219bb apply range suppressions to filter diagnostics (#21623)
Builds on range suppressions from
https://github.com/astral-sh/ruff/pull/21441

Filters diagnostics based on parsed valid range suppressions.

Issue: #3711
2025-12-08 16:11:59 -08:00
Aria Desires
8ea18966cf [ty] followup: add-import action for reveal_type too (#21668) 2025-12-08 22:44:17 +00:00
Rasmus Nygren
e548ce1ca9 [ty] Enrich function argument auto-complete suggestions with annotated types 2025-12-08 14:19:44 -05:00
Rasmus Nygren
eac8a90cc4 [ty] Add autocomplete suggestions for function arguments
This adds autocomplete suggestions for function arguments. For example,
`okay` in:

```python
def foo(okay=None):

foo(o<CURSOR>
```

This also ensures that we don't suggest a keyword argument if it has
already been used.

Closes astral-sh/issues#1550
2025-12-08 14:19:44 -05:00
Loïc Riegel
2d3466eccf [flake8-bugbear] Accept immutable slice default arguments (B008) (#21823)
Closes issue #21565

## Summary

As pointed out in the issue, slices are currently flagged by B008 but
this behavior is incorrect because slices are immutable.

## Test Plan

Added a test case in the "B006_B008.py" fixture. Sorry for the diff in the snapshots; the only thing that changes in those files is the line numbers, though.

You can also test this manually with this file:
```py
# test_slice.py
def c(d=slice(0, 3)): ...
```

```sh
> target/debug/ruff check tmp/test_slice.py --no-cache --select B008
All checks passed!
```
2025-12-08 14:00:43 -05:00
Phong Do
45fb3732a4 [pydocstyle] Suppress D417 for parameters with Unpack annotations (#21816)

## Summary

Fixes https://github.com/astral-sh/ruff/issues/8774

This PR fixes `pydocstyle` incorrectly flagging a missing-argument error for arguments with an `Unpack` type annotation, by extracting the `kwargs` `D417` suppression logic into a helper function that future rules can reuse as needed.

## Problem Statement

The example below incorrectly triggered a `D417` error for missing `**kwargs` documentation.

```python
class User(TypedDict):
    id: int
    name: str

def do_something(some_arg: str, **kwargs: Unpack[User]):
    """Some doc
    
    Args:
        some_arg: Some argument
    """
```

(Screenshot: the `D417` diagnostic flagging the undocumented `**kwargs` parameter.)

`**kwargs: Unpack[User]` indicates the function expects keyword arguments that will be unpacked. Ideally, the individual fields of the `User` `TypedDict` should be documented, not the `**kwargs` itself. The `**kwargs` parameter acts as a semantic grouping rather than a parameter requiring documentation.

## Solution

As discussed in the linked issue, it makes sense to suppress `D417` for parameters with an `Unpack` annotation. I extracted a helper function that solely checks whether `D417` should be suppressed for a `**kwargs: Unpack[T]` parameter; this function can be unit-tested independently and reduces the complexity of the current `missing_args` check function. It also makes it easier to add additional rules in the future.

_✏️ Note:_ This is my first PR in this repo and I've learned a ton from it; please call out anything that could be improved. Thanks for making this excellent tool 👏

## Test Plan

Add 2 test cases in `D417.py` and update snapshots.

---------

Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
2025-12-08 19:00:05 +00:00
Micha Reiser
0ab8521171 [ty] Remove legacy concise_message fallback behavior (#21847) 2025-12-08 16:19:01 +00:00
Alex Waygood
0ccd84136a [ty] Make Python-version subdiagnostics less verbose (#21849) 2025-12-08 15:58:23 +00:00
Aria Desires
3981a23ee9 [ty] Suppress inlay hints when assigning a trivial initializer call (#21848)
## Summary

By taking a purely syntactic approach to the problem of trivial initializer calls, we can suppress `x: T = T()`, `x: T = x.y.T()` and `x: MyNewType = MyNewType(0)`, but still display `x: T[U] = T()`.

The place where we drop the ball is that this does not compose with our analysis for suppressing `x = (0, "hello")`: `x = (0, T())` and `x = (T(), T())` will still get inlay hints (I don't think this is a huge deal).
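
As a rough illustration of the heuristic (hypothetical names; assuming inference behaves as described above):

```py
from typing import NewType

class Plain: ...

class Box[T]:
    def __init__(self, value: T) -> None: ...

UserId = NewType("UserId", int)

a = Plain()       # hint `: Plain` suppressed: the initializer call spells the type
b = UserId(0)     # hint `: UserId` suppressed
c = Box(1)        # inferred as `Box[int]`, not literally `Box`: hint still shown
d = (0, Plain())  # tuple analysis doesn't compose with this: hint still shown
```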

* fixes https://github.com/astral-sh/ty/issues/1516

## Test Plan

Existing snapshots cover this well.
2025-12-08 10:54:30 -05:00
Charlie Marsh
385dd2770b [ty] Avoid double-inference on non-tuple argument to Annotated (#21837)
## Summary

If you pass a non-tuple to `Annotated`, we end up running inference on
it twice. I _think_ the only case here is `Annotated[]`, where we insert
a (fake) empty `Name` node in the slice.

Closes https://github.com/astral-sh/ty/issues/1801.
2025-12-08 10:24:05 -05:00
Alex Waygood
7519f6c27b Print Python version and Python platform in the fuzzer output when fuzzing fails (#21844) 2025-12-08 14:35:36 +00:00
David Peter
4686111681 [ty] More SQLAlchemy test updates (#21846)
Minor updates to the SQLAlchemy test suite. I verified all expected
results using pyright.
2025-12-08 15:22:55 +01:00
Micha Reiser
4364ffbdd3 [ty] Don't create a related diagnostic for the primary annotation of sub-diagnostics (#21845) 2025-12-08 14:22:11 +00:00
Charlie Marsh
b845e81c4a Use memchr for computing line indexes (#21838)
## Summary

Some benchmarks with Claude's help:

| File                | Size  | Baseline             | Optimized            | Speedup |
|---------------------|-------|----------------------|----------------------|---------|
| numpy/globals.py    | 3 KB  | 1.48 µs (1.95 GiB/s) | 740 ns (3.89 GiB/s)  | 2.0x    |
| unicode/pypinyin.py | 4 KB  | 2.04 µs (2.01 GiB/s) | 1.18 µs (3.49 GiB/s) | 1.7x    |
| pydantic/types.py   | 26 KB | 13.1 µs (1.90 GiB/s) | 5.88 µs (4.23 GiB/s) | 2.2x    |
| numpy/ctypeslib.py  | 17 KB | 8.45 µs (1.92 GiB/s) | 3.94 µs (4.13 GiB/s) | 2.1x    |
| large/dataset.py    | 41 KB | 21.6 µs (1.84 GiB/s) | 11.2 µs (3.55 GiB/s) | 1.9x    |

I think that I originally thought we _had_ to iterate
character-by-character here because we needed to do the ASCII check, but
the ASCII check can be vectorized by LLVM (and the "search for newlines"
can be done with `memchr`).
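
To illustrate the idea outside of Rust (a sketch only; the real change uses the `memchr` crate, and here `bytes.find` plays that role):

```py
def line_starts(source: bytes) -> list[int]:
    """Offsets at which each line begins (illustrative sketch only)."""
    starts = [0]
    newline = source.find(b"\n")  # single-byte search, memchr-style
    while newline != -1:
        starts.append(newline + 1)
        newline = source.find(b"\n", newline + 1)
    return starts

source = b"import os\nimport sys\n"
assert line_starts(source) == [0, 10, 21]
assert source.isascii()  # one pass over the whole buffer, easily vectorized
```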
2025-12-08 08:50:51 -05:00
David Peter
c99e10eedc [ty] Increase SQLAlchemy test coverage (#21843)
## Summary

Increase our SQLAlchemy test coverage to make sure we understand
`Session.scalar`, `Session.scalars`, `Session.execute` (and their async
equivalents), as well as `Result.tuples`, `Result.one_or_none`,
`Row._tuple`.
2025-12-08 14:36:13 +01:00
Dhruv Manilawala
a364195335 [ty] Avoid diagnostic when typing_extensions.ParamSpec uses default parameter (#21839)
## Summary

fixes: https://github.com/astral-sh/ty/issues/1798

## Test Plan

Add mdtest.
2025-12-08 12:34:30 +00:00
David Peter
dfd6ed0524 [ty] mdtests with external dependencies (#20904)
## Summary

This PR adds the possibility to write mdtests that specify external
dependencies in a `project` section of TOML blocks. For example, here is
a test that makes sure that we understand Pydantic's dataclass-transform
setup:

````markdown
```toml
[environment]
python-version = "3.12"
python-platform = "linux"

[project]
dependencies = ["pydantic==2.12.2"]
```

```py
from pydantic import BaseModel

class User(BaseModel):
    id: int
    name: str

user = User(id=1, name="Alice")
reveal_type(user.id)  # revealed: int
reveal_type(user.name)  # revealed: str

# error: [missing-argument] "No argument provided for required parameter `name`"
invalid_user = User(id=2)
```
````

## How?

Using the `python-version` and the `dependencies` fields from the
Markdown section, we generate a `pyproject.toml` file, write it to a
temporary directory, and use `uv sync` to install the dependencies into
a virtual environment. We then copy the Python source files from that
venv's `site-packages` folder to a corresponding directory structure in
the in-memory filesystem. Finally, we configure the search paths
accordingly, and run the mdtest as usual.
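
In Python-flavored pseudocode, the setup flow is roughly the following (names here are hypothetical, not the harness's actual API):

```py
import subprocess
import tempfile
from pathlib import Path

def materialize_dependencies(python_version: str, dependencies: list[str]) -> Path:
    """Install the test's dependencies with `uv sync` and return their site-packages."""
    project = Path(tempfile.mkdtemp())
    deps = ", ".join(f'"{dep}"' for dep in dependencies)
    (project / "pyproject.toml").write_text(
        '[project]\nname = "mdtest-env"\nversion = "0.0.0"\n'
        f'requires-python = "=={python_version}.*"\ndependencies = [{deps}]\n'
    )
    subprocess.run(["uv", "sync"], cwd=project, check=True)  # fast thanks to uv's cache
    # The harness then copies the Python sources from here into its in-memory
    # filesystem and configures the search paths accordingly.
    return next((project / ".venv" / "lib").glob("python*/site-packages"))
```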

I fully understand that there are valid concerns here:
* Doesn't this require network access? (yes, it does)
* Is this fast enough? (`uv` caching makes this almost unnoticeable,
actually)
* Is this deterministic? ~~(probably not, package resolution can depend
on the platform you're on)~~ (yes, hopefully)

For this reason, this first version is opt-in, locally. ~~We don't even
run these tests in CI (even though they worked fine in a previous
iteration of this PR).~~ You need to set `MDTEST_EXTERNAL=1`, or use the
new `-e/--enable-external` command line option of the `mdtest.py`
runner. For example:
```bash
# Skip mdtests with external dependencies (default):
uv run crates/ty_python_semantic/mdtest.py

# Run all mdtests, including those with external dependencies:
uv run crates/ty_python_semantic/mdtest.py -e

# Only run the `pydantic` tests. Use `-e` to make sure it is not skipped:
uv run crates/ty_python_semantic/mdtest.py -e pydantic
```

## Why?

I believe that this can be a useful addition to our testing strategy,
which lies somewhere between ecosystem tests and normal mdtests.
Ecosystem tests cover much more code, but they have the disadvantage
that we only see second- or third-order effects via diagnostic diffs. If
we unexpectedly gain or lose type coverage somewhere, we might not even
notice (assuming the gradual guarantee holds, and ecosystem code is
mostly correct). Another disadvantage of ecosystem checks is that they
only test checked-in code that is usually correct. However, we also want
to test what happens on wrong code, like the code that is momentarily
written in an editor, before fixing it. On the other end of the spectrum
we have normal mdtests, which have the disadvantage that they do not
reflect the reality of complex real-world code. We experience this
whenever we're surprised by an ecosystem report on a PR.

That said, these tests should not be seen as a replacement for either of
these things. For example, we should still strive to write detailed
self-contained mdtests for user-reported issues. But we might use this
new layer for regression tests, or simply as a debugging tool. It can
also serve as a tool to document our support for popular third-party
libraries.

## Test Plan

* I've been locally using this for a couple of weeks now.
* `uv run crates/ty_python_semantic/mdtest.py -e`
2025-12-08 11:44:20 +01:00
Dhruv Manilawala
ac882f7e63 [ty] Handle various invalid explicit specializations for ParamSpec (#21821)
## Summary

fixes: https://github.com/astral-sh/ty/issues/1788

## Test Plan

Add new mdtests.

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2025-12-08 05:20:41 +00:00
Alex Waygood
857fd4f683 [ty] Add test case for fixed panic (#21832) 2025-12-07 15:58:11 +00:00
Charlie Marsh
285d6410d3 [ty] Avoid double-analyzing tuple in Final subscript (#21828)
## Summary

As-is, a single-element tuple gets destructured via:

```rust
let arguments = if let ast::Expr::Tuple(tuple) = slice {
    &*tuple.elts
} else {
    std::slice::from_ref(slice)
};
```

But then, because it's a single element, we call
`infer_annotation_expression_impl`, passing in the tuple, rather than
the first element.

Closes https://github.com/astral-sh/ty/issues/1793.
Closes https://github.com/astral-sh/ty/issues/1768.

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-07 14:27:14 +00:00
Prakhar Pratyush
cbff09b9af [flake8-bandit] Fix false positive when using non-standard CSafeLoader path (S506). (#21830) 2025-12-07 11:40:46 +01:00
Louis Maddox
6e0e49eda8 Add minimal-size build profile (#21826)
This PR adds the same `minimal-size` profile as the `uv` repo workspace has

```toml
# Profile to build a minimally sized binary for uv-build
[profile.minimal-size]
inherits = "release"
opt-level = "z"
# This will still show a panic message, we only skip the unwind
panic = "abort"
codegen-units = 1
```
but removes its `panic = "abort"` setting

- As discussed in #21825

Compared to the ones pre-built via `uv tool install`, this builds 35%
smaller ruff and 24% smaller ty binaries
(as measured
[here](https://github.com/lmmx/just-pre-commit/blob/master/refresh_binaries.sh))
2025-12-06 13:19:04 -05:00
Charlie Marsh
ef45c97dab [ty] Allow tuple[Any, ...] to assign to tuple[int, *tuple[int, ...]] (#21803)
## Summary

Closes https://github.com/astral-sh/ty/issues/1750.
2025-12-05 19:04:23 +00:00
Micha Reiser
9714c589e1 [ty] Support renaming import aliases (#21792) 2025-12-05 19:12:13 +01:00
Micha Reiser
b2fb421ddd [ty] Add redeclaration LSP tests (#21812) 2025-12-05 18:02:34 +00:00
Shunsuke Shibayama
2f05ffa2c8 [ty] more detailed description of "Size limit on unions of literals" in mdtest (#21804) 2025-12-05 17:34:39 +00:00
Dhruv Manilawala
b623189560 [ty] Complete support for ParamSpec (#21445)
## Summary

Closes: https://github.com/astral-sh/ty/issues/157

This PR adds support for the following capabilities involving a
`ParamSpec` type variable:
- Representing `P.args` and `P.kwargs` in the type system
- Matching against a callable containing `P` to create a type mapping
- Specializing `P` against the stored parameters
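
For a quick sense of what this enables, a canonical decorator that all three capabilities combine to support:

```py
from typing import Callable, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

def logged(f: Callable[P, R]) -> Callable[P, R]:
    # `P` is matched against `f`'s parameters when `logged` is applied.
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        return f(*args, **kwargs)
    return wrapper

@logged
def add(x: int, y: int) -> int:
    return x + y

reveal_type(add)  # (x: int, y: int) -> int
```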

The value of a `ParamSpec` type variable is being represented using
`CallableType` with a `CallableTypeKind::ParamSpecValue` variant. This
`CallableTypeKind` is expanded from the existing `is_function_like`
boolean flag. An `enum` is used as these variants are mutually
exclusive.

For context, an initial iteration attempted to expand `Specialization` to use a `TypeOrParameters` enum representing that a type variable can specialize into either a `Type` or `Parameters`, but that increased the complexity of the code, as all downstream usages would need to handle both variants appropriately. Additionally, we'd also have needed to establish an invariant that a regular type variable always maps to a `Type` while a paramspec type variable always maps to a `Parameters`.

I've intentionally left out checking and raising diagnostics when the `ParamSpec` type variable and its components are not used correctly, to avoid scope creep; it can easily be done as a follow-up. This would also include the scoping rules, which I don't think a regular type variable implements either.

## Test Plan

Add new mdtest cases and update existing test cases.

Ran this branch on pyx, no new diagnostics.

### Ecosystem analysis

There's a case involving an annotated assignment like:
```py
type CustomType[P] = Callable[...]

def value[**P](...): ...

def another[**P](...):
	target: CustomType[P] = value
```
The type of `value` is a callable with a paramspec bound to `value`; `CustomType` is a type alias that's a callable, and the `P` used in its specialization is bound to `another`. Now, ty infers the type of `target` to be the same as `value` and does not use the declared type `CustomType[P]`. [This is the assignment](0980b9d9ab/src/async_utils/gen_transform.py (L108)) that I'm referring to, which then leads to errors in downstream usage. Pyright and mypy do seem to use the declared type.

There are multiple diagnostics in `dd-trace-py` that require support for `cls`.

I'm seeing a `Divergent` type for an example like the following, which ~~I'm not sure why, I'll look into it tomorrow~~ is because of a cycle as mentioned in https://github.com/astral-sh/ty/issues/1729#issuecomment-3612279974:
```py
from typing import Callable

def decorator[**P](c: Callable[P, int]) -> Callable[P, str]: ...

@decorator
def func(a: int) -> int: ...

# ((a: int) -> str) | ((a: Divergent) -> str)
reveal_type(func)
```

I ~~need to look into why are the parameters not being specialized
through multiple decorators in the following code~~ think this is also
because of the cycle mentioned in
https://github.com/astral-sh/ty/issues/1729#issuecomment-3612279974 and
the fact that we don't support `staticmethod` properly:
```py
from contextlib import contextmanager

class Foo:
    @staticmethod
    @contextmanager
    def method(x: int):
        yield

foo = Foo()
# ty: Revealed type: `() -> _GeneratorContextManager[Unknown, None, None]` [revealed-type]
reveal_type(foo.method)
```

There are some issues related to `Protocol`s that are generic over a `ParamSpec` in `starlette`, which might be related to https://github.com/astral-sh/ty/issues/1635, but I'm not sure. Here's a minimal example to reproduce:

<details><summary>Code snippet:</summary>
<p>

```py
from collections.abc import Awaitable, Callable, MutableMapping
from typing import Any, Callable, ParamSpec, Protocol

P = ParamSpec("P")

Scope = MutableMapping[str, Any]
Message = MutableMapping[str, Any]
Receive = Callable[[], Awaitable[Message]]
Send = Callable[[Message], Awaitable[None]]

ASGIApp = Callable[[Scope, Receive, Send], Awaitable[None]]

_Scope = Any
_Receive = Callable[[], Awaitable[Any]]
_Send = Callable[[Any], Awaitable[None]]

# Since `starlette.types.ASGIApp` type differs from `ASGIApplication` from `asgiref`
# we need to define a more permissive version of ASGIApp that doesn't cause type errors.
_ASGIApp = Callable[[_Scope, _Receive, _Send], Awaitable[None]]


class _MiddlewareFactory(Protocol[P]):
    def __call__(
        self, app: _ASGIApp, *args: P.args, **kwargs: P.kwargs
    ) -> _ASGIApp: ...


class Middleware:
    def __init__(
        self, factory: _MiddlewareFactory[P], *args: P.args, **kwargs: P.kwargs
    ) -> None:
        self.factory = factory
        self.args = args
        self.kwargs = kwargs


class ServerErrorMiddleware:
    def __init__(
        self,
        app: ASGIApp,
        value: int | None = None,
        flag: bool = False,
    ) -> None:
        self.app = app
        self.value = value
        self.flag = flag

    async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None: ...


# ty: Argument to bound method `__init__` is incorrect: Expected `_MiddlewareFactory[(...)]`, found `<class 'ServerErrorMiddleware'>` [invalid-argument-type]
Middleware(ServerErrorMiddleware, value=500, flag=True)
```

</p>
</details> 

### Conformance analysis

> ```diff
> -constructors_callable.py:36:13: info[revealed-type] Revealed type: `(...) -> Unknown`
> +constructors_callable.py:36:13: info[revealed-type] Revealed type: `(x: int) -> Unknown`
> ```

Requires return type inference i.e.,
https://github.com/astral-sh/ruff/pull/21551

> ```diff
> +constructors_callable.py:194:16: error[invalid-argument-type] Argument is incorrect: Expected `list[T@__init__]`, found `list[Unknown | str]`
> +constructors_callable.py:194:22: error[invalid-argument-type] Argument is incorrect: Expected `list[T@__init__]`, found `list[Unknown | str]`
> +constructors_callable.py:195:4: error[invalid-argument-type] Argument is incorrect: Expected `list[T@__init__]`, found `list[Unknown | int]`
> +constructors_callable.py:195:9: error[invalid-argument-type] Argument is incorrect: Expected `list[T@__init__]`, found `list[Unknown | str]`
> ```

I might need to look into why this is happening...

> ```diff
> +generics_defaults.py:79:1: error[type-assertion-failure] Type `type[Class_ParamSpec[(str, int, /)]]` does not match asserted type `<class 'Class_ParamSpec'>`
> ```

which is on the following code
```py
DefaultP = ParamSpec("DefaultP", default=[str, int])

class Class_ParamSpec(Generic[DefaultP]): ...

assert_type(Class_ParamSpec, type[Class_ParamSpec[str, int]])
```

It's occurring because there's no equivalence relationship defined
between `ClassLiteral` and `KnownInstanceType::TypeGenericAlias` which
is what these types are.

Everything else looks good to me!
2025-12-05 22:00:06 +05:30
Micha Reiser
f29436ca9e [ty] Update benchmark dependencies (#21815) 2025-12-05 17:23:18 +01:00
Douglas Creager
e42cdf8495 [ty] Carry generic context through when converting class into Callable (#21798)
When converting a class (whether specialized or not) into a `Callable`
type, we should carry through any generic context that the constructor
has. This includes both the generic context of the class itself (if it's
generic) and of the constructor methods (if they are separately
generic).

To help test this, this also updates the `generic_context` extension
function to work on `Callable` types and unions; and adds a new
`into_callable` extension function that works just like
`CallableTypeOf`, but on value forms instead of type forms.

Pulled this out of #21551 for separate review.
2025-12-05 08:57:21 -05:00
Alex Waygood
71a7a03ad4 [ty] Add more tests for renamings (#21810) 2025-12-05 12:41:31 +00:00
Alex Waygood
48f7f42784 [ty] Minor improvements to assert_type diagnostics (#21811) 2025-12-05 12:33:30 +00:00
Micha Reiser
3deb7e1b90 [ty] Add some attribute/method renaming test cases (#21809) 2025-12-05 11:56:28 +01:00
mahiro
5df8a959f5 Update mkdocs-material to 9.7.0 (Insiders now free) (#21797) 2025-12-05 08:53:08 +01:00
Dhruv Manilawala
6f03afe318 Remove unused whitespaces in test cases (#21806)
These aren't used in the tests themselves. There are more instances of them in other files, but those require code changes, so I've left them as they are.
2025-12-05 12:51:40 +05:30
Shunsuke Shibayama
1951f1bbb8 [ty] fix panic when instantiating a type variable with invalid constraints (#21663) 2025-12-04 18:48:38 -08:00
Shunsuke Shibayama
10de342991 [ty] fix build failure caused by conflicts between #21683 and #21800 (#21802) 2025-12-04 18:20:24 -08:00
Shunsuke Shibayama
3511b7a06b [ty] do nothing with store_expression_type if inner_expression_inference_state is Get (#21718)
## Summary

Fixes https://github.com/astral-sh/ty/issues/1688

## Test Plan

N/A
2025-12-04 18:05:41 -08:00
Shunsuke Shibayama
f3e5713d90 [ty] increase the limit on the number of elements in a non-recursively defined literal union (#21683)
## Summary

Closes https://github.com/astral-sh/ty/issues/957

As explained in https://github.com/astral-sh/ty/issues/957, literal union types for recursively defined values can be widened early to speed up the convergence of fixed-point iterations. This PR achieves this by embedding a marker in `UnionType` that distinguishes whether a value is recursively defined.

This also allows us to identify values that are not recursively defined, so I've increased the limit on the number of elements in a literal union type for such values.
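
For example (illustrative; the exact widening thresholds are internal details):

```py
def count(flag: bool) -> None:
    x = 0
    while flag:
        # `x` is defined in terms of itself across iterations, so its growing
        # literal union is widened early to help the fixed-point iteration
        # converge.
        x = x + 1

# Not recursively defined: the literal union is kept, and the raised limit
# means even a union of hundreds of literal elements survives intact.
CODES = ("a", "b", "c")
```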

Edit: while this PR doesn't provide the significant performance
improvement initially hoped for, it does have the benefit of allowing
the number of elements in a literal union to be raised above the salsa
limit, and indeed mypy_primer results revealed that a literal union of
220 elements was actually being used.

## Test Plan

`call/union.md` has been updated
2025-12-04 18:01:48 -08:00
Carl Meyer
a9de6b5c3e [ty] normalize typevar bounds/constraints in cycles (#21800)
Fixes https://github.com/astral-sh/ty/issues/1587

## Summary

Perform cycle normalization on typevar bounds and constraints (similar
to how it was already done for typevar defaults) in order to ensure
convergence in cyclic cases.

There might be another fix here that could avoid the cycle in many more
cases, where we don't eagerly evaluate typevar bounds/constraints on
explicit specialization, but just accept the given specialization and
later evaluate to see whether we need to emit a diagnostic on it. But
the current fix here is sufficient to solve the problem and matches the
patterns we use to ensure cycle convergence elsewhere, so it seems good
for now; left a TODO for the other idea.

This fix is sufficient to make us not panic, but not sufficient to get
the semantics fully correct; see the TODOs in the tests. I have ideas
for fixing that as well, but it seems worth at least getting this in to
fix the panic.

## Test Plan

Test that previously panicked now does not.

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2025-12-04 15:17:57 -08:00
Andrew Gallant
06415b1877 [ty] Update completion eval to include modules
Our parsing and confirming of symbol names is highly suspect, but
I think it's fine for now.
2025-12-04 17:37:37 -05:00
Andrew Gallant
518d11b33f [ty] Add modules to auto-import
This makes auto-import include modules in suggestions.

In this initial implementation, we permit this to include submodules as
well. This is in contrast to what we do in `import ...` completions.
It's easy to change this behavior, but I think it'd be interesting to
run with this for now to see how well it works.
2025-12-04 17:37:37 -05:00
Andrew Gallant
da94b99248 [ty] Add support for module-only import requests
The existing importer functionality always required
an import request with a module and a member in that
module. But we want to be able to insert import statements
for a module itself and not any members in the module.

This is basically changing `member: &str` to an
`Option<&str>` and fixing the fallout in a way that
makes sense for module-only imports.
2025-12-04 17:37:37 -05:00
Andrew Gallant
3c2cf49f60 [ty] Refactor auto-import symbol info
This just encapsulates the representation so that
we can make changes to it more easily.
2025-12-04 17:37:37 -05:00
Andrew Gallant
fdcb5a7e73 [ty] Clarify the use of SymbolKind in auto-import 2025-12-04 13:21:26 -05:00
Andrew Gallant
6a025d1925 [ty] Redact ranking of completions from e2e LSP tests
I think changes to this value are generally noise. It's hard to tell
what it means and it isn't especially actionable. We already have an
eval running in CI for completion ranking, so I don't think it's
terribly important to care about ranking here in e2e tests _generally_.
2025-12-04 13:21:26 -05:00
Andrew Gallant
f054e7edf8 [ty] Tweaks tests to use clearer language
A completion lacking a module reference doesn't necessarily mean that
the symbol is defined within the current module. I believe the intent
here is that it means that no import is required to use it.
2025-12-04 13:21:26 -05:00
Andrew Gallant
e154efa229 [ty] Update evaluation results
These are all improvements here with one slight regression on
`reveal_type` ranking. The previous completions offered were:

```
$ cargo r -q -p ty_completion_eval show-one ty-extensions-lower-stdlib
ENOTRECOVERABLE (module: errno)
REG_WHOLE_HIVE_VOLATILE (module: winreg)
SQLITE_NOTICE_RECOVER_WAL (module: _sqlite3)
SupportsGetItemViewable (module: _typeshed)
removeHandler (module: unittest.signals)
reveal_mro (module: ty_extensions)
reveal_protocol_interface (module: ty_extensions)
reveal_type (module: typing) (*, 8/10)
_remove_original_values (module: _osx_support)
_remove_universal_flags (module: _osx_support)
-----
found 10 completions
```

And now they are:

```
$ cargo r -q -p ty_completion_eval show-one ty-extensions-lower-stdlib
ENOTRECOVERABLE (module: errno)
REG_WHOLE_HIVE_VOLATILE (module: winreg)
SQLITE_NOTICE_RECOVER_WAL (module: sqlite3)
SQLITE_NOTICE_RECOVER_WAL (module: sqlite3.dbapi2)
removeHandler (module: unittest)
removeHandler (module: unittest.signals)
reveal_mro (module: ty_extensions)
reveal_protocol_interface (module: ty_extensions)
reveal_type (module: typing) (*, 9/9)
-----
found 9 completions
```

Some completions were removed (because they are now considered unexported) and some were added (likely due to better re-export support).

This particular case probably warrants more special attention anyway.
So I think this is fine. (It's only a one-ranking regression.)
2025-12-04 13:21:26 -05:00
Andrew Gallant
32f400a457 [ty] Make auto-import ignore symbols in modules starting with a _
This applies recursively. So if *any* component of a module name starts
with a `_`, then symbols from that module are excluded from auto-import.

The exception is when it's a module within first party code. Then we
want to include it in auto-import.
2025-12-04 13:21:26 -05:00
Andrew Gallant
2a38395bc8 [ty] Add some tests for re-exports and __all__ to completions
Note that the `Deprecated` symbols from `importlib.metadata` are no
longer offered because 1) `importlib.metadata` defined `__all__` and 2)
the `Deprecated` symbols aren't in it. These seem to not be a part of
its public API according to the docs, so this seems right to me.
2025-12-04 13:21:26 -05:00
Andrew Gallant
8c72b296c9 [ty] Add support for re-exports and __all__ to auto-import
This commit (mostly) re-implements the support for `__all__` in
ty-proper, but inside the auto-import AST scanner.

When `__all__` isn't present in a module, we fall back to conventions to
determine whether a symbol is exported or not:
https://docs.python.org/3/library/index.html

However, in keeping with current practice for non-auto-import
completions, we continue to provide sunder and dunder names as
re-exports.

When `__all__` is present, we respect it strictly. That is, a symbol is
exported *if and only if* it's in `__all__`. This is somewhat stricter
than pylance seemingly is. I felt like it was a good idea to start here,
and we can relax it based on user demand (perhaps through a setting).
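
For instance, with a hypothetical first-party module:

```py
# mylib/api.py
__all__ = ["public_fn"]

def public_fn() -> None: ...

def helper() -> None: ...  # not in __all__: never offered by auto-import

def _private() -> None: ...  # the underscore convention only matters as a
                             # fallback when `__all__` is absent
```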
2025-12-04 13:21:26 -05:00
Andrew Gallant
086f1e0b89 [ty] Skip over expressions in auto-import AST scanning 2025-12-04 13:21:26 -05:00
Andrew Gallant
5da45f8ec7 [ty] Simplify auto-import AST visitor slightly and add tests
This simplifies the existing visitor by DRYing it up slightly.
We also add tests for the existing functionality. In particular,
we want to add support for re-export conventions, and that
warrants more careful testing.
2025-12-04 13:21:26 -05:00
Andrew Gallant
62f20b1e86 [ty] Re-arrange imports in symbol extraction
I like using a qualified `ast::` prefix for things from
`ruff_python_ast`, so switch over to that convention.
2025-12-04 13:21:26 -05:00
Aria Desires
cccb0bbaa4 [ty] Add tests for implicit submodule references (#21793)
## Summary

I realized we don't really test `DefinitionKind::ImportFromSubmodule` in
the IDE at all, so here's a bunch of them, just recording our current
behaviour.

## Test Plan

*stares at the camera*
2025-12-04 15:46:23 +00:00
Brent Westbrook
9d4f1c6ae2 Bump 0.14.8 (#21791) 2025-12-04 09:45:53 -05:00
Micha Reiser
326025d45f [ty] Always register rename provider if client doesn't support dynamic registration (#21789) 2025-12-04 14:40:16 +01:00
Micha Reiser
3aefe85b32 [ty] Ensure rename CursorTest calls can_rename before renaming (#21790) 2025-12-04 14:19:48 +01:00
Dhruv Manilawala
b8ecc83a54 Fix clippy errors on main (#21788)
https://github.com/astral-sh/ruff/actions/runs/19922070773/job/57112827024#step:5:62
2025-12-04 16:20:37 +05:30
Aria Desires
6491932757 [ty] Fix crash when hovering an unknown string annotation (#21782)
## Summary

I have no idea what I'm doing with the fix (all the interesting stuff is
in the second commit).

The basic problem is that the compiler emits the diagnostic:

```
x: "foobar"
    ^^^^^^
```

The suppression code action hands the end of this range to `Tokens::after`, which panics if handed an offset that is in the middle of a token.

Fixes https://github.com/astral-sh/ty/issues/1748

## Test Plan

Many tests added (only the e2e test matters).
2025-12-04 09:11:40 +01:00
Micha Reiser
a9f2bb41bd [ty] Don't send publish diagnostics for clients supporting pull diagnostics (#21772) 2025-12-04 08:12:04 +01:00
Aria Desires
e2b72fbf99 [ty] cleanup test path (#21781)
Fixes
https://github.com/astral-sh/ruff/pull/21745#discussion_r2586552295
2025-12-03 21:54:50 +00:00
Alex Waygood
14fce0d440 [ty] Improve the display of various special-form types (#21775) 2025-12-03 21:19:59 +00:00
Alex Waygood
8ebecb2a88 [ty] Add subdiagnostic hint if the user wrote X = Any rather than X: Any (#21777) 2025-12-03 20:42:21 +00:00
Aria Desires
45ac30a4d7 [ty] Teach ty the meaning of desperation (try ancestor pyproject.tomls as search-paths if module resolution fails) (#21745)
## Summary

This makes an importing file a required argument to module resolution,
and if the fast-path cached query fails to resolve the module, take the
slow-path uncached (could be cached if we want)
`desperately_resolve_module` which will walk up from the importing file
until it finds a `pyproject.toml` (arbitrary decision, we could try
every ancestor directory), at which point it takes one last desperate
attempt to use that directory as a search-path. We do not continue
walking up once we've found a `pyproject.toml` (arbitrary decision, we
could keep going up).

Running locally, this fixes every broken-for-workspace-reasons import in
pyx's workspace!

* Fixes https://github.com/astral-sh/ty/issues/1539
* Improves https://github.com/astral-sh/ty/issues/839

## Test Plan

The workspace tests see a huge improvement on most absolute imports.
2025-12-03 15:04:36 -05:00
Alex Waygood
0280949000 [ty] fix panic when attempting to infer the variance of a PEP-695 class that depends on a recursive type alias and also somehow protocols (#21778)
Fixes https://github.com/astral-sh/ty/issues/1716.

## Test plan

I added a corpus snippet that causes us to panic on `main` (I tested by
running `cargo run -p ty_python_semantic --test=corpus` without the fix
applied).
2025-12-03 19:01:42 +00:00
Bhuminjay Soni
c722f498fe [flake8-bugbear] Catch yield expressions within other statements (B901) (#21200)
## Summary

This PR re-implements [return-in-generator
(B901)](https://docs.astral.sh/ruff/rules/return-in-generator/#return-in-generator-b901)
for async generators as a semantic syntax error. This is not a syntax
error for sync generators, so we'll need to preserve both the lint rule
and the syntax error in this case.

It also updates B901 and the new implementation to catch cases where the
generator's `yield` or `yield from` expression is part of another
statement, as in:

```py
def foo():
    return (yield)
```

These were previously not caught because we only looked for
`Stmt::Expr(Expr::Yield)` in `visit_stmt` instead of visiting `yield`
expressions directly. I think this modification is within the spirit of
the rule and safe to try out since the rule is in preview.

## Test Plan

<!-- How was it tested? -->
I have written tests as directed in #17412

---------

Signed-off-by: 11happy <soni5happy@gmail.com>
Signed-off-by: 11happy <bhuminjaysoni@gmail.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
2025-12-03 12:05:15 -05:00
David Peter
1f4f8d9950 [ty] Fix flow of associated member states during star imports (#21776)
## Summary

Star imports don't just affect the state of the symbols they pull in; they can also affect the state of members that are associated with those symbols. For example, if `obj.attr` was previously narrowed from `int | None` to `int`, and a star import now overwrites `obj`, then the narrowing on `obj.attr` should be "reset".

This PR keeps track of the state of associated members during star
imports and properly models the flow of their corresponding state
through the control flow structure that we artificially create for
star-imports.
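
Concretely (with a hypothetical module `mod` that may or may not rebind `obj`):

```py
class C:
    attr: int | None = None

obj = C()
assert obj.attr is not None
reveal_type(obj.attr)  # int: narrowed from `int | None`

from mod import *  # might rebind `obj`...

reveal_type(obj.attr)  # int | None: the narrowing on `obj.attr` is reset
```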

See [this comment](https://github.com/astral-sh/ty/issues/1355#issuecomment-3607125005) for an explanation of why this caused ty to see certain `asyncio` symbols as not being accessible on Python 3.14.

closes https://github.com/astral-sh/ty/issues/1355

## Ecosystem impact

```diff
async-utils (https://github.com/mikeshardmind/async-utils)
- src/async_utils/bg_loop.py:115:31: error[invalid-argument-type] Argument to bound method `set_task_factory` is incorrect: Expected `_TaskFactory | None`, found `def eager_task_factory[_T_co](loop: AbstractEventLoop | None, coro: Coroutine[Any, Any, _T_co@eager_task_factory], *, name: str | None = None, context: Context | None = None) -> Task[_T_co@eager_task_factory]`
- Found 30 diagnostics
+ Found 29 diagnostics

mitmproxy (https://github.com/mitmproxy/mitmproxy)
+ mitmproxy/utils/asyncio_utils.py:96:60: warning[unused-ignore-comment] Unused blanket `type: ignore` directive
- test/conftest.py:37:31: error[invalid-argument-type] Argument to bound method `set_task_factory` is incorrect: Expected `_TaskFactory | None`, found `def eager_task_factory[_T_co](loop: AbstractEventLoop | None, coro: Coroutine[Any, Any, _T_co@eager_task_factory], *, name: str | None = None, context: Context | None = None) -> Task[_T_co@eager_task_factory]`
```

All of these seem to be correct; they give us a different type for `asyncio` symbols that are now imported from different `sys.version_info` branches (where we previously failed to recognize some of these as statically true/false).

```diff
dd-trace-py (https://github.com/DataDog/dd-trace-py)
- ddtrace/contrib/internal/asyncio/patch.py:39:12: error[invalid-argument-type] Argument to function `unwrap` is incorrect: Expected `WrappedFunction`, found `def create_task[_T](self, coro: Coroutine[Any, Any, _T@create_task] | Generator[Any, None, _T@create_task], *, name: object = None) -> Task[_T@create_task]`
+ ddtrace/contrib/internal/asyncio/patch.py:39:12: error[invalid-argument-type] Argument to function `unwrap` is incorrect: Expected `WrappedFunction`, found `def create_task[_T](self, coro: Generator[Any, None, _T@create_task] | Coroutine[Any, Any, _T@create_task], *, name: object = None) -> Task[_T@create_task]`
```

Similar, but only results in a diagnostic change.

## Test Plan

Added a regression test
2025-12-03 17:52:31 +01:00
William Woodruff
4488e9d47d Revert "Enable PEP 740 attestations when publishing to PyPI" (#21768) 2025-12-03 11:07:29 -05:00
github-actions[bot]
b08f0b2caa [ty] Sync vendored typeshed stubs (#21715)
Co-authored-by: typeshedbot <>
Co-authored-by: David Peter <mail@david-peter.de>
2025-12-03 15:49:51 +00:00
David Peter
d6e472f297 [ty] Reachability constraints: minor documentation fixes (#21774) 2025-12-03 16:40:11 +01:00
Douglas Creager
45842cc034 [ty] Fix non-determinism in ConstraintSet.specialize_constrained (#21744)
This fixes a non-determinism that we were seeing in the constraint set
tests in https://github.com/astral-sh/ruff/pull/21715.

In this test, we create the following constraint set, and then try to
create a specialization from it:

```
(T@constrained_by_gradual_list = list[Base])
  ∨
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
```

That is, `T` is either specifically `list[Base]`, or it's any `list`.
Our current heuristics say that, absent other restrictions, we should
specialize `T` to the more specific type (`list[Base]`).

In the correct test output, we end up creating a BDD that looks like
this:

```
(T@constrained_by_gradual_list = list[Base])
┡━₁ always
└─₀ (Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
    ┡━₁ always
    └─₀ never
```

In the incorrect output, the BDD looks like this:

```
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
┡━₁ always
└─₀ never
```

The difference is the ordering of the two individual constraints. Both
constraints appear in the first BDD, but the second BDD only contains `T
is any list`. If we were to force the second BDD to contain both
constraints, it would look like this:

```
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
┡━₁ always
└─₀ (T@constrained_by_gradual_list = list[Base])
    ┡━₁ always
    └─₀ never
```

This is the standard shape for an OR of two constraints. However! Those
two constraints are not independent of each other! If `T` is
specifically `list[Base]`, then it's definitely also "any `list`". From
that, we can infer the contrapositive: that if `T` is not any list, then
it cannot be `list[Base]` specifically. When we encounter impossible
situations like that, we prune that path in the BDD, and treat it as
`false`. That rewrites the second BDD to the following:

```
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
┡━₁ always
└─₀ (T@constrained_by_gradual_list = list[Base])
    ┡━₁ never   <-- IMPOSSIBLE, rewritten to never
    └─₀ never
```

We then would see that that BDD node is redundant, since both of its
outgoing edges point at the `never` node. Our BDDs are _reduced_, which
means we have to remove that redundant node, resulting in the BDD we saw
above:

```
(Bottom[list[Any]] ≤ T@constrained_by_gradual_list ≤ Top[list[Any]])
┡━₁ always
└─₀ never       <-- redundant node removed
```

The end result is that we were "forgetting" about the `T = list[Base]`
constraint, but only for some BDD variable orderings.

To fix this, I'm leaning in to the fact that our BDDs really do need to
"remember" all of the constraints that they were created with. Some
combinations might not be possible, but we now have the sequent map,
which is quite good at detecting and pruning those.

So now our BDDs are _quasi-reduced_, which just means that redundant
nodes are allowed. (At first I was worried that allowing redundant nodes
would be an unsound "fix the glitch". But it turns out they're real!
[This](https://ieeexplore.ieee.org/abstract/document/130209) is the
paper that introduces them, though it's very difficult to read. Knuth
mentions them in §7.1.4 of
[TAOCP](https://course.khoury.northeastern.edu/csu690/ssl/bdd-knuth.pdf),
and [this paper](https://par.nsf.gov/servlets/purl/10128966) has a nice
short summary of them in §2.)

While we're here, I've added a bunch of `debug`- and `trace`-level log messages to the constraint set implementation. I was getting tired of having to add these by hand over and over. To enable them, just set `TY_LOG` in your environment, e.g.

```sh
env TY_LOG=ty_python_semantic::types::constraints::SequentMap=trace ty check ...
```

[Note, this has an `internal` label because we are still not using `specialize_constrained` in anything user-facing yet.]
2025-12-03 10:19:39 -05:00
Alex Waygood
cd079bd92e [ty] Improve @override, @final and Liskov checks in cases where there are multiple reachable definitions (#21767) 2025-12-03 12:51:36 +00:00
Alex Waygood
5756b3809c [ty] Extend invalid-explicit-override to also cover properties decorated with @override that do not override anything (#21756) 2025-12-03 11:27:47 +00:00
Micha Reiser
92c5f62ec0 [ty] Enable LRU collection for parsed module (#21749) 2025-12-03 12:16:18 +01:00
David Peter
21e5a57296 [ty] Support typevar-specialized dynamic types in generic type aliases (#21730)
## Summary

For a type alias like the one below, where `UnknownClass` is something
with a dynamic type, we previously lost track of the fact that this
dynamic type was explicitly specialized *with a type variable*. If that
alias is then later explicitly specialized itself (`MyAlias[int]`), we
would miscount the number of legacy type variables and emit a
`invalid-type-arguments` diagnostic
([playground](https://play.ty.dev/886ae6cc-86c3-4304-a365-510d29211f85)).
```py
T = TypeVar("T")

MyAlias: TypeAlias = UnknownClass[T] | None
```
The solution implemented here is not pretty, but we can hopefully get
rid of it via https://github.com/astral-sh/ty/issues/1711. Also, once we
properly support `ParamSpec` and `Concatenate`, we should be able to
remove some of this code.

This addresses many of the `invalid-type-arguments` false-positives in
https://github.com/astral-sh/ty/issues/1685. With this change, there are
still some diagnostics of this type left. Instead of implementing even
more (rather sophisticated) workarounds for these cases as well, it
might be much easier to wait for full `ParamSpec`/`Concatenate` support
and then try again.

A disadvantage of this implementation is that we lose track of some
`@Todo` types and replace them with `Unknown`. We could spend more
effort and try to preserve them, but I'm unsure if this is the best use
of our time right now.

## Test Plan

New Markdown tests.
2025-12-03 10:00:02 +01:00
Denys Zhak
f4e4229683 Add token based parenthesized_ranges implementation (#21738)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-03 08:15:17 +00:00
David Peter
e6ddeed386 [ty] Default-specialization of generic type aliases (#21765)
## Summary

Implement default-specialization of generic type aliases (implicit or
PEP-613) if they are used in a type expression without an explicit
specialization.

closes https://github.com/astral-sh/ty/issues/1690
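
For example (using PEP 696 typevar defaults via `typing_extensions`; a sketch of the intended behavior):

```py
from typing import TypeAlias
from typing_extensions import TypeVar

T = TypeVar("T", default=int)

MyAlias: TypeAlias = list[T]

def f(x: MyAlias) -> None:
    reveal_type(x)  # list[int]: the bare alias is default-specialized
```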

## Typing conformance

```diff
-generics_defaults_specialization.py:26:5: error[type-assertion-failure] Type `SomethingWithNoDefaults[int, str]` does not match asserted type `SomethingWithNoDefaults[int, DefaultStrT]`
```

That's exactly what we want ✔️ 

All other tests in this file pass as well, with the exception of this
assertion, which is just wrong (at least according to our
interpretation, `type[Bar] != <class 'Bar'>`). I checked that we do
correctly default-specialize the type parameter which is not displayed
in the diagnostic that we raise.
```py
class Bar(SubclassMe[int, DefaultStrT]): ...

assert_type(Bar, type[Bar[str]])  # ty: Type `type[Bar[str]]` does not match asserted type `<class 'Bar'>`
```

## Ecosystem impact

Looks like I should have included this last week 😎 

## Test Plan

Updated pre-existing tests and add a few new ones.
2025-12-03 09:10:45 +01:00
Alex Waygood
c5b8d551df [ty] Suppress false positives when dataclasses.dataclass(...)(cls) is called imperatively (#21729)
Fixes https://github.com/astral-sh/ty/issues/1705
2025-12-03 08:05:25 +00:00
Bhuminjay Soni
f68080b55e [syntax-error] Default type parameter followed by non-default type parameter (#21657)
## Summary

This PR implements syntax error where a default type parameter is
followed by a non-default type parameter.
https://github.com/astral-sh/ruff/issues/17412#issuecomment-3584088217


## Test Plan

I have written inline tests as directed in #17412

---------

Signed-off-by: 11happy <bhuminjaysoni@gmail.com>
Signed-off-by: 11happy <soni5happy@gmail.com>
2025-12-03 12:01:31 +05:30
Amethyst Reese
abaa49f552 new module for parsing ranged suppressions (#21441)
This adds a new `suppression` module to the `ruff_linter` crate, similar to the suppression module for ty, to parse comments for ruff suppression directives such as `# ruff: disable[CODE]`.
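
For example (the closing `# ruff: enable[...]` directive is assumed here from the range-suppression design; see the PR for the exact syntax):

```py
# ruff: disable[F401]
import os   # F401 suppressed inside the range
import sys  # F401 suppressed
# ruff: enable[F401]

import json  # F401 reported as usual from here on
```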
2025-12-02 15:39:59 -08:00
Ibraheem Ahmed
7b0aab1696 [ty] type[T] is assignable to an inferable typevar (#21766)
## Summary

Resolves https://github.com/astral-sh/ty/issues/1712.
2025-12-02 18:25:09 -05:00
Brent Westbrook
2250fa6f98 Fix syntax error false positives for await outside functions (#21763)
## Summary

Fixes #21750 and a related bug in `PLE1142`. We were not properly
considering generators to be valid `await` contexts, which caused the
`F704` issue. One of the tests I added for this also uncovered an issue
in `PLE1142` for comprehensions nested within async generators because
we were only checking the current scope rather than traversing the
nested context.
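
Two illustrative cases: a generator expression inside an async function is a valid `await` context (the `F704` false positive), and a comprehension nested in an async generator inherits that context (the `PLE1142` false positive):

```py
import asyncio

async def fetch(x: int) -> int:
    await asyncio.sleep(0)
    return x

async def consumer(xs: list[int]) -> None:
    # Generator expression in an async function: a valid `await` context,
    # previously flagged by F704 (`await` outside function).
    gen = (await fetch(x) for x in xs)

async def producer(xs: list[int]):
    # Comprehension nested inside an async generator: also valid,
    # previously flagged by PLE1142 (`await` outside async function).
    yield [await fetch(x) for x in xs]
```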

## Test Plan

Both of these rules are implemented as semantic syntax errors, so I
added tests (and fixes) in both Ruff and ty.
2025-12-02 21:02:02 +00:00
Alex Waygood
392a8e4e50 [ty] Improve diagnostics for unsupported comparison operations (#21737) 2025-12-02 19:58:45 +00:00
Micha Reiser
515de2d062 Move Token, TokenKind and Tokens to ruff-python-ast (#21760) 2025-12-02 20:10:46 +01:00
370 changed files with 26870 additions and 11220 deletions

View File

@@ -75,14 +75,6 @@
   matchManagers: ["cargo"],
   enabled: false,
 },
-{
-  // `mkdocs-material` requires a manual update to keep the version in sync
-  // with `mkdocs-material-insider`.
-  // See: https://squidfunk.github.io/mkdocs-material/insiders/upgrade/
-  matchManagers: ["pip_requirements"],
-  matchPackageNames: ["mkdocs-material"],
-  enabled: false,
-},
 {
   groupName: "pre-commit dependencies",
   matchManagers: ["pre-commit"],

View File

@@ -24,6 +24,8 @@ env:
   PACKAGE_NAME: ruff
   PYTHON_VERSION: "3.14"
   NEXTEST_PROFILE: ci
+  # Enable mdtests that require external dependencies
+  MDTEST_EXTERNAL: "1"
 jobs:
   determine_changes:
@@ -779,8 +781,6 @@ jobs:
     name: "mkdocs"
     runs-on: ubuntu-latest
     timeout-minutes: 10
-    env:
-      MKDOCS_INSIDERS_SSH_KEY_EXISTS: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY != '' }}
     steps:
       - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         with:
@@ -788,11 +788,6 @@
       - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
         with:
           save-if: ${{ github.ref == 'refs/heads/main' }}
-      - name: "Add SSH key"
-        if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
-        uses: webfactory/ssh-agent@a6f90b1f127823b31d4d4a8d96047790581349bd # v0.9.1
-        with:
-          ssh-private-key: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY }}
      - name: "Install Rust toolchain"
        run: rustup show
      - name: Install uv
@@ -800,11 +795,7 @@
         with:
           python-version: 3.13
           activate-environment: true
-      - name: "Install Insiders dependencies"
-        if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
-        run: uv pip install -r docs/requirements-insiders.txt
       - name: "Install dependencies"
-        if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS != 'true' }}
         run: uv pip install -r docs/requirements.txt
       - name: "Update README File"
         run: python scripts/transform_readme.py --target mkdocs
@@ -812,12 +803,8 @@
         run: python scripts/generate_mkdocs.py
       - name: "Check docs formatting"
         run: python scripts/check_docs_formatted.py
-      - name: "Build Insiders docs"
-        if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
-        run: mkdocs build --strict -f mkdocs.insiders.yml
       - name: "Build docs"
-        if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS != 'true' }}
-        run: mkdocs build --strict -f mkdocs.public.yml
+        run: mkdocs build --strict -f mkdocs.yml
   check-formatter-instability-and-black-similarity:
     name: "formatter instabilities and black similarity"

View File

@@ -20,8 +20,6 @@ on:
 jobs:
   mkdocs:
     runs-on: ubuntu-latest
-    env:
-      MKDOCS_INSIDERS_SSH_KEY_EXISTS: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY != '' }}
     steps:
       - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
         with:
@@ -59,23 +57,12 @@ jobs:
           echo "branch_name=update-docs-$branch_display_name-$timestamp" >> "$GITHUB_ENV"
           echo "timestamp=$timestamp" >> "$GITHUB_ENV"
-      - name: "Add SSH key"
-        if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
-        uses: webfactory/ssh-agent@a6f90b1f127823b31d4d4a8d96047790581349bd # v0.9.1
-        with:
-          ssh-private-key: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY }}
       - name: "Install Rust toolchain"
         run: rustup show
       - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
-      - name: "Install Insiders dependencies"
-        if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
-        run: pip install -r docs/requirements-insiders.txt
       - name: "Install dependencies"
-        if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS != 'true' }}
         run: pip install -r docs/requirements.txt
       - name: "Copy README File"
@@ -83,13 +70,8 @@
           python scripts/transform_readme.py --target mkdocs
           python scripts/generate_mkdocs.py
-      - name: "Build Insiders docs"
-        if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
-        run: mkdocs build --strict -f mkdocs.insiders.yml
       - name: "Build docs"
-        if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS != 'true' }}
-        run: mkdocs build --strict -f mkdocs.public.yml
+        run: mkdocs build --strict -f mkdocs.yml
       - name: "Clone docs repo"
         run: git clone https://${{ secrets.ASTRAL_DOCS_PAT }}@github.com/astral-sh/docs.git astral-docs

View File

@@ -18,7 +18,8 @@ jobs:
     environment:
       name: release
     permissions:
-      id-token: write # For PyPI's trusted publishing + PEP 740 attestations
+      # For PyPI's trusted publishing.
+      id-token: write
     steps:
       - name: "Install uv"
         uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4
@@ -27,8 +28,5 @@
           pattern: wheels-*
           path: wheels
           merge-multiple: true
-      - uses: astral-sh/attest-action@2c727738cea36d6c97dd85eb133ea0e0e8fe754b # v0.0.4
-        with:
-          paths: wheels/*
       - name: Publish to PyPi
         run: uv publish -v wheels/*

View File

@@ -1,5 +1,34 @@
 # Changelog
+## 0.14.8
+Released on 2025-12-04.
+### Preview features
+- \[`flake8-bugbear`\] Catch `yield` expressions within other statements (`B901`) ([#21200](https://github.com/astral-sh/ruff/pull/21200))
+- \[`flake8-use-pathlib`\] Mark fixes unsafe for return type changes (`PTH104`, `PTH105`, `PTH109`, `PTH115`) ([#21440](https://github.com/astral-sh/ruff/pull/21440))
+### Bug fixes
+- Fix syntax error false positives for `await` outside functions ([#21763](https://github.com/astral-sh/ruff/pull/21763))
+- \[`flake8-simplify`\] Fix truthiness assumption for non-iterable arguments in tuple/list/set calls (`SIM222`, `SIM223`) ([#21479](https://github.com/astral-sh/ruff/pull/21479))
+### Documentation
+- Suggest using `--output-file` option in GitLab integration ([#21706](https://github.com/astral-sh/ruff/pull/21706))
+### Other changes
+- [syntax-error] Default type parameter followed by non-default type parameter ([#21657](https://github.com/astral-sh/ruff/pull/21657))
+### Contributors
+- [@kieran-ryan](https://github.com/kieran-ryan)
+- [@11happy](https://github.com/11happy)
+- [@danparizher](https://github.com/danparizher)
+- [@ntBre](https://github.com/ntBre)
 ## 0.14.7
 Released on 2025-11-28.

View File

@@ -331,13 +331,6 @@ you addressed them.
 ## MkDocs
-> [!NOTE]
->
-> The documentation uses Material for MkDocs Insiders, which is closed-source software.
-> This means only members of the Astral organization can preview the documentation exactly as it
-> will appear in production.
-> Outside contributors can still preview the documentation, but there will be some differences. Consult [the Material for MkDocs documentation](https://squidfunk.github.io/mkdocs-material/insiders/benefits/#features) for which features are exclusively available in the insiders version.
 To preview any changes to the documentation locally:
 1. Install the [Rust toolchain](https://www.rust-lang.org/tools/install).
@@ -351,11 +344,7 @@ To preview any changes to the documentation locally:
 1. Run the development server with:
    ```shell
-   # For contributors.
-   uvx --with-requirements docs/requirements.txt -- mkdocs serve -f mkdocs.public.yml
-   # For members of the Astral org, which has access to MkDocs Insiders via sponsorship.
-   uvx --with-requirements docs/requirements-insiders.txt -- mkdocs serve -f mkdocs.insiders.yml
+   uvx --with-requirements docs/requirements.txt -- mkdocs serve -f mkdocs.yml
    ```
 The documentation should then be available locally at

Cargo.lock (generated)
View File

@@ -2859,7 +2859,7 @@ dependencies = [
 [[package]]
 name = "ruff"
-version = "0.14.7"
+version = "0.14.8"
 dependencies = [
  "anyhow",
  "argfile",
@@ -3117,13 +3117,14 @@ dependencies = [
 [[package]]
 name = "ruff_linter"
-version = "0.14.7"
+version = "0.14.8"
 dependencies = [
  "aho-corasick",
  "anyhow",
  "bitflags 2.10.0",
  "clap",
  "colored 3.0.0",
  "compact_str",
  "fern",
  "glob",
  "globset",
@@ -3472,7 +3473,7 @@ dependencies = [
 [[package]]
 name = "ruff_wasm"
-version = "0.14.7"
+version = "0.14.8"
 dependencies = [
  "console_error_panic_hook",
  "console_log",
@@ -4556,6 +4557,7 @@ dependencies = [
"anyhow",
"camino",
"colored 3.0.0",
"dunce",
"insta",
"memchr",
"path-slash",

View File

@@ -272,6 +272,12 @@ large_stack_arrays = "allow"
 lto = "fat"
 codegen-units = 16
+# Profile to build a minimally sized binary for ruff/ty
+[profile.minimal-size]
+inherits = "release"
+opt-level = "z"
+codegen-units = 1
 # Some crates don't change as much but benefit more from
 # more expensive optimization passes, so we selectively
 # decrease codegen-units in some cases.

View File

@@ -147,8 +147,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
 powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
 # For a specific version.
-curl -LsSf https://astral.sh/ruff/0.14.7/install.sh | sh
-powershell -c "irm https://astral.sh/ruff/0.14.7/install.ps1 | iex"
+curl -LsSf https://astral.sh/ruff/0.14.8/install.sh | sh
+powershell -c "irm https://astral.sh/ruff/0.14.8/install.ps1 | iex"
 ```
 You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
@@ -181,7 +181,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
 ```yaml
 - repo: https://github.com/astral-sh/ruff-pre-commit
   # Ruff version.
-  rev: v0.14.7
+  rev: v0.14.8
   hooks:
     # Run the linter.
     - id: ruff-check

View File

@@ -1,6 +1,6 @@
 [package]
 name = "ruff"
-version = "0.14.7"
+version = "0.14.8"
 publish = true
 authors = { workspace = true }
 edition = { workspace = true }

View File

@@ -1440,6 +1440,78 @@ def function():
    Ok(())
}

#[test]
fn ignore_noqa() -> Result<()> {
    let fixture = CliTest::new()?;
    fixture.write_file(
        "ruff.toml",
        r#"
[lint]
select = ["F401"]
"#,
    )?;

    fixture.write_file(
        "noqa.py",
        r#"
import os # noqa: F401

# ruff: disable[F401]
import sys
"#,
    )?;

    // without --ignore-noqa
    assert_cmd_snapshot!(fixture
        .check_command()
        .args(["--config", "ruff.toml"])
        .arg("noqa.py"),
        @r"
    success: false
    exit_code: 1
    ----- stdout -----
    noqa.py:5:8: F401 [*] `sys` imported but unused
    Found 1 error.
    [*] 1 fixable with the `--fix` option.
    ----- stderr -----
    ");

    assert_cmd_snapshot!(fixture
        .check_command()
        .args(["--config", "ruff.toml"])
        .arg("noqa.py")
        .args(["--preview"]),
        @r"
    success: true
    exit_code: 0
    ----- stdout -----
    All checks passed!
    ----- stderr -----
    ");

    // with --ignore-noqa --preview
    assert_cmd_snapshot!(fixture
        .check_command()
        .args(["--config", "ruff.toml"])
        .arg("noqa.py")
        .args(["--ignore-noqa", "--preview"]),
        @r"
    success: false
    exit_code: 1
    ----- stdout -----
    noqa.py:2:8: F401 [*] `os` imported but unused
    noqa.py:5:8: F401 [*] `sys` imported but unused
    Found 2 errors.
    [*] 2 fixable with the `--fix` option.
    ----- stderr -----
    ");

    Ok(())
}

#[test]
fn add_noqa() -> Result<()> {
    let fixture = CliTest::new()?;
@@ -1632,6 +1704,100 @@ def unused(x): # noqa: ANN001, ARG001, D103
Ok(())
}
#[test]
fn add_noqa_existing_file_level_noqa() -> Result<()> {
let fixture = CliTest::new()?;
fixture.write_file(
"ruff.toml",
r#"
[lint]
select = ["F401"]
"#,
)?;
fixture.write_file(
"noqa.py",
r#"
# ruff: noqa F401
import os
"#,
)?;
assert_cmd_snapshot!(fixture
.check_command()
.args(["--config", "ruff.toml"])
.arg("noqa.py")
.arg("--preview")
.args(["--add-noqa"])
.arg("-")
.pass_stdin(r#"
"#), @r"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
");
let test_code =
fs::read_to_string(fixture.root().join("noqa.py")).expect("should read test file");
insta::assert_snapshot!(test_code, @r"
# ruff: noqa F401
import os
");
Ok(())
}
#[test]
fn add_noqa_existing_range_suppression() -> Result<()> {
let fixture = CliTest::new()?;
fixture.write_file(
"ruff.toml",
r#"
[lint]
select = ["F401"]
"#,
)?;
fixture.write_file(
"noqa.py",
r#"
# ruff: disable[F401]
import os
"#,
)?;
assert_cmd_snapshot!(fixture
.check_command()
.args(["--config", "ruff.toml"])
.arg("noqa.py")
.arg("--preview")
.args(["--add-noqa"])
.arg("-")
.pass_stdin(r#"
"#), @r"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
");
let test_code =
fs::read_to_string(fixture.root().join("noqa.py")).expect("should read test file");
insta::assert_snapshot!(test_code, @r"
# ruff: disable[F401]
import os
");
Ok(())
}
#[test]
fn add_noqa_multiline_comment() -> Result<()> {
let fixture = CliTest::new()?;

View File

@@ -6,7 +6,8 @@ use criterion::{
use ruff_benchmark::{
LARGE_DATASET, NUMPY_CTYPESLIB, NUMPY_GLOBALS, PYDANTIC_TYPES, TestCase, UNICODE_PYPINYIN,
};
use ruff_python_parser::{Mode, TokenKind, lexer};
use ruff_python_ast::token::TokenKind;
use ruff_python_parser::{Mode, lexer};
#[cfg(target_os = "windows")]
#[global_allocator]

View File

@@ -166,28 +166,8 @@ impl Diagnostic {
/// Returns the primary message for this diagnostic.
///
/// A diagnostic always has a message, but it may be empty.
///
/// NOTE: At present, this routine will return the first primary
/// annotation's message as the primary message when the main diagnostic
/// message is empty. This is meant to facilitate an incremental migration
/// in ty over to the new diagnostic data model. (The old data model
/// didn't distinguish between messages on the entire diagnostic and
/// messages attached to a particular span.)
pub fn primary_message(&self) -> &str {
if !self.inner.message.as_str().is_empty() {
return self.inner.message.as_str();
}
// FIXME: As a special case, while we're migrating ty
// to the new diagnostic data model, we'll look for a primary
// message from the primary annotation. This is because most
// ty diagnostics are created with an empty diagnostic
// message and instead attach the message to the annotation.
// Fixing this will require touching basically every diagnostic
// in ty, so we do it this way for now to match the old
// semantics. ---AG
self.primary_annotation()
.and_then(|ann| ann.get_message())
.unwrap_or_default()
self.inner.message.as_str()
}
/// Introspects this diagnostic and returns what kind of "primary" message
@@ -199,18 +179,6 @@ impl Diagnostic {
/// contains *essential* information or context for understanding the
/// diagnostic.
///
/// The reason why we don't just always return both the main diagnostic
/// message and the primary annotation message is because this was written
/// in the midst of an incremental migration of ty over to the new
/// diagnostic data model. At time of writing, diagnostics were still
/// constructed in the old model where the main diagnostic message and the
/// primary annotation message were not distinguished from each other. So
/// for now, we carefully return what kind of messages this diagnostic
/// contains. In effect, if this diagnostic has a non-empty main message
/// *and* a non-empty primary annotation message, then the diagnostic is
/// 100% using the new diagnostic data model and we can format things
/// appropriately.
///
/// The type returned implements the `std::fmt::Display` trait. In most
/// cases, just converting it to a string (or printing it) will do what
/// you want.
@@ -224,11 +192,10 @@ impl Diagnostic {
.primary_annotation()
.and_then(|ann| ann.get_message())
.unwrap_or_default();
match (main.is_empty(), annotation.is_empty()) {
(false, true) => ConciseMessage::MainDiagnostic(main),
(true, false) => ConciseMessage::PrimaryAnnotation(annotation),
(false, false) => ConciseMessage::Both { main, annotation },
(true, true) => ConciseMessage::Empty,
if annotation.is_empty() {
ConciseMessage::MainDiagnostic(main)
} else {
ConciseMessage::Both { main, annotation }
}
}
@@ -693,18 +660,6 @@ impl SubDiagnostic {
/// contains *essential* information or context for understanding the
/// diagnostic.
///
/// The reason why we don't just always return both the main diagnostic
/// message and the primary annotation message is because this was written
/// in the midst of an incremental migration of ty over to the new
/// diagnostic data model. At time of writing, diagnostics were still
/// constructed in the old model where the main diagnostic message and the
/// primary annotation message were not distinguished from each other. So
/// for now, we carefully return what kind of messages this diagnostic
/// contains. In effect, if this diagnostic has a non-empty main message
/// *and* a non-empty primary annotation message, then the diagnostic is
/// 100% using the new diagnostic data model and we can format things
/// appropriately.
///
/// The type returned implements the `std::fmt::Display` trait. In most
/// cases, just converting it to a string (or printing it) will do what
/// you want.
@@ -714,11 +669,10 @@ impl SubDiagnostic {
.primary_annotation()
.and_then(|ann| ann.get_message())
.unwrap_or_default();
match (main.is_empty(), annotation.is_empty()) {
(false, true) => ConciseMessage::MainDiagnostic(main),
(true, false) => ConciseMessage::PrimaryAnnotation(annotation),
(false, false) => ConciseMessage::Both { main, annotation },
(true, true) => ConciseMessage::Empty,
if annotation.is_empty() {
ConciseMessage::MainDiagnostic(main)
} else {
ConciseMessage::Both { main, annotation }
}
}
}
@@ -888,6 +842,10 @@ impl Annotation {
pub fn hide_snippet(&mut self, yes: bool) {
self.hide_snippet = yes;
}
pub fn is_primary(&self) -> bool {
self.is_primary
}
}
/// Tags that can be associated with an annotation.
@@ -1508,28 +1466,10 @@ pub enum DiagnosticFormat {
pub enum ConciseMessage<'a> {
/// A diagnostic contains a non-empty main message and an empty
/// primary annotation message.
///
/// This strongly suggests that the diagnostic is using the
/// "new" data model.
MainDiagnostic(&'a str),
/// A diagnostic contains an empty main message and a non-empty
/// primary annotation message.
///
/// This strongly suggests that the diagnostic is using the
/// "old" data model.
PrimaryAnnotation(&'a str),
/// A diagnostic contains a non-empty main message and a non-empty
/// primary annotation message.
///
/// This strongly suggests that the diagnostic is using the
/// "new" data model.
Both { main: &'a str, annotation: &'a str },
/// A diagnostic contains an empty main message and an empty
/// primary annotation message.
///
/// This indicates that the diagnostic is probably using the old
/// model.
Empty,
/// A custom concise message has been provided.
Custom(&'a str),
}
@@ -1540,13 +1480,9 @@ impl std::fmt::Display for ConciseMessage<'_> {
ConciseMessage::MainDiagnostic(main) => {
write!(f, "{main}")
}
ConciseMessage::PrimaryAnnotation(annotation) => {
write!(f, "{annotation}")
}
ConciseMessage::Both { main, annotation } => {
write!(f, "{main}: {annotation}")
}
ConciseMessage::Empty => Ok(()),
ConciseMessage::Custom(message) => {
write!(f, "{message}")
}

View File

@@ -21,7 +21,11 @@ use crate::source::source_text;
/// reflected in the changed AST offsets.
/// The other reason is that Ruff's AST doesn't implement `Eq` which Salsa requires
/// for determining if a query result is unchanged.
#[salsa::tracked(returns(ref), no_eq, heap_size=ruff_memory_usage::heap_size)]
///
/// The LRU capacity of 200 was picked without any empirical evidence that it's optimal;
/// it's a wild guess based on the assumption that incremental changes are unlikely to
/// involve more than 200 modules. Parsed ASTs within the same revision are never evicted by Salsa.
#[salsa::tracked(returns(ref), no_eq, heap_size=ruff_memory_usage::heap_size, lru=200)]
pub fn parsed_module(db: &dyn Db, file: File) -> ParsedModule {
let _span = tracing::trace_span!("parsed_module", ?file).entered();
@@ -92,14 +96,9 @@ impl ParsedModule {
self.inner.store(None);
}
/// Returns the pointer address of this [`ParsedModule`].
///
/// The pointer uniquely identifies the module within the current Salsa revision,
/// regardless of whether particular [`ParsedModuleRef`] instances are garbage collected.
pub fn addr(&self) -> usize {
// Note that the outer `Arc` in `inner` is stable across garbage collection, while the inner
// `Arc` within the `ArcSwap` may change.
Arc::as_ptr(&self.inner).addr()
/// Returns the file to which this module belongs.
pub fn file(&self) -> File {
self.file
}
}
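As a loose analogy only (Salsa's `lru` is revision-aware; Python's cache is not), the effect of `lru=200` resembles capping a memoized parser. A sketch with a hypothetical `parse_module` stand-in for the tracked query:

```python
import ast
from functools import lru_cache


@lru_cache(maxsize=200)  # keep at most 200 parsed modules, evicting the least recently used
def parse_module(path: str) -> ast.Module:
    # Hypothetical stand-in for the tracked `parsed_module` query above.
    with open(path) as f:
        return ast.parse(f.read(), filename=path)
```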

View File

@@ -667,6 +667,13 @@ impl Deref for SystemPathBuf {
}
}
impl AsRef<Path> for SystemPathBuf {
#[inline]
fn as_ref(&self) -> &Path {
self.0.as_std_path()
}
}
impl<P: AsRef<SystemPath>> FromIterator<P> for SystemPathBuf {
fn from_iter<I: IntoIterator<Item = P>>(iter: I) -> Self {
let mut buf = SystemPathBuf::new();

View File

@@ -49,7 +49,7 @@ impl ModuleImports {
// Resolve the imports.
let mut resolved_imports = ModuleImports::default();
for import in imports {
for resolved in Resolver::new(db).resolve(import) {
for resolved in Resolver::new(db, path).resolve(import) {
if let Some(path) = resolved.as_system_path() {
resolved_imports.insert(path.to_path_buf());
}

View File

@@ -1,5 +1,9 @@
use ruff_db::files::FilePath;
use ty_python_semantic::{ModuleName, resolve_module, resolve_real_module};
use ruff_db::files::{File, FilePath, system_path_to_file};
use ruff_db::system::SystemPath;
use ty_python_semantic::{
ModuleName, resolve_module, resolve_module_confident, resolve_real_module,
resolve_real_module_confident,
};
use crate::ModuleDb;
use crate::collector::CollectedImport;
@@ -7,12 +11,15 @@ use crate::collector::CollectedImport;
/// Collect all imports for a given Python file.
pub(crate) struct Resolver<'a> {
db: &'a ModuleDb,
file: Option<File>,
}
impl<'a> Resolver<'a> {
/// Initialize a [`Resolver`] with a given [`ModuleDb`].
pub(crate) fn new(db: &'a ModuleDb) -> Self {
Self { db }
pub(crate) fn new(db: &'a ModuleDb, path: &SystemPath) -> Self {
// If we know the importing file, we can potentially resolve more imports
let file = system_path_to_file(db, path).ok();
Self { db, file }
}
/// Resolve the [`CollectedImport`] into a [`FilePath`].
@@ -70,13 +77,21 @@ impl<'a> Resolver<'a> {
/// Resolves a module name to a module.
pub(crate) fn resolve_module(&self, module_name: &ModuleName) -> Option<&'a FilePath> {
let module = resolve_module(self.db, module_name)?;
let module = if let Some(file) = self.file {
resolve_module(self.db, file, module_name)?
} else {
resolve_module_confident(self.db, module_name)?
};
Some(module.file(self.db)?.path(self.db))
}
/// Resolves a module name to a module (stubs not allowed).
fn resolve_real_module(&self, module_name: &ModuleName) -> Option<&'a FilePath> {
let module = resolve_real_module(self.db, module_name)?;
let module = if let Some(file) = self.file {
resolve_real_module(self.db, file, module_name)?
} else {
resolve_real_module_confident(self.db, module_name)?
};
Some(module.file(self.db)?.path(self.db))
}
}
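The practical payoff of threading the importing file through `Resolver::new` is relative imports: a module name like `.sibling` has no anchor without knowing which file contains the import statement, while absolute imports can still be resolved without that context (the `*_confident` fallback). A minimal illustration with hypothetical file and module names:

```python
# pkg/a.py
from . import sibling      # resolvable only relative to pkg/a.py
from .utils import helper  # likewise anchored to pkg/a.py's package

import os  # an absolute import; resolvable even without knowing the importing file
```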

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.14.7"
version = "0.14.8"
publish = false
authors = { workspace = true }
edition = { workspace = true }
@@ -35,6 +35,7 @@ anyhow = { workspace = true }
bitflags = { workspace = true }
clap = { workspace = true, features = ["derive", "string"], optional = true }
colored = { workspace = true }
compact_str = { workspace = true }
fern = { workspace = true }
glob = { workspace = true }
globset = { workspace = true }

View File

@@ -28,9 +28,11 @@ yaml.load("{}", SafeLoader)
yaml.load("{}", yaml.SafeLoader)
yaml.load("{}", CSafeLoader)
yaml.load("{}", yaml.CSafeLoader)
yaml.load("{}", yaml.cyaml.CSafeLoader)
yaml.load("{}", NewSafeLoader)
yaml.load("{}", Loader=SafeLoader)
yaml.load("{}", Loader=yaml.SafeLoader)
yaml.load("{}", Loader=CSafeLoader)
yaml.load("{}", Loader=yaml.CSafeLoader)
yaml.load("{}", Loader=yaml.cyaml.CSafeLoader)
yaml.load("{}", Loader=NewSafeLoader)

View File

@@ -199,6 +199,9 @@ def bytes_okay(value=bytes(1)):
def int_okay(value=int("12")):
pass
# Allow immutable slice()
def slice_okay(value=slice(1,2)):
pass
# Allow immutable complex() value
def complex_okay(value=complex(1,2)):

View File

@@ -52,16 +52,16 @@ def not_broken5():
yield inner()
def not_broken6():
def broken3():
return (yield from [])
def not_broken7():
def broken4():
x = yield from []
return x
def not_broken8():
def broken5():
x = None
def inner(ex):
@@ -76,3 +76,13 @@ class NotBroken9(object):
def __await__(self):
yield from function()
return 42
async def broken6():
yield 1
return foo()
async def broken7():
yield 1
return [1, 2, 3]

View File

@@ -218,3 +218,26 @@ def should_not_fail(payload, Args):
Args:
The other arguments.
"""
# Test cases for Unpack[TypedDict] kwargs
from typing import TypedDict
from typing_extensions import Unpack
class User(TypedDict):
id: int
name: str
def function_with_unpack_args_should_not_fail(query: str, **kwargs: Unpack[User]):
"""Function with Unpack kwargs.
Args:
query: some arg
"""
def function_with_unpack_and_missing_arg_doc_should_fail(query: str, **kwargs: Unpack[User]):
"""Function with Unpack kwargs but missing query arg documentation.
Args:
**kwargs: keyword arguments
"""

View File

@@ -17,3 +17,24 @@ def _():
# Valid yield scope
yield 3
# await is valid in any generator, sync or async
(await cor async for cor in f()) # ok
(await cor for cor in f()) # ok
# but not in comprehensions
[await cor async for cor in f()] # F704
{await cor async for cor in f()} # F704
{await cor: 1 async for cor in f()} # F704
[await cor for cor in f()] # F704
{await cor for cor in f()} # F704
{await cor: 1 for cor in f()} # F704
# or in the iterator of an async generator, which is evaluated in the parent
# scope
(cor async for cor in await f()) # F704
(await cor async for cor in [await c for c in f()]) # F704
# this is also okay because the comprehension is within the generator scope
([await c for c in cor] async for cor in f()) # ok

View File

@@ -0,0 +1,56 @@
def f():
# These should both be ignored by the range suppression.
# ruff: disable[E741, F841]
I = 1
# ruff: enable[E741, F841]
def f():
# These should both be ignored by the implicit range suppression.
# Should also generate an "unmatched suppression" warning.
# ruff:disable[E741,F841]
I = 1
def f():
# Neither warning is ignored, and an "unmatched suppression"
# should be generated.
I = 1
# ruff: enable[E741, F841]
def f():
# One should be ignored by the range suppression, and
# the other logged to the user.
# ruff: disable[E741]
I = 1
# ruff: enable[E741]
def f():
# Test interleaved range suppressions. The first and last
# lines should each log a different warning, while the
# middle line should be completely silenced.
# ruff: disable[E741]
l = 0
# ruff: disable[F841]
O = 1
# ruff: enable[E741]
I = 2
# ruff: enable[F841]
def f():
# Neither of these is ignored, and warnings are
# logged to the user
# ruff: disable[E501]
I = 1
# ruff: enable[E501]
def f():
# These should both be ignored by the range suppression,
# and an unused noqa diagnostic should be logged.
# ruff:disable[E741,F841]
I = 1 # noqa: E741,F841
# ruff:enable[E741,F841]

View File

@@ -3,3 +3,5 @@ def func():
# Top-level await
await 1
([await c for c in cor] async for cor in func()) # ok

View File

@@ -0,0 +1,24 @@
async def gen():
yield 1
return 42
def gen(): # B901 but not a syntax error - not an async generator
yield 1
return 42
async def gen(): # ok - no value in return
yield 1
return
async def gen():
yield 1
return foo()
async def gen():
yield 1
return [1, 2, 3]
async def gen():
if True:
yield 1
return 10

View File

@@ -35,6 +35,7 @@ use ruff_python_ast::helpers::{collect_import_from_member, is_docstring_stmt, to
use ruff_python_ast::identifier::Identifier;
use ruff_python_ast::name::QualifiedName;
use ruff_python_ast::str::Quote;
use ruff_python_ast::token::Tokens;
use ruff_python_ast::visitor::{Visitor, walk_except_handler, walk_pattern};
use ruff_python_ast::{
self as ast, AnyParameterRef, ArgOrKeyword, Comprehension, ElifElseClause, ExceptHandler, Expr,
@@ -48,7 +49,7 @@ use ruff_python_parser::semantic_errors::{
SemanticSyntaxChecker, SemanticSyntaxContext, SemanticSyntaxError, SemanticSyntaxErrorKind,
};
use ruff_python_parser::typing::{AnnotationKind, ParsedAnnotation, parse_type_annotation};
use ruff_python_parser::{ParseError, Parsed, Tokens};
use ruff_python_parser::{ParseError, Parsed};
use ruff_python_semantic::all::{DunderAllDefinition, DunderAllFlags};
use ruff_python_semantic::analyze::{imports, typing};
use ruff_python_semantic::{
@@ -68,6 +69,7 @@ use crate::noqa::NoqaMapping;
use crate::package::PackageRoot;
use crate::preview::is_undefined_export_in_dunder_init_enabled;
use crate::registry::Rule;
use crate::rules::flake8_bugbear::rules::ReturnInGenerator;
use crate::rules::pyflakes::rules::{
LateFutureImport, MultipleStarredExpressions, ReturnOutsideFunction,
UndefinedLocalWithNestedImportStarUsage, YieldOutsideFunction,
@@ -728,6 +730,12 @@ impl SemanticSyntaxContext for Checker<'_> {
self.report_diagnostic(NonlocalWithoutBinding { name }, error.range);
}
}
SemanticSyntaxErrorKind::ReturnInGenerator => {
// B901
if self.is_rule_enabled(Rule::ReturnInGenerator) {
self.report_diagnostic(ReturnInGenerator, error.range);
}
}
SemanticSyntaxErrorKind::ReboundComprehensionVariable
| SemanticSyntaxErrorKind::DuplicateTypeParameter
| SemanticSyntaxErrorKind::MultipleCaseAssignment(_)
@@ -746,6 +754,7 @@ impl SemanticSyntaxContext for Checker<'_> {
| SemanticSyntaxErrorKind::LoadBeforeNonlocalDeclaration { .. }
| SemanticSyntaxErrorKind::NonlocalAndGlobal(_)
| SemanticSyntaxErrorKind::AnnotatedGlobal(_)
| SemanticSyntaxErrorKind::TypeParameterDefaultOrder(_)
| SemanticSyntaxErrorKind::AnnotatedNonlocal(_) => {
self.semantic_errors.borrow_mut().push(error);
}
@@ -779,6 +788,10 @@ impl SemanticSyntaxContext for Checker<'_> {
match scope.kind {
ScopeKind::Class(_) => return false,
ScopeKind::Function(_) | ScopeKind::Lambda(_) => return true,
ScopeKind::Generator {
kind: GeneratorKind::Generator,
..
} => return true,
ScopeKind::Generator { .. }
| ScopeKind::Module
| ScopeKind::Type
@@ -828,14 +841,19 @@ impl SemanticSyntaxContext for Checker<'_> {
self.source_type.is_ipynb()
}
fn in_generator_scope(&self) -> bool {
matches!(
&self.semantic.current_scope().kind,
ScopeKind::Generator {
kind: GeneratorKind::Generator,
..
fn in_generator_context(&self) -> bool {
for scope in self.semantic.current_scopes() {
if matches!(
scope.kind,
ScopeKind::Generator {
kind: GeneratorKind::Generator,
..
}
) {
return true;
}
)
}
false
}
fn in_loop_context(&self) -> bool {

View File

@@ -1,6 +1,6 @@
use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_codegen::Stylist;
use ruff_python_index::Indexer;
use ruff_python_parser::{TokenKind, Tokens};
use ruff_source_file::LineRanges;
use ruff_text_size::{Ranged, TextRange};

View File

@@ -12,17 +12,20 @@ use crate::fix::edits::delete_comment;
use crate::noqa::{
Code, Directive, FileExemption, FileNoqaDirectives, NoqaDirectives, NoqaMapping,
};
use crate::preview::is_range_suppressions_enabled;
use crate::registry::Rule;
use crate::rule_redirects::get_redirect_target;
use crate::rules::pygrep_hooks;
use crate::rules::ruff;
use crate::rules::ruff::rules::{UnusedCodes, UnusedNOQA};
use crate::settings::LinterSettings;
use crate::suppression::Suppressions;
use crate::{Edit, Fix, Locator};
use super::ast::LintContext;
/// RUF100
#[expect(clippy::too_many_arguments)]
pub(crate) fn check_noqa(
context: &mut LintContext,
path: &Path,
@@ -31,6 +34,7 @@ pub(crate) fn check_noqa(
noqa_line_for: &NoqaMapping,
analyze_directives: bool,
settings: &LinterSettings,
suppressions: &Suppressions,
) -> Vec<usize> {
// Identify any codes that are globally exempted (within the current file).
let file_noqa_directives =
@@ -40,7 +44,7 @@ pub(crate) fn check_noqa(
let mut noqa_directives =
NoqaDirectives::from_commented_ranges(comment_ranges, &settings.external, path, locator);
if file_noqa_directives.is_empty() && noqa_directives.is_empty() {
if file_noqa_directives.is_empty() && noqa_directives.is_empty() && suppressions.is_empty() {
return Vec::new();
}
@@ -60,11 +64,19 @@ pub(crate) fn check_noqa(
continue;
}
// Apply file-level suppressions first
if exemption.contains_secondary_code(code) {
ignored_diagnostics.push(index);
continue;
}
// Apply ranged suppressions next
if is_range_suppressions_enabled(settings) && suppressions.check_diagnostic(diagnostic) {
ignored_diagnostics.push(index);
continue;
}
// Apply end-of-line noqa suppressions last
let noqa_offsets = diagnostic
.parent()
.into_iter()
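The three layers applied above correspond to three comment forms in user code. A small sketch with illustrative rule codes:

```python
# (1) File-level exemption, applied first:
# ruff: noqa: F401

# (2) Range suppression (preview-only), applied next:
# ruff: disable[F401]
import os
# ruff: enable[F401]

# (3) End-of-line noqa, applied last:
import sys  # noqa: F401
```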

View File

@@ -4,9 +4,9 @@ use std::path::Path;
use ruff_notebook::CellOffsets;
use ruff_python_ast::PySourceType;
use ruff_python_ast::token::Tokens;
use ruff_python_codegen::Stylist;
use ruff_python_index::Indexer;
use ruff_python_parser::Tokens;
use crate::Locator;
use crate::directives::TodoComment;

View File

@@ -5,8 +5,8 @@ use std::str::FromStr;
use bitflags::bitflags;
use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_index::Indexer;
use ruff_python_parser::{TokenKind, Tokens};
use ruff_python_trivia::CommentRanges;
use ruff_source_file::LineRanges;
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};

View File

@@ -5,8 +5,8 @@ use std::iter::FusedIterator;
use std::slice::Iter;
use ruff_python_ast::statement_visitor::{StatementVisitor, walk_stmt};
use ruff_python_ast::token::{Token, TokenKind, Tokens};
use ruff_python_ast::{self as ast, Stmt, Suite};
use ruff_python_parser::{Token, TokenKind, Tokens};
use ruff_source_file::UniversalNewlineIterator;
use ruff_text_size::{Ranged, TextSize};

View File

@@ -9,10 +9,11 @@ use anyhow::Result;
use libcst_native as cst;
use ruff_diagnostics::Edit;
use ruff_python_ast::token::Tokens;
use ruff_python_ast::{self as ast, Expr, ModModule, Stmt};
use ruff_python_codegen::Stylist;
use ruff_python_importer::Insertion;
use ruff_python_parser::{Parsed, Tokens};
use ruff_python_parser::Parsed;
use ruff_python_semantic::{
ImportedName, MemberNameImport, ModuleNameImport, NameImport, SemanticModel,
};

View File

@@ -46,6 +46,7 @@ pub mod rule_selector;
pub mod rules;
pub mod settings;
pub mod source_kind;
pub mod suppression;
mod text_helpers;
pub mod upstream_categories;
mod violation;

View File

@@ -32,6 +32,7 @@ use crate::rules::ruff::rules::test_rules::{self, TEST_RULES, TestRule};
use crate::settings::types::UnsafeFixes;
use crate::settings::{LinterSettings, TargetVersion, flags};
use crate::source_kind::SourceKind;
use crate::suppression::Suppressions;
use crate::{Locator, directives, fs};
pub(crate) mod float;
@@ -128,6 +129,7 @@ pub fn check_path(
source_type: PySourceType,
parsed: &Parsed<ModModule>,
target_version: TargetVersion,
suppressions: &Suppressions,
) -> Vec<Diagnostic> {
// Aggregate all diagnostics.
let mut context = LintContext::new(path, locator.contents(), settings);
@@ -339,6 +341,7 @@ pub fn check_path(
&directives.noqa_line_for,
parsed.has_valid_syntax(),
settings,
suppressions,
);
if noqa.is_enabled() {
for index in ignored.iter().rev() {
@@ -400,6 +403,9 @@ pub fn add_noqa_to_path(
&indexer,
);
// Parse range suppression comments
let suppressions = Suppressions::from_tokens(settings, locator.contents(), parsed.tokens());
// Generate diagnostics, ignoring any existing `noqa` directives.
let diagnostics = check_path(
path,
@@ -414,6 +420,7 @@ pub fn add_noqa_to_path(
source_type,
&parsed,
target_version,
&suppressions,
);
// Add any missing `# noqa` pragmas.
@@ -427,6 +434,7 @@ pub fn add_noqa_to_path(
&directives.noqa_line_for,
stylist.line_ending(),
reason,
&suppressions,
)
}
@@ -461,6 +469,9 @@ pub fn lint_only(
&indexer,
);
// Parse range suppression comments
let suppressions = Suppressions::from_tokens(settings, locator.contents(), parsed.tokens());
// Generate diagnostics.
let diagnostics = check_path(
path,
@@ -475,6 +486,7 @@ pub fn lint_only(
source_type,
&parsed,
target_version,
&suppressions,
);
LinterResult {
@@ -566,6 +578,9 @@ pub fn lint_fix<'a>(
&indexer,
);
// Parse range suppression comments
let suppressions = Suppressions::from_tokens(settings, locator.contents(), parsed.tokens());
// Generate diagnostics.
let diagnostics = check_path(
path,
@@ -580,6 +595,7 @@ pub fn lint_fix<'a>(
source_type,
&parsed,
target_version,
&suppressions,
);
if iterations == 0 {
@@ -769,6 +785,7 @@ mod tests {
use crate::registry::Rule;
use crate::settings::LinterSettings;
use crate::source_kind::SourceKind;
use crate::suppression::Suppressions;
use crate::test::{TestedNotebook, assert_notebook_path, test_contents, test_snippet};
use crate::{Locator, assert_diagnostics, directives, settings};
@@ -944,6 +961,7 @@ mod tests {
&locator,
&indexer,
);
let suppressions = Suppressions::from_tokens(settings, locator.contents(), parsed.tokens());
let mut diagnostics = check_path(
path,
None,
@@ -957,6 +975,7 @@ mod tests {
source_type,
&parsed,
target_version,
&suppressions,
);
diagnostics.sort_by(Diagnostic::ruff_start_ordering);
diagnostics
@@ -1043,6 +1062,7 @@ mod tests {
Rule::YieldFromInAsyncFunction,
Path::new("yield_from_in_async_function.py")
)]
#[test_case(Rule::ReturnInGenerator, Path::new("return_in_generator.py"))]
fn test_syntax_errors(rule: Rule, path: &Path) -> Result<()> {
let snapshot = path.to_string_lossy().to_string();
let path = Path::new("resources/test/fixtures/syntax_errors").join(path);

View File

@@ -20,12 +20,14 @@ use crate::Locator;
use crate::fs::relativize_path;
use crate::registry::Rule;
use crate::rule_redirects::get_redirect_target;
use crate::suppression::Suppressions;
/// Generates an array of edits that matches the length of `messages`.
/// Each potential edit in the array is paired, in order, with the associated diagnostic.
/// Each edit will add a `noqa` comment to the appropriate line in the source to hide
/// the diagnostic. These edits may conflict with each other and should not be applied
/// simultaneously.
#[expect(clippy::too_many_arguments)]
pub fn generate_noqa_edits(
path: &Path,
diagnostics: &[Diagnostic],
@@ -34,11 +36,19 @@ pub fn generate_noqa_edits(
external: &[String],
noqa_line_for: &NoqaMapping,
line_ending: LineEnding,
suppressions: &Suppressions,
) -> Vec<Option<Edit>> {
let file_directives = FileNoqaDirectives::extract(locator, comment_ranges, external, path);
let exemption = FileExemption::from(&file_directives);
let directives = NoqaDirectives::from_commented_ranges(comment_ranges, external, path, locator);
let comments = find_noqa_comments(diagnostics, locator, &exemption, &directives, noqa_line_for);
let comments = find_noqa_comments(
diagnostics,
locator,
&exemption,
&directives,
noqa_line_for,
suppressions,
);
build_noqa_edits_by_diagnostic(comments, locator, line_ending, None)
}
@@ -725,6 +735,7 @@ pub(crate) fn add_noqa(
noqa_line_for: &NoqaMapping,
line_ending: LineEnding,
reason: Option<&str>,
suppressions: &Suppressions,
) -> Result<usize> {
let (count, output) = add_noqa_inner(
path,
@@ -735,6 +746,7 @@ pub(crate) fn add_noqa(
noqa_line_for,
line_ending,
reason,
suppressions,
);
fs::write(path, output)?;
@@ -751,6 +763,7 @@ fn add_noqa_inner(
noqa_line_for: &NoqaMapping,
line_ending: LineEnding,
reason: Option<&str>,
suppressions: &Suppressions,
) -> (usize, String) {
let mut count = 0;
@@ -760,7 +773,14 @@ fn add_noqa_inner(
let directives = NoqaDirectives::from_commented_ranges(comment_ranges, external, path, locator);
let comments = find_noqa_comments(diagnostics, locator, &exemption, &directives, noqa_line_for);
let comments = find_noqa_comments(
diagnostics,
locator,
&exemption,
&directives,
noqa_line_for,
suppressions,
);
let edits = build_noqa_edits_by_line(comments, locator, line_ending, reason);
@@ -859,6 +879,7 @@ fn find_noqa_comments<'a>(
exemption: &'a FileExemption,
directives: &'a NoqaDirectives,
noqa_line_for: &NoqaMapping,
suppressions: &Suppressions,
) -> Vec<Option<NoqaComment<'a>>> {
// List of noqa comments, ordered to match up with `messages`
let mut comments_by_line: Vec<Option<NoqaComment<'a>>> = vec![];
@@ -875,6 +896,12 @@ fn find_noqa_comments<'a>(
continue;
}
// Apply ranged suppressions next
if suppressions.check_diagnostic(message) {
comments_by_line.push(None);
continue;
}
// Is the violation ignored by a `noqa` directive on the parent line?
if let Some(parent) = message.parent() {
if let Some(directive_line) =
@@ -1253,6 +1280,7 @@ mod tests {
use crate::rules::pycodestyle::rules::{AmbiguousVariableName, UselessSemicolon};
use crate::rules::pyflakes::rules::UnusedVariable;
use crate::rules::pyupgrade::rules::PrintfStringFormatting;
use crate::suppression::Suppressions;
use crate::{Edit, Violation};
use crate::{Locator, generate_noqa_edits};
@@ -2848,6 +2876,7 @@ mod tests {
&noqa_line_for,
LineEnding::Lf,
None,
&Suppressions::default(),
);
assert_eq!(count, 0);
assert_eq!(output, format!("{contents}"));
@@ -2872,6 +2901,7 @@ mod tests {
&noqa_line_for,
LineEnding::Lf,
None,
&Suppressions::default(),
);
assert_eq!(count, 1);
assert_eq!(output, "x = 1 # noqa: F841\n");
@@ -2903,6 +2933,7 @@ mod tests {
&noqa_line_for,
LineEnding::Lf,
None,
&Suppressions::default(),
);
assert_eq!(count, 1);
assert_eq!(output, "x = 1 # noqa: E741, F841\n");
@@ -2934,6 +2965,7 @@ mod tests {
&noqa_line_for,
LineEnding::Lf,
None,
&Suppressions::default(),
);
assert_eq!(count, 0);
assert_eq!(output, "x = 1 # noqa");
@@ -2956,6 +2988,7 @@ print(
let messages = [PrintfStringFormatting
.into_diagnostic(TextRange::new(12.into(), 79.into()), &source_file)];
let comment_ranges = CommentRanges::default();
let suppressions = Suppressions::default();
let edits = generate_noqa_edits(
path,
&messages,
@@ -2964,6 +2997,7 @@ print(
&[],
&noqa_line_for,
LineEnding::Lf,
&suppressions,
);
assert_eq!(
edits,
@@ -2987,6 +3021,7 @@ bar =
[UselessSemicolon.into_diagnostic(TextRange::new(4.into(), 5.into()), &source_file)];
let noqa_line_for = NoqaMapping::default();
let comment_ranges = CommentRanges::default();
let suppressions = Suppressions::default();
let edits = generate_noqa_edits(
path,
&messages,
@@ -2995,6 +3030,7 @@ bar =
&[],
&noqa_line_for,
LineEnding::Lf,
&suppressions,
);
assert_eq!(
edits,

View File

@@ -286,3 +286,8 @@ pub(crate) const fn is_s310_resolve_string_literal_bindings_enabled(
) -> bool {
settings.preview.is_enabled()
}
// https://github.com/astral-sh/ruff/pull/21623
pub(crate) const fn is_range_suppressions_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()
}
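Because the gate keys off preview mode, a range suppression only takes effect under `--preview`: for a file like the sketch below, `ruff check` still reports F401, while `ruff check --preview` passes (matching the `ignore_noqa` CLI test earlier in this diff).

```python
# example.py (illustrative)
# ruff: disable[F401]
import os
# ruff: enable[F401]
```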

View File

@@ -75,6 +75,7 @@ pub(crate) fn unsafe_yaml_load(checker: &Checker, call: &ast::ExprCall) {
qualified_name.segments(),
["yaml", "SafeLoader" | "CSafeLoader"]
| ["yaml", "loader", "SafeLoader" | "CSafeLoader"]
| ["yaml", "cyaml", "CSafeLoader"]
)
})
{

View File

@@ -1,6 +1,5 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::statement_visitor;
use ruff_python_ast::statement_visitor::StatementVisitor;
use ruff_python_ast::visitor::{Visitor, walk_expr, walk_stmt};
use ruff_python_ast::{self as ast, Expr, Stmt, StmtFunctionDef};
use ruff_text_size::TextRange;
@@ -96,6 +95,11 @@ pub(crate) fn return_in_generator(checker: &Checker, function_def: &StmtFunction
return;
}
// Async functions are flagged by the `ReturnInGenerator` semantic syntax error.
if function_def.is_async {
return;
}
let mut visitor = ReturnInGeneratorVisitor::default();
visitor.visit_body(&function_def.body);
@@ -112,15 +116,9 @@ struct ReturnInGeneratorVisitor {
has_yield: bool,
}
impl StatementVisitor<'_> for ReturnInGeneratorVisitor {
impl Visitor<'_> for ReturnInGeneratorVisitor {
fn visit_stmt(&mut self, stmt: &Stmt) {
match stmt {
Stmt::Expr(ast::StmtExpr { value, .. }) => match **value {
Expr::Yield(_) | Expr::YieldFrom(_) => {
self.has_yield = true;
}
_ => {}
},
Stmt::FunctionDef(_) => {
// Do not recurse into nested functions; they're evaluated separately.
}
@@ -130,8 +128,19 @@ impl StatementVisitor<'_> for ReturnInGeneratorVisitor {
node_index: _,
}) => {
self.return_ = Some(*range);
walk_stmt(self, stmt);
}
_ => statement_visitor::walk_stmt(self, stmt),
_ => walk_stmt(self, stmt),
}
}
fn visit_expr(&mut self, expr: &Expr) {
match expr {
Expr::Lambda(_) => {}
Expr::Yield(_) | Expr::YieldFrom(_) => {
self.has_yield = true;
}
_ => walk_expr(self, expr),
}
}
}

View File

@@ -236,227 +236,227 @@ help: Replace with `None`; initialize within function
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:239:20
--> B006_B008.py:242:20
|
237 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008.
239 | def nested_combo(a=[float(3), dt.datetime.now()]):
240 | # B006 and B008
241 | # We should handle arbitrary nesting of these B008.
242 | def nested_combo(a=[float(3), dt.datetime.now()]):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
240 | pass
243 | pass
|
help: Replace with `None`; initialize within function
236 |
237 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008.
239 |
240 | # B006 and B008
241 | # We should handle arbitrary nesting of these B008.
- def nested_combo(a=[float(3), dt.datetime.now()]):
239 + def nested_combo(a=None):
240 | pass
241 |
242 |
242 + def nested_combo(a=None):
243 | pass
244 |
245 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:276:27
--> B006_B008.py:279:27
|
275 | def mutable_annotations(
276 | a: list[int] | None = [],
278 | def mutable_annotations(
279 | a: list[int] | None = [],
| ^^
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
|
help: Replace with `None`; initialize within function
273 |
274 |
275 | def mutable_annotations(
276 |
277 |
278 | def mutable_annotations(
- a: list[int] | None = [],
276 + a: list[int] | None = None,
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 + a: list[int] | None = None,
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:277:35
--> B006_B008.py:280:35
|
275 | def mutable_annotations(
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | def mutable_annotations(
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
| ^^
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
|
help: Replace with `None`; initialize within function
274 |
275 | def mutable_annotations(
276 | a: list[int] | None = [],
277 |
278 | def mutable_annotations(
279 | a: list[int] | None = [],
- b: Optional[Dict[int, int]] = {},
277 + b: Optional[Dict[int, int]] = None,
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ):
280 + b: Optional[Dict[int, int]] = None,
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
283 | ):
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:278:62
--> B006_B008.py:281:62
|
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| ^^^^^
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ):
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
283 | ):
|
help: Replace with `None`; initialize within function
275 | def mutable_annotations(
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | def mutable_annotations(
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
- c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
278 + c: Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ):
281 | pass
281 + c: Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
283 | ):
284 | pass
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:279:80
--> B006_B008.py:282:80
|
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| ^^^^^
280 | ):
281 | pass
283 | ):
284 | pass
|
help: Replace with `None`; initialize within function
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
- d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 + d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
280 | ):
281 | pass
282 |
282 + d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
283 | ):
284 | pass
285 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:284:52
--> B006_B008.py:287:52
|
284 | def single_line_func_wrong(value: dict[str, str] = {}):
287 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
285 | """Docstring"""
288 | """Docstring"""
|
help: Replace with `None`; initialize within function
281 | pass
282 |
283 |
- def single_line_func_wrong(value: dict[str, str] = {}):
284 + def single_line_func_wrong(value: dict[str, str] = None):
285 | """Docstring"""
284 | pass
285 |
286 |
287 |
- def single_line_func_wrong(value: dict[str, str] = {}):
287 + def single_line_func_wrong(value: dict[str, str] = None):
288 | """Docstring"""
289 |
290 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:288:52
--> B006_B008.py:291:52
|
288 | def single_line_func_wrong(value: dict[str, str] = {}):
291 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
289 | """Docstring"""
290 | ...
292 | """Docstring"""
293 | ...
|
help: Replace with `None`; initialize within function
285 | """Docstring"""
286 |
287 |
288 | """Docstring"""
289 |
290 |
- def single_line_func_wrong(value: dict[str, str] = {}):
288 + def single_line_func_wrong(value: dict[str, str] = None):
289 | """Docstring"""
290 | ...
291 |
291 + def single_line_func_wrong(value: dict[str, str] = None):
292 | """Docstring"""
293 | ...
294 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:293:52
--> B006_B008.py:296:52
|
293 | def single_line_func_wrong(value: dict[str, str] = {}):
296 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
294 | """Docstring"""; ...
297 | """Docstring"""; ...
|
help: Replace with `None`; initialize within function
290 | ...
291 |
292 |
- def single_line_func_wrong(value: dict[str, str] = {}):
293 + def single_line_func_wrong(value: dict[str, str] = None):
294 | """Docstring"""; ...
293 | ...
294 |
295 |
296 |
- def single_line_func_wrong(value: dict[str, str] = {}):
296 + def single_line_func_wrong(value: dict[str, str] = None):
297 | """Docstring"""; ...
298 |
299 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:297:52
--> B006_B008.py:300:52
|
297 | def single_line_func_wrong(value: dict[str, str] = {}):
300 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
298 | """Docstring"""; \
299 | ...
301 | """Docstring"""; \
302 | ...
|
help: Replace with `None`; initialize within function
294 | """Docstring"""; ...
295 |
296 |
297 | """Docstring"""; ...
298 |
299 |
- def single_line_func_wrong(value: dict[str, str] = {}):
297 + def single_line_func_wrong(value: dict[str, str] = None):
298 | """Docstring"""; \
299 | ...
300 |
300 + def single_line_func_wrong(value: dict[str, str] = None):
301 | """Docstring"""; \
302 | ...
303 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:302:52
--> B006_B008.py:305:52
|
302 | def single_line_func_wrong(value: dict[str, str] = {
305 | def single_line_func_wrong(value: dict[str, str] = {
| ____________________________________________________^
303 | | # This is a comment
304 | | }):
306 | | # This is a comment
307 | | }):
| |_^
305 | """Docstring"""
308 | """Docstring"""
|
help: Replace with `None`; initialize within function
299 | ...
300 |
301 |
302 | ...
303 |
304 |
- def single_line_func_wrong(value: dict[str, str] = {
- # This is a comment
- }):
302 + def single_line_func_wrong(value: dict[str, str] = None):
303 | """Docstring"""
304 |
305 |
305 + def single_line_func_wrong(value: dict[str, str] = None):
306 | """Docstring"""
307 |
308 |
note: This is an unsafe fix and may change runtime behavior
B006 Do not use mutable data structures for argument defaults
--> B006_B008.py:308:52
--> B006_B008.py:311:52
|
308 | def single_line_func_wrong(value: dict[str, str] = {}) \
311 | def single_line_func_wrong(value: dict[str, str] = {}) \
| ^^
309 | : \
310 | """Docstring"""
312 | : \
313 | """Docstring"""
|
help: Replace with `None`; initialize within function
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:313:52
--> B006_B008.py:316:52
|
313 | def single_line_func_wrong(value: dict[str, str] = {}):
316 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
314 | """Docstring without newline"""
317 | """Docstring without newline"""
|
help: Replace with `None`; initialize within function
310 | """Docstring"""
311 |
312 |
313 | """Docstring"""
314 |
315 |
- def single_line_func_wrong(value: dict[str, str] = {}):
313 + def single_line_func_wrong(value: dict[str, str] = None):
314 | """Docstring without newline"""
316 + def single_line_func_wrong(value: dict[str, str] = None):
317 | """Docstring without newline"""
note: This is an unsafe fix and may change runtime behavior

View File

@@ -53,39 +53,39 @@ B008 Do not perform function call in argument defaults; instead, perform the cal
|
B008 Do not perform function call `dt.datetime.now` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
--> B006_B008.py:239:31
--> B006_B008.py:242:31
|
237 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008.
239 | def nested_combo(a=[float(3), dt.datetime.now()]):
240 | # B006 and B008
241 | # We should handle arbitrary nesting of these B008.
242 | def nested_combo(a=[float(3), dt.datetime.now()]):
| ^^^^^^^^^^^^^^^^^
240 | pass
243 | pass
|
B008 Do not perform function call `map` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
--> B006_B008.py:245:22
--> B006_B008.py:248:22
|
243 | # Don't flag nested B006 since we can't guarantee that
244 | # it isn't made mutable by the outer operation.
245 | def no_nested_b006(a=map(lambda s: s.upper(), ["a", "b", "c"])):
246 | # Don't flag nested B006 since we can't guarantee that
247 | # it isn't made mutable by the outer operation.
248 | def no_nested_b006(a=map(lambda s: s.upper(), ["a", "b", "c"])):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
246 | pass
249 | pass
|
B008 Do not perform function call `random.randint` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
--> B006_B008.py:250:19
--> B006_B008.py:253:19
|
249 | # B008-ception.
250 | def nested_b008(a=random.randint(0, dt.datetime.now().year)):
252 | # B008-ception.
253 | def nested_b008(a=random.randint(0, dt.datetime.now().year)):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
251 | pass
254 | pass
|
B008 Do not perform function call `dt.datetime.now` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
--> B006_B008.py:250:37
--> B006_B008.py:253:37
|
249 | # B008-ception.
250 | def nested_b008(a=random.randint(0, dt.datetime.now().year)):
252 | # B008-ception.
253 | def nested_b008(a=random.randint(0, dt.datetime.now().year)):
| ^^^^^^^^^^^^^^^^^
251 | pass
254 | pass
|

View File

@@ -21,3 +21,46 @@ B901 Using `yield` and `return {value}` in a generator function can lead to conf
37 |
38 | yield from not_broken()
|
B901 Using `yield` and `return {value}` in a generator function can lead to confusing behavior
--> B901.py:56:5
|
55 | def broken3():
56 | return (yield from [])
| ^^^^^^^^^^^^^^^^^^^^^^
|
B901 Using `yield` and `return {value}` in a generator function can lead to confusing behavior
--> B901.py:61:5
|
59 | def broken4():
60 | x = yield from []
61 | return x
| ^^^^^^^^
|
B901 Using `yield` and `return {value}` in a generator function can lead to confusing behavior
--> B901.py:72:5
|
71 | inner((yield from []))
72 | return x
| ^^^^^^^^
|
B901 Using `yield` and `return {value}` in a generator function can lead to confusing behavior
--> B901.py:83:5
|
81 | async def broken6():
82 | yield 1
83 | return foo()
| ^^^^^^^^^^^^
|
B901 Using `yield` and `return {value}` in a generator function can lead to confusing behavior
--> B901.py:88:5
|
86 | async def broken7():
87 | yield 1
88 | return [1, 2, 3]
| ^^^^^^^^^^^^^^^^
|

View File

@@ -236,227 +236,227 @@ help: Replace with `None`; initialize within function
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:239:20
--> B006_B008.py:242:20
|
237 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008.
239 | def nested_combo(a=[float(3), dt.datetime.now()]):
240 | # B006 and B008
241 | # We should handle arbitrary nesting of these B008.
242 | def nested_combo(a=[float(3), dt.datetime.now()]):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
240 | pass
243 | pass
|
help: Replace with `None`; initialize within function
236 |
237 | # B006 and B008
238 | # We should handle arbitrary nesting of these B008.
239 |
240 | # B006 and B008
241 | # We should handle arbitrary nesting of these B008.
- def nested_combo(a=[float(3), dt.datetime.now()]):
239 + def nested_combo(a=None):
240 | pass
241 |
242 |
242 + def nested_combo(a=None):
243 | pass
244 |
245 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:276:27
--> B006_B008.py:279:27
|
275 | def mutable_annotations(
276 | a: list[int] | None = [],
278 | def mutable_annotations(
279 | a: list[int] | None = [],
| ^^
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
|
help: Replace with `None`; initialize within function
273 |
274 |
275 | def mutable_annotations(
276 |
277 |
278 | def mutable_annotations(
- a: list[int] | None = [],
276 + a: list[int] | None = None,
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 + a: list[int] | None = None,
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:277:35
--> B006_B008.py:280:35
|
275 | def mutable_annotations(
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | def mutable_annotations(
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
| ^^
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
|
help: Replace with `None`; initialize within function
274 |
275 | def mutable_annotations(
276 | a: list[int] | None = [],
277 |
278 | def mutable_annotations(
279 | a: list[int] | None = [],
- b: Optional[Dict[int, int]] = {},
277 + b: Optional[Dict[int, int]] = None,
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ):
280 + b: Optional[Dict[int, int]] = None,
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
283 | ):
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:278:62
--> B006_B008.py:281:62
|
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| ^^^^^
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ):
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
283 | ):
|
help: Replace with `None`; initialize within function
275 | def mutable_annotations(
276 | a: list[int] | None = [],
277 | b: Optional[Dict[int, int]] = {},
278 | def mutable_annotations(
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
- c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
278 + c: Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | ):
281 | pass
281 + c: Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
283 | ):
284 | pass
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:279:80
--> B006_B008.py:282:80
|
277 | b: Optional[Dict[int, int]] = {},
278 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
279 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 | d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
| ^^^^^
283 | ):
284 | pass
|
help: Replace with `None`; initialize within function
279 | a: list[int] | None = [],
280 | b: Optional[Dict[int, int]] = {},
281 | c: Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
- d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = set(),
282 + d: typing_extensions.Annotated[Union[Set[str], abc.Sized], "annotation"] = None,
283 | ):
284 | pass
285 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:287:52
|
287 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
288 | """Docstring"""
|
help: Replace with `None`; initialize within function
284 | pass
285 |
286 |
- def single_line_func_wrong(value: dict[str, str] = {}):
287 + def single_line_func_wrong(value: dict[str, str] = None):
288 | """Docstring"""
289 |
290 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:291:52
|
291 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
292 | """Docstring"""
293 | ...
|
help: Replace with `None`; initialize within function
288 | """Docstring"""
289 |
290 |
- def single_line_func_wrong(value: dict[str, str] = {}):
291 + def single_line_func_wrong(value: dict[str, str] = None):
292 | """Docstring"""
293 | ...
294 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:296:52
|
296 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
297 | """Docstring"""; ...
|
help: Replace with `None`; initialize within function
293 | ...
294 |
295 |
- def single_line_func_wrong(value: dict[str, str] = {}):
296 + def single_line_func_wrong(value: dict[str, str] = None):
297 | """Docstring"""; ...
298 |
299 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:300:52
|
300 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
301 | """Docstring"""; \
302 | ...
|
help: Replace with `None`; initialize within function
297 | """Docstring"""; ...
298 |
299 |
- def single_line_func_wrong(value: dict[str, str] = {}):
300 + def single_line_func_wrong(value: dict[str, str] = None):
301 | """Docstring"""; \
302 | ...
303 |
note: This is an unsafe fix and may change runtime behavior
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:305:52
|
305 | def single_line_func_wrong(value: dict[str, str] = {
| ____________________________________________________^
306 | | # This is a comment
307 | | }):
| |_^
308 | """Docstring"""
|
help: Replace with `None`; initialize within function
302 | ...
303 |
304 |
- def single_line_func_wrong(value: dict[str, str] = {
- # This is a comment
- }):
305 + def single_line_func_wrong(value: dict[str, str] = None):
306 | """Docstring"""
307 |
308 |
note: This is an unsafe fix and may change runtime behavior
B006 Do not use mutable data structures for argument defaults
--> B006_B008.py:311:52
|
311 | def single_line_func_wrong(value: dict[str, str] = {}) \
| ^^
312 | : \
313 | """Docstring"""
|
help: Replace with `None`; initialize within function
B006 [*] Do not use mutable data structures for argument defaults
--> B006_B008.py:316:52
|
316 | def single_line_func_wrong(value: dict[str, str] = {}):
| ^^
317 | """Docstring without newline"""
|
help: Replace with `None`; initialize within function
313 | """Docstring"""
314 |
315 |
- def single_line_func_wrong(value: dict[str, str] = {}):
316 + def single_line_func_wrong(value: dict[str, str] = None):
317 | """Docstring without newline"""
note: This is an unsafe fix and may change runtime behavior
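The fix previews above all follow the usual B006 refactor: default to `None` and create the mutable value inside the function body. A minimal sketch of the fixed-up form (the function name and the `is None` guard are illustrative, not part of the fixture):

```python
def single_line_func_right(value: dict[str, str] | None = None):
    """Docstring"""
    if value is None:
        # Build the mutable default inside the function so each call gets a
        # fresh dict instead of sharing one default across all calls.
        value = {}
```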

View File

@@ -1,6 +1,6 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
+use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_index::Indexer;
-use ruff_python_parser::{TokenKind, Tokens};
use ruff_text_size::{Ranged, TextRange};
use crate::Locator;

View File

@@ -3,7 +3,7 @@ use ruff_python_ast as ast;
use ruff_python_ast::ExprGenerator;
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::parenthesize::parenthesized_range;
-use ruff_python_parser::TokenKind;
+use ruff_python_ast::token::TokenKind;
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::checkers::ast::Checker;

View File

@@ -3,7 +3,7 @@ use ruff_python_ast as ast;
use ruff_python_ast::ExprGenerator;
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::parenthesize::parenthesized_range;
-use ruff_python_parser::TokenKind;
+use ruff_python_ast::token::TokenKind;
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::checkers::ast::Checker;

View File

@@ -1,7 +1,7 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast as ast;
use ruff_python_ast::parenthesize::parenthesized_range;
-use ruff_python_parser::TokenKind;
+use ruff_python_ast::token::TokenKind;
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::checkers::ast::Checker;

View File

@@ -3,8 +3,8 @@ use std::borrow::Cow;
use itertools::Itertools;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::StringFlags;
+use ruff_python_ast::token::{Token, TokenKind, Tokens};
use ruff_python_index::Indexer;
-use ruff_python_parser::{Token, TokenKind, Tokens};
use ruff_source_file::LineRanges;
use ruff_text_size::{Ranged, TextLen, TextRange};

View File

@@ -1,6 +1,6 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
+use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_ast::{self as ast, Expr};
-use ruff_python_parser::{TokenKind, Tokens};
use ruff_text_size::{Ranged, TextLen, TextSize};
use crate::checkers::ast::Checker;

View File

@@ -4,10 +4,10 @@ use ruff_diagnostics::Applicability;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::{is_const_false, is_const_true};
use ruff_python_ast::stmt_if::elif_else_range;
+use ruff_python_ast::token::TokenKind;
use ruff_python_ast::visitor::Visitor;
use ruff_python_ast::whitespace::indentation;
use ruff_python_ast::{self as ast, Decorator, ElifElseClause, Expr, Stmt};
-use ruff_python_parser::TokenKind;
use ruff_python_semantic::SemanticModel;
use ruff_python_semantic::analyze::visibility::is_property;
use ruff_python_trivia::{SimpleTokenKind, SimpleTokenizer, is_python_whitespace};

View File

@@ -1,5 +1,5 @@
+use ruff_python_ast::token::Tokens;
use ruff_python_ast::{self as ast, Stmt};
-use ruff_python_parser::Tokens;
use ruff_source_file::LineRanges;
use ruff_text_size::{Ranged, TextRange};

View File

@@ -1,5 +1,5 @@
use ruff_python_ast::Stmt;
-use ruff_python_parser::{TokenKind, Tokens};
+use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_trivia::PythonWhitespace;
use ruff_source_file::UniversalNewlines;
use ruff_text_size::Ranged;

View File

@@ -11,8 +11,8 @@ use comments::Comment;
use normalize::normalize_imports;
use order::order_imports;
use ruff_python_ast::PySourceType;
+use ruff_python_ast::token::Tokens;
use ruff_python_codegen::Stylist;
-use ruff_python_parser::Tokens;
use settings::Settings;
use types::EitherImport::{Import, ImportFrom};
use types::{AliasData, ImportBlock, TrailingComma};

View File

@@ -1,11 +1,11 @@
use itertools::{EitherOrBoth, Itertools};
use ruff_macros::{ViolationMetadata, derive_message_formats};
+use ruff_python_ast::token::Tokens;
use ruff_python_ast::whitespace::trailing_lines_end;
use ruff_python_ast::{PySourceType, PythonVersion, Stmt};
use ruff_python_codegen::Stylist;
use ruff_python_index::Indexer;
-use ruff_python_parser::Tokens;
use ruff_python_trivia::{PythonWhitespace, leading_indentation, textwrap::indent};
use ruff_source_file::{LineRanges, UniversalNewlines};
use ruff_text_size::{Ranged, TextRange};

View File

@@ -1,4 +1,4 @@
-use ruff_python_parser::TokenKind;
+use ruff_python_ast::token::TokenKind;
/// Returns `true` if the name should be considered "ambiguous".
pub(super) fn is_ambiguous_name(name: &str) -> bool {

View File

@@ -8,10 +8,10 @@ use itertools::Itertools;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_notebook::CellOffsets;
use ruff_python_ast::PySourceType;
+use ruff_python_ast::token::TokenIterWithContext;
+use ruff_python_ast::token::TokenKind;
+use ruff_python_ast::token::Tokens;
use ruff_python_codegen::Stylist;
-use ruff_python_parser::TokenIterWithContext;
-use ruff_python_parser::TokenKind;
-use ruff_python_parser::Tokens;
use ruff_python_trivia::PythonWhitespace;
use ruff_source_file::{LineRanges, UniversalNewlines};
use ruff_text_size::TextRange;

View File

@@ -1,8 +1,8 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_notebook::CellOffsets;
use ruff_python_ast::PySourceType;
+use ruff_python_ast::token::{TokenIterWithContext, TokenKind, Tokens};
use ruff_python_index::Indexer;
-use ruff_python_parser::{TokenIterWithContext, TokenKind, Tokens};
use ruff_text_size::{Ranged, TextSize};
use crate::Locator;

View File

@@ -1,5 +1,5 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
-use ruff_python_parser::TokenKind;
+use ruff_python_ast::token::TokenKind;
use ruff_text_size::{Ranged, TextRange};
use crate::AlwaysFixableViolation;

View File

@@ -1,5 +1,5 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
-use ruff_python_parser::TokenKind;
+use ruff_python_ast::token::TokenKind;
use ruff_text_size::TextRange;
use crate::Violation;

View File

@@ -1,5 +1,5 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
-use ruff_python_parser::TokenKind;
+use ruff_python_ast::token::TokenKind;
use ruff_text_size::Ranged;
use crate::Edit;

View File

@@ -1,5 +1,5 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
-use ruff_python_parser::TokenKind;
+use ruff_python_ast::token::TokenKind;
use ruff_text_size::Ranged;
use crate::checkers::ast::LintContext;

View File

@@ -1,5 +1,5 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
-use ruff_python_parser::TokenKind;
+use ruff_python_ast::token::TokenKind;
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::LintContext;

View File

@@ -9,7 +9,7 @@ pub(crate) use missing_whitespace::*;
pub(crate) use missing_whitespace_after_keyword::*;
pub(crate) use missing_whitespace_around_operator::*;
pub(crate) use redundant_backslash::*;
-use ruff_python_parser::{TokenKind, Tokens};
+use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_trivia::is_python_whitespace;
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
pub(crate) use space_around_operator::*;

View File

@@ -1,6 +1,6 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
+use ruff_python_ast::token::TokenKind;
use ruff_python_index::Indexer;
-use ruff_python_parser::TokenKind;
use ruff_source_file::LineRanges;
use ruff_text_size::{Ranged, TextRange, TextSize};

View File

@@ -1,5 +1,5 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
-use ruff_python_parser::TokenKind;
+use ruff_python_ast::token::TokenKind;
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::LintContext;

View File

@@ -1,5 +1,5 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
-use ruff_python_parser::TokenKind;
+use ruff_python_ast::token::TokenKind;
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::checkers::ast::LintContext;

View File

@@ -1,5 +1,5 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
-use ruff_python_parser::TokenKind;
+use ruff_python_ast::token::TokenKind;
use ruff_python_trivia::PythonWhitespace;
use ruff_source_file::LineRanges;
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};

View File

@@ -1,5 +1,5 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
-use ruff_python_parser::TokenKind;
+use ruff_python_ast::token::TokenKind;
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::checkers::ast::LintContext;

View File

@@ -3,7 +3,7 @@ use std::iter::Peekable;
use itertools::Itertools;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_notebook::CellOffsets;
-use ruff_python_parser::{Token, TokenKind, Tokens};
+use ruff_python_ast::token::{Token, TokenKind, Tokens};
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::{AlwaysFixableViolation, Edit, Fix, checkers::ast::LintContext};

View File

@@ -4,7 +4,9 @@ use rustc_hash::FxHashSet;
use std::sync::LazyLock;
use ruff_macros::{ViolationMetadata, derive_message_formats};
+use ruff_python_ast::Parameter;
use ruff_python_ast::docstrings::{clean_space, leading_space};
+use ruff_python_ast::helpers::map_subscript;
use ruff_python_ast::identifier::Identifier;
use ruff_python_semantic::analyze::visibility::is_staticmethod;
use ruff_python_trivia::textwrap::dedent;
@@ -1184,6 +1186,9 @@ impl AlwaysFixableViolation for MissingSectionNameColon {
/// This rule is enabled when using the `google` convention, and disabled when
/// using the `pep257` and `numpy` conventions.
///
+/// Parameters annotated with `typing.Unpack` are exempt from this rule.
+/// This follows the Python typing specification for unpacking keyword arguments.
+///
/// ## Example
/// ```python
/// def calculate_speed(distance: float, time: float) -> float:
@@ -1233,6 +1238,7 @@ impl AlwaysFixableViolation for MissingSectionNameColon {
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [PEP 287 reStructuredText Docstring Format](https://peps.python.org/pep-0287/)
/// - [Google Python Style Guide - Docstrings](https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings)
+/// - [Python - Unpack for keyword arguments](https://typing.python.org/en/latest/spec/callables.html#unpack-kwargs)
#[derive(ViolationMetadata)]
#[violation_metadata(stable_since = "v0.0.73")]
pub(crate) struct UndocumentedParam {
@@ -1808,7 +1814,9 @@ fn missing_args(checker: &Checker, docstring: &Docstring, docstrings_args: &FxHa
missing_arg_names.insert(starred_arg_name);
}
}
-if let Some(arg) = function.parameters.kwarg.as_ref() {
+if let Some(arg) = function.parameters.kwarg.as_ref()
+    && !has_unpack_annotation(checker, arg)
+{
let arg_name = arg.name.as_str();
let starred_arg_name = format!("**{arg_name}");
if !arg_name.starts_with('_')
@@ -1834,6 +1842,15 @@ fn missing_args(checker: &Checker, docstring: &Docstring, docstrings_args: &FxHa
}
}
+/// Returns `true` if the parameter is annotated with `typing.Unpack`
+fn has_unpack_annotation(checker: &Checker, parameter: &Parameter) -> bool {
+    parameter.annotation.as_ref().is_some_and(|annotation| {
+        checker
+            .semantic()
+            .match_typing_expr(map_subscript(annotation), "Unpack")
+    })
+}
// See: `GOOGLE_ARGS_REGEX` in `pydocstyle/checker.py`.
static GOOGLE_ARGS_REGEX: LazyLock<Regex> =
LazyLock::new(|| Regex::new(r"^\s*(\*?\*?\w+)\s*(\(.*?\))?\s*:(\r\n|\n)?\s*.+").unwrap());
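The snapshot updates that follow show the intended behavior change: `query` is still reported as undocumented, while the `Unpack`-annotated `**kwargs` no longer is. A minimal sketch of a signature that now passes D417 (the `User` fields and function name here are illustrative, not the exact fixture):

```python
from typing import TypedDict, Unpack  # Unpack needs Python 3.12+; use typing_extensions on older versions


class User(TypedDict):
    # Illustrative fields: with Unpack, these (not **kwargs itself) are the
    # documented surface, if documented at all.
    id: int
    name: str


def update_user(query: str, **kwargs: Unpack[User]) -> None:
    """Update a user.

    Args:
        query: The lookup query; documenting it satisfies D417.
    """
```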

View File

@@ -101,3 +101,13 @@ D417 Missing argument description in the docstring for `should_fail`: `Args`
200 | """
201 | Send a message.
|
D417 Missing argument description in the docstring for `function_with_unpack_and_missing_arg_doc_should_fail`: `query`
--> D417.py:238:5
|
236 | """
237 |
238 | def function_with_unpack_and_missing_arg_doc_should_fail(query: str, **kwargs: Unpack[User]):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
239 | """Function with Unpack kwargs but missing query arg documentation.
|

View File

@@ -83,3 +83,13 @@ D417 Missing argument description in the docstring for `should_fail`: `Args`
200 | """
201 | Send a message.
|
D417 Missing argument description in the docstring for `function_with_unpack_and_missing_arg_doc_should_fail`: `query`
--> D417.py:238:5
|
236 | """
237 |
238 | def function_with_unpack_and_missing_arg_doc_should_fail(query: str, **kwargs: Unpack[User]):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
239 | """Function with Unpack kwargs but missing query arg documentation.
|

View File

@@ -101,3 +101,13 @@ D417 Missing argument description in the docstring for `should_fail`: `Args`
200 | """
201 | Send a message.
|
D417 Missing argument description in the docstring for `function_with_unpack_and_missing_arg_doc_should_fail`: `query`
--> D417.py:238:5
|
236 | """
237 |
238 | def function_with_unpack_and_missing_arg_doc_should_fail(query: str, **kwargs: Unpack[User]):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
239 | """Function with Unpack kwargs but missing query arg documentation.
|

View File

@@ -101,3 +101,13 @@ D417 Missing argument description in the docstring for `should_fail`: `Args`
200 | """
201 | Send a message.
|
D417 Missing argument description in the docstring for `function_with_unpack_and_missing_arg_doc_should_fail`: `query`
--> D417.py:238:5
|
236 | """
237 |
238 | def function_with_unpack_and_missing_arg_doc_should_fail(query: str, **kwargs: Unpack[User]):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
239 | """Function with Unpack kwargs but missing query arg documentation.
|

View File

@@ -28,6 +28,7 @@ mod tests {
use crate::settings::types::PreviewMode;
use crate::settings::{LinterSettings, flags};
use crate::source_kind::SourceKind;
+use crate::suppression::Suppressions;
use crate::test::{test_contents, test_path, test_snippet};
use crate::{Locator, assert_diagnostics, assert_diagnostics_diff, directives};
@@ -955,6 +956,8 @@ mod tests {
&locator,
&indexer,
);
+let suppressions =
+    Suppressions::from_tokens(&settings, locator.contents(), parsed.tokens());
let mut messages = check_path(
Path::new("<filename>"),
None,
@@ -968,6 +971,7 @@ mod tests {
source_type,
&parsed,
target_version,
+&suppressions,
);
messages.sort_by(Diagnostic::ruff_start_ordering);
let actual = messages

View File

@@ -2,8 +2,8 @@ use anyhow::{Error, bail};
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers;
+use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_ast::{CmpOp, Expr};
-use ruff_python_parser::{TokenKind, Tokens};
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;

View File

@@ -3,8 +3,8 @@ use itertools::Itertools;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::contains_effect;
use ruff_python_ast::parenthesize::parenthesized_range;
+use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_ast::{self as ast, Stmt};
-use ruff_python_parser::{TokenKind, Tokens};
use ruff_python_semantic::Binding;
use ruff_text_size::{Ranged, TextRange, TextSize};

View File

@@ -37,3 +37,88 @@ F704 `await` statement outside of a function
12 |
13 | def _():
|
F704 `await` statement outside of a function
--> F704.py:27:2
|
26 | # but not in comprehensions
27 | [await cor async for cor in f()] # F704
| ^^^^^^^^^
28 | {await cor async for cor in f()} # F704
29 | {await cor: 1 async for cor in f()} # F704
|
F704 `await` statement outside of a function
--> F704.py:28:2
|
26 | # but not in comprehensions
27 | [await cor async for cor in f()] # F704
28 | {await cor async for cor in f()} # F704
| ^^^^^^^^^
29 | {await cor: 1 async for cor in f()} # F704
30 | [await cor for cor in f()] # F704
|
F704 `await` statement outside of a function
--> F704.py:29:2
|
27 | [await cor async for cor in f()] # F704
28 | {await cor async for cor in f()} # F704
29 | {await cor: 1 async for cor in f()} # F704
| ^^^^^^^^^
30 | [await cor for cor in f()] # F704
31 | {await cor for cor in f()} # F704
|
F704 `await` statement outside of a function
--> F704.py:30:2
|
28 | {await cor async for cor in f()} # F704
29 | {await cor: 1 async for cor in f()} # F704
30 | [await cor for cor in f()] # F704
| ^^^^^^^^^
31 | {await cor for cor in f()} # F704
32 | {await cor: 1 for cor in f()} # F704
|
F704 `await` statement outside of a function
--> F704.py:31:2
|
29 | {await cor: 1 async for cor in f()} # F704
30 | [await cor for cor in f()] # F704
31 | {await cor for cor in f()} # F704
| ^^^^^^^^^
32 | {await cor: 1 for cor in f()} # F704
|
F704 `await` statement outside of a function
--> F704.py:32:2
|
30 | [await cor for cor in f()] # F704
31 | {await cor for cor in f()} # F704
32 | {await cor: 1 for cor in f()} # F704
| ^^^^^^^^^
33 |
34 | # or in the iterator of an async generator, which is evaluated in the parent
|
F704 `await` statement outside of a function
--> F704.py:36:23
|
34 | # or in the iterator of an async generator, which is evaluated in the parent
35 | # scope
36 | (cor async for cor in await f()) # F704
| ^^^^^^^^^
37 | (await cor async for cor in [await c for c in f()]) # F704
|
F704 `await` statement outside of a function
--> F704.py:37:30
|
35 | # scope
36 | (cor async for cor in await f()) # F704
37 | (await cor async for cor in [await c for c in f()]) # F704
| ^^^^^^^
38 |
39 | # this is also okay because the comprehension is within the generator scope
|
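Since `await` inside a comprehension (or in the iterable of an async generator) evaluates in the enclosing scope, these module-level expressions are now flagged; moving them into an `async def` resolves the diagnostic. A minimal self-contained sketch, with a hypothetical `produce` helper standing in for the fixture's `f()`:

```python
import asyncio


async def produce():
    # An async generator yielding awaitables.
    yield asyncio.sleep(0, result=1)
    yield asyncio.sleep(0, result=2)


async def collect() -> list[int]:
    # Inside an async function, the comprehension's `await` runs in a
    # coroutine scope, so F704 does not apply.
    return [await cor async for cor in produce()]


print(asyncio.run(collect()))  # [1, 2]
```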

View File

@@ -1,5 +1,5 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
-use ruff_python_parser::{Token, TokenKind};
+use ruff_python_ast::token::{Token, TokenKind};
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
use crate::Locator;

View File

@@ -1,5 +1,5 @@
use ruff_python_ast::StmtImportFrom;
-use ruff_python_parser::{TokenKind, Tokens};
+use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_text_size::{Ranged, TextRange};
use crate::Locator;

View File

@@ -1,10 +1,10 @@
use itertools::Itertools;
use ruff_macros::{ViolationMetadata, derive_message_formats};
+use ruff_python_ast::token::Tokens;
use ruff_python_ast::whitespace::indentation;
use ruff_python_ast::{Alias, StmtImportFrom, StmtRef};
use ruff_python_codegen::Stylist;
-use ruff_python_parser::Tokens;
use ruff_text_size::Ranged;
use crate::Locator;

View File

@@ -1,7 +1,7 @@
use std::slice::Iter;
use ruff_macros::{ViolationMetadata, derive_message_formats};
-use ruff_python_parser::{Token, TokenKind, Tokens};
+use ruff_python_ast::token::{Token, TokenKind, Tokens};
use ruff_text_size::{Ranged, TextRange};
use crate::Locator;

View File

@@ -6,11 +6,11 @@ use rustc_hash::{FxHashMap, FxHashSet};
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::any_over_expr;
use ruff_python_ast::str::{leading_quote, trailing_quote};
+use ruff_python_ast::token::TokenKind;
use ruff_python_ast::{self as ast, Expr, Keyword, StringFlags};
use ruff_python_literal::format::{
FieldName, FieldNamePart, FieldType, FormatPart, FormatString, FromTemplate,
};
-use ruff_python_parser::TokenKind;
use ruff_source_file::LineRanges;
use ruff_text_size::{Ranged, TextRange};

View File

@@ -3,12 +3,12 @@ use std::fmt::Write;
use std::str::FromStr;
use ruff_macros::{ViolationMetadata, derive_message_formats};
+use ruff_python_ast::token::TokenKind;
use ruff_python_ast::{self as ast, AnyStringFlags, Expr, StringFlags, whitespace::indentation};
use ruff_python_codegen::Stylist;
use ruff_python_literal::cformat::{
CConversionFlags, CFormatPart, CFormatPrecision, CFormatQuantity, CFormatString,
};
-use ruff_python_parser::TokenKind;
use ruff_python_stdlib::identifiers::is_identifier;
use ruff_source_file::LineRanges;
use ruff_text_size::{Ranged, TextRange};

View File

@@ -1,6 +1,6 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::Stmt;
-use ruff_python_parser::TokenKind;
+use ruff_python_ast::token::TokenKind;
use ruff_python_semantic::SemanticModel;
use ruff_source_file::LineRanges;
use ruff_text_size::{TextLen, TextRange, TextSize};

View File

@@ -1,7 +1,7 @@
use anyhow::Result;
use ruff_macros::{ViolationMetadata, derive_message_formats};
+use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_ast::{self as ast, Expr};
-use ruff_python_parser::{TokenKind, Tokens};
use ruff_python_stdlib::open_mode::OpenMode;
use ruff_text_size::{Ranged, TextSize};

View File

@@ -1,8 +1,8 @@
use std::fmt::Write as _;
use ruff_macros::{ViolationMetadata, derive_message_formats};
+use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_ast::{self as ast, Arguments, Expr, Keyword};
-use ruff_python_parser::{TokenKind, Tokens};
use ruff_text_size::{Ranged, TextRange};
use crate::Locator;

View File

@@ -305,6 +305,25 @@ mod tests {
Ok(())
}
+#[test]
+fn range_suppressions() -> Result<()> {
+    assert_diagnostics_diff!(
+        Path::new("ruff/suppressions.py"),
+        &settings::LinterSettings::for_rules(vec![
+            Rule::UnusedVariable,
+            Rule::AmbiguousVariableName,
+            Rule::UnusedNOQA,
+        ]),
+        &settings::LinterSettings::for_rules(vec![
+            Rule::UnusedVariable,
+            Rule::AmbiguousVariableName,
+            Rule::UnusedNOQA,
+        ])
+        .with_preview_mode(),
+    );
+    Ok(())
+}
#[test]
fn ruf100_0() -> Result<()> {
let diagnostics = test_path(

View File

@@ -4,8 +4,8 @@ use anyhow::Result;
use libcst_native::{LeftParen, ParenthesizedNode, RightParen};
use ruff_macros::{ViolationMetadata, derive_message_formats};
+use ruff_python_ast::token::TokenKind;
use ruff_python_ast::{self as ast, Expr, OperatorPrecedence};
-use ruff_python_parser::TokenKind;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;

View File

@@ -2,9 +2,9 @@ use std::cmp::Ordering;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::helpers::comment_indentation_after;
+use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_ast::whitespace::indentation;
use ruff_python_ast::{Stmt, StmtExpr, StmtFor, StmtIf, StmtTry, StmtWhile};
-use ruff_python_parser::{TokenKind, Tokens};
use ruff_source_file::LineRanges;
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};

View File

@@ -9,8 +9,8 @@ use std::cmp::Ordering;
use itertools::Itertools;
use ruff_python_ast as ast;
+use ruff_python_ast::token::{TokenKind, Tokens};
use ruff_python_codegen::Stylist;
-use ruff_python_parser::{TokenKind, Tokens};
use ruff_python_stdlib::str::is_cased_uppercase;
use ruff_python_trivia::{SimpleTokenKind, first_non_trivia_token, leading_indentation};
use ruff_source_file::LineRanges;

View File

@@ -1,7 +1,7 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::PythonVersion;
+use ruff_python_ast::token::TokenKind;
use ruff_python_ast::{Expr, ExprCall, parenthesize::parenthesized_range};
-use ruff_python_parser::TokenKind;
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;

View File

@@ -0,0 +1,168 @@
---
source: crates/ruff_linter/src/rules/ruff/mod.rs
---
--- Linter settings ---
-linter.preview = disabled
+linter.preview = enabled
--- Summary ---
Removed: 9
Added: 1
--- Removed ---
E741 Ambiguous variable name: `I`
--> suppressions.py:4:5
|
2 | # These should both be ignored by the range suppression.
3 | # ruff: disable[E741, F841]
4 | I = 1
| ^
5 | # ruff: enable[E741, F841]
|
F841 [*] Local variable `I` is assigned to but never used
--> suppressions.py:4:5
|
2 | # These should both be ignored by the range suppression.
3 | # ruff: disable[E741, F841]
4 | I = 1
| ^
5 | # ruff: enable[E741, F841]
|
help: Remove assignment to unused variable `I`
1 | def f():
2 | # These should both be ignored by the range suppression.
3 | # ruff: disable[E741, F841]
- I = 1
4 + pass
5 | # ruff: enable[E741, F841]
6 |
7 |
note: This is an unsafe fix and may change runtime behavior
E741 Ambiguous variable name: `I`
--> suppressions.py:12:5
|
10 | # Should also generate an "unmatched suppression" warning.
11 | # ruff:disable[E741,F841]
12 | I = 1
| ^
|
F841 [*] Local variable `I` is assigned to but never used
--> suppressions.py:12:5
|
10 | # Should also generate an "unmatched suppression" warning.
11 | # ruff:disable[E741,F841]
12 | I = 1
| ^
|
help: Remove assignment to unused variable `I`
9 | # These should both be ignored by the implicit range suppression.
10 | # Should also generate an "unmatched suppression" warning.
11 | # ruff:disable[E741,F841]
- I = 1
12 + pass
13 |
14 |
15 | def f():
note: This is an unsafe fix and may change runtime behavior
E741 Ambiguous variable name: `I`
--> suppressions.py:26:5
|
24 | # the other logged to the user.
25 | # ruff: disable[E741]
26 | I = 1
| ^
27 | # ruff: enable[E741]
|
E741 Ambiguous variable name: `l`
--> suppressions.py:35:5
|
33 | # middle line should be completely silenced.
34 | # ruff: disable[E741]
35 | l = 0
| ^
36 | # ruff: disable[F841]
37 | O = 1
|
E741 Ambiguous variable name: `O`
--> suppressions.py:37:5
|
35 | l = 0
36 | # ruff: disable[F841]
37 | O = 1
| ^
38 | # ruff: enable[E741]
39 | I = 2
|
F841 [*] Local variable `O` is assigned to but never used
--> suppressions.py:37:5
|
35 | l = 0
36 | # ruff: disable[F841]
37 | O = 1
| ^
38 | # ruff: enable[E741]
39 | I = 2
|
help: Remove assignment to unused variable `O`
34 | # ruff: disable[E741]
35 | l = 0
36 | # ruff: disable[F841]
- O = 1
37 | # ruff: enable[E741]
38 | I = 2
39 | # ruff: enable[F841]
note: This is an unsafe fix and may change runtime behavior
F841 [*] Local variable `I` is assigned to but never used
--> suppressions.py:39:5
|
37 | O = 1
38 | # ruff: enable[E741]
39 | I = 2
| ^
40 | # ruff: enable[F841]
|
help: Remove assignment to unused variable `I`
36 | # ruff: disable[F841]
37 | O = 1
38 | # ruff: enable[E741]
- I = 2
39 | # ruff: enable[F841]
40 |
41 |
note: This is an unsafe fix and may change runtime behavior
--- Added ---
RUF100 [*] Unused `noqa` directive (unused: `E741`, `F841`)
--> suppressions.py:55:12
|
53 | # and an unusued noqa diagnostic should be logged.
54 | # ruff:disable[E741,F841]
55 | I = 1 # noqa: E741,F841
| ^^^^^^^^^^^^^^^^^
56 | # ruff:enable[E741,F841]
|
help: Remove unused `noqa` directive
52 | # These should both be ignored by the range suppression,
53 | # and an unusued noqa diagnostic should be logged.
54 | # ruff:disable[E741,F841]
- I = 1 # noqa: E741,F841
55 + I = 1
56 | # ruff:enable[E741,F841]
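Taken together, the snapshot above exercises the preview-only range-suppression syntax: `# ruff: disable[...]` silences the listed rules until a matching `# ruff: enable[...]`, an unmatched `disable` implicitly extends to the end of the file (and is warned about), and a `noqa` comment made redundant by an enclosing suppression range is itself reported via RUF100. A minimal sketch of the syntax, using the same rules as the fixture:

```python
def f():
    # ruff: disable[E741, F841]
    I = 1  # E741 (ambiguous name) and F841 (unused variable) are suppressed
    # ruff: enable[E741, F841]

    l = 0  # outside the suppressed range, so E741 fires here
```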

View File

@@ -465,6 +465,12 @@ impl LinterSettings {
self
}
+#[must_use]
+pub fn with_preview_mode(mut self) -> Self {
+    self.preview = PreviewMode::Enabled;
+    self
+}
/// Resolve the [`TargetVersion`] to use for linting.
///
/// This method respects the per-file version overrides in

View File

@@ -17,4 +17,6 @@ PLE1142 `await` should be used within an async function
4 | # Top-level await
5 | await 1
| ^^^^^^^
6 |
7 | ([await c for c in cor] async for cor in func()) # ok
|

View File

@@ -0,0 +1,55 @@
---
source: crates/ruff_linter/src/linter.rs
---
B901 Using `yield` and `return {value}` in a generator function can lead to confusing behavior
--> resources/test/fixtures/syntax_errors/return_in_generator.py:3:5
|
1 | async def gen():
2 | yield 1
3 | return 42
| ^^^^^^^^^
4 |
5 | def gen(): # B901 but not a syntax error - not an async generator
|
B901 Using `yield` and `return {value}` in a generator function can lead to confusing behavior
--> resources/test/fixtures/syntax_errors/return_in_generator.py:7:5
|
5 | def gen(): # B901 but not a syntax error - not an async generator
6 | yield 1
7 | return 42
| ^^^^^^^^^
8 |
9 | async def gen(): # ok - no value in return
|
B901 Using `yield` and `return {value}` in a generator function can lead to confusing behavior
--> resources/test/fixtures/syntax_errors/return_in_generator.py:15:5
|
13 | async def gen():
14 | yield 1
15 | return foo()
| ^^^^^^^^^^^^
16 |
17 | async def gen():
|
B901 Using `yield` and `return {value}` in a generator function can lead to confusing behavior
--> resources/test/fixtures/syntax_errors/return_in_generator.py:19:5
|
17 | async def gen():
18 | yield 1
19 | return [1, 2, 3]
| ^^^^^^^^^^^^^^^^
20 |
21 | async def gen():
|
B901 Using `yield` and `return {value}` in a generator function can lead to confusing behavior
--> resources/test/fixtures/syntax_errors/return_in_generator.py:24:5
|
22 | if True:
23 | yield 1
24 | return 10
| ^^^^^^^^^
|

File diff suppressed because it is too large.

View File

@@ -32,6 +32,7 @@ use crate::packaging::detect_package_root;
use crate::settings::types::UnsafeFixes;
use crate::settings::{LinterSettings, flags};
use crate::source_kind::SourceKind;
+use crate::suppression::Suppressions;
use crate::{Applicability, FixAvailability};
use crate::{Locator, directives};
@@ -234,6 +235,7 @@ pub(crate) fn test_contents<'a>(
&locator,
&indexer,
);
+let suppressions = Suppressions::from_tokens(settings, locator.contents(), parsed.tokens());
let messages = check_path(
path,
path.parent()
@@ -249,6 +251,7 @@ pub(crate) fn test_contents<'a>(
source_type,
&parsed,
target_version,
+&suppressions,
);
let source_has_errors = parsed.has_invalid_syntax();
@@ -299,6 +302,8 @@ pub(crate) fn test_contents<'a>(
&indexer,
);
+let suppressions =
+    Suppressions::from_tokens(settings, locator.contents(), parsed.tokens());
let fixed_messages = check_path(
path,
None,
@@ -312,6 +317,7 @@ pub(crate) fn test_contents<'a>(
source_type,
&parsed,
target_version,
+&suppressions,
);
if parsed.has_invalid_syntax() && !source_has_errors {

View File

@@ -1,17 +1,46 @@
-use std::sync::{LazyLock, Mutex};
+use std::cell::RefCell;
use get_size2::{GetSize, StandardTracker};
use ordermap::{OrderMap, OrderSet};
+thread_local! {
+    pub static TRACKER: RefCell<Option<StandardTracker>> = const { RefCell::new(None) };
+}
+struct TrackerGuard(Option<StandardTracker>);
+impl Drop for TrackerGuard {
+    fn drop(&mut self) {
+        TRACKER.set(self.0.take());
+    }
+}
+pub fn attach_tracker<R>(tracker: StandardTracker, f: impl FnOnce() -> R) -> R {
+    let prev = TRACKER.replace(Some(tracker));
+    let _guard = TrackerGuard(prev);
+    f()
+}
+fn with_tracker<F, R>(f: F) -> R
+where
+    F: FnOnce(Option<&mut StandardTracker>) -> R,
+{
+    TRACKER.with(|tracker| {
+        let mut tracker = tracker.borrow_mut();
+        f(tracker.as_mut())
+    })
+}
/// Returns the memory usage of the provided object, using a global tracker to avoid
/// double-counting shared objects.
pub fn heap_size<T: GetSize>(value: &T) -> usize {
-    static TRACKER: LazyLock<Mutex<StandardTracker>> =
-        LazyLock::new(|| Mutex::new(StandardTracker::new()));
-    value
-        .get_heap_size_with_tracker(&mut *TRACKER.lock().unwrap())
-        .0
+    with_tracker(|tracker| {
+        if let Some(tracker) = tracker {
+            value.get_heap_size_with_tracker(tracker).0
+        } else {
+            value.get_heap_size()
+        }
+    })
}
/// An implementation of [`GetSize::get_heap_size`] for [`OrderSet`].

View File

@@ -29,6 +29,7 @@ pub mod statement_visitor;
pub mod stmt_if;
pub mod str;
pub mod str_prefix;
+pub mod token;
pub mod traversal;
pub mod types;
pub mod visitor;

Some files were not shown because too many files have changed in this diff.