Compare commits

...

39 Commits

Author SHA1 Message Date
Micha Reiser
56a3978479 [ty] Use OrderedSet/Map in more places 2026-01-04 19:54:22 +01:00
Alex Waygood
e1439beab2 [ty] Use UnionType helper methods more consistently (#22357) 2026-01-03 14:19:06 +00:00
Felix Scherz
fd86e699b5 [ty] narrow TypedDict unions with not in (#22349)
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2026-01-03 13:12:57 +00:00
Alex Waygood
d0f841bff2 Add help: subdiagnostics for several Ruff rules that can sometimes appear to disagree with ty (#22331) 2026-01-02 22:10:39 +00:00
Alex Waygood
74978cfff2 Error on unused ty: ignore comments when dogfooding ty on our own scripts (#22347) 2026-01-02 20:27:09 +00:00
Alex Waygood
10a417aaf6 [ty] Specify heap_size for SynthesizedTypedDictType (#22345) 2026-01-02 20:09:35 +00:00
Matthew Mckee
a2e0ff57c3 Run cargo sort (#22310) 2026-01-02 19:58:15 +00:00
Nikolas Hearp
0804030ee9 [pylint] Ignore identical members (PLR1714) (#22220)
## Summary

This PR closes #21692. `PLR1714` no longer flags a comparison chain whose
members are all identical: the rule iterates through the equality comparisons
and, if every compared value is equal, it does not flag.
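A rough Python sketch of the new behavior (a hypothetical helper, not Ruff's actual Rust implementation): a chain like `x == 1 or x == 1` compares only identical members and is skipped, while `x == 1 or x == 2` is still flagged.

```python
def should_flag(compared_values: list[object]) -> bool:
    """Flag only when the members being compared are not all identical."""
    # Use repr() so unhashable or mixed-type members can still be compared.
    return len(set(map(repr, compared_values))) > 1

print(should_flag([1, 1]))  # identical members: not flagged
print(should_flag([1, 2]))  # distinct members: flagged
```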

## Test Plan

Additional tests were added with identical members.
2026-01-02 12:56:17 -05:00
Alex Waygood
26230b1ed3 [ty] Use IntersectionType::from_elements more (#22329) 2026-01-01 15:01:00 +00:00
github-actions[bot]
295ae836fd [ty] Sync vendored typeshed stubs (#22324)
Co-authored-by: typeshedbot <>
2026-01-01 02:59:55 +00:00
Alex Waygood
9677364847 Bump docstring-adder pin (#22323) 2026-01-01 02:44:24 +00:00
github-actions[bot]
8e45bac3c1 [ty] Sync vendored typeshed stubs (#22321)
Co-authored-by: typeshedbot <>
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2026-01-01 01:29:12 +00:00
Alex Waygood
7366a9e951 Bump docstring-adder pin (#22319) 2025-12-31 22:37:53 +00:00
Brent Westbrook
15aa74206e [pylint] Improve diagnostic range for PLC0206 (#22312)
Summary
--

This PR fixes #14900 by:

- Restricting the diagnostic range from the whole `for` loop to only the
`target in iter` part
- Adding secondary annotations to each use of the `dict[key]` accesses
- Adding a `fix_title` suggesting to use `for key in dict.items()`

I thought this approach sounded slightly nicer than the alternative of
renaming the rule to focus on each indexing operation mentioned in
https://github.com/astral-sh/ruff/issues/14900#issuecomment-2543923625,
but I don't feel too strongly. This was easy to implement with our new
diagnostic infrastructure too.

This produces an example annotation like this:

```
PLC0206 Extracting value from dictionary without calling `.items()`
  --> dict_index_missing_items.py:59:5
   |
58 | # A case with multiple uses of the value to show off the secondary annotations
59 | for instrument in ORCHESTRA:
   |     ^^^^^^^^^^^^^^^^^^^^^^^
60 |     data = json.dumps(
61 |         {
62 |             "instrument": instrument,
63 |             "section": ORCHESTRA[instrument],
   |                        ---------------------
64 |         }
65 |     )
66 |
67 |     print(f"saving data for {instrument} in {ORCHESTRA[instrument]}")
   |                                              ---------------------
68 |
69 |     with open(f"{instrument}/{ORCHESTRA[instrument]}.txt", "w") as f:
   |                               ---------------------
70 |         f.write(data)
   |
help: Use `for instrument, value in ORCHESTRA.items()` instead
```

which I think is a big improvement over:

```
PLC0206 Extracting value from dictionary without calling `.items()`
  --> dict_index_missing_items.py:59:1
   |
58 |   # A case with multiple uses of the value to show off the secondary annotations
59 | / for instrument in ORCHESTRA:
60 | |     data = json.dumps(
61 | |         {
62 | |             "instrument": instrument,
63 | |             "section": ORCHESTRA[instrument],
64 | |         }
65 | |     )
66 | |
67 | |     print(f"saving data for {instrument} in {ORCHESTRA[instrument]}")
68 | |
69 | |     with open(f"{instrument}/{ORCHESTRA[instrument]}.txt", "w") as f:
70 | |         f.write(data)
   | |_____________________^
   |
```

The secondary annotation feels a bit bare without a message, but I thought it
might be too busy to include one. Something like `value extracted here` or
`indexed here` might work if we do want to include a brief message.

To avoid collecting a `Vec` of annotation ranges, I added a `&Checker` to the
rule's visitor to emit diagnostics as we go instead of at the end.

Test Plan
--

Existing tests, plus a new case showing off multiple secondary annotations
2025-12-31 13:54:58 -05:00
ValdonVitijaa
77c2f4c6cb [flake8-unused-arguments] Mark **kwargs in TypeVar as used (ARG001) (#22214)
## Summary

Fixes false positive in ARG001 when `**kwargs` is passed to
`typing.TypeVar`

Closes #22178

When `**kwargs` is used in a `typing.TypeVar` call, the checker was not
recognizing it as a usage, leading to false positive "unused function
argument" warnings.

### Root Cause

In the AST, keyword arguments are represented by the `Keyword` struct
with an `arg` field of type `Option<Identifier>`:
- Named keywords like `bound=int` have `arg = Some("bound")`
- Dictionary unpacking like `**kwargs` has `arg = None`

The existing code only handled the `Some(id)` case, never visiting the
expression when `arg` was `None`, so `**kwargs` was never marked as
used.
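The same distinction exists in Python's own `ast` module, which can be used to see the shape the checker receives: a named keyword has `arg` set, while `**kwargs` unpacking has `arg = None` (mirroring the `Option<Identifier>` described above).

```python
import ast

# Parse a call mixing a named keyword with **kwargs unpacking.
call = ast.parse("typing.TypeVar(*args, bound=int, **kwargs)").body[0].value

# For each keyword, record its name (None for **unpacking) and value node type.
kinds = [(kw.arg, type(kw.value).__name__) for kw in call.keywords]
print(kinds)  # [('bound', 'Name'), (None, 'Name')]
```

The fix visits the value expression in the `arg is None` branch too, so `kwargs` is marked as used.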

### Changes

Added an `else` branch to handle `**kwargs` unpacking by calling
`visit_non_type_definition(value)` when `arg` is `None`. This ensures
the `kwargs` variable is properly visited and marked as used by the
semantic model.

## Test Plan

Tested with the following code:

```python
import typing

def f(
    *args: object,
    default: object = None,
    **kwargs: object,
) -> None:
    typing.TypeVar(*args, **kwargs)
```

Before:

`ARG001 Unused function argument: kwargs`

After:

`All checks passed!`

Run the example with the following command (from the root of the ruff
repository, adjusting the path to the file that contains the code example):

`cargo run -p ruff -- check /path/to/file.py --isolated --select=ARG
--no-cache`
2025-12-31 11:07:56 -05:00
Rob Hand
758926eecd [ty] Add blurb for newer crates to `ty/CONTRIBUTING.md` (#22309)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-31 07:58:33 +00:00
Matthew Mckee
6433b88ffa Clean up pre-commit config (#22311) 2025-12-31 08:36:20 +01:00
Charlie Marsh
f619783066 [ty] Treat __setattr__ as fallback-only (#22014)
## Summary

Closes https://github.com/astral-sh/ty/issues/1460.
2025-12-30 19:01:10 -05:00
Ibraheem Ahmed
ff05428ce6 [ty] Subtyping for bidirectional inference (#21930)
## Summary

Supersedes https://github.com/astral-sh/ruff/pull/21747. This version
uses the constraint solver directly, which means we should benefit from
constraint solver improvements for free.

Resolves https://github.com/astral-sh/ty/issues/1576.
2025-12-30 17:03:20 -05:00
Matthew Mckee
4f2529f353 [ty] Fix typo in cli docs for respect_ignore_files arg (#22308) 2025-12-30 21:38:50 +01:00
Rob Hand
6f9ea73ac9 [ty] Remove TY_MAX_PARALLELISM as conformance runs no longer panic (#22307) 2025-12-30 20:42:59 +01:00
Charlie Marsh
12dd27da52 [ty] Support narrowing for tuple matches with literal elements (#22303)
## Summary

See:
https://github.com/astral-sh/ruff/pull/22299#issuecomment-3699913849.
2025-12-30 13:45:07 -05:00
Alex Waygood
e0e1e9535e [ty] Convert several comments in ty_extensions.pyi to docstrings (#22305) 2025-12-30 17:41:43 +00:00
github-actions[bot]
7173c7ea3f [ty] Sync vendored typeshed stubs (#22302)
Co-authored-by: typeshedbot <>
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2025-12-30 17:24:13 +00:00
Alex Waygood
5013752c6c Bump docstring-adder pin (#22301) 2025-12-30 17:03:54 +00:00
Kevin Yang
b2b9d91859 [airflow] Passing positional argument into airflow.lineage.hook.HookLineageCollector.create_asset is not allowed (AIR303) (#22046)
## Summary

This is a follow-up PR to https://github.com/astral-sh/ruff/pull/21096

A new AIR303 check is added for a function-signature change in Airflow 3.0:
it flags a positional argument passed to
`airflow.lineage.hook.HookLineageCollector.create_asset`. Since this method
now accepts keyword arguments only, passing a positional argument raises an
error at runtime. The check looks for an argument at positional index 0.
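The runtime failure mode can be sketched with a keyword-only stand-in (a hypothetical signature for illustration; the real method lives in `airflow.lineage.hook.HookLineageCollector`):

```python
def create_asset(*, name: str, uri: str) -> dict:
    # The bare `*` makes every parameter keyword-only, as in Airflow 3.0.
    return {"name": name, "uri": uri}

create_asset(name="a", uri="s3://b")   # OK: keyword arguments
try:
    create_asset("a", "s3://b")        # positional: rejected at runtime
except TypeError as exc:
    print("rejected:", exc)
```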

## Test Plan

A new test file is added to the AIR303 fixtures, and the snapshot tests are
updated accordingly.

<img width="1444" height="513" alt="Screenshot from 2025-12-17 20-54-48"
src="https://github.com/user-attachments/assets/bc235195-e986-4743-9bf7-bba65805fb87"
/>

<img width="981" height="433" alt="Screenshot from 2025-12-17 21-34-29"
src="https://github.com/user-attachments/assets/492db71f-58f2-40ba-ad2f-f74852fa5a6b"
/>
2025-12-30 11:39:08 -05:00
Brent Westbrook
c483b59ddd [ruff] Add non-empty-init-module (RUF067) (#22143)
Summary
--

This PR adds a new rule, `non-empty-init-module`, which restricts the kind of
code that can be included in an `__init__.py` file. By default, docstrings,
imports, and assignments to `__all__` are allowed. When the new configuration
option `lint.ruff.strictly-empty-init-modules` is enabled, no code at all is
allowed.

This closes #9848, where these two variants correspond to different rules in
the
[`flake8-empty-init-modules`](https://github.com/samueljsb/flake8-empty-init-modules/)
linter. The upstream rules are EIM001, which bans all code, and EIM002, which
bans non-import/docstring/`__all__` code. Since we discussed folding these
into one rule on [Discord], I just added the rule to the `RUF` group instead
of adding a new `EIM` plugin.

I'm not really sure we need to flag docstrings even when the strict setting is
enabled, but I just followed upstream for now. Similarly, as I noted in a TODO
comment, we could also allow more statements involving `__all__`, such as
`__all__.append(...)` or `__all__.extend(...)`. The current version only
allows assignments, like upstream, as well as annotated and augmented
assignments, unlike upstream.

I think when we discussed this previously, we considered flagging the module
itself as containing code, but for now I followed the upstream implementation
of flagging each statement in the module that breaks the rule (actually the
upstream linter flags each _line_, including comments). This will obviously be
a bit noisier, emitting many diagnostics for the same module. But this also
seems preferable because it flags every statement that needs to be fixed up
front, instead of only emitting one diagnostic for the whole file that
persists as you keep removing more lines. It was also easy to implement in
`analyze::statement` without a separate visitor.

The first commit adds the rule and baseline tests; the second commit adds the
option and a diff test showing the additional diagnostics when the setting is
enabled.

I noticed a small (~2%) performance regression on our largest benchmark, so I
also added a cached `Checker::in_init_module` field and method instead of
using the `Checker::path` method. This was almost the only reason for the
`Checker::path` field at all, but there's one remaining reference in a
`warn_user!`
[call](https://github.com/astral-sh/ruff/blob/main/crates/ruff_linter/src/checkers/ast/analyze/definitions.rs#L188).

Test Plan
--

New tests adapted from the upstream linter

## Ecosystem Report

I've spot-checked the ecosystem report, and the results look "correct."
This is obviously a very noisy rule if you do include code in
`__init__.py` files. We could make it less noisy by adding more
exceptions (e.g. allowing `if TYPE_CHECKING` blocks, allowing
`__getattr__` functions, allowing imports from `importlib` assignments),
but I'm sort of inclined just to start simple and see what users need.

[Discord]:
https://discord.com/channels/1039017663004942429/1082324250112823306/1440086001035771985

---------

Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-30 11:32:10 -05:00
Charlie Marsh
57218753be [ty] Narrow TypedDict literal access in match statements (#22299)
## Summary

Closes https://github.com/astral-sh/ty/issues/2279.
2025-12-30 11:29:09 -05:00
Brent Westbrook
2ada8b6634 Document options for more rules (#22295)
Summary
--

This is a follow-up to #22198 documenting more rule options I found while
going through all of our rules.

The second commit renames the internal
`flake8_gettext::Settings::functions_names` field to `function_names` to match
the external configuration option. I guess this is technically breaking
because it's exposed to users via `--show-settings`, but I don't think we
consider that part of our stable API. I can definitely revert that if needed,
though.

The other changes are just like #22198, adding new `## Options` sections to
rules to document the settings they use. I missed these in the previous PR
because they were used outside the rule implementations themselves. Most of
these settings are checked where the rules' implementation functions are
called instead.

Oh, the last commit also updates the removal date for `typing.ByteString`,
which got pushed back in the 3.14 release. I snuck that in today since I never
opened this PR last week.

I also fixed one reference link in RUF041.

Test Plan
--

Docs checks in CI
2025-12-30 08:44:11 -05:00
RasmusNygren
0edd97dd41 [ty] Add autocomplete suggestions for class arguments (#22110) 2025-12-30 13:10:56 +00:00
renovate[bot]
f8f4ca8fbc Update dependency react-resizable-panels to v4 (#22279)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-30 11:04:25 +01:00
renovate[bot]
3d35dbd334 Update actions/upload-artifact digest to b7c566a (#22250)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-30 09:07:58 +00:00
renovate[bot]
a652b411b8 Update NPM Development dependencies (#22289)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-30 10:04:24 +01:00
RasmusNygren
4dac3d105d [ty] Add skip_dunders option to CompletionTestBuilder (#22293)
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-12-30 08:10:14 +00:00
Shunsuke Shibayama
77ad107617 [ty] increase the max number of diagnostics for sympy in ty_walltime benchmark (#22296) 2025-12-30 13:20:19 +09:00
Charlie Marsh
b925ae5061 [ty] Avoid including property in subclasses properties (#22088)
## Summary

As-is, the following rejects `return self.value` in `def other` in the
subclass
([link](https://play.ty.dev/f55b47b2-313e-45d1-ba45-fde410bed32e))
because `self.value` is resolving to `Unknown | int | float | property`:

```python
class Base:
    _value: float = 0.0

    @property
    def value(self) -> float:
        return self._value

    @value.setter
    def value(self, v: float) -> None:
        self._value = v

    @property
    def other(self) -> float:
        return self.value

    @other.setter
    def other(self, v: float) -> None:
        self.value = v

class Derived(Base):
    @property
    def other(self) -> float:
        return self.value

    @other.setter
    def other(self, v: float) -> None:
        reveal_type(self.value)  # revealed: int | float
        self.value = v
```

I believe the root cause is that we're not excluding properties when
searching for class methods, so we're treating the `other` setter as a
classmethod. I don't fully understand how that ends up materializing as
`| property` on the union though.
2025-12-30 03:28:03 +00:00
Charlie Marsh
9333f15433 [ty] Fix match exhaustiveness for enum | None unions (#22290)
## Summary

If we match on a `TestEnum | None`, then when adding a case like
`~Literal[TestEnum.FOO]` (i.e., after `if value == TestEnum.FOO:
return`), we'd distribute `Literal[TestEnum.BAR]` on the entire builder,
creating `None & Literal[TestEnum.BAR]` which simplified to `Never`.
Instead, we should only expand to the remaining members for pieces of
the intersection that contain the enum.

Now, `(TestEnum | None) & ~Literal[TestEnum.FOO] &
~Literal[TestEnum.BAR]` correctly simplifies to `None` instead of
`Never`.
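The corrected simplification can be modeled with plain sets, treating the union as its set of possible runtime values:

```python
from enum import Enum

class TestEnum(Enum):
    FOO = 1
    BAR = 2

# Model the union `TestEnum | None` as a set of possible values and subtract
# the matched literals, mimicking the simplification described above.
domain = {TestEnum.FOO, TestEnum.BAR, None}
remaining = domain - {TestEnum.FOO} - {TestEnum.BAR}
print(remaining)  # {None}: `None` remains, rather than `Never` (the empty set)
```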

Closes https://github.com/astral-sh/ty/issues/2260.
2025-12-29 22:19:28 -05:00
Shunsuke Shibayama
c429ef8407 [ty] don't expand type aliases via type mappings unless necessary (#22241)
## Summary

`apply_type_mapping` always expands type aliases and operates on the
resulting types, which can lead to cluttered results due to excessive
type alias expansion in places where it is not actually needed.

Specifically, type aliases are expanded when displaying method
signatures, because we use `TypeMapping::BindSelf` to get the method
signature.

```python
type Scalar = int | float
type Array1d = list[Scalar] | tuple[Scalar]

def f(x: Scalar | Array1d) -> None: pass
reveal_type(f)  # revealed: def f(x: Scalar | Array1d) -> None

class Foo:
    def f(self, x: Scalar | Array1d) -> None: pass
# should be `bound method Foo.f(x: Scalar | Array1d) -> None`
reveal_type(Foo().f)  # revealed: bound method Foo.f(x: int | float | list[int | float] | tuple[int | float]) -> None
```

In this PR, when a type mapping is applied to a type alias, the expansion
result without the mapping is compared with the expansion result after the
mapping; if the two are equivalent, the expansion is deemed redundant and
canceled.
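A toy model of that redundancy check (hypothetical names, not ty's real API): expand the alias with and without the mapping, and keep the unexpanded alias in the displayed signature when the mapping changed nothing.

```python
def displayed(alias: str, expanded: str, expanded_after_mapping: str) -> str:
    # If applying the mapping produced an equivalent type, the expansion was
    # redundant: display the alias name instead of the expanded union.
    if expanded_after_mapping == expanded:
        return alias
    return expanded_after_mapping

print(displayed("Scalar", "int | float", "int | float"))  # Scalar
print(displayed("Scalar", "int | float", "str | float"))  # str | float
```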

## Test Plan

mdtest updated
2025-12-29 19:02:56 -08:00
Eric Mark Martin
8716b4e230 [ty] implement typing.TypeGuard (#20974)
## Summary

Resolves astral-sh/ty#117 and astral-sh/ty#1569

Implement `typing.TypeGuard`. A successful type guard [overrides anything
previously known about the checked
value](https://typing.python.org/en/latest/spec/narrowing.html#typeguard):

> When a conditional statement includes a call to a user-defined type
guard function, and that function returns true, the expression passed as
the first positional argument to the type guard function should be
assumed by a static type checker to take on the type specified in the
TypeGuard return type, unless and until it is further narrowed within
the conditional code block.

Because of this, we have to substantially rework the constraints system. In
particular, constraints are now represented in disjunctive normal form (DNF),
where each term includes a regular constraint and one or more disjuncts with a
typeguard constraint. Some test cases (including some with more complex
boolean logic) are added to `type_guards.md`.


## Test Plan

- update existing tests
- add new tests for more complex boolean logic with `TypeGuard`
- add new tests for `TypeGuard` variance

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2025-12-29 17:54:17 -08:00
235 changed files with 8365 additions and 2605 deletions

View File

@@ -794,7 +794,7 @@ jobs:
echo '```console' > "$GITHUB_STEP_SUMMARY"
# Enable color output for pre-commit and remove it for the summary
# Use --hook-stage=manual to enable slower pre-commit hooks that are skipped by default
SKIP=cargo-fmt,clippy,dev-generate-all uvx --python="${PYTHON_VERSION}" pre-commit run --all-files --show-diff-on-failure --color=always --hook-stage=manual | \
SKIP=cargo-fmt uvx --python="${PYTHON_VERSION}" pre-commit run --all-files --show-diff-on-failure --color=always --hook-stage=manual | \
tee >(sed -E 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})*)?[mGK]//g' >> "$GITHUB_STEP_SUMMARY") >&1
exit_code="${PIPESTATUS[0]}"
echo '```' >> "$GITHUB_STEP_SUMMARY"

View File

@@ -70,7 +70,7 @@ jobs:
shell: bash
run: "curl --proto '=https' --tlsv1.2 -LsSf https://github.com/axodotdev/cargo-dist/releases/download/v0.30.2/cargo-dist-installer.sh | sh"
- name: Cache dist
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f
with:
name: cargo-dist-cache
path: ~/.cargo/bin/dist
@@ -86,7 +86,7 @@ jobs:
cat plan-dist-manifest.json
echo "manifest=$(jq -c "." plan-dist-manifest.json)" >> "$GITHUB_OUTPUT"
- name: "Upload dist-manifest.json"
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f
with:
name: artifacts-plan-dist-manifest
path: plan-dist-manifest.json
@@ -153,7 +153,7 @@ jobs:
cp dist-manifest.json "$BUILD_MANIFEST_NAME"
- name: "Upload artifacts"
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f
with:
name: artifacts-build-global
path: |
@@ -200,7 +200,7 @@ jobs:
cat dist-manifest.json
echo "manifest=$(jq -c "." dist-manifest.json)" >> "$GITHUB_OUTPUT"
- name: "Upload dist-manifest.json"
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f
with:
# Overwrite the previous copy
name: artifacts-dist-manifest

View File

@@ -59,9 +59,6 @@ jobs:
- name: Compute diagnostic diff
shell: bash
env:
# TODO: Remove this once we fixed the remaining panics in the conformance suite.
TY_MAX_PARALLELISM: 1
run: |
RUFF_DIR="$GITHUB_WORKSPACE/ruff"

View File

@@ -135,6 +135,3 @@ repos:
rev: v0.11.0.1
hooks:
- id: shellcheck
ci:
skip: [cargo-fmt, dev-generate-all]

View File

@@ -58,8 +58,8 @@ anstream = { version = "0.6.18" }
anstyle = { version = "1.0.10" }
anyhow = { version = "1.0.80" }
arc-swap = { version = "1.7.1" }
assert_fs = { version = "1.1.0" }
argfile = { version = "0.2.0" }
assert_fs = { version = "1.1.0" }
bincode = { version = "2.0.0" }
bitflags = { version = "2.5.0" }
bitvec = { version = "1.0.1", default-features = false, features = [
@@ -71,30 +71,30 @@ camino = { version = "1.1.7" }
clap = { version = "4.5.3", features = ["derive"] }
clap_complete_command = { version = "0.6.0" }
clearscreen = { version = "4.0.0" }
csv = { version = "1.3.1" }
divan = { package = "codspeed-divan-compat", version = "4.0.4" }
codspeed-criterion-compat = { version = "4.0.4", default-features = false }
colored = { version = "3.0.0" }
compact_str = "0.9.0"
console_error_panic_hook = { version = "0.1.7" }
console_log = { version = "1.0.0" }
countme = { version = "3.0.1" }
compact_str = "0.9.0"
criterion = { version = "0.8.0", default-features = false }
crossbeam = { version = "0.8.4" }
csv = { version = "1.3.1" }
dashmap = { version = "6.0.1" }
datatest-stable = { version = "0.3.3" }
dunce = { version = "1.0.5" }
divan = { package = "codspeed-divan-compat", version = "4.0.4" }
drop_bomb = { version = "0.1.5" }
dunce = { version = "1.0.5" }
etcetera = { version = "0.11.0" }
fern = { version = "0.7.0" }
filetime = { version = "0.2.23" }
getrandom = { version = "0.3.1" }
get-size2 = { version = "0.7.3", features = [
"derive",
"smallvec",
"hashbrown",
"compact-str",
] }
getrandom = { version = "0.3.1" }
glob = { version = "0.3.1" }
globset = { version = "0.4.14" }
globwalk = { version = "0.9.1" }
@@ -116,8 +116,8 @@ is-macro = { version = "0.3.5" }
is-wsl = { version = "0.4.0" }
itertools = { version = "0.14.0" }
jiff = { version = "0.2.0" }
js-sys = { version = "0.3.69" }
jod-thread = { version = "1.0.0" }
js-sys = { version = "0.3.69" }
libc = { version = "0.2.153" }
libcst = { version = "1.8.4", default-features = false }
log = { version = "0.4.17" }
@@ -138,9 +138,9 @@ pep440_rs = { version = "0.7.1" }
pretty_assertions = "1.3.0"
proc-macro2 = { version = "1.0.79" }
pyproject-toml = { version = "0.13.4" }
quickcheck = { version = "1.0.3", default-features = false}
quickcheck_macros = { version = "1.0.0" }
quick-junit = { version = "0.5.0" }
quickcheck = { version = "1.0.3", default-features = false }
quickcheck_macros = { version = "1.0.0" }
quote = { version = "1.0.23" }
rand = { version = "0.9.0" }
rayon = { version = "1.10.0" }
@@ -197,9 +197,9 @@ tryfn = { version = "0.2.1" }
typed-arena = { version = "2.0.2" }
unic-ucd-category = { version = "0.9" }
unicode-ident = { version = "1.0.12" }
unicode-normalization = { version = "0.1.23" }
unicode-width = { version = "0.2.0" }
unicode_names2 = { version = "1.2.2" }
unicode-normalization = { version = "0.1.23" }
url = { version = "2.5.0" }
uuid = { version = "1.6.1", features = ["v4", "fast-rng", "macro-diagnostics"] }
walkdir = { version = "2.3.2" }
@@ -209,8 +209,13 @@ wild = { version = "2" }
zip = { version = "0.6.6", default-features = false }
[workspace.metadata.cargo-shear]
ignored = ["getrandom", "ruff_options_metadata", "uuid", "get-size2", "ty_completion_eval"]
ignored = [
"getrandom",
"ruff_options_metadata",
"uuid",
"get-size2",
"ty_completion_eval",
]
[workspace.lints.rust]
unsafe_code = "warn"
@@ -270,17 +275,10 @@ if_not_else = "allow"
# Diagnostics are not actionable: Enable once https://github.com/rust-lang/rust-clippy/issues/13774 is resolved.
large_stack_arrays = "allow"
[profile.release]
lto = "fat"
codegen-units = 16
# Profile to build a minimally sized binary for ruff/ty
[profile.minimal-size]
inherits = "release"
opt-level = "z"
codegen-units = 1
# Some crates don't change as much but benefit more from
# more expensive optimization passes, so we selectively
# decrease codegen-units in some cases.
@@ -291,6 +289,12 @@ codegen-units = 1
[profile.release.package.salsa]
codegen-units = 1
# Profile to build a minimally sized binary for ruff/ty
[profile.minimal-size]
inherits = "release"
opt-level = "z"
codegen-units = 1
[profile.dev.package.insta]
opt-level = 3

View File

@@ -12,6 +12,13 @@ license = { workspace = true }
readme = "../../README.md"
default-run = "ruff"
[package.metadata.cargo-shear]
# Used via macro expansion.
ignored = ["jiff"]
[package.metadata.dist]
dist = true
[dependencies]
ruff_cache = { workspace = true }
ruff_db = { workspace = true, default-features = false, features = ["os"] }
@@ -61,6 +68,12 @@ tracing = { workspace = true, features = ["log"] }
walkdir = { workspace = true }
wild = { workspace = true }
[target.'cfg(all(not(target_os = "windows"), not(target_os = "openbsd"), not(target_os = "aix"), not(target_os = "android"), any(target_arch = "x86_64", target_arch = "aarch64", target_arch = "powerpc64", target_arch = "riscv64")))'.dependencies]
tikv-jemallocator = { workspace = true }
[target.'cfg(target_os = "windows")'.dependencies]
mimalloc = { workspace = true }
[dev-dependencies]
# Enable test rules during development
ruff_linter = { workspace = true, features = ["clap", "test-rules"] }
@@ -76,18 +89,5 @@ ruff_python_trivia = { workspace = true }
tempfile = { workspace = true }
test-case = { workspace = true }
[package.metadata.cargo-shear]
# Used via macro expansion.
ignored = ["jiff"]
[package.metadata.dist]
dist = true
[target.'cfg(target_os = "windows")'.dependencies]
mimalloc = { workspace = true }
[target.'cfg(all(not(target_os = "windows"), not(target_os = "openbsd"), not(target_os = "aix"), not(target_os = "android"), any(target_arch = "x86_64", target_arch = "aarch64", target_arch = "powerpc64", target_arch = "riscv64")))'.dependencies]
tikv-jemallocator = { workspace = true }
[lints]
workspace = true

View File

@@ -125,7 +125,7 @@ linter.flake8_copyright.notice_rgx = (?i)Copyright\s+((?:\(C\)|©)\s+)?\d{4}((-|
linter.flake8_copyright.author = none
linter.flake8_copyright.min_file_size = 0
linter.flake8_errmsg.max_string_length = 0
linter.flake8_gettext.functions_names = [
linter.flake8_gettext.function_names = [
_,
gettext,
ngettext,
@@ -261,6 +261,7 @@ linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
linter.ruff.strictly_empty_init_modules = false
# Formatter Settings
formatter.exclude = []

View File

@@ -127,7 +127,7 @@ linter.flake8_copyright.notice_rgx = (?i)Copyright\s+((?:\(C\)|©)\s+)?\d{4}((-|
linter.flake8_copyright.author = none
linter.flake8_copyright.min_file_size = 0
linter.flake8_errmsg.max_string_length = 0
linter.flake8_gettext.functions_names = [
linter.flake8_gettext.function_names = [
_,
gettext,
ngettext,
@@ -263,6 +263,7 @@ linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
linter.ruff.strictly_empty_init_modules = false
# Formatter Settings
formatter.exclude = []

View File

@@ -129,7 +129,7 @@ linter.flake8_copyright.notice_rgx = (?i)Copyright\s+((?:\(C\)|©)\s+)?\d{4}((-|
linter.flake8_copyright.author = none
linter.flake8_copyright.min_file_size = 0
linter.flake8_errmsg.max_string_length = 0
linter.flake8_gettext.functions_names = [
linter.flake8_gettext.function_names = [
_,
gettext,
ngettext,
@@ -265,6 +265,7 @@ linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
linter.ruff.strictly_empty_init_modules = false
# Formatter Settings
formatter.exclude = []

View File

@@ -129,7 +129,7 @@ linter.flake8_copyright.notice_rgx = (?i)Copyright\s+((?:\(C\)|©)\s+)?\d{4}((-|
linter.flake8_copyright.author = none
linter.flake8_copyright.min_file_size = 0
linter.flake8_errmsg.max_string_length = 0
linter.flake8_gettext.functions_names = [
linter.flake8_gettext.function_names = [
_,
gettext,
ngettext,
@@ -265,6 +265,7 @@ linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
linter.ruff.strictly_empty_init_modules = false
# Formatter Settings
formatter.exclude = []

View File

@@ -126,7 +126,7 @@ linter.flake8_copyright.notice_rgx = (?i)Copyright\s+((?:\(C\)|©)\s+)?\d{4}((-|
linter.flake8_copyright.author = none
linter.flake8_copyright.min_file_size = 0
linter.flake8_errmsg.max_string_length = 0
linter.flake8_gettext.functions_names = [
linter.flake8_gettext.function_names = [
_,
gettext,
ngettext,
@@ -262,6 +262,7 @@ linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
linter.ruff.strictly_empty_init_modules = false
# Formatter Settings
formatter.exclude = []

View File

@@ -126,7 +126,7 @@ linter.flake8_copyright.notice_rgx = (?i)Copyright\s+((?:\(C\)|©)\s+)?\d{4}((-|
linter.flake8_copyright.author = none
linter.flake8_copyright.min_file_size = 0
linter.flake8_errmsg.max_string_length = 0
linter.flake8_gettext.functions_names = [
linter.flake8_gettext.function_names = [
_,
gettext,
ngettext,
@@ -262,6 +262,7 @@ linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
linter.ruff.strictly_empty_init_modules = false
# Formatter Settings
formatter.exclude = []

View File

@@ -125,7 +125,7 @@ linter.flake8_copyright.notice_rgx = (?i)Copyright\s+((?:\(C\)|©)\s+)?\d{4}((-|
linter.flake8_copyright.author = none
linter.flake8_copyright.min_file_size = 0
linter.flake8_errmsg.max_string_length = 0
linter.flake8_gettext.functions_names = [
linter.flake8_gettext.function_names = [
_,
gettext,
ngettext,
@@ -261,6 +261,7 @@ linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
linter.ruff.strictly_empty_init_modules = false
# Formatter Settings
formatter.exclude = []

View File

@@ -125,7 +125,7 @@ linter.flake8_copyright.notice_rgx = (?i)Copyright\s+((?:\(C\)|©)\s+)?\d{4}((-|
linter.flake8_copyright.author = none
linter.flake8_copyright.min_file_size = 0
linter.flake8_errmsg.max_string_length = 0
linter.flake8_gettext.functions_names = [
linter.flake8_gettext.function_names = [
_,
gettext,
ngettext,
@@ -261,6 +261,7 @@ linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
linter.ruff.strictly_empty_init_modules = false
# Formatter Settings
formatter.exclude = []


@@ -125,7 +125,7 @@ linter.flake8_copyright.notice_rgx = (?i)Copyright\s+((?:\(C\)|©)\s+)?\d{4}((-|
linter.flake8_copyright.author = none
linter.flake8_copyright.min_file_size = 0
linter.flake8_errmsg.max_string_length = 0
linter.flake8_gettext.functions_names = [
linter.flake8_gettext.function_names = [
_,
gettext,
ngettext,
@@ -261,6 +261,7 @@ linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
linter.ruff.strictly_empty_init_modules = false
# Formatter Settings
formatter.exclude = []


@@ -238,7 +238,7 @@ linter.flake8_copyright.notice_rgx = (?i)Copyright\s+((?:\(C\)|©)\s+)?\d{4}((-|
linter.flake8_copyright.author = none
linter.flake8_copyright.min_file_size = 0
linter.flake8_errmsg.max_string_length = 0
linter.flake8_gettext.functions_names = [
linter.flake8_gettext.function_names = [
_,
gettext,
ngettext,
@@ -374,6 +374,7 @@ linter.pylint.max_locals = 15
linter.pylint.max_nested_blocks = 5
linter.pyupgrade.keep_runtime_typing = false
linter.ruff.parenthesize_tuple_in_subscript = false
linter.ruff.strictly_empty_init_modules = false
# Formatter Settings
formatter.exclude = []


@@ -12,10 +12,6 @@ license = "MIT OR Apache-2.0"
[lib]
[features]
default = []
testing-colors = []
[dependencies]
anstyle = { workspace = true }
memchr = { workspace = true }
@@ -23,12 +19,17 @@ unicode-width = { workspace = true }
[dev-dependencies]
ruff_annotate_snippets = { workspace = true, features = ["testing-colors"] }
anstream = { workspace = true }
serde = { workspace = true, features = ["derive"] }
snapbox = { workspace = true, features = ["diff", "term-svg", "cmd", "examples"] }
toml = { workspace = true }
tryfn = { workspace = true }
[features]
default = []
testing-colors = []
[[test]]
name = "fixtures"
harness = false


@@ -16,6 +16,51 @@ bench = false
test = false
doctest = false
[dependencies]
ruff_db = { workspace = true, features = ["testing"] }
ruff_linter = { workspace = true, optional = true }
ruff_python_ast = { workspace = true }
ruff_python_formatter = { workspace = true, optional = true }
ruff_python_parser = { workspace = true, optional = true }
ruff_python_trivia = { workspace = true, optional = true }
ty_project = { workspace = true, optional = true }
anyhow = { workspace = true }
codspeed-criterion-compat = { workspace = true, default-features = false, optional = true }
criterion = { workspace = true, default-features = false, optional = true }
divan = { workspace = true, optional = true }
serde = { workspace = true }
serde_json = { workspace = true }
tracing = { workspace = true }
[target.'cfg(all(not(target_os = "windows"), not(target_os = "openbsd"), any(target_arch = "x86_64", target_arch = "aarch64", target_arch = "powerpc64", target_arch = "riscv64")))'.dependencies]
tikv-jemallocator = { workspace = true, optional = true }
[target.'cfg(target_os = "windows")'.dependencies]
mimalloc = { workspace = true, optional = true }
[dev-dependencies]
rayon = { workspace = true }
rustc-hash = { workspace = true }
[features]
default = ["ty_instrumented", "ty_walltime", "ruff_instrumented"]
# Enables the ruff instrumented benchmarks
ruff_instrumented = [
"criterion",
"ruff_linter",
"ruff_python_formatter",
"ruff_python_parser",
"ruff_python_trivia",
"mimalloc",
"tikv-jemallocator",
]
# Enables the ty instrumented benchmarks
ty_instrumented = ["criterion", "ty_project", "ruff_python_trivia"]
codspeed = ["codspeed-criterion-compat"]
# Enables the ty_walltime benchmarks
ty_walltime = ["ruff_db/os", "ty_project", "divan"]
[[bench]]
name = "linter"
harness = false
@@ -46,54 +91,5 @@ name = "ty_walltime"
harness = false
required-features = ["ty_walltime"]
[dependencies]
ruff_db = { workspace = true, features = ["testing"] }
ruff_python_ast = { workspace = true }
ruff_linter = { workspace = true, optional = true }
ruff_python_formatter = { workspace = true, optional = true }
ruff_python_parser = { workspace = true, optional = true }
ruff_python_trivia = { workspace = true, optional = true }
ty_project = { workspace = true, optional = true }
divan = { workspace = true, optional = true }
anyhow = { workspace = true }
codspeed-criterion-compat = { workspace = true, default-features = false, optional = true }
criterion = { workspace = true, default-features = false, optional = true }
serde = { workspace = true }
serde_json = { workspace = true }
tracing = { workspace = true }
[lints]
workspace = true
[features]
default = ["ty_instrumented", "ty_walltime", "ruff_instrumented"]
# Enables the ruff instrumented benchmarks
ruff_instrumented = [
"criterion",
"ruff_linter",
"ruff_python_formatter",
"ruff_python_parser",
"ruff_python_trivia",
"mimalloc",
"tikv-jemallocator",
]
# Enables the ty instrumented benchmarks
ty_instrumented = [
"criterion",
"ty_project",
"ruff_python_trivia",
]
codspeed = ["codspeed-criterion-compat"]
# Enables the ty_walltime benchmarks
ty_walltime = ["ruff_db/os", "ty_project", "divan"]
[target.'cfg(target_os = "windows")'.dependencies]
mimalloc = { workspace = true, optional = true }
[target.'cfg(all(not(target_os = "windows"), not(target_os = "openbsd"), any(target_arch = "x86_64", target_arch = "aarch64", target_arch = "powerpc64", target_arch = "riscv64")))'.dependencies]
tikv-jemallocator = { workspace = true, optional = true }
[dev-dependencies]
rustc-hash = { workspace = true }
rayon = { workspace = true }


@@ -194,7 +194,7 @@ static SYMPY: Benchmark = Benchmark::new(
max_dep_date: "2025-06-17",
python_version: PythonVersion::PY312,
},
13106,
13116,
);
static TANJUN: Benchmark = Benchmark::new(


@@ -11,11 +11,11 @@ repository = { workspace = true }
license = { workspace = true }
[dependencies]
filetime = { workspace = true }
glob = { workspace = true }
globset = { workspace = true }
itertools = { workspace = true }
regex = { workspace = true }
filetime = { workspace = true }
seahash = { workspace = true }
[dev-dependencies]


@@ -48,12 +48,12 @@ tracing = { workspace = true }
tracing-subscriber = { workspace = true, optional = true }
zip = { workspace = true }
[target.'cfg(target_arch="wasm32")'.dependencies]
web-time = { version = "1.1.0" }
[target.'cfg(not(target_arch="wasm32"))'.dependencies]
etcetera = { workspace = true, optional = true }
[target.'cfg(target_arch="wasm32")'.dependencies]
web-time = { version = "1.1.0" }
[dev-dependencies]
insta = { workspace = true, features = ["filters"] }
tempfile = { workspace = true }


@@ -11,10 +11,6 @@ repository = { workspace = true }
license = { workspace = true }
[dependencies]
ty = { workspace = true }
ty_project = { workspace = true, features = ["schemars"] }
ty_python_semantic = { workspace = true }
ty_static = { workspace = true }
ruff = { workspace = true }
ruff_formatter = { workspace = true }
ruff_linter = { workspace = true, features = ["schemars"] }
@@ -26,6 +22,10 @@ ruff_python_formatter = { workspace = true }
ruff_python_parser = { workspace = true }
ruff_python_trivia = { workspace = true }
ruff_workspace = { workspace = true, features = ["schemars"] }
ty = { workspace = true }
ty_project = { workspace = true, features = ["schemars"] }
ty_python_semantic = { workspace = true }
ty_static = { workspace = true }
anyhow = { workspace = true }
clap = { workspace = true, features = ["wrap_help"] }


@@ -10,6 +10,10 @@ documentation = { workspace = true }
repository = { workspace = true }
license = { workspace = true }
[package.metadata.cargo-shear]
# Used via `CacheKey` macro expansion.
ignored = ["ruff_cache"]
[dependencies]
ruff_cache = { workspace = true }
ruff_macros = { workspace = true }
@@ -25,10 +29,6 @@ unicode-width = { workspace = true }
[dev-dependencies]
[package.metadata.cargo-shear]
# Used via `CacheKey` macro expansion.
ignored = ["ruff_cache"]
[features]
serde = ["dep:serde", "ruff_text_size/serde"]
schemars = ["dep:schemars", "ruff_text_size/schemars"]


@@ -9,6 +9,10 @@ repository.workspace = true
authors.workspace = true
license.workspace = true
[package.metadata.cargo-shear]
# Used via `CacheKey` macro expansion.
ignored = ["ruff_cache"]
[dependencies]
ruff_cache = { workspace = true }
ruff_db = { workspace = true, features = ["os", "serde"] }
@@ -29,7 +33,3 @@ zip = { workspace = true, features = [] }
[lints]
workspace = true
[package.metadata.cargo-shear]
# Used via `CacheKey` macro expansion.
ignored = ["ruff_cache"]


@@ -16,17 +16,17 @@ license = { workspace = true }
ruff_cache = { workspace = true }
ruff_db = { workspace = true, features = ["junit", "serde"] }
ruff_diagnostics = { workspace = true, features = ["serde"] }
ruff_notebook = { workspace = true }
ruff_macros = { workspace = true }
ruff_notebook = { workspace = true }
ruff_python_ast = { workspace = true, features = ["serde", "cache"] }
ruff_python_codegen = { workspace = true }
ruff_python_importer = { workspace = true }
ruff_python_index = { workspace = true }
ruff_python_literal = { workspace = true }
ruff_python_parser = { workspace = true }
ruff_python_semantic = { workspace = true }
ruff_python_stdlib = { workspace = true }
ruff_python_trivia = { workspace = true }
ruff_python_parser = { workspace = true }
ruff_source_file = { workspace = true, features = ["serde"] }
ruff_text_size = { workspace = true }
@@ -44,8 +44,8 @@ imperative = { workspace = true }
is-macro = { workspace = true }
is-wsl = { workspace = true }
itertools = { workspace = true }
libcst = { workspace = true }
jiff = { workspace = true }
libcst = { workspace = true }
log = { workspace = true }
memchr = { workspace = true }
natord = { workspace = true }
@@ -67,17 +67,17 @@ strum_macros = { workspace = true }
thiserror = { workspace = true }
toml = { workspace = true }
typed-arena = { workspace = true }
unicode-normalization = { workspace = true }
unicode-width = { workspace = true }
unicode_names2 = { workspace = true }
unicode-normalization = { workspace = true }
url = { workspace = true }
[dev-dependencies]
insta = { workspace = true, features = ["filters", "json", "redactions"] }
test-case = { workspace = true }
# Disable colored output in tests
colored = { workspace = true, features = ["no-color"] }
insta = { workspace = true, features = ["filters", "json", "redactions"] }
tempfile = { workspace = true }
test-case = { workspace = true }
[features]
default = []


@@ -0,0 +1,31 @@
from __future__ import annotations
from airflow.lineage.hook import HookLineageCollector
# airflow.lineage.hook
hlc = HookLineageCollector()
hlc.create_asset("there")
hlc.create_asset("should", "be", "no", "posarg")
hlc.create_asset(name="but", uri="kwargs are ok")
hlc.create_asset()
HookLineageCollector().create_asset(name="but", uri="kwargs are ok")
HookLineageCollector().create_asset("there")
HookLineageCollector().create_asset("should", "be", "no", "posarg")
args = ["uri_value"]
hlc.create_asset(*args)
HookLineageCollector().create_asset(*args)
# Literal unpacking
hlc.create_asset(*["literal_uri"])
HookLineageCollector().create_asset(*["literal_uri"])
# starred args with keyword args
hlc.create_asset(*args, extra="value")
HookLineageCollector().create_asset(*args, extra="value")
# Double-starred keyword arguments
kwargs = {"uri": "value", "name": "test"}
hlc.create_asset(**kwargs)
HookLineageCollector().create_asset(**kwargs)


@@ -1,5 +1,5 @@
from abc import abstractmethod
from typing import overload, cast
from typing import overload, cast, TypeVar
from typing_extensions import override
@@ -256,3 +256,15 @@ class C:
"""Docstring."""
msg = t"{x}..."
raise NotImplementedError(msg)
###
# Unused arguments with `**kwargs`.
###
def f(
default: object = None, # noqa: ARG001
**kwargs: object,
) -> None:
TypeVar(**kwargs)


@@ -53,3 +53,25 @@ for i in items:
items = [1, 2, 3, 4]
for i in items:
items[i]
# A case with multiple uses of the value to show off the secondary annotations
for instrument in ORCHESTRA:
data = json.dumps(
{
"instrument": instrument,
"section": ORCHESTRA[instrument],
}
)
print(f"saving data for {instrument} in {ORCHESTRA[instrument]}")
with open(f"{instrument}/{ORCHESTRA[instrument]}.txt", "w") as f:
f.write(data)
# This should still suppress the error
for ( # noqa: PLC0206
instrument
) in ORCHESTRA:
print(f"{instrument}: {ORCHESTRA[instrument]}")
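The rewrite PLC0206 ultimately points toward for patterns like the fixture above is iterating over key/value pairs so each value is bound once. A minimal sketch (the `ORCHESTRA` mapping below is invented for illustration):

```python
ORCHESTRA = {"violin": "strings", "flute": "woodwind"}

# Instead of `for instrument in ORCHESTRA: ... ORCHESTRA[instrument]`,
# PLC0206 prefers iterating over key/value pairs directly:
pairs = [(instrument, section) for instrument, section in ORCHESTRA.items()]
```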


@@ -73,3 +73,7 @@ foo == 1 or foo == 1.0 # Different types, same hashed value
foo == False or foo == 0 # Different types, same hashed value
foo == 0.0 or foo == 0j # Different types, same hashed value
foo == "bar" or foo == "bar" # All members identical
foo == "bar" or foo == "bar" or foo == "buzz" # All but one member identical
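The new suppression can be sketched as a predicate over the compared values. The real rule compares AST nodes rather than runtime values, so this is only an approximation:

```python
def all_members_identical(comparators: list[object]) -> bool:
    """Approximate the PLR1714 tweak: merging `x == a or x == a` into
    `x in (a, a)` gains nothing when every member is identical, so the
    rule no longer fires in that case. Type is checked alongside value
    because `1 == 1.0` is true even though the literals differ."""
    first = comparators[0]
    return all(type(c) is type(first) and c == first for c in comparators[1:])
```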


@@ -0,0 +1,51 @@
"""This is the module docstring."""
# convenience imports:
import os
from pathlib import Path
__all__ = ["MY_CONSTANT"]
__all__ += ["foo"]
__all__: list[str] = __all__
__all__ = __all__ = __all__
MY_CONSTANT = 5
"""This is an important constant."""
os.environ["FOO"] = 1
def foo():
return Path("foo.py")
def __getattr__(name): # ok
return name
__path__ = __import__('pkgutil').extend_path(__path__, __name__) # ok
if os.environ["FOO"] != "1": # RUF067
MY_CONSTANT = 4 # ok, don't flag nested statements
if TYPE_CHECKING: # ok
MY_CONSTANT = 3
import typing
if typing.TYPE_CHECKING: # ok
MY_CONSTANT = 2
__version__ = "1.2.3" # ok
def __dir__(): # ok
return ["foo"]
import pkgutil
__path__ = pkgutil.extend_path(__path__, __name__) # ok
__path__ = unknown.extend_path(__path__, __name__) # also ok
# non-`extend_path` assignments are not allowed
__path__ = 5 # RUF067
# also allow `__author__`
__author__ = "The Author" # ok


@@ -0,0 +1,54 @@
"""
The code here is not in an `__init__.py` file and should not trigger the
lint.
"""
# convenience imports:
import os
from pathlib import Path
__all__ = ["MY_CONSTANT"]
__all__ += ["foo"]
__all__: list[str] = __all__
__all__ = __all__ = __all__
MY_CONSTANT = 5
"""This is an important constant."""
os.environ["FOO"] = 1
def foo():
return Path("foo.py")
def __getattr__(name): # ok
return name
__path__ = __import__('pkgutil').extend_path(__path__, __name__) # ok
if os.environ["FOO"] != "1": # RUF067
MY_CONSTANT = 4 # ok, don't flag nested statements
if TYPE_CHECKING: # ok
MY_CONSTANT = 3
import typing
if typing.TYPE_CHECKING: # ok
MY_CONSTANT = 2
__version__ = "1.2.3" # ok
def __dir__(): # ok
return ["foo"]
import pkgutil
__path__ = pkgutil.extend_path(__path__, __name__) # ok
__path__ = unknown.extend_path(__path__, __name__) # also ok
# non-`extend_path` assignments are not allowed
__path__ = 5 # RUF067
# also allow `__author__`
__author__ = "The Author" # ok


@@ -1043,7 +1043,7 @@ pub(crate) fn expression(expr: &Expr, checker: &Checker) {
]) && flake8_gettext::is_gettext_func_call(
checker,
func,
&checker.settings().flake8_gettext.functions_names,
&checker.settings().flake8_gettext.function_names,
) {
if checker.is_rule_enabled(Rule::FStringInGetTextFuncCall) {
flake8_gettext::rules::f_string_in_gettext_func_call(checker, args);
@@ -1278,6 +1278,9 @@ pub(crate) fn expression(expr: &Expr, checker: &Checker) {
if checker.is_rule_enabled(Rule::Airflow3SuggestedUpdate) {
airflow::rules::airflow_3_0_suggested_update_expr(checker, expr);
}
if checker.is_rule_enabled(Rule::Airflow3IncompatibleFunctionSignature) {
airflow::rules::airflow_3_incompatible_function_signature(checker, expr);
}
if checker.is_rule_enabled(Rule::UnnecessaryCastToInt) {
ruff::rules::unnecessary_cast_to_int(checker, call);
}


@@ -1630,4 +1630,7 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
}
_ => {}
}
if checker.is_rule_enabled(Rule::NonEmptyInitModule) {
ruff::rules::non_empty_init_module(checker, stmt);
}
}


@@ -33,7 +33,7 @@ pub(crate) fn unresolved_references(checker: &Checker) {
}
// Allow __path__.
if checker.path.ends_with("__init__.py") {
if checker.in_init_module() {
if reference.name(checker.source()) == "__path__" {
continue;
}


@@ -21,7 +21,7 @@
//! represents the lint-rule analysis phase. In the future, these steps may be separated into
//! distinct passes over the AST.
use std::cell::RefCell;
use std::cell::{OnceCell, RefCell};
use std::path::Path;
use itertools::Itertools;
@@ -198,6 +198,8 @@ pub(crate) struct Checker<'a> {
parsed_type_annotation: Option<&'a ParsedAnnotation>,
/// The [`Path`] to the file under analysis.
path: &'a Path,
/// Whether `path` points to an `__init__.py` file.
in_init_module: OnceCell<bool>,
/// The [`Path`] to the package containing the current file.
package: Option<PackageRoot<'a>>,
/// The module representation of the current file (e.g., `foo.bar`).
@@ -274,6 +276,7 @@ impl<'a> Checker<'a> {
noqa_line_for,
noqa,
path,
in_init_module: OnceCell::new(),
package,
module,
source_type,
@@ -482,9 +485,11 @@ impl<'a> Checker<'a> {
self.context.settings
}
/// The [`Path`] to the file under analysis.
pub(crate) const fn path(&self) -> &'a Path {
self.path
/// Returns whether the file under analysis is an `__init__.py` file.
pub(crate) fn in_init_module(&self) -> bool {
*self
.in_init_module
.get_or_init(|| self.path.ends_with("__init__.py"))
}
/// The [`Path`] to the package containing the current file.
@@ -1873,6 +1878,9 @@ impl<'a> Visitor<'a> for Checker<'a> {
} else {
self.visit_non_type_definition(value);
}
} else {
// Ex: typing.TypeVar(**kwargs)
self.visit_non_type_definition(value);
}
}
}
@@ -3171,7 +3179,7 @@ impl<'a> Checker<'a> {
// F822
if self.is_rule_enabled(Rule::UndefinedExport) {
if is_undefined_export_in_dunder_init_enabled(self.settings())
|| !self.path.ends_with("__init__.py")
|| !self.in_init_module()
{
self.report_diagnostic(
pyflakes::rules::UndefinedExport {
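The `OnceCell` change above memoizes the `__init__.py` test so it is computed once per file instead of once per rule that asks. The same idea in Python terms (a hypothetical `Checker`, not the real one) is `functools.cached_property`:

```python
from functools import cached_property
from pathlib import PurePath

class Checker:
    """Hypothetical stand-in illustrating the caching pattern: the
    `__init__.py` check runs on first access and is reused afterwards."""

    def __init__(self, path: str) -> None:
        self.path = PurePath(path)

    @cached_property
    def in_init_module(self) -> bool:
        # PurePath.name is the final path component, mirroring the
        # component-wise comparison `Path::ends_with` performs in Rust.
        return self.path.name == "__init__.py"
```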


@@ -1060,6 +1060,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Ruff, "064") => rules::ruff::rules::NonOctalPermissions,
(Ruff, "065") => rules::ruff::rules::LoggingEagerConversion,
(Ruff, "066") => rules::ruff::rules::PropertyWithoutReturn,
(Ruff, "067") => rules::ruff::rules::NonEmptyInitModule,
(Ruff, "100") => rules::ruff::rules::UnusedNOQA,
(Ruff, "101") => rules::ruff::rules::RedirectedNOQA,
@@ -1123,6 +1124,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Airflow, "002") => rules::airflow::rules::AirflowDagNoScheduleArgument,
(Airflow, "301") => rules::airflow::rules::Airflow3Removal,
(Airflow, "302") => rules::airflow::rules::Airflow3MovedToProvider,
(Airflow, "303") => rules::airflow::rules::Airflow3IncompatibleFunctionSignature,
(Airflow, "311") => rules::airflow::rules::Airflow3SuggestedUpdate,
(Airflow, "312") => rules::airflow::rules::Airflow3SuggestedToMoveToProvider,


@@ -47,6 +47,7 @@ mod tests {
#[test_case(Rule::Airflow3MovedToProvider, Path::new("AIR302_zendesk.py"))]
#[test_case(Rule::Airflow3MovedToProvider, Path::new("AIR302_standard.py"))]
#[test_case(Rule::Airflow3MovedToProvider, Path::new("AIR302_try.py"))]
#[test_case(Rule::Airflow3IncompatibleFunctionSignature, Path::new("AIR303.py"))]
#[test_case(Rule::Airflow3SuggestedUpdate, Path::new("AIR311_args.py"))]
#[test_case(Rule::Airflow3SuggestedUpdate, Path::new("AIR311_names.py"))]
#[test_case(Rule::Airflow3SuggestedUpdate, Path::new("AIR311_try.py"))]


@@ -0,0 +1,128 @@
use crate::checkers::ast::Checker;
use crate::{FixAvailability, Violation};
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::name::QualifiedName;
use ruff_python_ast::{Arguments, Expr, ExprAttribute, ExprCall, Identifier};
use ruff_python_semantic::Modules;
use ruff_python_semantic::analyze::typing;
use ruff_text_size::Ranged;
/// ## What it does
/// Checks for Airflow function calls that will raise a runtime error in Airflow 3.0
/// due to function signature changes, such as functions that changed to accept only
/// keyword arguments, parameter reordering, or parameter type changes.
///
/// ## Why is this bad?
/// Airflow 3.0 introduces changes to function signatures. Code that
/// worked in Airflow 2.x will raise a runtime error if not updated in Airflow
/// 3.0.
///
/// ## Example
/// ```python
/// from airflow.lineage.hook import HookLineageCollector
///
/// collector = HookLineageCollector()
/// # Passing positional arguments will raise a runtime error in Airflow 3.0
/// collector.create_asset("s3://bucket/key")
/// ```
///
/// Use instead:
/// ```python
/// from airflow.lineage.hook import HookLineageCollector
///
/// collector = HookLineageCollector()
/// # Passing arguments as keyword arguments instead of positional arguments
/// collector.create_asset(uri="s3://bucket/key")
/// ```
#[derive(ViolationMetadata)]
#[violation_metadata(preview_since = "0.14.11")]
pub(crate) struct Airflow3IncompatibleFunctionSignature {
function_name: String,
change_type: FunctionSignatureChangeType,
}
impl Violation for Airflow3IncompatibleFunctionSignature {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::None;
#[derive_message_formats]
fn message(&self) -> String {
let Airflow3IncompatibleFunctionSignature {
function_name,
change_type,
} = self;
match change_type {
FunctionSignatureChangeType::KeywordOnly { .. } => {
format!("`{function_name}` signature is changed in Airflow 3.0")
}
}
}
fn fix_title(&self) -> Option<String> {
let Airflow3IncompatibleFunctionSignature { change_type, .. } = self;
match change_type {
FunctionSignatureChangeType::KeywordOnly { message } => Some(message.to_string()),
}
}
}
/// AIR303
pub(crate) fn airflow_3_incompatible_function_signature(checker: &Checker, expr: &Expr) {
if !checker.semantic().seen_module(Modules::AIRFLOW) {
return;
}
let Expr::Call(ExprCall {
func, arguments, ..
}) = expr
else {
return;
};
let Expr::Attribute(ExprAttribute { attr, value, .. }) = func.as_ref() else {
return;
};
// Resolve the qualified name: try variable assignments first, then fall back to direct
// constructor calls.
let qualified_name = typing::resolve_assignment(value, checker.semantic()).or_else(|| {
value
.as_call_expr()
.and_then(|call| checker.semantic().resolve_qualified_name(&call.func))
});
let Some(qualified_name) = qualified_name else {
return;
};
check_keyword_only_method(checker, &qualified_name, attr, arguments);
}
fn check_keyword_only_method(
checker: &Checker,
qualified_name: &QualifiedName,
attr: &Identifier,
arguments: &Arguments,
) {
let has_positional_args =
arguments.find_positional(0).is_some() || arguments.args.iter().any(Expr::is_starred_expr);
if let ["airflow", "lineage", "hook", "HookLineageCollector"] = qualified_name.segments() {
if attr.as_str() == "create_asset" && has_positional_args {
checker.report_diagnostic(
Airflow3IncompatibleFunctionSignature {
function_name: attr.to_string(),
change_type: FunctionSignatureChangeType::KeywordOnly {
message: "Pass positional arguments as keyword arguments (e.g., `create_asset(uri=...)`)",
},
},
attr.range(),
);
}
}
}
#[derive(Clone, Debug, Eq, PartialEq)]
pub(crate) enum FunctionSignatureChangeType {
/// Function signature changed to only accept keyword arguments.
KeywordOnly { message: &'static str },
}
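Stripped of the semantic-model plumbing, the positional-argument test in the rule above reduces to an AST walk. This Python sketch matches only on the attribute name, whereas the real rule first resolves the receiver to `airflow.lineage.hook.HookLineageCollector`:

```python
import ast

def air303_candidate_lines(source: str) -> list[int]:
    """Report `create_asset` calls that pass positional arguments.

    `node.args` covers both plain positional arguments and `*unpacking`
    (which parses as `ast.Starred` inside `args`); `**kwargs` lands in
    `node.keywords` and is therefore allowed.
    """
    return [
        node.lineno
        for node in ast.walk(ast.parse(source))
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Attribute)
        and node.func.attr == "create_asset"
        and node.args
    ]
```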


@@ -1,4 +1,5 @@
pub(crate) use dag_schedule_argument::*;
pub(crate) use function_signature_change_in_3::*;
pub(crate) use moved_to_provider_in_3::*;
pub(crate) use removal_in_3::*;
pub(crate) use suggested_to_move_to_provider_in_3::*;
@@ -6,6 +7,7 @@ pub(crate) use suggested_to_update_3_0::*;
pub(crate) use task_variable_name::*;
mod dag_schedule_argument;
mod function_signature_change_in_3;
mod moved_to_provider_in_3;
mod removal_in_3;
mod suggested_to_move_to_provider_in_3;


@@ -0,0 +1,114 @@
---
source: crates/ruff_linter/src/rules/airflow/mod.rs
---
AIR303 `create_asset` signature is changed in Airflow 3.0
--> AIR303.py:7:5
|
5 | # airflow.lineage.hook
6 | hlc = HookLineageCollector()
7 | hlc.create_asset("there")
| ^^^^^^^^^^^^
8 | hlc.create_asset("should", "be", "no", "posarg")
9 | hlc.create_asset(name="but", uri="kwargs are ok")
|
help: Pass positional arguments as keyword arguments (e.g., `create_asset(uri=...)`)
AIR303 `create_asset` signature is changed in Airflow 3.0
--> AIR303.py:8:5
|
6 | hlc = HookLineageCollector()
7 | hlc.create_asset("there")
8 | hlc.create_asset("should", "be", "no", "posarg")
| ^^^^^^^^^^^^
9 | hlc.create_asset(name="but", uri="kwargs are ok")
10 | hlc.create_asset()
|
help: Pass positional arguments as keyword arguments (e.g., `create_asset(uri=...)`)
AIR303 `create_asset` signature is changed in Airflow 3.0
--> AIR303.py:13:24
|
12 | HookLineageCollector().create_asset(name="but", uri="kwargs are ok")
13 | HookLineageCollector().create_asset("there")
| ^^^^^^^^^^^^
14 | HookLineageCollector().create_asset("should", "be", "no", "posarg")
|
help: Pass positional arguments as keyword arguments (e.g., `create_asset(uri=...)`)
AIR303 `create_asset` signature is changed in Airflow 3.0
--> AIR303.py:14:24
|
12 | HookLineageCollector().create_asset(name="but", uri="kwargs are ok")
13 | HookLineageCollector().create_asset("there")
14 | HookLineageCollector().create_asset("should", "be", "no", "posarg")
| ^^^^^^^^^^^^
15 |
16 | args = ["uri_value"]
|
help: Pass positional arguments as keyword arguments (e.g., `create_asset(uri=...)`)
AIR303 `create_asset` signature is changed in Airflow 3.0
--> AIR303.py:17:5
|
16 | args = ["uri_value"]
17 | hlc.create_asset(*args)
| ^^^^^^^^^^^^
18 | HookLineageCollector().create_asset(*args)
|
help: Pass positional arguments as keyword arguments (e.g., `create_asset(uri=...)`)
AIR303 `create_asset` signature is changed in Airflow 3.0
--> AIR303.py:18:24
|
16 | args = ["uri_value"]
17 | hlc.create_asset(*args)
18 | HookLineageCollector().create_asset(*args)
| ^^^^^^^^^^^^
19 |
20 | # Literal unpacking
|
help: Pass positional arguments as keyword arguments (e.g., `create_asset(uri=...)`)
AIR303 `create_asset` signature is changed in Airflow 3.0
--> AIR303.py:21:5
|
20 | # Literal unpacking
21 | hlc.create_asset(*["literal_uri"])
| ^^^^^^^^^^^^
22 | HookLineageCollector().create_asset(*["literal_uri"])
|
help: Pass positional arguments as keyword arguments (e.g., `create_asset(uri=...)`)
AIR303 `create_asset` signature is changed in Airflow 3.0
--> AIR303.py:22:24
|
20 | # Literal unpacking
21 | hlc.create_asset(*["literal_uri"])
22 | HookLineageCollector().create_asset(*["literal_uri"])
| ^^^^^^^^^^^^
23 |
24 | # starred args with keyword args
|
help: Pass positional arguments as keyword arguments (e.g., `create_asset(uri=...)`)
AIR303 `create_asset` signature is changed in Airflow 3.0
--> AIR303.py:25:5
|
24 | # starred args with keyword args
25 | hlc.create_asset(*args, extra="value")
| ^^^^^^^^^^^^
26 | HookLineageCollector().create_asset(*args, extra="value")
|
help: Pass positional arguments as keyword arguments (e.g., `create_asset(uri=...)`)
AIR303 `create_asset` signature is changed in Airflow 3.0
--> AIR303.py:26:24
|
24 | # starred args with keyword args
25 | hlc.create_asset(*args, extra="value")
26 | HookLineageCollector().create_asset(*args, extra="value")
| ^^^^^^^^^^^^
27 |
28 | # Double-starred keyword arguments
|
help: Pass positional arguments as keyword arguments (e.g., `create_asset(uri=...)`)


@@ -1,6 +1,8 @@
use ruff_db::diagnostic::Diagnostic;
use ruff_python_ast::name::QualifiedName;
use ruff_python_ast::{self as ast, Expr};
use ruff_python_semantic::SemanticModel;
use ruff_python_semantic::analyze::function_type::is_subject_to_liskov_substitution_principle;
use crate::checkers::ast::Checker;
use crate::settings::LinterSettings;
@@ -191,3 +193,27 @@ pub(super) fn allow_boolean_trap(call: &ast::ExprCall, checker: &Checker) -> boo
false
}
pub(super) fn add_liskov_substitution_principle_help(
diagnostic: &mut Diagnostic,
function_name: &str,
decorator_list: &[ast::Decorator],
checker: &Checker,
) {
let semantic = checker.semantic();
let parent_scope = semantic.current_scope();
let pep8_settings = &checker.settings().pep8_naming;
if is_subject_to_liskov_substitution_principle(
function_name,
decorator_list,
parent_scope,
semantic,
&pep8_settings.classmethod_decorators,
&pep8_settings.staticmethod_decorators,
) {
diagnostic.help(
"Consider adding `@typing.override` if changing the function signature \
would violate the Liskov Substitution Principle",
);
}
}
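What the new `help:` text is driving at: a method that participates in subclassing often cannot change its boolean-positional signature without violating the Liskov Substitution Principle, so `@typing.override` is the suggested acknowledgment. A sketch (with a hypothetical fallback decorator for Python < 3.12):

```python
try:
    from typing import override  # Python 3.12+
except ImportError:  # minimal fallback for older interpreters
    def override(func):
        return func

class Base:
    def set_visible(self, visible: bool) -> None:  # FBT001 flags `visible`
        self.visible = visible

class Child(Base):
    @override
    def set_visible(self, visible: bool) -> None:
        # Making `visible` keyword-only here would break callers that treat
        # `Child` as a `Base`, hence the rule's pointer to `@typing.override`.
        super().set_visible(visible)
```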


@@ -6,7 +6,9 @@ use ruff_python_semantic::analyze::visibility;
use crate::Violation;
use crate::checkers::ast::Checker;
use crate::rules::flake8_boolean_trap::helpers::is_allowed_func_def;
use crate::rules::flake8_boolean_trap::helpers::{
add_liskov_substitution_principle_help, is_allowed_func_def,
};
/// ## What it does
/// Checks for the use of boolean positional arguments in function definitions,
@@ -139,7 +141,9 @@ pub(crate) fn boolean_default_value_positional_argument(
return;
}
checker.report_diagnostic(BooleanDefaultValuePositionalArgument, param.identifier());
let mut diagnostic = checker
.report_diagnostic(BooleanDefaultValuePositionalArgument, param.identifier());
add_liskov_substitution_principle_help(&mut diagnostic, name, decorator_list, checker);
}
}
}


@@ -7,7 +7,9 @@ use ruff_python_semantic::analyze::visibility;
use crate::Violation;
use crate::checkers::ast::Checker;
use crate::rules::flake8_boolean_trap::helpers::is_allowed_func_def;
use crate::rules::flake8_boolean_trap::helpers::{
add_liskov_substitution_principle_help, is_allowed_func_def,
};
/// ## What it does
/// Checks for the use of boolean positional arguments in function definitions,
@@ -149,7 +151,10 @@ pub(crate) fn boolean_type_hint_positional_argument(
return;
}
checker.report_diagnostic(BooleanTypeHintPositionalArgument, parameter.identifier());
let mut diagnostic =
checker.report_diagnostic(BooleanTypeHintPositionalArgument, parameter.identifier());
add_liskov_substitution_principle_help(&mut diagnostic, name, decorator_list, checker);
}
}


@@ -97,6 +97,7 @@ FBT001 Boolean-typed positional argument in function definition
| ^^^^^
91 | pass
|
help: Consider adding `@typing.override` if changing the function signature would violate the Liskov Substitution Principle
FBT001 Boolean-typed positional argument in function definition
--> FBT.py:100:10


@@ -38,6 +38,10 @@ use crate::checkers::ast::Checker;
/// _("Hello, %s!") % name # Looks for "Hello, %s!".
/// ```
///
/// ## Options
///
/// - `lint.flake8-gettext.function-names`
///
/// ## References
/// - [Python documentation: `gettext` — Multilingual internationalization services](https://docs.python.org/3/library/gettext.html)
#[derive(ViolationMetadata)]
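The doc comment's point — the translation catalog is keyed on the literal `msgid`, so a pre-formatted f-string never matches — can be shown with a toy `_` (a stand-in dictionary here, not the real `gettext` machinery):

```python
def _(msgid: str) -> str:
    # Toy translation lookup; real code would call gettext.gettext.
    catalog = {"Hello, %s!": "Bonjour, %s!"}
    return catalog.get(msgid, msgid)

name = "Marie"
ok = _("Hello, %s!") % name  # catalog hit, then interpolate
bad = _(f"Hello, {name}!")   # already interpolated: catalog miss
```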


@@ -38,6 +38,10 @@ use crate::checkers::ast::Checker;
/// _("Hello, %s!") % name # Looks for "Hello, %s!".
/// ```
///
/// ## Options
///
/// - `lint.flake8-gettext.function-names`
///
/// ## References
/// - [Python documentation: `gettext` — Multilingual internationalization services](https://docs.python.org/3/library/gettext.html)
#[derive(ViolationMetadata)]


@@ -37,6 +37,10 @@ use crate::checkers::ast::Checker;
/// _("Hello, %s!") % name # Looks for "Hello, %s!".
/// ```
///
/// ## Options
///
/// - `lint.flake8-gettext.function-names`
///
/// ## References
/// - [Python documentation: `gettext` — Multilingual internationalization services](https://docs.python.org/3/library/gettext.html)
#[derive(ViolationMetadata)]


@@ -5,7 +5,7 @@ use std::fmt::{Display, Formatter};
#[derive(Debug, Clone, CacheKey)]
pub struct Settings {
pub functions_names: Vec<Name>,
pub function_names: Vec<Name>,
}
pub fn default_func_names() -> Vec<Name> {
@@ -19,7 +19,7 @@ pub fn default_func_names() -> Vec<Name> {
impl Default for Settings {
fn default() -> Self {
Self {
functions_names: default_func_names(),
function_names: default_func_names(),
}
}
}
@@ -30,7 +30,7 @@ impl Display for Settings {
formatter = f,
namespace = "linter.flake8_gettext",
fields = [
self.functions_names | array
self.function_names | array
]
}
Ok(())


@@ -32,6 +32,13 @@ use crate::{Edit, Fix, FixAvailability, Violation};
/// "dog"
/// )
/// ```
///
/// ## Options
///
/// Setting `lint.flake8-implicit-str-concat.allow-multiline = false` will disable this rule because
/// it would leave no allowed way to write a multi-line string.
///
/// - `lint.flake8-implicit-str-concat.allow-multiline`
#[derive(ViolationMetadata)]
#[violation_metadata(stable_since = "v0.0.201")]
pub(crate) struct ExplicitStringConcatenation;

View File

@@ -11,7 +11,7 @@ use crate::{FixAvailability, Violation};
///
/// ## Why is this bad?
/// `ByteString` has been deprecated since Python 3.9 and will be removed in
- /// Python 3.14. The Python documentation recommends using either
+ /// Python 3.17. The Python documentation recommends using either
/// `collections.abc.Buffer` (or the `typing_extensions` backport
/// on Python <3.12) or a union like `bytes | bytearray | memoryview` instead.
///

View File

@@ -130,10 +130,16 @@ pub(crate) fn invalid_function_name(
return;
}
- checker.report_diagnostic(
+ let mut diagnostic = checker.report_diagnostic(
InvalidFunctionName {
name: name.to_string(),
},
stmt.identifier(),
);
+ if parent_class.is_some() {
+ diagnostic.help(
+ "Consider adding `@typing.override` if this method \
+ overrides a method from a superclass",
+ );
+ }
}

View File

@@ -36,6 +36,10 @@ use crate::rules::pep8_naming::settings::IgnoreNames;
/// - Instead of `example-module-name` or `example module name`, use `example_module_name`.
/// - Instead of `ExampleModule`, use `example_module`.
///
/// ## Options
///
/// - `lint.pep8-naming.ignore-names`
///
/// [PEP 8]: https://peps.python.org/pep-0008/#package-and-module-names
#[derive(ViolationMetadata)]
#[violation_metadata(stable_since = "v0.0.248")]

View File

@@ -42,6 +42,7 @@ N802 Function name `testTest` should be lowercase
| ^^^^^^^^
41 | assert True
|
help: Consider adding `@typing.override` if this method overrides a method from a superclass
N802 Function name `bad_Name` should be lowercase
--> N802.py:65:9
@@ -52,6 +53,7 @@ N802 Function name `bad_Name` should be lowercase
| ^^^^^^^^
66 | pass
|
help: Consider adding `@typing.override` if this method overrides a method from a superclass
N802 Function name `dont_GET` should be lowercase
--> N802.py:84:9
@@ -62,6 +64,7 @@ N802 Function name `dont_GET` should be lowercase
| ^^^^^^^^
85 | pass
|
help: Consider adding `@typing.override` if this method overrides a method from a superclass
N802 Function name `dont_OPTIONS` should be lowercase
--> N802.py:95:9
@@ -72,6 +75,7 @@ N802 Function name `dont_OPTIONS` should be lowercase
| ^^^^^^^^^^^^
96 | pass
|
help: Consider adding `@typing.override` if this method overrides a method from a superclass
N802 Function name `dont_OPTIONS` should be lowercase
--> N802.py:106:9
@@ -82,3 +86,4 @@ N802 Function name `dont_OPTIONS` should be lowercase
| ^^^^^^^^^^^^
107 | pass
|
help: Consider adding `@typing.override` if this method overrides a method from a superclass

View File

@@ -20,3 +20,4 @@ N802 Function name `stillBad` should be lowercase
| ^^^^^^^^
14 | return super().tearDown()
|
help: Consider adding `@typing.override` if this method overrides a method from a superclass

View File

@@ -58,6 +58,11 @@ use crate::rules::pydocstyle::settings::Convention;
/// """
/// return distance / time
/// ```
///
/// ## Options
///
/// - `lint.pydoclint.ignore-one-line-docstrings`
/// - `lint.pydocstyle.convention`
#[derive(ViolationMetadata)]
#[violation_metadata(preview_since = "0.14.1")]
pub(crate) struct DocstringExtraneousParameter {
@@ -113,6 +118,12 @@ impl Violation for DocstringExtraneousParameter {
/// """
/// return distance / time
/// ```
///
/// ## Options
///
/// - `lint.pydoclint.ignore-one-line-docstrings`
/// - `lint.pydocstyle.convention`
/// - `lint.pydocstyle.property-decorators`
#[derive(ViolationMetadata)]
#[violation_metadata(preview_since = "0.5.6")]
pub(crate) struct DocstringMissingReturns;
@@ -165,6 +176,11 @@ impl Violation for DocstringMissingReturns {
/// for _ in range(n):
/// print("Hello!")
/// ```
///
/// ## Options
///
/// - `lint.pydoclint.ignore-one-line-docstrings`
/// - `lint.pydocstyle.convention`
#[derive(ViolationMetadata)]
#[violation_metadata(preview_since = "0.5.6")]
pub(crate) struct DocstringExtraneousReturns;
@@ -218,6 +234,11 @@ impl Violation for DocstringExtraneousReturns {
/// for i in range(1, n + 1):
/// yield i
/// ```
///
/// ## Options
///
/// - `lint.pydoclint.ignore-one-line-docstrings`
/// - `lint.pydocstyle.convention`
#[derive(ViolationMetadata)]
#[violation_metadata(preview_since = "0.5.7")]
pub(crate) struct DocstringMissingYields;
@@ -270,6 +291,11 @@ impl Violation for DocstringMissingYields {
/// for _ in range(n):
/// print("Hello!")
/// ```
///
/// ## Options
///
/// - `lint.pydoclint.ignore-one-line-docstrings`
/// - `lint.pydocstyle.convention`
#[derive(ViolationMetadata)]
#[violation_metadata(preview_since = "0.5.7")]
pub(crate) struct DocstringExtraneousYields;
@@ -342,6 +368,11 @@ impl Violation for DocstringExtraneousYields {
/// except ZeroDivisionError as exc:
/// raise FasterThanLightError from exc
/// ```
///
/// ## Options
///
/// - `lint.pydoclint.ignore-one-line-docstrings`
/// - `lint.pydocstyle.convention`
#[derive(ViolationMetadata)]
#[violation_metadata(preview_since = "0.5.5")]
pub(crate) struct DocstringMissingException {
@@ -410,6 +441,11 @@ impl Violation for DocstringMissingException {
/// It may often be desirable to document *all* exceptions that a function
/// could possibly raise, even those which are not explicitly raised using
/// `raise` statements in the function body.
///
/// ## Options
///
/// - `lint.pydoclint.ignore-one-line-docstrings`
/// - `lint.pydocstyle.convention`
#[derive(ViolationMetadata)]
#[violation_metadata(preview_since = "0.5.5")]
pub(crate) struct DocstringExtraneousException {

View File

@@ -38,6 +38,10 @@ use crate::{Edit, Fix, FixAvailability, Violation};
/// foobar.__doc__ # "Docstring for foo\bar."
/// ```
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [Python documentation: String and Bytes literals](https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals)

View File

@@ -34,6 +34,10 @@ use crate::{Edit, Fix, FixAvailability, Violation};
/// """
/// ```
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [NumPy Style Guide](https://numpydoc.readthedocs.io/en/latest/format.html)

View File

@@ -33,6 +33,10 @@ use crate::{Edit, Fix, FixAvailability, Violation};
/// """Return the mean of the given values."""
/// ```
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [NumPy Style Guide](https://numpydoc.readthedocs.io/en/latest/format.html)
@@ -80,6 +84,10 @@ impl Violation for BlankLineBeforeFunction {
/// return sum(values) / len(values)
/// ```
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [NumPy Style Guide](https://numpydoc.readthedocs.io/en/latest/format.html)

View File

@@ -25,6 +25,10 @@ use crate::{AlwaysFixableViolation, Edit, Fix};
/// """Return the mean of the given values."""
/// ```
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [NumPy Style Guide](https://numpydoc.readthedocs.io/en/latest/format.html)

View File

@@ -63,6 +63,10 @@ use crate::docstrings::Docstring;
/// factorial.__doc__ # "Return the factorial of n."
/// ```
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [Python documentation: `typing.overload`](https://docs.python.org/3/library/typing.html#typing.overload)

View File

@@ -44,6 +44,10 @@ use crate::{Edit, Fix};
/// The rule is also incompatible with the [formatter] when using
/// `format.indent-style="tab"`.
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [NumPy Style Guide](https://numpydoc.readthedocs.io/en/latest/format.html)
@@ -93,6 +97,10 @@ impl Violation for DocstringTabIndentation {
/// We recommend against using this rule alongside the [formatter]. The
/// formatter enforces consistent indentation, making the rule redundant.
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [NumPy Style Guide](https://numpydoc.readthedocs.io/en/latest/format.html)
@@ -146,6 +154,10 @@ impl AlwaysFixableViolation for UnderIndentation {
/// We recommend against using this rule alongside the [formatter]. The
/// formatter enforces consistent indentation, making the rule redundant.
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [NumPy Style Guide](https://numpydoc.readthedocs.io/en/latest/format.html)

View File

@@ -37,6 +37,10 @@ use crate::{AlwaysFixableViolation, Edit, Fix};
/// """
/// ```
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [NumPy Style Guide](https://numpydoc.readthedocs.io/en/latest/format.html)

View File

@@ -27,6 +27,10 @@ use crate::rules::pydocstyle::helpers::ends_with_backslash;
/// """Return the factorial of n."""
/// ```
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [NumPy Style Guide](https://numpydoc.readthedocs.io/en/latest/format.html)

View File

@@ -24,6 +24,10 @@ use crate::docstrings::Docstring;
/// """Return the mean of the given values."""
/// ```
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [NumPy Style Guide](https://numpydoc.readthedocs.io/en/latest/format.html)

View File

@@ -55,6 +55,10 @@ use crate::checkers::ast::Checker;
/// ## Notebook behavior
/// This rule is ignored for Jupyter Notebooks.
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [PEP 287 reStructuredText Docstring Format](https://peps.python.org/pep-0287/)
@@ -139,6 +143,10 @@ impl Violation for UndocumentedPublicModule {
/// self.points += points
/// ```
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [PEP 287 reStructuredText Docstring Format](https://peps.python.org/pep-0287/)
@@ -366,6 +374,10 @@ impl Violation for UndocumentedPublicFunction {
/// __all__ = ["player", "game"]
/// ```
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [PEP 287 reStructuredText Docstring Format](https://peps.python.org/pep-0287/)
@@ -480,6 +492,10 @@ impl Violation for UndocumentedMagicMethod {
/// bar.__doc__ # "Class Bar."
/// ```
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [PEP 287 reStructuredText Docstring Format](https://peps.python.org/pep-0287/)

View File

@@ -32,6 +32,10 @@ use crate::{Edit, Fix, FixAvailability, Violation};
/// documentation generators, or custom introspection utilities that rely on
/// specific docstring formatting.
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
///

View File

@@ -1066,6 +1066,10 @@ impl AlwaysFixableViolation for MissingBlankLineAfterLastSection {
/// raise FasterThanLightError from exc
/// ```
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [PEP 287 reStructuredText Docstring Format](https://peps.python.org/pep-0287/)
@@ -1317,6 +1321,10 @@ impl Violation for UndocumentedParam {
/// raise FasterThanLightError from exc
/// ```
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [PEP 287 reStructuredText Docstring Format](https://peps.python.org/pep-0287/)

View File

@@ -31,6 +31,10 @@ use crate::{Edit, Fix, FixAvailability, Violation};
/// We recommend against using this rule alongside the [formatter]. The
/// formatter enforces consistent quotes, making the rule redundant.
///
/// ## Options
///
/// - `lint.pydocstyle.ignore-decorators`
///
/// ## References
/// - [PEP 257 Docstring Conventions](https://peps.python.org/pep-0257/)
/// - [NumPy Style Guide](https://numpydoc.readthedocs.io/en/latest/format.html)

View File

@@ -389,7 +389,7 @@ pub(crate) fn unused_import(checker: &Checker, scope: &Scope) {
}
}
- let in_init = checker.path().ends_with("__init__.py");
+ let in_init = checker.in_init_module();
let fix_init = !checker.settings().ignore_init_module_imports;
let preview_mode = is_dunder_init_fix_unused_import_enabled(checker.settings());
let dunder_all_exprs = find_dunder_all_exprs(checker.semantic());

View File

@@ -1,16 +1,17 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::{
- self as ast, Expr, ExprContext,
+ self as ast, Expr, ExprContext, StmtFor,
token::parenthesized_range,
visitor::{self, Visitor},
};
use ruff_python_semantic::SemanticModel;
use ruff_python_semantic::analyze::type_inference::{PythonType, ResolvedPythonType};
use ruff_python_semantic::analyze::typing::is_dict;
- use ruff_text_size::Ranged;
+ use ruff_text_size::{Ranged, TextRange};
use crate::Violation;
- use crate::checkers::ast::Checker;
+ use crate::checkers::ast::{Checker, DiagnosticGuard};
/// ## What it does
/// Checks for dictionary iterations that extract the dictionary value
@@ -47,20 +48,26 @@ use crate::checkers::ast::Checker;
/// ```
#[derive(ViolationMetadata)]
#[violation_metadata(stable_since = "0.8.0")]
- pub(crate) struct DictIndexMissingItems;
+ pub(crate) struct DictIndexMissingItems<'a> {
+ key: &'a str,
+ dict: &'a str,
+ }

- impl Violation for DictIndexMissingItems {
+ impl Violation for DictIndexMissingItems<'_> {
#[derive_message_formats]
fn message(&self) -> String {
"Extracting value from dictionary without calling `.items()`".to_string()
}
+ fn fix_title(&self) -> Option<String> {
+ let Self { key, dict } = self;
+ Some(format!("Use `for {key}, value in {dict}.items()` instead"))
+ }
}
/// PLC0206
- pub(crate) fn dict_index_missing_items(checker: &Checker, stmt_for: &ast::StmtFor) {
- let ast::StmtFor {
- target, iter, body, ..
- } = stmt_for;
+ pub(crate) fn dict_index_missing_items(checker: &Checker, stmt_for: &StmtFor) {
+ let StmtFor { iter, body, .. } = stmt_for;
// Extract the name of the iteration object (e.g., `obj` in `for key in obj:`).
let Some(dict_name) = extract_dict_name(iter) else {
@@ -77,40 +84,46 @@ pub(crate) fn dict_index_missing_items(checker: &Checker, stmt_for: &ast::StmtFo
return;
}
- let has_violation = {
- let mut visitor = SubscriptVisitor::new(target, dict_name);
- for stmt in body {
- visitor.visit_stmt(stmt);
- }
- visitor.has_violation
- };
- if has_violation {
- checker.report_diagnostic(DictIndexMissingItems, stmt_for.range());
- }
+ SubscriptVisitor::new(stmt_for, dict_name, checker).visit_body(body);
}
/// A visitor to detect subscript operations on a target dictionary.
- struct SubscriptVisitor<'a> {
+ struct SubscriptVisitor<'a, 'b> {
/// The target of the for loop (e.g., `key` in `for key in obj:`).
target: &'a Expr,
/// The name of the iterated object (e.g., `obj` in `for key in obj:`).
dict_name: &'a ast::ExprName,
- /// Whether a violation has been detected.
- has_violation: bool,
+ /// The range to use for the primary diagnostic.
+ range: TextRange,
+ /// The [`Checker`] used to emit diagnostics.
+ checker: &'a Checker<'b>,
+ /// The [`DiagnosticGuard`] used to attach additional annotations for each subscript.
+ ///
+ /// The guard is initially `None` and then set to `Some` when the first subscript is found.
+ guard: Option<DiagnosticGuard<'a, 'b>>,
}
- impl<'a> SubscriptVisitor<'a> {
- fn new(target: &'a Expr, dict_name: &'a ast::ExprName) -> Self {
+ impl<'a, 'b> SubscriptVisitor<'a, 'b> {
+ fn new(stmt_for: &'a StmtFor, dict_name: &'a ast::ExprName, checker: &'a Checker<'b>) -> Self {
+ let StmtFor { target, iter, .. } = stmt_for;
+ let range = {
+ let target_start =
+ parenthesized_range(target.into(), stmt_for.into(), checker.tokens())
+ .map_or(target.start(), TextRange::start);
+ TextRange::new(target_start, iter.end())
+ };
Self {
target,
dict_name,
- has_violation: false,
+ range,
+ checker,
+ guard: None,
}
}
}
- impl<'a> Visitor<'a> for SubscriptVisitor<'a> {
+ impl<'a> Visitor<'a> for SubscriptVisitor<'a, '_> {
fn visit_expr(&mut self, expr: &'a Expr) {
// Given `obj[key]`, `value` must be `obj` and `slice` must be `key`.
if let Expr::Subscript(ast::ExprSubscript {
@@ -134,7 +147,17 @@ impl<'a> Visitor<'a> for SubscriptVisitor<'a> {
return;
}
- self.has_violation = true;
+ let guard = self.guard.get_or_insert_with(|| {
+ self.checker.report_diagnostic(
+ DictIndexMissingItems {
+ key: self.checker.locator().slice(self.target),
+ dict: self.checker.locator().slice(self.dict_name),
+ },
+ self.range,
+ )
+ });
+ guard.secondary_annotation("", expr);
} else {
visitor::walk_expr(self, expr);
}
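In Python terms, the pattern this rule (PLC0206) targets and the rewrite suggested by the new `fix_title` look roughly like this (`ORCHESTRA` mirrors the test fixture used in the snapshots):

```python
ORCHESTRA = {"violin": "strings", "oboe": "woodwind"}

# Flagged by PLC0206: iterating over the keys and re-indexing for each value.
flagged = []
for instrument in ORCHESTRA:
    flagged.append(ORCHESTRA[instrument])

# Suggested fix: `for instrument, value in ORCHESTRA.items()`.
fixed = [value for instrument, value in ORCHESTRA.items()]

assert flagged == fixed == ["strings", "woodwind"]
```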

View File

@@ -146,11 +146,15 @@ pub(crate) fn no_self_use(checker: &Checker, scope_id: ScopeId, scope: &Scope) {
.map(|binding_id| semantic.binding(binding_id))
.is_some_and(|binding| binding.kind.is_argument() && binding.is_unused())
{
- checker.report_diagnostic(
+ let mut diagnostic = checker.report_diagnostic(
NoSelfUse {
method_name: name.to_string(),
},
func.identifier(),
);
+ diagnostic.help(
+ "Consider adding `@typing.override` if this method overrides \
+ a method from a superclass",
+ );
}
}
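The new `help:` text points at `typing.override`, which marks a method as an intentional override of a superclass method. A sketch of the silencing pattern — `typing.override` is available from Python 3.12, so a no-op fallback is included here for older interpreters:

```python
try:
    from typing import override  # Python 3.12+
except ImportError:
    # Fallback for older interpreters: a no-op decorator with the same shape.
    def override(func):
        return func

class Base:
    def developer_greeting(self) -> str:
        return "Greetings!"

class Quiet(Base):
    @override  # documents that `self` is required by the inherited signature
    def developer_greeting(self) -> str:
        return "hi"

assert Quiet().developer_greeting() == "hi"
```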

View File

@@ -140,6 +140,18 @@ pub(crate) fn repeated_equality_comparison(checker: &Checker, bool_op: &ast::Exp
continue;
}
+ if let Some((&first, rest)) = comparators.split_first() {
+ let first_comparable = ComparableExpr::from(first);
+ if rest
+ .iter()
+ .all(|&c| ComparableExpr::from(c) == first_comparable)
+ {
+ // Do not flag if all members are identical
+ continue;
+ }
+ }
// if we can determine that all the values are hashable, we can use a set
// TODO: improve with type inference
let all_hashable = comparators
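From the Python side, `PLR1714` merges repeated `==` comparisons into a membership test, and the guard added in this hunk skips comparisons whose members are all identical (helper names below are illustrative):

```python
def any_match(foo):
    # Flagged by PLR1714: repeated equality comparisons against distinct values.
    return foo == "bar" or foo == "baz"

def any_match_merged(foo):
    # The suggested merge into a set membership test.
    return foo in {"bar", "baz"}

# The two forms agree for matching and non-matching candidates alike.
for candidate in ("bar", "baz", "buzz"):
    assert any_match(candidate) == any_match_merged(candidate)

# `foo == "bar" or foo == "bar"` is no longer flagged: every comparator is
# identical, so a merged `foo in {"bar"}` would not simplify anything.
```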

View File

@@ -1,6 +1,7 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast as ast;
use ruff_python_ast::identifier::Identifier;
+ use ruff_python_semantic::analyze::function_type::is_subject_to_liskov_substitution_principle;
use ruff_python_semantic::analyze::{function_type, visibility};
use crate::Violation;
@@ -121,11 +122,24 @@ pub(crate) fn too_many_arguments(checker: &Checker, function_def: &ast::StmtFunc
return;
}
- checker.report_diagnostic(
+ let mut diagnostic = checker.report_diagnostic(
TooManyArguments {
c_args: num_arguments,
max_args: checker.settings().pylint.max_args,
},
function_def.identifier(),
);
+ if is_subject_to_liskov_substitution_principle(
+ &function_def.name,
+ &function_def.decorator_list,
+ semantic.current_scope(),
+ semantic,
+ &checker.settings().pep8_naming.classmethod_decorators,
+ &checker.settings().pep8_naming.staticmethod_decorators,
+ ) {
+ diagnostic.help(
+ "Consider adding `@typing.override` if changing the function signature \
+ would violate the Liskov Substitution Principle",
+ );
+ }
}

View File

@@ -1,5 +1,6 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::{self as ast, identifier::Identifier};
+ use ruff_python_semantic::analyze::function_type::is_subject_to_liskov_substitution_principle;
use ruff_python_semantic::analyze::{function_type, visibility};
use crate::Violation;
@@ -125,11 +126,24 @@ pub(crate) fn too_many_positional_arguments(
return;
}
- checker.report_diagnostic(
+ let mut diagnostic = checker.report_diagnostic(
TooManyPositionalArguments {
c_pos: num_positional_args,
max_pos: checker.settings().pylint.max_positional_args,
},
function_def.identifier(),
);
+ if is_subject_to_liskov_substitution_principle(
+ &function_def.name,
+ &function_def.decorator_list,
+ semantic.current_scope(),
+ semantic,
+ &checker.settings().pep8_naming.classmethod_decorators,
+ &checker.settings().pep8_naming.staticmethod_decorators,
+ ) {
+ diagnostic.help(
+ "Consider adding `@typing.override` if changing the function signature \
+ would violate the Liskov Substitution Principle",
+ );
+ }
}

View File

@@ -77,7 +77,7 @@ pub(crate) fn useless_import_alias(checker: &Checker, alias: &Alias) {
}
// A re-export in __init__.py is probably intentional.
- if checker.path().ends_with("__init__.py") {
+ if checker.in_init_module() {
return;
}
@@ -116,7 +116,7 @@ pub(crate) fn useless_import_from_alias(
}
// A re-export in __init__.py is probably intentional.
- if checker.path().ends_with("__init__.py") {
+ if checker.in_init_module() {
return;
}

View File

@@ -2,72 +2,107 @@
source: crates/ruff_linter/src/rules/pylint/mod.rs
---
PLC0206 Extracting value from dictionary without calling `.items()`
--> dict_index_missing_items.py:9:1
--> dict_index_missing_items.py:9:5
|
8 | # Errors
9 | / for instrument in ORCHESTRA:
10 | | print(f"{instrument}: {ORCHESTRA[instrument]}")
| |___________________________________________________^
8 | # Errors
9 | for instrument in ORCHESTRA:
| ^^^^^^^^^^^^^^^^^^^^^^^
10 | print(f"{instrument}: {ORCHESTRA[instrument]}")
| ---------------------
11 |
12 | for instrument in ORCHESTRA:
12 | for instrument in ORCHESTRA:
|
help: Use `for instrument, value in ORCHESTRA.items()` instead
PLC0206 Extracting value from dictionary without calling `.items()`
--> dict_index_missing_items.py:12:1
--> dict_index_missing_items.py:12:5
|
10 | print(f"{instrument}: {ORCHESTRA[instrument]}")
10 | print(f"{instrument}: {ORCHESTRA[instrument]}")
11 |
12 | / for instrument in ORCHESTRA:
13 | | ORCHESTRA[instrument]
| |_________________________^
12 | for instrument in ORCHESTRA:
| ^^^^^^^^^^^^^^^^^^^^^^^
13 | ORCHESTRA[instrument]
| ---------------------
14 |
15 | for instrument in ORCHESTRA.keys():
15 | for instrument in ORCHESTRA.keys():
|
help: Use `for instrument, value in ORCHESTRA.items()` instead
PLC0206 Extracting value from dictionary without calling `.items()`
--> dict_index_missing_items.py:15:1
--> dict_index_missing_items.py:15:5
|
13 | ORCHESTRA[instrument]
13 | ORCHESTRA[instrument]
14 |
15 | / for instrument in ORCHESTRA.keys():
16 | | print(f"{instrument}: {ORCHESTRA[instrument]}")
| |___________________________________________________^
15 | for instrument in ORCHESTRA.keys():
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
16 | print(f"{instrument}: {ORCHESTRA[instrument]}")
| ---------------------
17 |
18 | for instrument in ORCHESTRA.keys():
18 | for instrument in ORCHESTRA.keys():
|
help: Use `for instrument, value in ORCHESTRA.items()` instead
PLC0206 Extracting value from dictionary without calling `.items()`
--> dict_index_missing_items.py:18:1
--> dict_index_missing_items.py:18:5
|
16 | print(f"{instrument}: {ORCHESTRA[instrument]}")
16 | print(f"{instrument}: {ORCHESTRA[instrument]}")
17 |
18 | / for instrument in ORCHESTRA.keys():
19 | | ORCHESTRA[instrument]
| |_________________________^
18 | for instrument in ORCHESTRA.keys():
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
19 | ORCHESTRA[instrument]
| ---------------------
20 |
21 | for instrument in (temp_orchestra := {"violin": "strings", "oboe": "woodwind"}):
21 | for instrument in (temp_orchestra := {"violin": "strings", "oboe": "woodwind"}):
|
help: Use `for instrument, value in ORCHESTRA.items()` instead
PLC0206 Extracting value from dictionary without calling `.items()`
--> dict_index_missing_items.py:21:1
--> dict_index_missing_items.py:21:5
|
19 | ORCHESTRA[instrument]
19 | ORCHESTRA[instrument]
20 |
21 | / for instrument in (temp_orchestra := {"violin": "strings", "oboe": "woodwind"}):
22 | | print(f"{instrument}: {temp_orchestra[instrument]}")
| |________________________________________________________^
21 | for instrument in (temp_orchestra := {"violin": "strings", "oboe": "woodwind"}):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
22 | print(f"{instrument}: {temp_orchestra[instrument]}")
| --------------------------
23 |
24 | for instrument in (temp_orchestra := {"violin": "strings", "oboe": "woodwind"}):
24 | for instrument in (temp_orchestra := {"violin": "strings", "oboe": "woodwind"}):
|
help: Use `for instrument, value in temp_orchestra.items()` instead
PLC0206 Extracting value from dictionary without calling `.items()`
--> dict_index_missing_items.py:24:1
--> dict_index_missing_items.py:24:5
|
22 | print(f"{instrument}: {temp_orchestra[instrument]}")
22 | print(f"{instrument}: {temp_orchestra[instrument]}")
23 |
24 | / for instrument in (temp_orchestra := {"violin": "strings", "oboe": "woodwind"}):
25 | | temp_orchestra[instrument]
| |______________________________^
24 | for instrument in (temp_orchestra := {"violin": "strings", "oboe": "woodwind"}):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
25 | temp_orchestra[instrument]
| --------------------------
26 |
27 | # # OK
27 | # # OK
|
help: Use `for instrument, value in temp_orchestra.items()` instead
PLC0206 Extracting value from dictionary without calling `.items()`
--> dict_index_missing_items.py:59:5
|
58 | # A case with multiple uses of the value to show off the secondary annotations
59 | for instrument in ORCHESTRA:
| ^^^^^^^^^^^^^^^^^^^^^^^
60 | data = json.dumps(
61 | {
62 | "instrument": instrument,
63 | "section": ORCHESTRA[instrument],
| ---------------------
64 | }
65 | )
66 |
67 | print(f"saving data for {instrument} in {ORCHESTRA[instrument]}")
| ---------------------
68 |
69 | with open(f"{instrument}/{ORCHESTRA[instrument]}.txt", "w") as f:
| ---------------------
70 | f.write(data)
|
help: Use `for instrument, value in ORCHESTRA.items()` instead

View File

@@ -49,6 +49,7 @@ PLR0913 Too many arguments in function definition (8 > 5)
| ^
52 | pass
|
help: Consider adding `@typing.override` if changing the function signature would violate the Liskov Substitution Principle
PLR0913 Too many arguments in function definition (8 > 5)
--> too_many_arguments.py:58:9
@@ -58,6 +59,7 @@ PLR0913 Too many arguments in function definition (8 > 5)
| ^
59 | pass
|
help: Consider adding `@typing.override` if changing the function signature would violate the Liskov Substitution Principle
PLR0913 Too many arguments in function definition (8 > 5)
--> too_many_arguments.py:66:9
@@ -67,6 +69,7 @@ PLR0913 Too many arguments in function definition (8 > 5)
| ^
67 | pass
|
help: Consider adding `@typing.override` if changing the function signature would violate the Liskov Substitution Principle
PLR0913 Too many arguments in function definition (6 > 5)
--> too_many_arguments.py:70:9
@@ -76,3 +79,4 @@ PLR0913 Too many arguments in function definition (6 > 5)
| ^
71 | pass
|
help: Consider adding `@typing.override` if changing the function signature would violate the Liskov Substitution Principle

View File

@@ -34,6 +34,7 @@ PLR0917 Too many positional arguments (6/5)
| ^
44 | pass
|
help: Consider adding `@typing.override` if changing the function signature would violate the Liskov Substitution Principle
PLR0917 Too many positional arguments (6/5)
--> too_many_positional_arguments.py:47:9
@@ -43,3 +44,4 @@ PLR0917 Too many positional arguments (6/5)
| ^
48 | pass
|
help: Consider adding `@typing.override` if changing the function signature would violate the Liskov Substitution Principle

View File

@@ -436,6 +436,7 @@ help: Merge multiple comparisons
73 + foo in {False, 0} # Different types, same hashed value
74 |
75 | foo == 0.0 or foo == 0j # Different types, same hashed value
76 |
note: This is an unsafe fix and may change runtime behavior
PLR1714 [*] Consider merging multiple comparisons: `foo in {0.0, 0j}`.
@@ -445,6 +446,8 @@ PLR1714 [*] Consider merging multiple comparisons: `foo in {0.0, 0j}`.
74 |
75 | foo == 0.0 or foo == 0j # Different types, same hashed value
| ^^^^^^^^^^^^^^^^^^^^^^^
76 |
77 | foo == "bar" or foo == "bar" # All members identical
|
help: Merge multiple comparisons
72 |
@@ -452,4 +455,23 @@ help: Merge multiple comparisons
74 |
- foo == 0.0 or foo == 0j # Different types, same hashed value
75 + foo in {0.0, 0j} # Different types, same hashed value
76 |
77 | foo == "bar" or foo == "bar" # All members identical
78 |
note: This is an unsafe fix and may change runtime behavior
PLR1714 [*] Consider merging multiple comparisons: `foo in {"bar", "bar", "buzz"}`.
--> repeated_equality_comparison.py:79:1
|
77 | foo == "bar" or foo == "bar" # All members identical
78 |
79 | foo == "bar" or foo == "bar" or foo == "buzz" # All but one members identical
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: Merge multiple comparisons
76 |
77 | foo == "bar" or foo == "bar" # All members identical
78 |
- foo == "bar" or foo == "bar" or foo == "buzz" # All but one members identical
79 + foo in {"bar", "bar", "buzz"} # All but one members identical
note: This is an unsafe fix and may change runtime behavior

View File

@@ -9,6 +9,7 @@ PLR6301 Method `developer_greeting` could be a function, class method, or static
| ^^^^^^^^^^^^^^^^^^
8 | print(f"Greetings {name}!")
|
help: Consider adding `@typing.override` if this method overrides a method from a superclass
PLR6301 Method `greeting_1` could be a function, class method, or static method
--> no_self_use.py:10:9
@@ -19,6 +20,7 @@ PLR6301 Method `greeting_1` could be a function, class method, or static method
| ^^^^^^^^^^
11 | print("Hello!")
|
help: Consider adding `@typing.override` if this method overrides a method from a superclass
PLR6301 Method `greeting_2` could be a function, class method, or static method
--> no_self_use.py:13:9
@@ -29,6 +31,7 @@ PLR6301 Method `greeting_2` could be a function, class method, or static method
| ^^^^^^^^^^
14 | print("Hi!")
|
help: Consider adding `@typing.override` if this method overrides a method from a superclass
PLR6301 Method `validate_y` could be a function, class method, or static method
--> no_self_use.py:103:9
@@ -39,6 +42,7 @@ PLR6301 Method `validate_y` could be a function, class method, or static method
104 | if value <= 0:
105 | raise ValueError("y must be a positive integer")
|
help: Consider adding `@typing.override` if this method overrides a method from a superclass
PLR6301 Method `non_simple_assignment` could be a function, class method, or static method
--> no_self_use.py:128:9
@@ -50,6 +54,7 @@ PLR6301 Method `non_simple_assignment` could be a function, class method, or sta
129 | msg = foo = ""
130 | raise NotImplementedError(msg)
|
help: Consider adding `@typing.override` if this method overrides a method from a superclass
PLR6301 Method `non_simple_assignment_2` could be a function, class method, or static method
--> no_self_use.py:132:9
@@ -61,6 +66,7 @@ PLR6301 Method `non_simple_assignment_2` could be a function, class method, or s
133 | msg[0] = ""
134 | raise NotImplementedError(msg)
|
help: Consider adding `@typing.override` if this method overrides a method from a superclass
PLR6301 Method `unused_message` could be a function, class method, or static method
--> no_self_use.py:136:9
@@ -72,6 +78,7 @@ PLR6301 Method `unused_message` could be a function, class method, or static met
137 | msg = ""
138 | raise NotImplementedError("")
|
help: Consider adding `@typing.override` if this method overrides a method from a superclass
PLR6301 Method `unused_message_2` could be a function, class method, or static method
--> no_self_use.py:140:9
@@ -83,6 +90,7 @@ PLR6301 Method `unused_message_2` could be a function, class method, or static m
141 | msg = ""
142 | raise NotImplementedError(x)
|
help: Consider adding `@typing.override` if this method overrides a method from a superclass
PLR6301 Method `developer_greeting` could be a function, class method, or static method
--> no_self_use.py:145:9
@@ -92,6 +100,7 @@ PLR6301 Method `developer_greeting` could be a function, class method, or static
| ^^^^^^^^^^^^^^^^^^
146 | print(t"Greetings {name}!")
|
help: Consider adding `@typing.override` if this method overrides a method from a superclass
PLR6301 Method `tstring` could be a function, class method, or static method
--> no_self_use.py:151:9
@@ -103,3 +112,4 @@ PLR6301 Method `tstring` could be a function, class method, or static method
152 | msg = t"{x}"
153 | raise NotImplementedError(msg)
|
help: Consider adding `@typing.override` if this method overrides a method from a superclass

View File

@@ -34,6 +34,10 @@ use crate::{Edit, Fix, FixAvailability, Violation};
/// different values when introspecting types at runtime. However, in most cases,
/// the fix should be safe to apply.
///
/// ## Options
///
/// - `target-version`
///
/// [PEP 646]: https://peps.python.org/pep-0646/
#[derive(ViolationMetadata)]
#[violation_metadata(stable_since = "0.10.0")]

View File

@@ -78,6 +78,10 @@ use super::{
/// This rule only applies to generic classes and does not include generic functions. See
/// [`non-pep695-generic-function`][UP047] for the function version.
///
/// ## Options
///
/// - `target-version`
///
/// [PEP 695]: https://peps.python.org/pep-0695/
/// [PEP 696]: https://peps.python.org/pep-0696/
/// [PYI018]: https://docs.astral.sh/ruff/rules/unused-private-type-var/

View File

@@ -71,6 +71,10 @@ use super::{DisplayTypeVars, TypeVarReferenceVisitor, check_type_vars, in_nested
/// This rule only applies to generic functions and does not include generic classes. See
/// [`non-pep695-generic-class`][UP046] for the class version.
///
/// ## Options
///
/// - `target-version`
///
/// [PEP 695]: https://peps.python.org/pep-0695/
/// [PEP 696]: https://peps.python.org/pep-0696/
/// [PYI018]: https://docs.astral.sh/ruff/rules/unused-private-type-var/

View File

@@ -78,6 +78,10 @@ use super::{
/// new type parameters are restricted in scope to their associated aliases. See
/// [`private-type-parameter`][UP049] for a rule to update these names.
///
/// ## Options
///
/// - `target-version`
///
/// [PEP 695]: https://peps.python.org/pep-0695/
/// [PYI018]: https://docs.astral.sh/ruff/rules/unused-private-type-var/
/// [UP046]: https://docs.astral.sh/ruff/rules/non-pep695-generic-class/

View File

@@ -72,6 +72,10 @@ use crate::{Edit, Fix, FixAvailability, Violation};
/// As such, migrating to `enum.StrEnum` will introduce a behavior change for
/// code that relies on the Python 3.11 behavior.
///
/// ## Options
///
/// - `target-version`
///
/// ## References
/// - [enum.StrEnum](https://docs.python.org/3/library/enum.html#enum.StrEnum)
///

View File

@@ -119,6 +119,8 @@ mod tests {
#[test_case(Rule::RedirectedNOQA, Path::new("RUF101_0.py"))]
#[test_case(Rule::RedirectedNOQA, Path::new("RUF101_1.py"))]
#[test_case(Rule::InvalidRuleCode, Path::new("RUF102.py"))]
#[test_case(Rule::NonEmptyInitModule, Path::new("RUF067/modules/__init__.py"))]
#[test_case(Rule::NonEmptyInitModule, Path::new("RUF067/modules/okay.py"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.noqa_code(), path.to_string_lossy());
let diagnostics = test_path(
@@ -136,6 +138,7 @@ mod tests {
&LinterSettings {
ruff: super::settings::Settings {
parenthesize_tuple_in_subscript: true,
..super::settings::Settings::default()
},
..LinterSettings::for_rule(Rule::IncorrectlyParenthesizedTupleInSubscript)
},
@@ -151,6 +154,7 @@ mod tests {
&LinterSettings {
ruff: super::settings::Settings {
parenthesize_tuple_in_subscript: false,
..super::settings::Settings::default()
},
unresolved_target_version: PythonVersion::PY310.into(),
..LinterSettings::for_rule(Rule::IncorrectlyParenthesizedTupleInSubscript)
@@ -714,4 +718,26 @@ mod tests {
assert_diagnostics!(snapshot, diagnostics);
Ok(())
}
#[test]
fn strictly_empty_init_modules_ruf067() -> Result<()> {
assert_diagnostics_diff!(
Path::new("ruff/RUF067/modules/__init__.py"),
&LinterSettings {
ruff: super::settings::Settings {
strictly_empty_init_modules: false,
..super::settings::Settings::default()
},
..LinterSettings::for_rule(Rule::NonEmptyInitModule)
},
&LinterSettings {
ruff: super::settings::Settings {
strictly_empty_init_modules: true,
..super::settings::Settings::default()
},
..LinterSettings::for_rule(Rule::NonEmptyInitModule)
},
);
Ok(())
}
}

View File

@@ -32,6 +32,7 @@ pub(crate) use mutable_dataclass_default::*;
pub(crate) use mutable_fromkeys_value::*;
pub(crate) use needless_else::*;
pub(crate) use never_union::*;
pub(crate) use non_empty_init_module::*;
pub(crate) use non_octal_permissions::*;
pub(crate) use none_not_at_end_of_union::*;
pub(crate) use parenthesize_chained_operators::*;
@@ -99,6 +100,7 @@ mod mutable_dataclass_default;
mod mutable_fromkeys_value;
mod needless_else;
mod never_union;
mod non_empty_init_module;
mod non_octal_permissions;
mod none_not_at_end_of_union;
mod parenthesize_chained_operators;

View File

@@ -0,0 +1,259 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::{self as ast, Expr, Stmt};
use ruff_python_semantic::analyze::typing::is_type_checking_block;
use ruff_text_size::Ranged;
use crate::{Violation, checkers::ast::Checker};
/// ## What it does
///
/// Detects the presence of code in `__init__.py` files.
///
/// ## Why is this bad?
///
/// `__init__.py` files are often empty or only contain simple code to modify a module's API. As
/// such, it's easy to overlook them and their possible side effects when debugging.
///
/// ## Example
///
/// Instead of defining `MyClass` directly in `__init__.py`:
///
/// ```python
/// """My module docstring."""
///
///
/// class MyClass:
/// def my_method(self): ...
/// ```
///
/// move the definition to another file, import it, and include it in `__all__`:
///
/// ```python
/// """My module docstring."""
///
/// from submodule import MyClass
///
/// __all__ = ["MyClass"]
/// ```
///
/// Code in `__init__.py` files is also run at import time and can cause surprising slowdowns. To
/// disallow any code in `__init__.py` files, you can enable the
/// [`lint.ruff.strictly-empty-init-modules`] setting. In this case:
///
/// ```python
/// from submodule import MyClass
///
/// __all__ = ["MyClass"]
/// ```
///
/// the only fix is to empty the file entirely:
///
/// ```python
/// ```
///
/// ## Details
///
/// In non-strict mode, this rule allows several common patterns in `__init__.py` files:
///
/// - Imports
/// - Assignments to `__all__`, `__path__`, `__version__`, and `__author__`
/// - Module-level and attribute docstrings
/// - `if TYPE_CHECKING` blocks
/// - [PEP-562] module-level `__getattr__` and `__dir__` functions
///
/// ## Options
///
/// - [`lint.ruff.strictly-empty-init-modules`]
///
/// ## References
///
/// - [`flake8-empty-init-modules`](https://github.com/samueljsb/flake8-empty-init-modules/)
///
/// [PEP-562]: https://peps.python.org/pep-0562/
#[derive(ViolationMetadata)]
#[violation_metadata(preview_since = "0.14.11")]
pub(crate) struct NonEmptyInitModule {
strictly_empty_init_modules: bool,
}
impl Violation for NonEmptyInitModule {
#[derive_message_formats]
fn message(&self) -> String {
if self.strictly_empty_init_modules {
"`__init__` module should not contain any code".to_string()
} else {
"`__init__` module should only contain docstrings and re-exports".to_string()
}
}
}
/// RUF067
pub(crate) fn non_empty_init_module(checker: &Checker, stmt: &Stmt) {
if !checker.in_init_module() {
return;
}
let semantic = checker.semantic();
// Only flag top-level statements
if !semantic.at_top_level() {
return;
}
let strictly_empty_init_modules = checker.settings().ruff.strictly_empty_init_modules;
if !strictly_empty_init_modules {
// Even though module-level attributes are disallowed, we still allow attribute docstrings
// to avoid needing two `noqa` comments in a case like:
//
// ```py
// MY_CONSTANT = 1 # noqa: RUF067
// "A very important constant"
// ```
if semantic.in_pep_257_docstring() || semantic.in_attribute_docstring() {
return;
}
match stmt {
// Allow imports
Stmt::Import(_) | Stmt::ImportFrom(_) => return,
// Allow PEP-562 module `__getattr__` and `__dir__`
Stmt::FunctionDef(func) if matches!(&*func.name, "__getattr__" | "__dir__") => return,
// Allow `TYPE_CHECKING` blocks
Stmt::If(stmt_if) if is_type_checking_block(stmt_if, semantic) => return,
_ => {}
}
if let Some(assignment) = Assignment::from_stmt(stmt) {
// Allow assignments to `__all__`.
//
// TODO(brent) should we allow additional cases here? Beyond simple assignments, you could
// also append or extend `__all__`.
//
// This is actually going slightly beyond the upstream rule already, which only checks for
// `Stmt::Assign`.
if assignment.is_assignment_to("__all__") {
return;
}
// Allow legacy namespace packages with assignments like:
//
// ```py
// __path__ = __import__('pkgutil').extend_path(__path__, __name__)
// ```
if assignment.is_assignment_to("__path__") && assignment.is_pkgutil_extend_path() {
return;
}
// Allow assignments to `__version__`.
if assignment.is_assignment_to("__version__") {
return;
}
// Allow assignments to `__author__`.
if assignment.is_assignment_to("__author__") {
return;
}
}
}
checker.report_diagnostic(
NonEmptyInitModule {
strictly_empty_init_modules,
},
stmt.range(),
);
}
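For reference, the non-strict allow-list implemented above can be approximated in pure Python with the stdlib `ast` module. This is a rough sketch, not the actual Ruff implementation: it only handles plain assignments (not annotated or augmented ones) and omits the `pkgutil.extend_path` and `TYPE_CHECKING` allowances; the function name is made up for illustration.

```python
import ast

# Dunder names whose simple assignments the non-strict mode allows.
ALLOWED_DUNDERS = {"__all__", "__path__", "__version__", "__author__"}


def flagged_statements(source: str) -> list[int]:
    """Return line numbers of top-level statements that non-strict
    RUF067 would flag (simplified sketch)."""
    flagged = []
    for stmt in ast.parse(source).body:
        # Allow imports.
        if isinstance(stmt, (ast.Import, ast.ImportFrom)):
            continue
        # Allow PEP-562 module-level `__getattr__` and `__dir__`.
        if isinstance(stmt, ast.FunctionDef) and stmt.name in ("__getattr__", "__dir__"):
            continue
        # Allow module-level and attribute docstrings (bare string expressions).
        if (
            isinstance(stmt, ast.Expr)
            and isinstance(stmt.value, ast.Constant)
            and isinstance(stmt.value.value, str)
        ):
            continue
        # Allow assignments whose targets are all allowed dunder names.
        if isinstance(stmt, ast.Assign) and all(
            isinstance(t, ast.Name) and t.id in ALLOWED_DUNDERS for t in stmt.targets
        ):
            continue
        flagged.append(stmt.lineno)
    return flagged
```

For example, `flagged_statements('"""doc."""\nimport os\n__all__ = ["x"]\nX = 1\n')` flags only the `X = 1` on line 4.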
/// Any assignment statement, including plain assignment, annotated assignments, and augmented
/// assignments.
struct Assignment<'a> {
targets: &'a [Expr],
value: Option<&'a Expr>,
}
impl<'a> Assignment<'a> {
fn from_stmt(stmt: &'a Stmt) -> Option<Self> {
let (targets, value) = match stmt {
Stmt::Assign(ast::StmtAssign { targets, value, .. }) => {
(targets.as_slice(), Some(&**value))
}
Stmt::AnnAssign(ast::StmtAnnAssign { target, value, .. }) => {
(std::slice::from_ref(&**target), value.as_deref())
}
Stmt::AugAssign(ast::StmtAugAssign { target, value, .. }) => {
(std::slice::from_ref(&**target), Some(&**value))
}
_ => return None,
};
Some(Self { targets, value })
}
/// Returns whether all of the assignment targets match `name`.
///
/// For example, both of the following would be allowed for a `name` of `__all__`:
///
/// ```py
/// __all__ = ["foo"]
/// __all__ = __all__ = ["foo"]
/// ```
///
/// but not:
///
/// ```py
/// __all__ = another_list = ["foo"]
/// ```
fn is_assignment_to(&self, name: &str) -> bool {
self.targets
.iter()
.all(|target| target.as_name_expr().is_some_and(|expr| expr.id == name))
}
/// Returns `true` if the value being assigned is a call to `pkgutil.extend_path`.
///
/// For example, both of the following would return true:
///
/// ```py
/// __path__ = __import__('pkgutil').extend_path(__path__, __name__)
/// __path__ = other.extend_path(__path__, __name__)
/// ```
///
/// We're intentionally a bit less strict here, not requiring that the receiver of the
/// `extend_path` call is the typical `__import__('pkgutil')` or `pkgutil`.
fn is_pkgutil_extend_path(&self) -> bool {
let Some(Expr::Call(ast::ExprCall {
func: extend_func,
arguments: extend_arguments,
..
})) = self.value
else {
return false;
};
let Expr::Attribute(ast::ExprAttribute {
attr: maybe_extend_path,
..
}) = &**extend_func
else {
return false;
};
// Test that this is an `extend_path(__path__, __name__)` call
if maybe_extend_path != "extend_path" {
return false;
}
let Some(Expr::Name(path)) = extend_arguments.find_argument_value("path", 0) else {
return false;
};
let Some(Expr::Name(name)) = extend_arguments.find_argument_value("name", 1) else {
return false;
};
path.id() == "__path__" && name.id() == "__name__"
}
}
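The `extend_path` detection can likewise be sketched in Python with the `ast` module. Like the Rust helper, this deliberately does not check the receiver of the call; unlike it, this simplified version only accepts positional arguments (the Rust code also matches `path=`/`name=` keywords).

```python
import ast


def is_pkgutil_extend_path(value: ast.expr) -> bool:
    """True if `value` is a call like `<anything>.extend_path(__path__, __name__)`."""
    if not (isinstance(value, ast.Call) and isinstance(value.func, ast.Attribute)):
        return False
    # Test that this is an `extend_path(...)` call.
    if value.func.attr != "extend_path":
        return False
    args = value.args
    if len(args) != 2:
        return False
    # Both arguments must be the names `__path__` and `__name__`.
    return (
        isinstance(args[0], ast.Name)
        and args[0].id == "__path__"
        and isinstance(args[1], ast.Name)
        and args[1].id == "__name__"
    )
```

Both `__import__('pkgutil').extend_path(__path__, __name__)` and `pkgutil.extend_path(__path__, __name__)` are accepted, while `__path__ = 5` is not.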

View File

@@ -57,7 +57,7 @@ use crate::{Applicability, Edit, Fix, FixAvailability, Violation};
/// ## References
/// - [Typing documentation: Legal parameters for `Literal` at type check time](https://typing.python.org/en/latest/spec/literal.html#legal-parameters-for-literal-at-type-check-time)
///
/// [PEP 586](https://peps.python.org/pep-0586/)
/// [PEP 586]: https://peps.python.org/pep-0586/
#[derive(ViolationMetadata)]
#[violation_metadata(stable_since = "0.10.0")]
pub(crate) struct UnnecessaryNestedLiteral;

View File

@@ -7,6 +7,7 @@ use std::fmt;
#[derive(Debug, Clone, CacheKey, Default)]
pub struct Settings {
pub parenthesize_tuple_in_subscript: bool,
pub strictly_empty_init_modules: bool,
}
impl fmt::Display for Settings {
@@ -16,6 +17,7 @@ impl fmt::Display for Settings {
namespace = "linter.ruff",
fields = [
self.parenthesize_tuple_in_subscript,
self.strictly_empty_init_modules,
]
}
Ok(())

View File

@@ -0,0 +1,53 @@
---
source: crates/ruff_linter/src/rules/ruff/mod.rs
---
RUF067 `__init__` module should only contain docstrings and re-exports
--> __init__.py:12:1
|
10 | __all__ = __all__ = __all__
11 |
12 | MY_CONSTANT = 5
| ^^^^^^^^^^^^^^^
13 | """This is an important constant."""
|
RUF067 `__init__` module should only contain docstrings and re-exports
--> __init__.py:15:1
|
13 | """This is an important constant."""
14 |
15 | os.environ["FOO"] = 1
| ^^^^^^^^^^^^^^^^^^^^^
|
RUF067 `__init__` module should only contain docstrings and re-exports
--> __init__.py:18:1
|
18 | / def foo():
19 | | return Path("foo.py")
| |_________________________^
20 |
21 | def __getattr__(name): # ok
|
RUF067 `__init__` module should only contain docstrings and re-exports
--> __init__.py:26:1
|
24 | __path__ = __import__('pkgutil').extend_path(__path__, __name__) # ok
25 |
26 | / if os.environ["FOO"] != "1": # RUF067
27 | | MY_CONSTANT = 4 # ok, don't flag nested statements
| |___________________^
28 |
29 | if TYPE_CHECKING: # ok
|
RUF067 `__init__` module should only contain docstrings and re-exports
--> __init__.py:48:1
|
47 | # non-`extend_path` assignments are not allowed
48 | __path__ = 5 # RUF067
| ^^^^^^^^^^^^
49 |
50 | # also allow `__author__`
|

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/ruff/mod.rs
---

View File

@@ -0,0 +1,344 @@
---
source: crates/ruff_linter/src/rules/ruff/mod.rs
---
--- Linter settings ---
-linter.ruff.strictly_empty_init_modules = false
+linter.ruff.strictly_empty_init_modules = true
--- Summary ---
Removed: 5
Added: 24
--- Removed ---
RUF067 `__init__` module should only contain docstrings and re-exports
--> __init__.py:12:1
|
10 | __all__ = __all__ = __all__
11 |
12 | MY_CONSTANT = 5
| ^^^^^^^^^^^^^^^
13 | """This is an important constant."""
|
RUF067 `__init__` module should only contain docstrings and re-exports
--> __init__.py:15:1
|
13 | """This is an important constant."""
14 |
15 | os.environ["FOO"] = 1
| ^^^^^^^^^^^^^^^^^^^^^
|
RUF067 `__init__` module should only contain docstrings and re-exports
--> __init__.py:18:1
|
18 | / def foo():
19 | | return Path("foo.py")
| |_________________________^
20 |
21 | def __getattr__(name): # ok
|
RUF067 `__init__` module should only contain docstrings and re-exports
--> __init__.py:26:1
|
24 | __path__ = __import__('pkgutil').extend_path(__path__, __name__) # ok
25 |
26 | / if os.environ["FOO"] != "1": # RUF067
27 | | MY_CONSTANT = 4 # ok, don't flag nested statements
| |___________________^
28 |
29 | if TYPE_CHECKING: # ok
|
RUF067 `__init__` module should only contain docstrings and re-exports
--> __init__.py:48:1
|
47 | # non-`extend_path` assignments are not allowed
48 | __path__ = 5 # RUF067
| ^^^^^^^^^^^^
49 |
50 | # also allow `__author__`
|
--- Added ---
RUF067 `__init__` module should not contain any code
--> __init__.py:1:1
|
1 | """This is the module docstring."""
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2 |
3 | # convenience imports:
|
RUF067 `__init__` module should not contain any code
--> __init__.py:4:1
|
3 | # convenience imports:
4 | import os
| ^^^^^^^^^
5 | from pathlib import Path
|
RUF067 `__init__` module should not contain any code
--> __init__.py:5:1
|
3 | # convenience imports:
4 | import os
5 | from pathlib import Path
| ^^^^^^^^^^^^^^^^^^^^^^^^
6 |
7 | __all__ = ["MY_CONSTANT"]
|
RUF067 `__init__` module should not contain any code
--> __init__.py:7:1
|
5 | from pathlib import Path
6 |
7 | __all__ = ["MY_CONSTANT"]
| ^^^^^^^^^^^^^^^^^^^^^^^^^
8 | __all__ += ["foo"]
9 | __all__: list[str] = __all__
|
RUF067 `__init__` module should not contain any code
--> __init__.py:8:1
|
7 | __all__ = ["MY_CONSTANT"]
8 | __all__ += ["foo"]
| ^^^^^^^^^^^^^^^^^^
9 | __all__: list[str] = __all__
10 | __all__ = __all__ = __all__
|
RUF067 `__init__` module should not contain any code
--> __init__.py:9:1
|
7 | __all__ = ["MY_CONSTANT"]
8 | __all__ += ["foo"]
9 | __all__: list[str] = __all__
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
10 | __all__ = __all__ = __all__
|
RUF067 `__init__` module should not contain any code
--> __init__.py:10:1
|
8 | __all__ += ["foo"]
9 | __all__: list[str] = __all__
10 | __all__ = __all__ = __all__
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
11 |
12 | MY_CONSTANT = 5
|
RUF067 `__init__` module should not contain any code
--> __init__.py:12:1
|
10 | __all__ = __all__ = __all__
11 |
12 | MY_CONSTANT = 5
| ^^^^^^^^^^^^^^^
13 | """This is an important constant."""
|
RUF067 `__init__` module should not contain any code
--> __init__.py:13:1
|
12 | MY_CONSTANT = 5
13 | """This is an important constant."""
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
14 |
15 | os.environ["FOO"] = 1
|
RUF067 `__init__` module should not contain any code
--> __init__.py:15:1
|
13 | """This is an important constant."""
14 |
15 | os.environ["FOO"] = 1
| ^^^^^^^^^^^^^^^^^^^^^
|
RUF067 `__init__` module should not contain any code
--> __init__.py:18:1
|
18 | / def foo():
19 | | return Path("foo.py")
| |_________________________^
20 |
21 | def __getattr__(name): # ok
|
RUF067 `__init__` module should not contain any code
--> __init__.py:21:1
|
19 | return Path("foo.py")
20 |
21 | / def __getattr__(name): # ok
22 | | return name
| |_______________^
23 |
24 | __path__ = __import__('pkgutil').extend_path(__path__, __name__) # ok
|
RUF067 `__init__` module should not contain any code
--> __init__.py:24:1
|
22 | return name
23 |
24 | __path__ = __import__('pkgutil').extend_path(__path__, __name__) # ok
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
25 |
26 | if os.environ["FOO"] != "1": # RUF067
|
RUF067 `__init__` module should not contain any code
--> __init__.py:26:1
|
24 | __path__ = __import__('pkgutil').extend_path(__path__, __name__) # ok
25 |
26 | / if os.environ["FOO"] != "1": # RUF067
27 | | MY_CONSTANT = 4 # ok, don't flag nested statements
| |___________________^
28 |
29 | if TYPE_CHECKING: # ok
|
RUF067 `__init__` module should not contain any code
--> __init__.py:29:1
|
27 | MY_CONSTANT = 4 # ok, don't flag nested statements
28 |
29 | / if TYPE_CHECKING: # ok
30 | | MY_CONSTANT = 3
| |___________________^
31 |
32 | import typing
|
RUF067 `__init__` module should not contain any code
--> __init__.py:32:1
|
30 | MY_CONSTANT = 3
31 |
32 | import typing
| ^^^^^^^^^^^^^
33 |
34 | if typing.TYPE_CHECKING: # ok
|
RUF067 `__init__` module should not contain any code
--> __init__.py:34:1
|
32 | import typing
33 |
34 | / if typing.TYPE_CHECKING: # ok
35 | | MY_CONSTANT = 2
| |___________________^
36 |
37 | __version__ = "1.2.3" # ok
|
RUF067 `__init__` module should not contain any code
--> __init__.py:37:1
|
35 | MY_CONSTANT = 2
36 |
37 | __version__ = "1.2.3" # ok
| ^^^^^^^^^^^^^^^^^^^^^
38 |
39 | def __dir__(): # ok
|
RUF067 `__init__` module should not contain any code
--> __init__.py:39:1
|
37 | __version__ = "1.2.3" # ok
38 |
39 | / def __dir__(): # ok
40 | | return ["foo"]
| |__________________^
41 |
42 | import pkgutil
|
RUF067 `__init__` module should not contain any code
--> __init__.py:42:1
|
40 | return ["foo"]
41 |
42 | import pkgutil
| ^^^^^^^^^^^^^^
43 |
44 | __path__ = pkgutil.extend_path(__path__, __name__) # ok
|
RUF067 `__init__` module should not contain any code
--> __init__.py:44:1
|
42 | import pkgutil
43 |
44 | __path__ = pkgutil.extend_path(__path__, __name__) # ok
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
45 | __path__ = unknown.extend_path(__path__, __name__) # also ok
|
RUF067 `__init__` module should not contain any code
--> __init__.py:45:1
|
44 | __path__ = pkgutil.extend_path(__path__, __name__) # ok
45 | __path__ = unknown.extend_path(__path__, __name__) # also ok
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
46 |
47 | # non-`extend_path` assignments are not allowed
|
RUF067 `__init__` module should not contain any code
--> __init__.py:48:1
|
47 | # non-`extend_path` assignments are not allowed
48 | __path__ = 5 # RUF067
| ^^^^^^^^^^^^
49 |
50 | # also allow `__author__`
|
RUF067 `__init__` module should not contain any code
--> __init__.py:51:1
|
50 | # also allow `__author__`
51 | __author__ = "The Author" # ok
| ^^^^^^^^^^^^^^^^^^^^^^^^^
|

View File

@@ -18,10 +18,10 @@ doctest = false
ruff_python_trivia = { workspace = true }
heck = { workspace = true }
itertools = { workspace = true }
proc-macro2 = { workspace = true }
quote = { workspace = true }
syn = { workspace = true, features = ["derive", "parsing", "extra-traits", "full"] }
itertools = { workspace = true }
[lints]
workspace = true

View File

@@ -20,12 +20,12 @@ ruff_text_size = { workspace = true }
anyhow = { workspace = true }
itertools = { workspace = true }
rand = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
serde_with = { workspace = true, default-features = false, features = ["macros"] }
thiserror = { workspace = true }
uuid = { workspace = true }
rand = { workspace = true }
[dev-dependencies]
test-case = { workspace = true }

View File

@@ -10,6 +10,10 @@ documentation = { workspace = true }
repository = { workspace = true }
license = { workspace = true }
[package.metadata.cargo-shear]
# Used via `CacheKey` macro expansion.
ignored = ["ruff_cache"]
[lib]
[dependencies]
@@ -42,14 +46,7 @@ serde = [
"dep:ruff_cache",
"compact_str/serde",
]
get-size = [
"dep:get-size2",
"ruff_text_size/get-size"
]
get-size = ["dep:get-size2", "ruff_text_size/get-size"]
[lints]
workspace = true
[package.metadata.cargo-shear]
# Used via `CacheKey` macro expansion.
ignored = ["ruff_cache"]

View File

@@ -12,8 +12,8 @@ license.workspace = true
[dependencies]
[dev-dependencies]
ruff_python_parser = { workspace = true }
ruff_python_ast = { workspace = true }
ruff_python_parser = { workspace = true }
ruff_python_trivia = { workspace = true }
ruff_text_size = { workspace = true }

View File

@@ -10,6 +10,10 @@ documentation = { workspace = true }
repository = { workspace = true }
license = { workspace = true }
[package.metadata.cargo-shear]
# Used via `CacheKey` macro expansion.
ignored = ["ruff_cache"]
[lib]
doctest = false
@@ -18,10 +22,10 @@ ruff_cache = { workspace = true }
ruff_db = { workspace = true }
ruff_formatter = { workspace = true }
ruff_macros = { workspace = true }
ruff_python_trivia = { workspace = true }
ruff_source_file = { workspace = true }
ruff_python_ast = { workspace = true }
ruff_python_parser = { workspace = true }
ruff_python_trivia = { workspace = true }
ruff_source_file = { workspace = true }
ruff_text_size = { workspace = true }
anyhow = { workspace = true }
@@ -32,8 +36,8 @@ memchr = { workspace = true }
regex = { workspace = true }
rustc-hash = { workspace = true }
salsa = { workspace = true }
serde = { workspace = true, optional = true }
schemars = { workspace = true, optional = true }
serde = { workspace = true, optional = true }
serde_json = { workspace = true, optional = true }
smallvec = { workspace = true }
static_assertions = { workspace = true }
@@ -50,16 +54,6 @@ serde = { workspace = true }
serde_json = { workspace = true }
similar = { workspace = true }
[package.metadata.cargo-shear]
# Used via `CacheKey` macro expansion.
ignored = ["ruff_cache"]
[[test]]
name = "fixtures"
harness = false
test = true
required-features = ["serde"]
[features]
default = ["serde"]
serde = [
@@ -70,5 +64,11 @@ serde = [
]
schemars = ["dep:schemars", "dep:serde_json", "ruff_formatter/schemars"]
[[test]]
name = "fixtures"
harness = false
test = true
required-features = ["serde"]
[lints]
workspace = true

View File

@@ -12,10 +12,6 @@ license = { workspace = true }
[lib]
[[test]]
name = "fixtures"
harness = false
[dependencies]
ruff_python_ast = { workspace = true, features = ["get-size"] }
ruff_python_trivia = { workspace = true }
@@ -29,8 +25,8 @@ memchr = { workspace = true }
rustc-hash = { workspace = true }
static_assertions = { workspace = true }
unicode-ident = { workspace = true }
unicode_names2 = { workspace = true }
unicode-normalization = { workspace = true }
unicode_names2 = { workspace = true }
[dev-dependencies]
ruff_annotate_snippets = { workspace = true }
@@ -45,5 +41,9 @@ serde = { workspace = true }
serde_json = { workspace = true }
walkdir = { workspace = true }
[[test]]
name = "fixtures"
harness = false
[lints]
workspace = true

View File

@@ -10,6 +10,10 @@ documentation = { workspace = true }
repository = { workspace = true }
license = { workspace = true }
[package.metadata.cargo-shear]
# Used via `CacheKey` macro expansion.
ignored = ["ruff_cache"]
[dependencies]
ruff_cache = { workspace = true }
ruff_index = { workspace = true }
@@ -28,12 +32,8 @@ smallvec = { workspace = true }
[dev-dependencies]
insta = { workspace = true, features = ["filters", "json", "redactions"] }
test-case = { workspace = true }
ruff_python_parser = { workspace = true }
test-case = { workspace = true }
[lints]
workspace = true
[package.metadata.cargo-shear]
# Used via `CacheKey` macro expansion.
ignored = ["ruff_cache"]

Some files were not shown because too many files have changed in this diff Show More