Compare commits

...

41 Commits

Author SHA1 Message Date
Dhruv Manilawala
3bb0dac235 Bump version to 0.8.4 (#15064) 2024-12-19 13:15:42 +00:00
Alex Waygood
40cba5dc8a [red-knot] Cleanup various todo_type!() messages (#15063)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-12-19 13:03:41 +00:00
Dylan
596d80cc8e [perflint] Parenthesize walrus expressions in autofix for manual-list-comprehension (PERF401) (#15050) 2024-12-19 06:56:45 -06:00
Alex Waygood
d8b9a366c8 Disable actionlint hook by default when running pre-commit locally (#15061) 2024-12-19 12:45:17 +00:00
Taras Matsyk
85e71ba91a [flake8-bandit] Check S105 for annotated assignment (#15059)
## Summary

A follow-up PR to https://github.com/astral-sh/ruff/issues/14991.
Ruff previously ignored hardcoded passwords assigned to annotated
variables; this extends the check to catch passwords in typed code bases.

## Test Plan

Includes two more test cases with typed variables.
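
A minimal sketch of the mechanics (illustrative only, not Ruff's Rust implementation; the `SUSPICIOUS` name list is an assumption): plain assignments are `ast.Assign` nodes, while annotated assignments like `password: str = "..."` are `ast.AnnAssign` nodes and need separate handling.

```python
import ast

# Hypothetical list of variable names treated as password-like.
SUSPICIOUS = {"password", "passwd", "secret", "token"}

def hardcoded_password_targets(source: str) -> list[str]:
    """Return suspicious names bound to a string literal, covering both
    plain assignments and the newly handled annotated assignments."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            targets = [t for t in node.targets if isinstance(t, ast.Name)]
            value = node.value
        elif isinstance(node, ast.AnnAssign):  # e.g. `password: str = "..."`
            targets = [node.target] if isinstance(node.target, ast.Name) else []
            value = node.value  # may be None for a bare annotation
        else:
            continue
        if isinstance(value, ast.Constant) and isinstance(value.value, str):
            hits += [t.id for t in targets if t.id.lower() in SUSPICIOUS]
    return hits
```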
2024-12-19 12:26:40 +00:00
Douglas Creager
2802cbde29 Don't special-case class instances in unary expression inference (#15045)
We have a handy `to_meta_type` that does the right thing for class
instances, and also works for all of the other types that are “instances
of” something. Unless I'm missing something, this should let us get rid
of the catch-all clause in one fell swoop.

cf #14548
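
For intuition, this mirrors how the runtime itself behaves: unary dunders are looked up on the instance's class (its "meta-type"), never on the instance. A small illustrative sketch, not red-knot code:

```python
class Meters:
    def __neg__(self):
        return "negated"

m = Meters()
# Special methods bypass the instance dict; the operator consults type(m):
m.__neg__ = lambda: "instance attr"  # ignored by the unary operator
assert -m == "negated"
```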
2024-12-18 14:37:17 -05:00
InSync
ed2bce6ebb [red-knot] Report invalid exceptions (#15042)
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2024-12-18 18:31:24 +00:00
InSync
f0012df686 Fix typos in RUF043.py (#15044)
(Accidentally introduced in #14966.)
2024-12-18 15:39:55 +00:00
Micha Reiser
0fc4e8f795 Introduce InferContext (#14956)
## Summary

I'm currently on the fence about landing the #14760 PR because it's
unclear how we'd support tracking used and unused suppression comments
in a performant way:
* Salsa adds an "untracked" dependency to every query reading
accumulated values. This has the effect that the query re-runs on every
revision. For example, a possible future query
`unused_suppression_comments(db, file)` would re-run on every
incremental change and for every file. I don't expect the operation
itself to be expensive, but it all adds up in a project with 100k+ files
* Salsa collects the accumulated values by traversing the entire query
dependency graph. It can skip over sub-graphs if it is known that they
contain no accumulated values. This makes accumulators a great tool for
when they are rare; diagnostics are a good example. Unfortunately,
suppressions are more common, and they often appear in many different
files, making the "skip over subgraphs" optimization less effective.

Because of that, I want to wait to adopt salsa accumulators for type
check diagnostics (we could start using them for other diagnostics)
until we have very specific reasons that justify regressing incremental
check performance.

This PR does a "small" refactor that brings us closer to what I have in
#14760 but without using accumulators. To emit a diagnostic, a method
needs:

* Access to the db
* Access to the currently checked file

This PR introduces a new `InferContext` that holds on to the db, the
current file, and the reported diagnostics. It replaces the
`TypeCheckDiagnosticsBuilder`. We pass the `InferContext` instead of the
`db` to methods that *might* emit diagnostics. This simplifies some of
the `Outcome` methods, which can now be called with a context instead of
a `db` and the diagnostics builder. Having the `db` and the file on a
single type like this would also be useful when using accumulators.

This PR doesn't solve the issue that the `Outcome` types feel somewhat
complicated, nor that it can be annoying when you need to report a
`Diagnostic` but don't have access to an `InferContext` (or the
file). However, I also believe that accumulators won't solve these
problems because:

* Even with accumulators, it's necessary to have a reference to the file
that's being checked. The struggle would be to get a reference to that
file rather than getting a reference to `InferContext`.
* Users of the `HasTy` trait (e.g., a linter) don't want to bother
getting the `File` when calling `Type::return_ty` because they aren't
interested in the created diagnostics. They just want to know what
calling the current expression would return (and if it even is a
callable). This is what the different methods of `Outcome` enable today.
I can ask for the return type without needing extra data that's only
relevant for emitting a diagnostic.

A shortcoming of this approach is that it is now a bit confusing when to
pass `db` and when an `InferContext`. An option is that we'd make the
`file` on `InferContext` optional (it won't collect any diagnostics if
`None`) and change all methods on `Type` to take `InferContext` as the
first argument instead of a `db`. I'm interested in your opinion on
this.

Accumulators are definitely harder to use incorrectly because they
remove the need to merge the diagnostics explicitly and there's no risk
that we accidentally merge the diagnostics twice, resulting in
duplicated diagnostics. I still value performance over making our
lives slightly easier.
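
A rough Python sketch of the shape described above (illustrative only; the real `InferContext` is Rust and holds salsa handles, not these placeholder fields): the db, the current file, and the collected diagnostics travel together, so any method that might emit a diagnostic only needs the context.

```python
from dataclasses import dataclass, field

@dataclass
class InferContext:
    db: object
    file: str
    diagnostics: list = field(default_factory=list)

    def report(self, message: str) -> None:
        # Record a diagnostic against the file currently being checked.
        self.diagnostics.append(f"{self.file}: {message}")

ctx = InferContext(db=None, file="example.py")
ctx.report("object is not callable")
```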
2024-12-18 12:22:33 +00:00
InSync
ac81c72bf3 [ruff] Ambiguous pattern passed to pytest.raises() (RUF043) (#14966) 2024-12-18 11:53:48 +00:00
David Salvisberg
c0b7c36d43 [ruff] Avoid false positives for RUF027 for typing context bindings. (#15037)
Closes #14000 

## Summary

Bindings that only exist in a typing context won't be available at
runtime, so we shouldn't recommend a fix that would result in name
errors at runtime.

## Test Plan

`cargo nextest run`
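
A sketch of the false positive being avoided (the `OrderedDict` import is an assumed example, not from the linked issue): the name is only bound under `TYPE_CHECKING`, so converting the string to an f-string would raise `NameError` at runtime.

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from collections import OrderedDict  # binding exists only for type checkers

# RUF027 would previously suggest f"..." here; applying that fix would
# raise NameError at runtime because `OrderedDict` is never imported.
msg = "expected an {OrderedDict} instance"
```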
2024-12-18 08:50:49 +01:00
Douglas Creager
e8e461da6a Prioritize attribute in from/import statement (#15041)
This tweaks the new semantics from #15026 a bit when a symbol could be
interpreted both as an attribute and a submodule of a package. For
`from...import`, we should actually prioritize the attribute, because of
how the statement itself is implemented [1].

> 1. check if the imported module has an attribute by that name
> 2. if not, attempt to import a submodule with that name and then check
the imported module again for that attribute

[1] https://docs.python.org/3/reference/simple_stmts.html#the-import-statement
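
The documented order can be observed at runtime. This sketch builds a throwaway package (file names are illustrative) where `thing` is both an attribute set in `__init__.py` and a submodule; `from pkg import thing` binds the attribute:

```python
import pathlib
import sys
import tempfile

tmp = pathlib.Path(tempfile.mkdtemp())
pkg = tmp / "pkg"
pkg.mkdir()
(pkg / "__init__.py").write_text("thing = 1\n")         # attribute `thing`
(pkg / "thing.py").write_text("value = 'submodule'\n")  # submodule `thing`
sys.path.insert(0, str(tmp))

from pkg import thing

assert thing == 1  # the attribute wins over the submodule
```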
2024-12-17 16:58:23 -05:00
Douglas Creager
91c9168dd7 Handle nested imports correctly in from ... import (#15026)
#14946 fixed our handling of nested imports with the `import` statement,
but didn't touch `from...import` statements.

cf
https://github.com/astral-sh/ruff/issues/14826#issuecomment-2525344515
2024-12-17 14:23:34 -05:00
Micha Reiser
80577a49f8 Upgrade salsa in fuzzer script (#15040) 2024-12-17 18:01:58 +01:00
cake-monotone
f463fa7b7c [red-knot] Narrowing For Truthiness Checks (if x or if not x) (#14687)
## Summary

Fixes #14550.

Add `AlwaysTruthy` and `AlwaysFalsy` types, representing the set of objects whose `__bool__` method can only ever return `True` or `False`, respectively, and narrow `if x` and `if not x` accordingly.


## Test Plan

- New Markdown test for truthiness narrowing `narrow/truthiness.md`
- unit tests in `types.rs` and `builders.rs` (`cargo test --package
red_knot_python_semantic --lib -- types`)
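
A runtime view of what the narrowing models (a behavioral illustration, not red-knot code): inside `if x`, only values whose truthiness is `True` remain possible, so `x` can no longer be `None` or `""`; inside the `else` branch, only always-falsy values remain.

```python
def classify(x: "str | None") -> str:
    if x:
        # Narrowed: x cannot be None or "" here, so .upper() is safe.
        return "truthy: " + x.upper()
    # Here x is an always-falsy value: "" or None.
    return "falsy"

assert classify("hi") == "truthy: HI"
assert classify("") == "falsy"
assert classify(None) == "falsy"
```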
2024-12-17 08:37:07 -08:00
Micha Reiser
c3b6139f39 Upgrade salsa (#15039)
The only code change is that Salsa now requires the `Db` to implement
`Clone` to create "lightweight" snapshots.
2024-12-17 15:50:33 +00:00
InSync
c9fdb1f5e3 [pylint] Preserve original value format (PLR6104) (#14978)
## Summary

Resolves #11672.

## Test Plan

`cargo nextest run` and `cargo insta test`.

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2024-12-17 16:07:07 +01:00
Alex Waygood
463046ae07 [red-knot] Explicitly test diagnostics are emitted for unresolvable submodule imports (#15035) 2024-12-17 12:55:50 +00:00
Micha Reiser
dcb99cc817 Fix stale File status in tests (#15030)
## Summary

Fixes https://github.com/astral-sh/ruff/issues/15027

The `MemoryFileSystem::write_file` API automatically creates
non-existing ancestor directories, but we failed to update the status
of the newly created ancestor directories in the `Files` data
structure.


## Test Plan

Tested that the case in https://github.com/astral-sh/ruff/issues/15027
now passes regardless of whether the *Simple* case is commented out.
2024-12-17 12:45:36 +01:00
InSync
7c2e7cf25e [red-knot] Basic support for other legacy typing aliases (#14998)
## Summary

Resolves #14997.

## Test Plan

Markdown tests.
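
The exact set of aliases handled comes from #14997; for illustration only (these specific names are assumed examples, not necessarily the ones added), the legacy `typing` aliases in question look like this:

```python
from collections import defaultdict
from typing import DefaultDict, FrozenSet, List  # legacy aliases for builtins/abcs

counts: DefaultDict[str, int] = defaultdict(int)
names: List[str] = ["a", "b"]
seen: FrozenSet[int] = frozenset({1, 2})
counts["a"] += 1
```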
2024-12-17 09:33:15 +00:00
Wei Lee
867a8f9497 feat(AIR302): extend the following rules (#15015)
## Summary


Airflow 3.0 removes various deprecated functions, members, modules, and
other values. They were deprecated in 2.x, but their removal causes
incompatibilities that we want to detect. This PR adds checks for the
following names.

* `airflow.api_connexion.security.requires_access_dataset` →
`airflow.api_connexion.security.requires_access_asset`
* `airflow.auth.managers.base_auth_manager.is_authorized_dataset` →
`airflow.auth.managers.base_auth_manager.is_authorized_asset`
* `airflow.auth.managers.models.resource_details.DatasetDetails` →
`airflow.auth.managers.models.resource_details.AssetDetails`
* `airflow.lineage.hook.DatasetLineageInfo` →
`airflow.lineage.hook.AssetLineageInfo`
* `airflow.security.permissions.RESOURCE_DATASET` →
`airflow.security.permissions.RESOURCE_ASSET`
* `airflow.www.auth.has_access_dataset` →
`airflow.www.auth.has_access_asset`
* remove `airflow.datasets.DatasetAliasEvent`
* `airflow.datasets.Dataset` → `airflow.sdk.definitions.asset.Asset`
* `airflow.Dataset` → `airflow.sdk.definitions.asset.Asset`
* `airflow.datasets.DatasetAlias` →
`airflow.sdk.definitions.asset.AssetAlias`
* `airflow.datasets.DatasetAll` →
`airflow.sdk.definitions.asset.AssetAll`
* `airflow.datasets.DatasetAny` →
`airflow.sdk.definitions.asset.AssetAny`
* `airflow.datasets.metadata` → `airflow.sdk.definitions.asset.metadata`
* `airflow.datasets.expand_alias_to_datasets` →
`airflow.sdk.definitions.asset.expand_alias_to_assets`
* `airflow.datasets.manager.dataset_manager` → `airflow.assets.manager`
* `airflow.datasets.manager.resolve_dataset_manager` →
`airflow.assets.resolve_asset_manager`
* `airflow.datasets.manager.DatasetManager` →
`airflow.assets.AssetManager`
* `airflow.listeners.spec.dataset.on_dataset_created` →
`airflow.listeners.spec.asset.on_asset_created`
* `airflow.listeners.spec.dataset.on_dataset_changed` →
`airflow.listeners.spec.asset.on_asset_changed`
* `airflow.timetables.simple.DatasetTriggeredTimetable` →
`airflow.timetables.simple.AssetTriggeredTimetable`
* `airflow.timetables.datasets.DatasetOrTimeSchedule` →
`airflow.timetables.assets.AssetOrTimeSchedule`
*
`airflow.providers.amazon.auth_manager.avp.entities.AvpEntities.DATASET`
→ `airflow.providers.amazon.auth_manager.avp.entities.AvpEntities.ASSET`
* `airflow.providers.amazon.aws.datasets.s3.create_dataset` →
`airflow.providers.amazon.aws.assets.s3.create_asset`
*
`airflow.providers.amazon.aws.datasets.s3.convert_dataset_to_openlineage`
→
`airflow.providers.amazon.aws.assets.s3.convert_asset_to_openlineage`
* `airflow.providers.amazon.aws.datasets.s3.sanitize_uri` →
`airflow.providers.amazon.aws.assets.s3.sanitize_uri`
*
`airflow.providers.common.io.datasets.file.convert_dataset_to_openlineage`
→ `airflow.providers.common.io.assets.file.convert_asset_to_openlineage`
* `airflow.providers.common.io.datasets.file.sanitize_uri` →
`airflow.providers.common.io.assets.file.sanitize_uri`
* `airflow.providers.common.io.datasets.file.create_dataset` →
`airflow.providers.common.io.assets.file.create_asset`
* `airflow.providers.google.datasets.bigquery.sanitize_uri` →
`airflow.providers.google.assets.bigquery.sanitize_uri`
* `airflow.providers.google.datasets.gcs.create_dataset` →
`airflow.providers.google.assets.gcs.create_asset`
* `airflow.providers.google.datasets.gcs.sanitize_uri` →
`airflow.providers.google.assets.gcs.sanitize_uri`
* `airflow.providers.google.datasets.gcs.convert_dataset_to_openlineage`
→ `airflow.providers.google.assets.gcs.convert_asset_to_openlineage`
*
`airflow.providers.fab.auth_manager.fab_auth_manager.is_authorized_dataset`
→
`airflow.providers.fab.auth_manager.fab_auth_manager.is_authorized_asset`
* `airflow.providers.openlineage.utils.utils.DatasetInfo` →
`airflow.providers.openlineage.utils.utils.AssetInfo`
* `airflow.providers.openlineage.utils.utils.translate_airflow_dataset`
→ `airflow.providers.openlineage.utils.utils.translate_airflow_asset`
* `airflow.providers.postgres.datasets.postgres.sanitize_uri` →
`airflow.providers.postgres.assets.postgres.sanitize_uri`
* `airflow.providers.mysql.datasets.mysql.sanitize_uri` →
`airflow.providers.mysql.assets.mysql.sanitize_uri`
* `airflow.providers.trino.datasets.trino.sanitize_uri` →
`airflow.providers.trino.assets.trino.sanitize_uri`

In addition to the newly added rules above, the messages for
`airflow.contrib.*` and `airflow.subdag.*` have been extended, the
`airflow.sensors.external_task.ExternalTaskSensorLink` error has been
fixed, and the test fixture has been reorganized.

## Test Plan

A test fixture is included in the PR.
2024-12-17 08:32:48 +01:00
w0nder1ng
e22718f25f [perflint] Simplify finding the loop target in PERF401 (#15025)
Fixes #15012.

```python
def f():
    # panics when the code can't find the loop variable
    values = [1, 2, 3]
    result = []
    for i in values:
        result.append(i + 1)
    del i
```

I'm not sure exactly why this test case panics, but I suspect the `del
i` removes the binding from the semantic model's symbols.

I changed the code to search for the correct binding by directly
iterating through the bindings. Since we know exactly which binding we
want, this should find the loop variable without any complications.
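
For reference, this is the comprehension that PERF401 steers the loop above toward (the rewrite target, which this PR's binding lookup makes reliable to compute):

```python
values = [1, 2, 3]
# PERF401's suggested replacement for the append loop:
result = [i + 1 for i in values]
assert result == [2, 3, 4]
```

Note that unlike the `for` loop, the comprehension doesn't leak `i` into the enclosing scope, which is why a trailing `del i` interacts badly with the fix.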
2024-12-17 08:30:32 +01:00
Dhruv Manilawala
dcdc6e7c64 [red-knot] Avoid undeclared path when raising conflicting declarations (#14958)
## Summary

This PR updates the logic for raising the conflicting-declarations
diagnostic to avoid the undeclared path if present.

The conflicting-declarations diagnostic is emitted when there are two or
more declarations in the control-flow path of a definition whose types
aren't equivalent to each other. This can be seen in the following
example:

```py
if flag:
	x: int
x = 1  # conflicting-declarations: Unknown, int
```

After this PR, we'd avoid considering "Unknown" as part of the
conflicting declarations. This means we'd still flag it for the
following case:

```py
if flag:
	x: int
else:
	x: str
x = 1  # conflicting-declarations: int, str
```

A solution that's local to the exception control flow was also explored
which required updating the logic for merging the flow snapshot to avoid
considering declarations using a flag. This is preserved here:
https://github.com/astral-sh/ruff/compare/dhruv/control-flow-no-declarations?expand=1.

The main motivation for avoiding that is that we don't yet understand
the user experience w.r.t. the Unknown type and the
conflicting-declarations diagnostic, which leaves us unsure of the right
semantics for whether and when that diagnostic should be raised. For
now, we've decided to move forward with this PR and may adopt another
solution or remove the conflicting-declarations diagnostic in the
future.

Closes: #13966 

## Test Plan

Update the existing mdtest case. Add an additional case specific to
exception control flow to verify that the diagnostic is no longer
raised.
2024-12-17 09:49:39 +05:30
Douglas Creager
4ddf9228f6 Bind top-most parent when importing nested module (#14946)
When importing a nested module, we were correctly creating a binding for
the top-most parent, but we were binding that to the nested module, not
to that parent module. Moreover, we weren't treating those submodules as
members of their containing parents. This PR addresses both issues, so
that nested imports work as expected.

As discussed in ~Slack~ whatever chat app I find myself in these days
😄, this requires keeping track of which modules have been imported
within the current file, so that when we resolve member access on a
module reference, we can see if that member has been imported as a
submodule. If so, we return the submodule reference immediately, instead
of checking whether the parent module's definition defines the symbol.

This is currently done in a flow insensitive manner. The `SemanticIndex`
now tracks all of the modules that are imported (via `import`, not via
`from...import`). The member access logic mentioned above currently only
considers module imports in the file containing the attribute
expression.

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2024-12-16 16:15:40 -05:00
Alex Waygood
6d72be2683 Bump zizmor pre-commit hook to the latest version and fix new warnings (#15022) 2024-12-16 17:45:46 +00:00
Alex Waygood
712c886749 Add actionlint as a pre-commit hook (with shellcheck integration) (#15021) 2024-12-16 17:32:49 +00:00
renovate[bot]
50739f91dc Update dependency mdformat-mkdocs to v4 (#15011)
This PR contains the following updates:

| Package | Change | Age | Adoption | Passing | Confidence |
|---|---|---|---|---|---|
| [mdformat-mkdocs](https://redirect.github.com/kyleking/mdformat-mkdocs) ([changelog](https://redirect.github.com/kyleking/mdformat-mkdocs/releases)) | `==3.1.1` -> `==4.0.0` | [![age](https://developer.mend.io/api/mc/badges/age/pypi/mdformat-mkdocs/4.0.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/pypi/mdformat-mkdocs/4.0.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/pypi/mdformat-mkdocs/3.1.1/4.0.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/mdformat-mkdocs/3.1.1/4.0.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) |

---

### Release Notes

<details>
<summary>kyleking/mdformat-mkdocs (mdformat-mkdocs)</summary>

###
[`v4.0.0`](https://redirect.github.com/KyleKing/mdformat-mkdocs/releases/tag/v4.0.0)

[Compare
Source](https://redirect.github.com/kyleking/mdformat-mkdocs/compare/v3.1.1...v4.0.0)

#### What's Changed

- fix!: add newline after title for consistency with MKDocs style by
[@&#8203;KyleKing](https://redirect.github.com/KyleKing) in
[https://github.com/KyleKing/mdformat-mkdocs/pull/44](https://redirect.github.com/KyleKing/mdformat-mkdocs/pull/44)

**Full Changelog**:
https://github.com/KyleKing/mdformat-mkdocs/compare/v3.1.1...v4.0.0

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).


---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
Co-authored-by: Kyle King <KyleKing@users.noreply.github.com>
2024-12-16 22:48:37 +05:30
Dylan
6a5eff6017 [pydocstyle] Skip leading whitespace for D403 (#14963)
This PR introduces three changes to `D403`, which has to do with
capitalizing the first word in a docstring.

1. The diagnostic and fix now skip leading whitespace when determining
what counts as "the first word".
2. The name has been changed to `first-word-uncapitalized` from
`first-line-capitalized`, for both clarity and compliance with our rule
naming policy.
3. The diagnostic message and documentation have been modified slightly
to reflect this.

Closes #14890
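
A simplified sketch of change 1 (illustrative only, not Ruff's implementation): the first word is found after stripping leading whitespace, so an indented docstring no longer confuses the check.

```python
def first_word(docstring: str) -> str:
    """Return the first word of a docstring, skipping leading whitespace."""
    stripped = docstring.lstrip()
    return stripped.split(maxsplit=1)[0] if stripped else ""

# Leading whitespace no longer hides the first word:
assert first_word("   return the first word.") == "return"
assert first_word("Return the first word.") == "Return"
```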
2024-12-16 09:09:27 -06:00
renovate[bot]
a623d8f7c4 Update pre-commit dependencies (#15008)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2024-12-16 11:13:49 +00:00
Dhruv Manilawala
aa429b413f Check diagnostic refresh support from client capability (#15014)
## Summary

Per the LSP spec, the property name is `workspace.diagnostics` (with an
`s` at the end), but the `lsp-types` dependency uses
`workspace.diagnostic` (without the `s`). Our fork contains this fix
(0f58d62879), so we should avoid the hardcoded value.

The implication is that a client that doesn't support the workspace
refresh capability couldn't use the [dynamic
configuration](https://docs.astral.sh/ruff/editors/features/#dynamic-configuration)
feature, because the server would _always_ send the workspace refresh
request and the client would ignore it. We have fallback logic to
publish the diagnostics instead:


5f6fc3988b/crates/ruff_server/src/server/api/notifications/did_change_watched_files.rs (L28-L40)

fixes: #15013 
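
For illustration, the capability path the server should read (per the LSP spec's property names; a Python sketch of the lookup, not the server's Rust code — the sample capabilities dict is assumed):

```python
client_capabilities = {
    "workspace": {"diagnostics": {"refreshSupport": True}}  # note the trailing "s"
}

supports_refresh = (
    client_capabilities.get("workspace", {})
    .get("diagnostics", {})  # `lsp-types` wrongly used "diagnostic" here
    .get("refreshSupport", False)
)
assert supports_refresh
```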

## Test Plan

### VS Code


https://github.com/user-attachments/assets/61ac8e6f-aa20-41cc-b398-998e1866b5bc

### Neovim



https://github.com/user-attachments/assets/4131e13c-3fba-411c-9bb7-478d26eb8d56
2024-12-16 16:26:40 +05:30
renovate[bot]
425c248232 Update Rust crate colored to v2.2.0 (#15010)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [colored](https://redirect.github.com/mackwic/colored) | workspace.dependencies | minor | `2.1.0` -> `2.2.0` |

---

### Release Notes

<details>
<summary>mackwic/colored (colored)</summary>

###
[`v2.2.0`](https://redirect.github.com/mackwic/colored/compare/v2.1.0...v2.2.0)

[Compare
Source](https://redirect.github.com/mackwic/colored/compare/v2.1.0...v2.2.0)

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-12-16 08:48:51 +01:00
renovate[bot]
bcd944347d Update dependency monaco-editor to v0.52.2 (#15006) 2024-12-15 20:26:21 -05:00
renovate[bot]
86eff81c6a Update Rust crate thiserror to v2.0.7 (#15005) 2024-12-15 20:26:14 -05:00
renovate[bot]
24ace68560 Update Rust crate serde to v1.0.216 (#15004) 2024-12-15 20:26:08 -05:00
renovate[bot]
b664505d7b Update Rust crate libc to v0.2.168 (#15003) 2024-12-15 20:25:59 -05:00
renovate[bot]
aa575da1e7 Update Rust crate fern to v0.7.1 (#15002) 2024-12-15 20:25:52 -05:00
renovate[bot]
921eb2acb3 Update Rust crate chrono to v0.4.39 (#15001) 2024-12-15 20:25:46 -05:00
renovate[bot]
8665d2dc95 Update Rust crate bstr to v1.11.1 (#15000) 2024-12-15 20:25:39 -05:00
renovate[bot]
1cc27c995c Update NPM Development dependencies (#14999) 2024-12-15 20:25:10 -05:00
renovate[bot]
a93bc2af6b Update dependency ruff to v0.8.3 (#15007) 2024-12-15 20:25:04 -05:00
Alex Waygood
d848182340 Pin mdformat plugins in pre-commit (#14992) 2024-12-15 19:37:45 +00:00
117 changed files with 5307 additions and 2196 deletions

.github/actionlint.yaml (new file, 9 lines)

@@ -0,0 +1,9 @@
# Configuration for the actionlint tool, which we run via pre-commit
# to verify the correctness of the syntax in our GitHub Actions workflows.
self-hosted-runner:
# Various runners we use that aren't recognized out-of-the-box by actionlint:
labels:
- depot-ubuntu-latest-8
- depot-ubuntu-22.04-16
- windows-latest-xlarge


@@ -53,7 +53,7 @@ jobs:
args: --out dist
- name: "Test sdist"
run: |
pip install dist/${PACKAGE_NAME}-*.tar.gz --force-reinstall
pip install dist/"${PACKAGE_NAME}"-*.tar.gz --force-reinstall
"${MODULE_NAME}" --help
python -m "${MODULE_NAME}" --help
- name: "Upload sdist"
@@ -125,7 +125,7 @@ jobs:
args: --release --locked --out dist
- name: "Test wheel - aarch64"
run: |
pip install dist/${PACKAGE_NAME}-*.whl --force-reinstall
pip install dist/"${PACKAGE_NAME}"-*.whl --force-reinstall
ruff --help
python -m ruff --help
- name: "Upload wheels"
@@ -186,7 +186,7 @@ jobs:
if: ${{ !startsWith(matrix.platform.target, 'aarch64') }}
shell: bash
run: |
python -m pip install dist/${PACKAGE_NAME}-*.whl --force-reinstall
python -m pip install dist/"${PACKAGE_NAME}"-*.whl --force-reinstall
"${MODULE_NAME}" --help
python -m "${MODULE_NAME}" --help
- name: "Upload wheels"
@@ -236,7 +236,7 @@ jobs:
- name: "Test wheel"
if: ${{ startsWith(matrix.target, 'x86_64') }}
run: |
pip install dist/${PACKAGE_NAME}-*.whl --force-reinstall
pip install dist/"${PACKAGE_NAME}"-*.whl --force-reinstall
"${MODULE_NAME}" --help
python -m "${MODULE_NAME}" --help
- name: "Upload wheels"


@@ -142,6 +142,7 @@ jobs:
# The printf will expand the base image with the `<RUFF_BASE_IMG>@sha256:<sha256> ...` for each sha256 in the directory
# The final command becomes `docker buildx imagetools create -t tag1 -t tag2 ... <RUFF_BASE_IMG>@sha256:<sha256_1> <RUFF_BASE_IMG>@sha256:<sha256_2> ...`
run: |
# shellcheck disable=SC2046
docker buildx imagetools create \
$(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
$(printf "${RUFF_BASE_IMG}@sha256:%s " *)
@@ -286,6 +287,8 @@ jobs:
# The final command becomes `docker buildx imagetools create -t tag1 -t tag2 ... <RUFF_BASE_IMG>@sha256:<sha256_1> <RUFF_BASE_IMG>@sha256:<sha256_2> ...`
run: |
readarray -t lines <<< "$DOCKER_METADATA_OUTPUT_ANNOTATIONS"; annotations=(); for line in "${lines[@]}"; do annotations+=(--annotation "$line"); done
# shellcheck disable=SC2046
docker buildx imagetools create \
"${annotations[@]}" \
$(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \


@@ -290,7 +290,9 @@ jobs:
file: "Cargo.toml"
field: "workspace.package.rust-version"
- name: "Install Rust toolchain"
run: rustup default ${{ steps.msrv.outputs.value }}
env:
MSRV: ${{ steps.msrv.outputs.value }}
run: rustup default "${MSRV}"
- name: "Install mold"
uses: rui314/setup-mold@v1
- name: "Install cargo nextest"
@@ -306,7 +308,8 @@ jobs:
shell: bash
env:
NEXTEST_PROFILE: "ci"
run: cargo +${{ steps.msrv.outputs.value }} insta test --all-features --unreferenced reject --test-runner nextest
MSRV: ${{ steps.msrv.outputs.value }}
run: cargo "+${MSRV}" insta test --all-features --unreferenced reject --test-runner nextest
cargo-fuzz-build:
name: "cargo fuzz build"
@@ -354,16 +357,18 @@ jobs:
name: ruff
path: ruff-to-test
- name: Fuzz
env:
DOWNLOAD_PATH: ${{ steps.download-cached-binary.outputs.download-path }}
run: |
# Make executable, since artifact download doesn't preserve this
chmod +x ${{ steps.download-cached-binary.outputs.download-path }}/ruff
chmod +x "${DOWNLOAD_PATH}/ruff"
(
uvx \
--python=${{ env.PYTHON_VERSION }} \
--python="${PYTHON_VERSION}" \
--from=./python/py-fuzzer \
fuzz \
--test-executable=${{ steps.download-cached-binary.outputs.download-path }}/ruff \
--test-executable="${DOWNLOAD_PATH}/ruff" \
--bin=ruff \
0-500
)
@@ -429,64 +434,72 @@ jobs:
- name: Run `ruff check` stable ecosystem check
if: ${{ needs.determine_changes.outputs.linter == 'true' }}
env:
DOWNLOAD_PATH: ${{ steps.ruff-target.outputs.download-path }}
run: |
# Make executable, since artifact download doesn't preserve this
chmod +x ./ruff ${{ steps.ruff-target.outputs.download-path }}/ruff
chmod +x ./ruff "${DOWNLOAD_PATH}/ruff"
# Set pipefail to avoid hiding errors with tee
set -eo pipefail
ruff-ecosystem check ./ruff ${{ steps.ruff-target.outputs.download-path }}/ruff --cache ./checkouts --output-format markdown | tee ecosystem-result-check-stable
ruff-ecosystem check ./ruff "${DOWNLOAD_PATH}/ruff" --cache ./checkouts --output-format markdown | tee ecosystem-result-check-stable
cat ecosystem-result-check-stable > $GITHUB_STEP_SUMMARY
cat ecosystem-result-check-stable > "$GITHUB_STEP_SUMMARY"
echo "### Linter (stable)" > ecosystem-result
cat ecosystem-result-check-stable >> ecosystem-result
echo "" >> ecosystem-result
- name: Run `ruff check` preview ecosystem check
if: ${{ needs.determine_changes.outputs.linter == 'true' }}
env:
DOWNLOAD_PATH: ${{ steps.ruff-target.outputs.download-path }}
run: |
# Make executable, since artifact download doesn't preserve this
chmod +x ./ruff ${{ steps.ruff-target.outputs.download-path }}/ruff
chmod +x ./ruff "${DOWNLOAD_PATH}/ruff"
# Set pipefail to avoid hiding errors with tee
set -eo pipefail
ruff-ecosystem check ./ruff ${{ steps.ruff-target.outputs.download-path }}/ruff --cache ./checkouts --output-format markdown --force-preview | tee ecosystem-result-check-preview
ruff-ecosystem check ./ruff "${DOWNLOAD_PATH}/ruff" --cache ./checkouts --output-format markdown --force-preview | tee ecosystem-result-check-preview
cat ecosystem-result-check-preview > $GITHUB_STEP_SUMMARY
cat ecosystem-result-check-preview > "$GITHUB_STEP_SUMMARY"
echo "### Linter (preview)" >> ecosystem-result
cat ecosystem-result-check-preview >> ecosystem-result
echo "" >> ecosystem-result
- name: Run `ruff format` stable ecosystem check
if: ${{ needs.determine_changes.outputs.formatter == 'true' }}
env:
DOWNLOAD_PATH: ${{ steps.ruff-target.outputs.download-path }}
run: |
# Make executable, since artifact download doesn't preserve this
chmod +x ./ruff ${{ steps.ruff-target.outputs.download-path }}/ruff
chmod +x ./ruff "${DOWNLOAD_PATH}/ruff"
# Set pipefail to avoid hiding errors with tee
set -eo pipefail
ruff-ecosystem format ./ruff ${{ steps.ruff-target.outputs.download-path }}/ruff --cache ./checkouts --output-format markdown | tee ecosystem-result-format-stable
ruff-ecosystem format ./ruff "${DOWNLOAD_PATH}/ruff" --cache ./checkouts --output-format markdown | tee ecosystem-result-format-stable
cat ecosystem-result-format-stable > $GITHUB_STEP_SUMMARY
cat ecosystem-result-format-stable > "$GITHUB_STEP_SUMMARY"
echo "### Formatter (stable)" >> ecosystem-result
cat ecosystem-result-format-stable >> ecosystem-result
echo "" >> ecosystem-result
- name: Run `ruff format` preview ecosystem check
if: ${{ needs.determine_changes.outputs.formatter == 'true' }}
env:
DOWNLOAD_PATH: ${{ steps.ruff-target.outputs.download-path }}
run: |
# Make executable, since artifact download doesn't preserve this
chmod +x ./ruff ${{ steps.ruff-target.outputs.download-path }}/ruff
chmod +x ./ruff "${DOWNLOAD_PATH}/ruff"
# Set pipefail to avoid hiding errors with tee
set -eo pipefail
ruff-ecosystem format ./ruff ${{ steps.ruff-target.outputs.download-path }}/ruff --cache ./checkouts --output-format markdown --force-preview | tee ecosystem-result-format-preview
ruff-ecosystem format ./ruff "${DOWNLOAD_PATH}/ruff" --cache ./checkouts --output-format markdown --force-preview | tee ecosystem-result-format-preview
cat ecosystem-result-format-preview > $GITHUB_STEP_SUMMARY
cat ecosystem-result-format-preview > "$GITHUB_STEP_SUMMARY"
echo "### Formatter (preview)" >> ecosystem-result
cat ecosystem-result-format-preview >> ecosystem-result
echo "" >> ecosystem-result
@@ -541,7 +554,7 @@ jobs:
args: --out dist
- name: "Test wheel"
run: |
pip install --force-reinstall --find-links dist ${{ env.PACKAGE_NAME }}
pip install --force-reinstall --find-links dist "${PACKAGE_NAME}"
ruff --help
python -m ruff --help
- name: "Remove wheels from cache"
@@ -570,13 +583,14 @@ jobs:
key: pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
- name: "Run pre-commit"
run: |
echo '```console' > $GITHUB_STEP_SUMMARY
echo '```console' > "$GITHUB_STEP_SUMMARY"
# Enable color output for pre-commit and remove it for the summary
SKIP=cargo-fmt,clippy,dev-generate-all pre-commit run --all-files --show-diff-on-failure --color=always | \
tee >(sed -E 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})*)?[mGK]//g' >> $GITHUB_STEP_SUMMARY) >&1
exit_code=${PIPESTATUS[0]}
echo '```' >> $GITHUB_STEP_SUMMARY
exit $exit_code
# Use --hook-stage=manual to enable slower pre-commit hooks that are skipped by default
SKIP=cargo-fmt,clippy,dev-generate-all pre-commit run --all-files --show-diff-on-failure --color=always --hook-stage=manual | \
tee >(sed -E 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})*)?[mGK]//g' >> "$GITHUB_STEP_SUMMARY") >&1
exit_code="${PIPESTATUS[0]}"
echo '```' >> "$GITHUB_STEP_SUMMARY"
exit "$exit_code"
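The `sed` expression in the step above strips ANSI color sequences so the colored pre-commit output renders cleanly in the step summary. For reference, the same filter can be sketched in Python using the identical pattern (an illustrative helper, not part of the workflow):

```python
import re

# Same pattern as the sed call in the workflow: CSI escape sequences
# ending in m (color), G, or K (cursor/erase controls).
ANSI_RE = re.compile(r"\x1B\[([0-9]{1,2}(;[0-9]{1,2})*)?[mGK]")

def strip_ansi(text: str) -> str:
    """Remove ANSI color/control sequences so logs render as plain text."""
    return ANSI_RE.sub("", text)

print(strip_ansi("\x1b[31merror\x1b[0m: something failed"))
```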
docs:
name: "mkdocs"
@@ -637,7 +651,7 @@ jobs:
- name: "Run checks"
run: scripts/formatter_ecosystem_checks.sh
- name: "Github step summary"
run: cat target/formatter-ecosystem/stats.txt > $GITHUB_STEP_SUMMARY
run: cat target/formatter-ecosystem/stats.txt > "$GITHUB_STEP_SUMMARY"
- name: "Remove checkouts from cache"
run: rm -r target/formatter-ecosystem
@@ -676,11 +690,13 @@ jobs:
just install
- name: Run ruff-lsp tests
env:
DOWNLOAD_PATH: ${{ steps.ruff-target.outputs.download-path }}
run: |
# Setup development binary
pip uninstall --yes ruff
chmod +x ${{ steps.ruff-target.outputs.download-path }}/ruff
export PATH=${{ steps.ruff-target.outputs.download-path }}:$PATH
chmod +x "${DOWNLOAD_PATH}/ruff"
export PATH="${DOWNLOAD_PATH}:${PATH}"
ruff version
just test


@@ -46,6 +46,7 @@ jobs:
run: cargo build --locked
- name: Fuzz
run: |
# shellcheck disable=SC2046
(
uvx \
--python=3.12 \


@@ -10,12 +10,11 @@ on:
description: The ecosystem workflow that triggers the workflow run
required: true
permissions:
pull-requests: write
jobs:
comment:
runs-on: ubuntu-latest
permissions:
pull-requests: write
steps:
- uses: dawidd6/action-download-artifact@v7
name: Download pull request number
@@ -30,7 +29,7 @@ jobs:
run: |
if [[ -f pr-number ]]
then
echo "pr-number=$(<pr-number)" >> $GITHUB_OUTPUT
echo "pr-number=$(<pr-number)" >> "$GITHUB_OUTPUT"
fi
- uses: dawidd6/action-download-artifact@v7
@@ -66,9 +65,9 @@ jobs:
cat pr/ecosystem/ecosystem-result >> comment.txt
echo "" >> comment.txt
echo 'comment<<EOF' >> $GITHUB_OUTPUT
cat comment.txt >> $GITHUB_OUTPUT
echo 'EOF' >> $GITHUB_OUTPUT
echo 'comment<<EOF' >> "$GITHUB_OUTPUT"
cat comment.txt >> "$GITHUB_OUTPUT"
echo 'EOF' >> "$GITHUB_OUTPUT"
- name: Find existing comment
uses: peter-evans/find-comment@v3


@@ -44,8 +44,8 @@ jobs:
# Use version as display name for now
display_name="$version"
echo "version=$version" >> $GITHUB_ENV
echo "display_name=$display_name" >> $GITHUB_ENV
echo "version=$version" >> "$GITHUB_ENV"
echo "display_name=$display_name" >> "$GITHUB_ENV"
- name: "Set branch name"
run: |
@@ -55,8 +55,8 @@ jobs:
# characters disallowed in git branch names with hyphens
branch_display_name="$(echo "${display_name}" | tr -c '[:alnum:]._' '-' | tr -s '-')"
echo "branch_name=update-docs-$branch_display_name-$timestamp" >> $GITHUB_ENV
echo "timestamp=$timestamp" >> $GITHUB_ENV
echo "branch_name=update-docs-$branch_display_name-$timestamp" >> "$GITHUB_ENV"
echo "timestamp=$timestamp" >> "$GITHUB_ENV"
- name: "Add SSH key"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
@@ -112,7 +112,7 @@ jobs:
GITHUB_TOKEN: ${{ secrets.ASTRAL_DOCS_PAT }}
run: |
# set the PR title
pull_request_title="Update ruff documentation for "${display_name}""
pull_request_title="Update ruff documentation for ${display_name}"
# Delete any existing pull requests that are open for this version
# by checking against pull_request_title because the new PR will
@@ -124,10 +124,12 @@ jobs:
git push origin "${branch_name}"
# create the PR
gh pr create --base main --head "${branch_name}" \
--title "$pull_request_title" \
--body "Automated documentation update for "${display_name}"" \
--label "documentation"
gh pr create \
--base=main \
--head="${branch_name}" \
--title="${pull_request_title}" \
--body="Automated documentation update for ${display_name}" \
--label="documentation"
- name: "Merge Pull Request"
if: ${{ inputs.plan != '' && !fromJson(inputs.plan).announcement_tag_is_implicit }}


@@ -59,7 +59,7 @@ jobs:
run: |
cd ruff
git push --force origin typeshedbot/sync-typeshed
gh pr list --repo $GITHUB_REPOSITORY --head typeshedbot/sync-typeshed --json id --jq length | grep 1 && exit 0 # exit if there is existing pr
gh pr list --repo "$GITHUB_REPOSITORY" --head typeshedbot/sync-typeshed --json id --jq length | grep 1 && exit 0 # exit if there is existing pr
gh pr create --title "Sync vendored typeshed stubs" --body "Close and reopen this PR to trigger CI" --label "internal"
create-issue-on-failure:

.github/zizmor.yml (new file)

@@ -0,0 +1,6 @@
# Configuration for the zizmor static analysis tool, run via pre-commit in CI
# https://woodruffw.github.io/zizmor/configuration/
rules:
dangerous-triggers:
ignore:
- pr-comment.yaml


@@ -21,3 +21,11 @@ MD014: false
MD024:
# Allow when nested under different parents e.g. CHANGELOG.md
siblings_only: true
# MD046/code-block-style
#
# Ignore this because it conflicts with the code block style used in content
# tabs of mkdocs-material which is to add a blank line after the content title.
#
# Ref: https://github.com/astral-sh/ruff/pull/15011#issuecomment-2544790854
MD046: false


@@ -2,6 +2,7 @@ fail_fast: false
exclude: |
(?x)^(
.github/workflows/release.yml|
crates/red_knot_vendored/vendor/.*|
crates/red_knot_workspace/resources/.*|
crates/ruff_linter/resources/.*|
@@ -26,9 +27,8 @@ repos:
hooks:
- id: mdformat
additional_dependencies:
- mdformat-mkdocs
- mdformat-admon
- mdformat-footnote
- mdformat-mkdocs==4.0.0
- mdformat-footnote==0.1.1
exclude: |
(?x)^(
docs/formatter/black\.md
@@ -59,7 +59,7 @@ repos:
- black==24.10.0
- repo: https://github.com/crate-ci/typos
rev: v1.28.2
rev: v1.28.3
hooks:
- id: typos
@@ -73,7 +73,7 @@ repos:
pass_filenames: false # This makes it a lot faster
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.8.2
rev: v0.8.3
hooks:
- id: ruff-format
- id: ruff
@@ -88,18 +88,37 @@ repos:
- id: prettier
types: [yaml]
# zizmor detects security vulnerabilities in GitHub Actions workflows.
# Additional configuration for the tool is found in `.github/zizmor.yml`
- repo: https://github.com/woodruffw/zizmor-pre-commit
rev: v0.8.0
rev: v0.9.2
hooks:
- id: zizmor
# `release.yml` is autogenerated by `dist`; security issues need to be fixed there
# (https://opensource.axo.dev/cargo-dist/)
exclude: .github/workflows/release.yml
- repo: https://github.com/python-jsonschema/check-jsonschema
rev: 0.30.0
hooks:
- id: check-github-workflows
# `actionlint` hook, for verifying correct syntax in GitHub Actions workflows.
# Some additional configuration for `actionlint` can be found in `.github/actionlint.yaml`.
- repo: https://github.com/rhysd/actionlint
rev: v1.7.4
hooks:
- id: actionlint
stages:
# This hook is disabled by default, since it's quite slow.
# To run all hooks *including* this hook, use `uvx pre-commit run -a --hook-stage=manual`.
# To run *just* this hook, use `uvx pre-commit run -a actionlint --hook-stage=manual`.
- manual
args:
- "-ignore=SC2129" # ignorable stylistic lint from shellcheck
- "-ignore=SC2016" # another shellcheck lint: seems to have false positives?
additional_dependencies:
# actionlint has a shellcheck integration which extracts shell scripts in `run:` steps from GitHub Actions
# and checks these with shellcheck. This is arguably its most useful feature,
# but the integration only works if shellcheck is installed
- "github.com/wasilibs/go-shellcheck/cmd/shellcheck@v0.10.0"
ci:
skip: [cargo-fmt, dev-generate-all]


@@ -1,5 +1,33 @@
# Changelog
## 0.8.4
### Preview features
- \[`airflow`\] Extend `AIR302` with additional functions and classes ([#15015](https://github.com/astral-sh/ruff/pull/15015))
- \[`airflow`\] Implement `moved-to-provider-in-3` for modules that have been moved to Airflow providers (`AIR303`) ([#14764](https://github.com/astral-sh/ruff/pull/14764))
- \[`flake8-use-pathlib`\] Extend check for invalid path suffix to include the case `"."` (`PTH210`) ([#14902](https://github.com/astral-sh/ruff/pull/14902))
- \[`perflint`\] Fix panic in `PERF401` when list variable is after the `for` loop ([#14971](https://github.com/astral-sh/ruff/pull/14971))
- \[`perflint`\] Simplify finding the loop target in `PERF401` ([#15025](https://github.com/astral-sh/ruff/pull/15025))
- \[`pylint`\] Preserve original value format (`PLR6104`) ([#14978](https://github.com/astral-sh/ruff/pull/14978))
- \[`ruff`\] Avoid false positives for `RUF027` for typing context bindings ([#15037](https://github.com/astral-sh/ruff/pull/15037))
- \[`ruff`\] Check for ambiguous pattern passed to `pytest.raises()` (`RUF043`) ([#14966](https://github.com/astral-sh/ruff/pull/14966))
### Rule changes
- \[`flake8-bandit`\] Check `S105` for annotated assignment ([#15059](https://github.com/astral-sh/ruff/pull/15059))
- \[`flake8-pyi`\] More autofixes for `redundant-none-literal` (`PYI061`) ([#14872](https://github.com/astral-sh/ruff/pull/14872))
- \[`pydocstyle`\] Skip leading whitespace for `D403` ([#14963](https://github.com/astral-sh/ruff/pull/14963))
- \[`ruff`\] Skip `SQLModel` base classes for `mutable-class-default` (`RUF012`) ([#14949](https://github.com/astral-sh/ruff/pull/14949))
### Bug fixes
- \[`perflint`\] Parenthesize walrus expressions in autofix for `manual-list-comprehension` (`PERF401`) ([#15050](https://github.com/astral-sh/ruff/pull/15050))
### Server
- Check diagnostic refresh support from client capability which enables dynamic configuration for various editors ([#15014](https://github.com/astral-sh/ruff/pull/15014))
## 0.8.3
### Preview features

Cargo.lock (generated)

@@ -220,9 +220,9 @@ checksum = "7f839cdf7e2d3198ac6ca003fd8ebc61715755f41c1cad15ff13df67531e00ed"
[[package]]
name = "bstr"
version = "1.11.0"
version = "1.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1a68f1f47cdf0ec8ee4b941b2eee2a80cb796db73118c0dd09ac63fbe405be22"
checksum = "786a307d683a5bf92e6fd5fd69a7eb613751668d1d8d67d802846dfe367c62c8"
dependencies = [
"memchr",
"regex-automata 0.4.8",
@@ -314,9 +314,9 @@ dependencies = [
[[package]]
name = "chrono"
version = "0.4.38"
version = "0.4.39"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a21f936df1771bf62b77f047b726c4625ff2e8aa607c01ec06e5a05bd8463401"
checksum = "7e36cc9d416881d2e24f9a963be5fb1cd90966419ac844274161d10488b3e825"
dependencies = [
"android-tzdata",
"iana-time-zone",
@@ -465,12 +465,12 @@ checksum = "acbf1af155f9b9ef647e42cdc158db4b64a1b61f743629225fde6f3e0be2a7c7"
[[package]]
name = "colored"
version = "2.1.0"
version = "2.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cbf2150cce219b664a8a70df7a1f933836724b503f8a413af9365b4dcc4d90b8"
checksum = "117725a109d387c937a1533ce01b450cbde6b88abceea8473c4d7a85853cda3c"
dependencies = [
"lazy_static",
"windows-sys 0.48.0",
"windows-sys 0.52.0",
]
[[package]]
@@ -923,9 +923,9 @@ checksum = "e8c02a5121d4ea3eb16a80748c74f5549a5665e4c21333c6098f283870fbdea6"
[[package]]
name = "fern"
version = "0.7.0"
version = "0.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "69ff9c9d5fb3e6da8ac2f77ab76fe7e8087d512ce095200f8f29ac5b656cf6dc"
checksum = "4316185f709b23713e41e3195f90edef7fb00c3ed4adc79769cf09cc762a3b29"
dependencies = [
"log",
]
@@ -1521,9 +1521,9 @@ checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"
[[package]]
name = "libc"
version = "0.2.167"
version = "0.2.168"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "09d6582e104315a817dff97f75133544b2e094ee22447d2acf4a74e189ba06fc"
checksum = "5aaeb2981e0606ca11d79718f8bb01164f1d6ed75080182d3abf017e6d244b6d"
[[package]]
name = "libcst"
@@ -2161,7 +2161,7 @@ dependencies = [
"newtype-uuid",
"quick-xml",
"strip-ansi-escapes",
"thiserror 2.0.6",
"thiserror 2.0.7",
"uuid",
]
@@ -2288,6 +2288,7 @@ dependencies = [
"compact_str",
"countme",
"dir-test",
"drop_bomb",
"hashbrown 0.15.2",
"indexmap",
"insta",
@@ -2314,7 +2315,7 @@ dependencies = [
"static_assertions",
"tempfile",
"test-case",
"thiserror 2.0.6",
"thiserror 2.0.7",
"tracing",
]
@@ -2411,7 +2412,7 @@ dependencies = [
"rustc-hash 2.1.0",
"salsa",
"serde",
"thiserror 2.0.6",
"thiserror 2.0.7",
"toml",
"tracing",
]
@@ -2517,7 +2518,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.8.3"
version = "0.8.4"
dependencies = [
"anyhow",
"argfile",
@@ -2564,7 +2565,7 @@ dependencies = [
"strum",
"tempfile",
"test-case",
"thiserror 2.0.6",
"thiserror 2.0.7",
"tikv-jemallocator",
"toml",
"tracing",
@@ -2634,7 +2635,7 @@ dependencies = [
"salsa",
"serde",
"tempfile",
"thiserror 2.0.6",
"thiserror 2.0.7",
"tracing",
"tracing-subscriber",
"tracing-tree",
@@ -2736,7 +2737,7 @@ dependencies = [
[[package]]
name = "ruff_linter"
version = "0.8.3"
version = "0.8.4"
dependencies = [
"aho-corasick",
"annotate-snippets 0.9.2",
@@ -2786,7 +2787,7 @@ dependencies = [
"strum",
"strum_macros",
"test-case",
"thiserror 2.0.6",
"thiserror 2.0.7",
"toml",
"typed-arena",
"unicode-normalization",
@@ -2820,7 +2821,7 @@ dependencies = [
"serde_json",
"serde_with",
"test-case",
"thiserror 2.0.6",
"thiserror 2.0.7",
"uuid",
]
@@ -2892,7 +2893,7 @@ dependencies = [
"similar",
"smallvec",
"static_assertions",
"thiserror 2.0.6",
"thiserror 2.0.7",
"tracing",
]
@@ -3025,7 +3026,7 @@ dependencies = [
"serde",
"serde_json",
"shellexpand",
"thiserror 2.0.6",
"thiserror 2.0.7",
"tracing",
"tracing-subscriber",
]
@@ -3051,7 +3052,7 @@ dependencies = [
[[package]]
name = "ruff_wasm"
version = "0.8.3"
version = "0.8.4"
dependencies = [
"console_error_panic_hook",
"console_log",
@@ -3193,7 +3194,7 @@ checksum = "e86697c916019a8588c99b5fac3cead74ec0b4b819707a682fd4d23fa0ce1ba1"
[[package]]
name = "salsa"
version = "0.18.0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=254c749b02cde2fd29852a7463a33e800b771758#254c749b02cde2fd29852a7463a33e800b771758"
source = "git+https://github.com/salsa-rs/salsa.git?rev=3c7f1694c9efba751dbeeacfbc93b227586e316a#3c7f1694c9efba751dbeeacfbc93b227586e316a"
dependencies = [
"append-only-vec",
"arc-swap",
@@ -3203,6 +3204,7 @@ dependencies = [
"indexmap",
"lazy_static",
"parking_lot",
"rayon",
"rustc-hash 2.1.0",
"salsa-macro-rules",
"salsa-macros",
@@ -3213,12 +3215,12 @@ dependencies = [
[[package]]
name = "salsa-macro-rules"
version = "0.1.0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=254c749b02cde2fd29852a7463a33e800b771758#254c749b02cde2fd29852a7463a33e800b771758"
source = "git+https://github.com/salsa-rs/salsa.git?rev=3c7f1694c9efba751dbeeacfbc93b227586e316a#3c7f1694c9efba751dbeeacfbc93b227586e316a"
[[package]]
name = "salsa-macros"
version = "0.18.0"
source = "git+https://github.com/salsa-rs/salsa.git?rev=254c749b02cde2fd29852a7463a33e800b771758#254c749b02cde2fd29852a7463a33e800b771758"
source = "git+https://github.com/salsa-rs/salsa.git?rev=3c7f1694c9efba751dbeeacfbc93b227586e316a#3c7f1694c9efba751dbeeacfbc93b227586e316a"
dependencies = [
"heck",
"proc-macro2",
@@ -3280,9 +3282,9 @@ checksum = "1c107b6f4780854c8b126e228ea8869f4d7b71260f962fefb57b996b8959ba6b"
[[package]]
name = "serde"
version = "1.0.215"
version = "1.0.216"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6513c1ad0b11a9376da888e3e0baa0077f1aed55c17f50e7b2397136129fb88f"
checksum = "0b9781016e935a97e8beecf0c933758c97a5520d32930e460142b4cd80c6338e"
dependencies = [
"serde_derive",
]
@@ -3300,9 +3302,9 @@ dependencies = [
[[package]]
name = "serde_derive"
version = "1.0.215"
version = "1.0.216"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ad1e866f866923f252f05c889987993144fb74e722403468a4ebd70c3cd756c0"
checksum = "46f859dbbf73865c6627ed570e78961cd3ac92407a2d117204c49232485da55e"
dependencies = [
"proc-macro2",
"quote",
@@ -3623,11 +3625,11 @@ dependencies = [
[[package]]
name = "thiserror"
version = "2.0.6"
version = "2.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8fec2a1820ebd077e2b90c4df007bebf344cd394098a13c563957d0afc83ea47"
checksum = "93605438cbd668185516ab499d589afb7ee1859ea3d5fc8f6b0755e1c7443767"
dependencies = [
"thiserror-impl 2.0.6",
"thiserror-impl 2.0.7",
]
[[package]]
@@ -3643,9 +3645,9 @@ dependencies = [
[[package]]
name = "thiserror-impl"
version = "2.0.6"
version = "2.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d65750cab40f4ff1929fb1ba509e9914eb756131cef4210da8d5d700d26f6312"
checksum = "e1d8749b4531af2117677a5fcd12b1348a3fe2b81e36e61ffeac5c4aa3273e36"
dependencies = [
"proc-macro2",
"quote",


@@ -118,7 +118,8 @@ rand = { version = "0.8.5" }
rayon = { version = "1.10.0" }
regex = { version = "1.10.2" }
rustc-hash = { version = "2.0.0" }
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "254c749b02cde2fd29852a7463a33e800b771758" }
# When updating salsa, make sure to also update the revision in `fuzz/Cargo.toml`
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "3c7f1694c9efba751dbeeacfbc93b227586e316a" }
schemars = { version = "0.8.16" }
seahash = { version = "4.1.0" }
serde = { version = "1.0.197", features = ["derive"] }


@@ -140,8 +140,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
# For a specific version.
curl -LsSf https://astral.sh/ruff/0.8.3/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.8.3/install.ps1 | iex"
curl -LsSf https://astral.sh/ruff/0.8.4/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.8.4/install.ps1 | iex"
```
You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
@@ -174,7 +174,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.8.3
rev: v0.8.4
hooks:
# Run the linter.
- id: ruff


@@ -279,7 +279,7 @@ impl MainLoop {
while let Ok(message) = self.receiver.recv() {
match message {
MainLoopMessage::CheckWorkspace => {
let db = db.snapshot();
let db = db.clone();
let sender = self.sender.clone();
// Spawn a new task that checks the workspace. This needs to be done in a separate thread


@@ -26,6 +26,7 @@ bitflags = { workspace = true }
camino = { workspace = true }
compact_str = { workspace = true }
countme = { workspace = true }
drop_bomb = { workspace = true }
indexmap = { workspace = true }
itertools = { workspace = true }
ordermap = { workspace = true }
@@ -58,4 +59,3 @@ serde = ["ruff_db/serde", "dep:serde"]
[lints]
workspace = true


@@ -73,12 +73,12 @@ qux = (foo, bar)
reveal_type(qux) # revealed: tuple[Literal["foo"], Literal["bar"]]
# TODO: Infer "LiteralString"
reveal_type(foo.join(qux)) # revealed: @Todo(call todo)
reveal_type(foo.join(qux)) # revealed: @Todo(Attribute access on `StringLiteral` types)
template: LiteralString = "{}, {}"
reveal_type(template) # revealed: Literal["{}, {}"]
# TODO: Infer `LiteralString`
reveal_type(template.format(foo, bar)) # revealed: @Todo(call todo)
reveal_type(template.format(foo, bar)) # revealed: @Todo(Attribute access on `StringLiteral` types)
```
### Assignability


@@ -3,43 +3,59 @@
The `typing` module has various aliases to other stdlib classes. These are a legacy feature, but
still need to be supported by a type checker.
## Currently unsupported
## Correspondence
Support for most of these symbols is currently a TODO:
All of the following symbols can be mapped one-to-one with the actual type:
```py
import typing
def f(
a: typing.List,
b: typing.List[int],
c: typing.Dict,
d: typing.Dict[int, str],
e: typing.DefaultDict,
f: typing.DefaultDict[str, int],
g: typing.Set,
h: typing.Set[int],
i: typing.FrozenSet,
j: typing.FrozenSet[str],
k: typing.OrderedDict,
l: typing.OrderedDict[int, str],
m: typing.Counter,
n: typing.Counter[int],
list_bare: typing.List,
list_parametrized: typing.List[int],
dict_bare: typing.Dict,
dict_parametrized: typing.Dict[int, str],
set_bare: typing.Set,
set_parametrized: typing.Set[int],
frozen_set_bare: typing.FrozenSet,
frozen_set_parametrized: typing.FrozenSet[str],
chain_map_bare: typing.ChainMap,
chain_map_parametrized: typing.ChainMap[int],
counter_bare: typing.Counter,
counter_parametrized: typing.Counter[int],
default_dict_bare: typing.DefaultDict,
default_dict_parametrized: typing.DefaultDict[str, int],
deque_bare: typing.Deque,
deque_parametrized: typing.Deque[str],
ordered_dict_bare: typing.OrderedDict,
ordered_dict_parametrized: typing.OrderedDict[int, str],
):
reveal_type(a) # revealed: @Todo(Unsupported or invalid type in a type expression)
reveal_type(b) # revealed: @Todo(typing.List alias)
reveal_type(c) # revealed: @Todo(Unsupported or invalid type in a type expression)
reveal_type(d) # revealed: @Todo(typing.Dict alias)
reveal_type(e) # revealed: @Todo(Unsupported or invalid type in a type expression)
reveal_type(f) # revealed: @Todo(typing.DefaultDict[] alias)
reveal_type(g) # revealed: @Todo(Unsupported or invalid type in a type expression)
reveal_type(h) # revealed: @Todo(typing.Set alias)
reveal_type(i) # revealed: @Todo(Unsupported or invalid type in a type expression)
reveal_type(j) # revealed: @Todo(typing.FrozenSet alias)
reveal_type(k) # revealed: @Todo(Unsupported or invalid type in a type expression)
reveal_type(l) # revealed: @Todo(typing.OrderedDict alias)
reveal_type(m) # revealed: @Todo(Unsupported or invalid type in a type expression)
reveal_type(n) # revealed: @Todo(typing.Counter[] alias)
reveal_type(list_bare) # revealed: list
reveal_type(list_parametrized) # revealed: list
reveal_type(dict_bare) # revealed: dict
reveal_type(dict_parametrized) # revealed: dict
reveal_type(set_bare) # revealed: set
reveal_type(set_parametrized) # revealed: set
reveal_type(frozen_set_bare) # revealed: frozenset
reveal_type(frozen_set_parametrized) # revealed: frozenset
reveal_type(chain_map_bare) # revealed: ChainMap
reveal_type(chain_map_parametrized) # revealed: ChainMap
reveal_type(counter_bare) # revealed: Counter
reveal_type(counter_parametrized) # revealed: Counter
reveal_type(default_dict_bare) # revealed: defaultdict
reveal_type(default_dict_parametrized) # revealed: defaultdict
reveal_type(deque_bare) # revealed: deque
reveal_type(deque_parametrized) # revealed: deque
reveal_type(ordered_dict_bare) # revealed: OrderedDict
reveal_type(ordered_dict_parametrized) # revealed: OrderedDict
```
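The one-to-one correspondence shown above can also be observed at runtime: each legacy `typing` alias records its concrete class in its `__origin__` attribute. A minimal sketch of that standard CPython behavior (Python 3.9+):

```python
import collections
import typing

# Each legacy typing alias points at its concrete runtime class.
assert typing.List.__origin__ is list
assert typing.Dict.__origin__ is dict
assert typing.Set.__origin__ is set
assert typing.FrozenSet.__origin__ is frozenset
assert typing.ChainMap.__origin__ is collections.ChainMap
assert typing.Counter.__origin__ is collections.Counter
assert typing.DefaultDict.__origin__ is collections.defaultdict
assert typing.Deque.__origin__ is collections.deque
assert typing.OrderedDict.__origin__ is collections.OrderedDict
print("all aliases map to their runtime classes")
```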
## Inheritance
@@ -49,35 +65,63 @@ The aliases can be inherited from. Some of these are still partially or wholly T
```py
import typing
class A(typing.Dict): ...
####################
### Built-ins
class ListSubclass(typing.List): ...
# TODO: should have `Generic`, should not have `Unknown`
reveal_type(A.__mro__) # revealed: tuple[Literal[A], Literal[dict], Unknown, Literal[object]]
# revealed: tuple[Literal[ListSubclass], Literal[list], Unknown, Literal[object]]
reveal_type(ListSubclass.__mro__)
class B(typing.List): ...
class DictSubclass(typing.Dict): ...
# TODO: should have `Generic`, should not have `Unknown`
reveal_type(B.__mro__) # revealed: tuple[Literal[B], Literal[list], Unknown, Literal[object]]
# revealed: tuple[Literal[DictSubclass], Literal[dict], Unknown, Literal[object]]
reveal_type(DictSubclass.__mro__)
class C(typing.Set): ...
class SetSubclass(typing.Set): ...
# TODO: should have `Generic`, should not have `Unknown`
reveal_type(C.__mro__) # revealed: tuple[Literal[C], Literal[set], Unknown, Literal[object]]
# revealed: tuple[Literal[SetSubclass], Literal[set], Unknown, Literal[object]]
reveal_type(SetSubclass.__mro__)
class D(typing.FrozenSet): ...
class FrozenSetSubclass(typing.FrozenSet): ...
# TODO: should have `Generic`, should not have `Unknown`
reveal_type(D.__mro__) # revealed: tuple[Literal[D], Literal[frozenset], Unknown, Literal[object]]
# revealed: tuple[Literal[FrozenSetSubclass], Literal[frozenset], Unknown, Literal[object]]
reveal_type(FrozenSetSubclass.__mro__)
class E(typing.DefaultDict): ...
####################
### `collections`
reveal_type(E.__mro__) # revealed: tuple[Literal[E], @Todo(Support for more typing aliases as base classes), Literal[object]]
class ChainMapSubclass(typing.ChainMap): ...
class F(typing.OrderedDict): ...
# TODO: Should be (ChainMapSubclass, ChainMap, MutableMapping, Mapping, Collection, Sized, Iterable, Container, Generic, object)
# revealed: tuple[Literal[ChainMapSubclass], Literal[ChainMap], Unknown, Literal[object]]
reveal_type(ChainMapSubclass.__mro__)
reveal_type(F.__mro__) # revealed: tuple[Literal[F], @Todo(Support for more typing aliases as base classes), Literal[object]]
class CounterSubclass(typing.Counter): ...
class G(typing.Counter): ...
# TODO: Should be (CounterSubclass, Counter, dict, MutableMapping, Mapping, Collection, Sized, Iterable, Container, Generic, object)
# revealed: tuple[Literal[CounterSubclass], Literal[Counter], Unknown, Literal[object]]
reveal_type(CounterSubclass.__mro__)
reveal_type(G.__mro__) # revealed: tuple[Literal[G], @Todo(Support for more typing aliases as base classes), Literal[object]]
class DefaultDictSubclass(typing.DefaultDict): ...
# TODO: Should be (DefaultDictSubclass, defaultdict, dict, MutableMapping, Mapping, Collection, Sized, Iterable, Container, Generic, object)
# revealed: tuple[Literal[DefaultDictSubclass], Literal[defaultdict], Unknown, Literal[object]]
reveal_type(DefaultDictSubclass.__mro__)
class DequeSubclass(typing.Deque): ...
# TODO: Should be (DequeSubclass, deque, MutableSequence, Sequence, Reversible, Collection, Sized, Iterable, Container, Generic, object)
# revealed: tuple[Literal[DequeSubclass], Literal[deque], Unknown, Literal[object]]
reveal_type(DequeSubclass.__mro__)
class OrderedDictSubclass(typing.OrderedDict): ...
# TODO: Should be (OrderedDictSubclass, OrderedDict, dict, MutableMapping, Mapping, Collection, Sized, Iterable, Container, Generic, object)
# revealed: tuple[Literal[OrderedDictSubclass], Literal[OrderedDict], Unknown, Literal[object]]
reveal_type(OrderedDictSubclass.__mro__)
```
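The TODOs above, which expect `Generic` (and no `Unknown`) in these MROs, match what CPython actually produces when a legacy alias is subclassed. A quick runtime check, outside the scope of the type-checker tests themselves:

```python
import typing

class ListSubclass(typing.List): ...

# At runtime, subclassing the alias inserts both the concrete class
# and typing.Generic into the MRO.
assert list in ListSubclass.__mro__
assert typing.Generic in ListSubclass.__mro__
print([c.__name__ for c in ListSubclass.__mro__])
```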


@@ -51,7 +51,7 @@ class D(TypeIs): ... # error: [invalid-base]
class E(Concatenate): ... # error: [invalid-base]
class F(Callable): ...
reveal_type(F.__mro__) # revealed: tuple[Literal[F], @Todo(Support for more typing aliases as base classes), Literal[object]]
reveal_type(F.__mro__) # revealed: tuple[Literal[F], @Todo(Support for Callable as a base class), Literal[object]]
```
## Subscriptability


@@ -67,6 +67,6 @@ def _(flag: bool):
def __call__(self) -> int: ...
a = NonCallable()
# error: "Object of type `Literal[1] | Literal[__call__]` is not callable (due to union element `Literal[1]`)"
reveal_type(a()) # revealed: Unknown | int
# error: "Object of type `Literal[__call__] | Literal[1]` is not callable (due to union element `Literal[1]`)"
reveal_type(a()) # revealed: int | Unknown
```


@@ -19,14 +19,17 @@ def _(flag: bool):
x = 1 # error: [conflicting-declarations] "Conflicting declared types for `x`: str, int"
```
## Partial declarations
## Incompatible declarations for 2 (out of 3) types
```py
def _(flag: bool):
if flag:
def _(flag1: bool, flag2: bool):
if flag1:
x: str
elif flag2:
x: int
x = 1 # error: [conflicting-declarations] "Conflicting declared types for `x`: Unknown, int"
# Here, the declared type for `x` is `int | str | Unknown`.
x = 1 # error: [conflicting-declarations] "Conflicting declared types for `x`: str, int"
```
## Incompatible declarations with bad assignment
@@ -42,3 +45,31 @@ def _(flag: bool):
# error: [invalid-assignment]
x = b"foo"
```
## No errors
Currently, we avoid raising the `conflicting-declarations` diagnostic for the following cases:
### Partial declarations
```py
def _(flag: bool):
if flag:
x: int
x = 1
```
### Partial declarations in try-except
Refer to <https://github.com/astral-sh/ruff/issues/13966>
```py
def _():
try:
x: int = 1
except:
x = 2
x = 3
```


@@ -90,3 +90,83 @@ def foo(
# TODO: should emit a diagnostic here:
reveal_type(g) # revealed: @Todo(full tuple[...] support)
```
## Object raised is not an exception
```py
try:
raise AttributeError() # fine
except:
...
try:
raise FloatingPointError # fine
except:
...
try:
raise 1 # error: [invalid-raise]
except:
...
try:
raise int # error: [invalid-raise]
except:
...
def _(e: Exception | type[Exception]):
raise e # fine
def _(e: Exception | type[Exception] | None):
raise e # error: [invalid-raise]
```
## Exception cause is not an exception
```py
try:
raise EOFError() from GeneratorExit # fine
except:
...
try:
raise StopIteration from MemoryError() # fine
except:
...
try:
raise BufferError() from None # fine
except:
...
try:
raise ZeroDivisionError from False # error: [invalid-raise]
except:
...
try:
raise SystemExit from bool() # error: [invalid-raise]
except:
...
try:
raise
except KeyboardInterrupt as e: # fine
reveal_type(e) # revealed: KeyboardInterrupt
raise LookupError from e # fine
try:
raise
except int as e: # error: [invalid-exception-caught]
reveal_type(e) # revealed: Unknown
raise KeyError from e
def _(e: Exception | type[Exception]):
raise ModuleNotFoundError from e # fine
def _(e: Exception | type[Exception] | None):
raise IndexError from e # fine
def _(e: int | None):
raise IndexError from e # error: [invalid-raise]
```
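These diagnostics mirror a runtime rule: CPython raises `TypeError` when the operand of `raise` does not derive from `BaseException`. A small illustrative helper (hypothetical, not part of the test suite) makes the correspondence concrete:

```python
def can_raise(obj) -> bool:
    """Return True if `obj` is a valid operand for a bare `raise obj`.

    Caveat: an `obj` that is itself a TypeError would be misclassified;
    that's fine for this sketch.
    """
    try:
        raise obj
    except TypeError:
        # CPython: "exceptions must derive from BaseException"
        return False
    except BaseException:
        return True

assert can_raise(ValueError("boom"))  # instances are fine
assert can_raise(ValueError)          # exception classes are fine
assert not can_raise(1)               # flagged as [invalid-raise] at check time
assert not can_raise(int)             # non-exception classes are rejected too
print("runtime agrees with the invalid-raise diagnostic")
```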


@@ -25,3 +25,82 @@ reveal_type(D) # revealed: Literal[C]
```py path=b.py
class C: ...
```
## Nested
```py
import a.b
reveal_type(a.b.C) # revealed: Literal[C]
```
```py path=a/__init__.py
```
```py path=a/b.py
class C: ...
```
## Deeply nested
```py
import a.b.c
reveal_type(a.b.c.C) # revealed: Literal[C]
```
```py path=a/__init__.py
```
```py path=a/b/__init__.py
```
```py path=a/b/c.py
class C: ...
```
## Nested with rename
```py
import a.b as b
reveal_type(b.C) # revealed: Literal[C]
```
```py path=a/__init__.py
```
```py path=a/b.py
class C: ...
```
## Deeply nested with rename
```py
import a.b.c as c
reveal_type(c.C) # revealed: Literal[C]
```
```py path=a/__init__.py
```
```py path=a/b/__init__.py
```
```py path=a/b/c.py
class C: ...
```
## Unresolvable submodule imports
```py
# Topmost component resolvable, submodule not resolvable:
import a.foo # error: [unresolved-import] "Cannot resolve import `a.foo`"
# Topmost component unresolvable:
import b.foo # error: [unresolved-import] "Cannot resolve import `b.foo`"
```
```py path=a/__init__.py
```


@@ -0,0 +1,75 @@
# Conflicting attributes and submodules
## Via import
```py
import a.b
reveal_type(a.b) # revealed: <module 'a.b'>
```
```py path=a/__init__.py
b = 42
```
```py path=a/b.py
```
## Via from/import
```py
from a import b
reveal_type(b) # revealed: Literal[42]
```
```py path=a/__init__.py
b = 42
```
```py path=a/b.py
```
## Via both
```py
import a.b
from a import b
reveal_type(b) # revealed: <module 'a.b'>
reveal_type(a.b) # revealed: <module 'a.b'>
```
```py path=a/__init__.py
b = 42
```
```py path=a/b.py
```
## Via both (backwards)
In this test, we infer a different type for `b` than the runtime behavior of the Python interpreter.
The interpreter will not load the submodule `a.b` during the `from a import b` statement, since `a`
contains a non-module attribute named `b`. (See the [definition][from-import] of a `from...import`
statement for details.) However, because our import tracking is flow-insensitive, we will see that
`a.b` is imported somewhere in the file, and therefore assume that the `from...import` statement
sees the submodule as the value of `b` instead of the integer.
```py
from a import b
import a.b
# Python would say `Literal[42]` for `b`
reveal_type(b) # revealed: <module 'a.b'>
reveal_type(a.b) # revealed: <module 'a.b'>
```
```py path=a/__init__.py
b = 42
```
```py path=a/b.py
```
[from-import]: https://docs.python.org/3/reference/simple_stmts.html#the-import-statement
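The runtime behavior described above can be checked directly. The sketch below builds the `a` package on the fly in a temporary directory (the setup code is illustrative, not part of the test suite):

```python
import sys
import tempfile
import types
from pathlib import Path

# Build a throwaway package `a`: `__init__.py` defines `b = 42`,
# and there is also a real (empty) submodule `a/b.py`.
root = Path(tempfile.mkdtemp())
pkg = root / "a"
pkg.mkdir()
(pkg / "__init__.py").write_text("b = 42\n")
(pkg / "b.py").write_text("")
sys.path.insert(0, str(root))

# `from a import b` runs first: since `a` already has a `b` attribute,
# the interpreter does not load the submodule, and `b` is the integer.
from a import b
assert b == 42

# Importing the submodule afterwards rebinds the `b` attribute on the
# package object to the module...
import a.b
assert isinstance(a.b, types.ModuleType)

# ...but the earlier `from`-import binding is unchanged.
assert b == 42
```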


@@ -7,3 +7,25 @@ from import bar # error: [invalid-syntax]
reveal_type(bar) # revealed: Unknown
```
## Invalid nested module import
TODO: This is correctly flagged as an error, but we could clean up the diagnostics that we report.
```py
# TODO: No second diagnostic
# error: [invalid-syntax] "Expected ',', found '.'"
# error: [unresolved-import] "Module `a` has no member `c`"
from a import b.c
# TODO: Should these be inferred as Unknown?
reveal_type(b) # revealed: <module 'a.b'>
reveal_type(b.c) # revealed: Literal[1]
```
```py path=a/__init__.py
```
```py path=a/b.py
c = 1
```


@@ -121,23 +121,44 @@ X = 42
```
```py path=package/bar.py
# TODO: support submodule imports
from . import foo # error: [unresolved-import]
from . import foo
y = foo.X
# TODO: should be `Literal[42]`
reveal_type(y) # revealed: Unknown
reveal_type(foo.X) # revealed: Literal[42]
```
## Non-existent + bare to module
This test verifies that we emit an error when we try to import a symbol that is neither a submodule
nor an attribute of `package`.
```py path=package/__init__.py
```
```py path=package/bar.py
# TODO: support submodule imports
from . import foo # error: [unresolved-import]
reveal_type(foo) # revealed: Unknown
```
## Import submodule from self
We don't currently consider `from...import` statements when building up the `imported_modules` set
in the semantic index. When accessing an attribute of a module, we only consider it a potential
submodule when that submodule name appears in the `imported_modules` set. That means that submodules
that are imported via `from...import` are not visible to our type inference if you also access that
submodule via the attribute on its parent package.
```py path=package/__init__.py
```
```py path=package/foo.py
X = 42
```
```py path=package/bar.py
from . import foo
import package
# error: [unresolved-attribute] "Type `<module 'package'>` has no attribute `foo`"
reveal_type(package.foo.X) # revealed: Unknown
```


@@ -0,0 +1,100 @@
# Tracking imported modules
These tests depend on how we track which modules have been imported. There are currently two
characteristics of our module tracking that can lead to inaccuracies:
- Imports are tracked on a per-file basis. At runtime, importing a submodule in one file makes that
submodule globally available via any reference to the containing package. We will flag an error
if a file tries to access a submodule without there being an import of that submodule _in that
same file_.
This is a purposeful decision, and not one we plan to change. If a module wants to re-export some
other module that it imports, there are ways to do that (tested below) that are blessed by the
typing spec and that are visible to our file-scoped import tracking.
- Imports are tracked flow-insensitively: submodule accesses are allowed and resolved if that
submodule is imported _anywhere in the file_. This handles the common case where all imports are
grouped at the top of the file, and is easiest to implement. We might revisit this decision and
track submodule imports flow-sensitively, in which case we will have to update the assertions in
some of these tests.
## Import submodule later in file
This test highlights our flow-insensitive analysis, since we access the `a.b` submodule before it
has been imported.
```py
import a
# Would be an error with flow-sensitive tracking
reveal_type(a.b.C) # revealed: Literal[C]
import a.b
```
```py path=a/__init__.py
```
```py path=a/b.py
class C: ...
```
## Rename a re-export
This test highlights how import tracking is local to each file, but specifically to the file where a
containing module is first referenced. This allows the main module to see that `q.a` contains a
submodule `b`, even though `a.b` is never imported in the main module.
```py
from q import a, b
reveal_type(b) # revealed: <module 'a.b'>
reveal_type(b.C) # revealed: Literal[C]
reveal_type(a.b) # revealed: <module 'a.b'>
reveal_type(a.b.C) # revealed: Literal[C]
```
```py path=a/__init__.py
```
```py path=a/b.py
class C: ...
```
```py path=q.py
import a as a
import a.b as b
```
## Attribute overrides submodule
Technically, either a submodule or a non-module attribute could shadow the other, depending on the
ordering of when the submodule is loaded relative to the parent module's `__init__.py` file being
evaluated. We have chosen to always have the submodule take priority. (This matches pyright's
current behavior and is the opposite of mypy's current behavior.)
```py
import sub.b
import attr.b
# In the Python interpreter, `attr.b` is Literal[1]
reveal_type(sub.b) # revealed: <module 'sub.b'>
reveal_type(attr.b) # revealed: <module 'attr.b'>
```
```py path=sub/__init__.py
b = 1
```
```py path=sub/b.py
```
```py path=attr/__init__.py
from . import b as _
b = 1
```
```py path=attr/b.py
```
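The runtime ordering described above can be reproduced outside the type checker. In this sketch, `attrpkg` is a hypothetical stand-in for the `attr` package above (renamed to avoid clashing with the real `attr` library on PyPI):

```python
import sys
import tempfile
from pathlib import Path

# `attrpkg/__init__.py` first imports the submodule, then shadows it
# with a plain attribute `b = 1`, just like `attr/__init__.py` above.
root = Path(tempfile.mkdtemp())
pkg = root / "attrpkg"
pkg.mkdir()
(pkg / "__init__.py").write_text("from . import b as _\nb = 1\n")
(pkg / "b.py").write_text("")
sys.path.insert(0, str(root))

import attrpkg.b

# The submodule attribute was set while `__init__.py` was still
# executing, so the later `b = 1` assignment wins at runtime:
assert attrpkg.b == 1
```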


@@ -0,0 +1,221 @@
# Narrowing For Truthiness Checks (`if x` or `if not x`)
## Value Literals
```py
def foo() -> Literal[0, -1, True, False, "", "foo", b"", b"bar", None] | tuple[()]:
return 0
x = foo()
if x:
reveal_type(x) # revealed: Literal[-1] | Literal[True] | Literal["foo"] | Literal[b"bar"]
else:
reveal_type(x) # revealed: Literal[0] | Literal[False] | Literal[""] | Literal[b""] | None | tuple[()]
if not x:
reveal_type(x) # revealed: Literal[0] | Literal[False] | Literal[""] | Literal[b""] | None | tuple[()]
else:
reveal_type(x) # revealed: Literal[-1] | Literal[True] | Literal["foo"] | Literal[b"bar"]
if x and not x:
reveal_type(x) # revealed: Never
else:
reveal_type(x) # revealed: Literal[-1, 0] | bool | Literal["", "foo"] | Literal[b"", b"bar"] | None | tuple[()]
if not (x and not x):
reveal_type(x) # revealed: Literal[-1, 0] | bool | Literal["", "foo"] | Literal[b"", b"bar"] | None | tuple[()]
else:
reveal_type(x) # revealed: Never
if x or not x:
reveal_type(x) # revealed: Literal[-1, 0] | bool | Literal["foo", ""] | Literal[b"bar", b""] | None | tuple[()]
else:
reveal_type(x) # revealed: Never
if not (x or not x):
reveal_type(x) # revealed: Never
else:
reveal_type(x) # revealed: Literal[-1, 0] | bool | Literal["foo", ""] | Literal[b"bar", b""] | None | tuple[()]
if (isinstance(x, int) or isinstance(x, str)) and x:
reveal_type(x) # revealed: Literal[-1] | Literal[True] | Literal["foo"]
else:
reveal_type(x) # revealed: Literal[b"", b"bar"] | None | tuple[()] | Literal[0] | Literal[False] | Literal[""]
```
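As a sanity check, the partition inferred for the first `if x:` branch matches the runtime truthiness of those same literal values:

```python
# The same literals as in `foo()`'s return type, checked at runtime.
values = [0, -1, True, False, "", "foo", b"", b"bar", None, ()]
truthy = [v for v in values if v]
falsy = [v for v in values if not v]
assert truthy == [-1, True, "foo", b"bar"]
assert falsy == [0, False, "", b"", None, ()]
```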
## Function Literals
Function objects are always truthy.
```py
def flag() -> bool:
return True
def foo(hello: int) -> bytes:
return b""
def bar(world: str, *args, **kwargs) -> float:
return 0.0
x = foo if flag() else bar
if x:
reveal_type(x) # revealed: Literal[foo, bar]
else:
reveal_type(x) # revealed: Never
```
## Mutable Truthiness
### Truthiness of Instances
The boolean value of an instance is not always consistent. For example, `__bool__` can be customized
to return random values, or in the case of a `list()`, the result depends on the number of elements
in the list. Therefore, these types should not be narrowed by `if x` or `if not x`.
```py
class A: ...
class B: ...
def f(x: A | B):
if x:
reveal_type(x) # revealed: A & ~AlwaysFalsy | B & ~AlwaysFalsy
else:
reveal_type(x) # revealed: A & ~AlwaysTruthy | B & ~AlwaysTruthy
if x and not x:
reveal_type(x) # revealed: A & ~AlwaysFalsy & ~AlwaysTruthy | B & ~AlwaysFalsy & ~AlwaysTruthy
else:
reveal_type(x) # revealed: A & ~AlwaysTruthy | B & ~AlwaysTruthy | A & ~AlwaysFalsy | B & ~AlwaysFalsy
if x or not x:
reveal_type(x) # revealed: A & ~AlwaysFalsy | B & ~AlwaysFalsy | A & ~AlwaysTruthy | B & ~AlwaysTruthy
else:
reveal_type(x) # revealed: A & ~AlwaysTruthy & ~AlwaysFalsy | B & ~AlwaysTruthy & ~AlwaysFalsy
```
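A contrived runtime example of why instances cannot be narrowed this way: nothing stops `__bool__` from answering differently on successive calls. `Flaky` here is illustrative only:

```python
class Flaky:
    """A contrived class whose truthiness flips on every check."""

    def __init__(self) -> None:
        self.calls = 0

    def __bool__(self) -> bool:
        self.calls += 1
        return self.calls % 2 == 1  # alternates True, False, True, ...

f = Flaky()
assert bool(f) is True   # first check
assert bool(f) is False  # same object, opposite answer
```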
### Truthiness of Types
Similarly, class objects cannot be assumed to be truthy, because `__bool__` can be customized via a
metaclass. Although this is a very rare case, we may consider metaclass checks in the future to
handle this more accurately.
```py
def flag() -> bool:
return True
x = int if flag() else str
reveal_type(x) # revealed: Literal[int, str]
if x:
reveal_type(x) # revealed: Literal[int] & ~AlwaysFalsy | Literal[str] & ~AlwaysFalsy
else:
reveal_type(x) # revealed: Literal[int] & ~AlwaysTruthy | Literal[str] & ~AlwaysTruthy
```
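The metaclass customization mentioned above is real runtime behavior; a minimal sketch (the class names are hypothetical):

```python
class FalsyMeta(type):
    # `bool(SomeClass)` dispatches to type(SomeClass).__bool__,
    # i.e. to the metaclass, not to the class body.
    def __bool__(cls) -> bool:
        return False

class Weird(metaclass=FalsyMeta): ...

assert bool(Weird) is False  # the class object itself is falsy
assert bool(int) is True     # ordinary classes default to truthy
```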
## Determined Truthiness
Some custom classes can have a boolean value that is consistently determined as either `True` or
`False`, regardless of the instance's state. This is achieved by defining a `__bool__` method that
always returns a fixed value.
These types can always be fully narrowed in boolean contexts, as shown below:
```py
class T:
def __bool__(self) -> Literal[True]:
return True
class F:
def __bool__(self) -> Literal[False]:
return False
t = T()
if t:
reveal_type(t) # revealed: T
else:
reveal_type(t) # revealed: Never
f = F()
if f:
reveal_type(f) # revealed: Never
else:
reveal_type(f) # revealed: F
```
## Narrowing Complex Intersection and Union
```py
class A: ...
class B: ...
def flag() -> bool:
return True
def instance() -> A | B:
return A()
def literals() -> Literal[0, 42, "", "hello"]:
return 42
x = instance()
y = literals()
if isinstance(x, str) and not isinstance(x, B):
reveal_type(x) # revealed: A & str & ~B
reveal_type(y) # revealed: Literal[0, 42] | Literal["", "hello"]
z = x if flag() else y
reveal_type(z) # revealed: A & str & ~B | Literal[0, 42] | Literal["", "hello"]
if z:
reveal_type(z) # revealed: A & str & ~B & ~AlwaysFalsy | Literal[42] | Literal["hello"]
else:
reveal_type(z) # revealed: A & str & ~B & ~AlwaysTruthy | Literal[0] | Literal[""]
```
## Narrowing Multiple Variables
```py
def f(x: Literal[0, 1], y: Literal["", "hello"]):
if x and y and not x and not y:
reveal_type(x) # revealed: Never
reveal_type(y) # revealed: Never
else:
# ~(x or not x) and ~(y or not y)
reveal_type(x) # revealed: Literal[0, 1]
reveal_type(y) # revealed: Literal["", "hello"]
if (x or not x) and (y and not y):
reveal_type(x) # revealed: Literal[0, 1]
reveal_type(y) # revealed: Never
else:
# ~(x or not x) or ~(y and not y)
reveal_type(x) # revealed: Literal[0, 1]
reveal_type(y) # revealed: Literal["", "hello"]
```
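The `Never` branches above are indeed unreachable at runtime, which a brute-force check over the two literal domains confirms:

```python
# For every combination of the literal values, the conditions that
# narrow to `Never` are never satisfied.
for x in (0, 1):
    for y in ("", "hello"):
        assert not (x and y and not x and not y)
        assert not ((x or not x) and (y and not y))
```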
## ControlFlow Merging
After merging control flows, when we take the union of all constraints applied in each branch, we
should return to the original state.
```py
class A: ...
x = A()
if x and not x:
y = x
reveal_type(y) # revealed: A & ~AlwaysFalsy & ~AlwaysTruthy
else:
y = x
reveal_type(y) # revealed: A & ~AlwaysTruthy | A & ~AlwaysFalsy
# TODO: It should be A. We should improve UnionBuilder or IntersectionBuilder. (issue #15023)
reveal_type(y) # revealed: A & ~AlwaysTruthy | A & ~AlwaysFalsy
```


@@ -61,10 +61,8 @@ class B: ...
```py path=a/test.py
import a.b
-# TODO: no diagnostic
-# error: [unresolved-attribute]
 def f(c: type[a.b.C]):
-    reveal_type(c) # revealed: @Todo(unsupported type[X] special form)
+    reveal_type(c) # revealed: type[C]
```
```py path=a/__init__.py


@@ -0,0 +1,165 @@
# Custom unary operations
## Class instances
```py
class Yes:
def __pos__(self) -> bool:
return False
def __neg__(self) -> str:
return "negative"
def __invert__(self) -> int:
return 17
class Sub(Yes): ...
class No: ...
reveal_type(+Yes()) # revealed: bool
reveal_type(-Yes()) # revealed: str
reveal_type(~Yes()) # revealed: int
reveal_type(+Sub()) # revealed: bool
reveal_type(-Sub()) # revealed: str
reveal_type(~Sub()) # revealed: int
# error: [unsupported-operator] "Unary operator `+` is unsupported for type `No`"
reveal_type(+No()) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `-` is unsupported for type `No`"
reveal_type(-No()) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `~` is unsupported for type `No`"
reveal_type(~No()) # revealed: Unknown
```
## Classes
```py
class Yes:
def __pos__(self) -> bool:
return False
def __neg__(self) -> str:
return "negative"
def __invert__(self) -> int:
return 17
class Sub(Yes): ...
class No: ...
# error: [unsupported-operator] "Unary operator `+` is unsupported for type `Literal[Yes]`"
reveal_type(+Yes) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `-` is unsupported for type `Literal[Yes]`"
reveal_type(-Yes) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `~` is unsupported for type `Literal[Yes]`"
reveal_type(~Yes) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `+` is unsupported for type `Literal[Sub]`"
reveal_type(+Sub) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `-` is unsupported for type `Literal[Sub]`"
reveal_type(-Sub) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `~` is unsupported for type `Literal[Sub]`"
reveal_type(~Sub) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `+` is unsupported for type `Literal[No]`"
reveal_type(+No) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `-` is unsupported for type `Literal[No]`"
reveal_type(-No) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `~` is unsupported for type `Literal[No]`"
reveal_type(~No) # revealed: Unknown
```
## Function literals
```py
def f():
pass
# error: [unsupported-operator] "Unary operator `+` is unsupported for type `Literal[f]`"
reveal_type(+f) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `-` is unsupported for type `Literal[f]`"
reveal_type(-f) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `~` is unsupported for type `Literal[f]`"
reveal_type(~f) # revealed: Unknown
```
## Subclass
```py
class Yes:
def __pos__(self) -> bool:
return False
def __neg__(self) -> str:
return "negative"
def __invert__(self) -> int:
return 17
class Sub(Yes): ...
class No: ...
def yes() -> type[Yes]:
return Yes
def sub() -> type[Sub]:
return Sub
def no() -> type[No]:
return No
# error: [unsupported-operator] "Unary operator `+` is unsupported for type `type[Yes]`"
reveal_type(+yes()) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `-` is unsupported for type `type[Yes]`"
reveal_type(-yes()) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `~` is unsupported for type `type[Yes]`"
reveal_type(~yes()) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `+` is unsupported for type `type[Sub]`"
reveal_type(+sub()) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `-` is unsupported for type `type[Sub]`"
reveal_type(-sub()) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `~` is unsupported for type `type[Sub]`"
reveal_type(~sub()) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `+` is unsupported for type `type[No]`"
reveal_type(+no()) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `-` is unsupported for type `type[No]`"
reveal_type(-no()) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `~` is unsupported for type `type[No]`"
reveal_type(~no()) # revealed: Unknown
```
## Metaclass
```py
class Meta(type):
def __pos__(self) -> bool:
return False
def __neg__(self) -> str:
return "negative"
def __invert__(self) -> int:
return 17
class Yes(metaclass=Meta): ...
class Sub(Yes): ...
class No: ...
reveal_type(+Yes) # revealed: bool
reveal_type(-Yes) # revealed: str
reveal_type(~Yes) # revealed: int
reveal_type(+Sub) # revealed: bool
reveal_type(-Sub) # revealed: str
reveal_type(~Sub) # revealed: int
# error: [unsupported-operator] "Unary operator `+` is unsupported for type `Literal[No]`"
reveal_type(+No) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `-` is unsupported for type `Literal[No]`"
reveal_type(-No) # revealed: Unknown
# error: [unsupported-operator] "Unary operator `~` is unsupported for type `Literal[No]`"
reveal_type(~No) # revealed: Unknown
```
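This mirrors how CPython actually evaluates unary operators on class objects: the lookup goes to the metaclass, not to methods defined in the class body. A runnable sketch (the metaclass name is illustrative):

```python
class UnaryMeta(type):
    # Unary operators on a class object dispatch to its metaclass.
    def __pos__(cls) -> bool:
        return False

    def __neg__(cls) -> str:
        return "negative"

    def __invert__(cls) -> int:
        return 17

class Yes(metaclass=UnaryMeta): ...
class Sub(Yes): ...  # subclasses inherit the metaclass

assert +Yes is False
assert -Sub == "negative"
assert ~Yes == 17
```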


@@ -17,5 +17,5 @@ class Manager:
async def test():
async with Manager() as f:
-reveal_type(f) # revealed: @Todo(async with statement)
+reveal_type(f) # revealed: @Todo(async `with` statement)
```


@@ -27,6 +27,7 @@ pub(crate) mod tests {
use ruff_db::{Db as SourceDb, Upcast};
#[salsa::db]
#[derive(Clone)]
pub(crate) struct TestDb {
storage: salsa::Storage<Self>,
files: Files,


@@ -186,6 +186,26 @@ impl ModuleName {
self.0.push('.');
self.0.push_str(other);
}
/// Returns an iterator of this module name and all of its parent modules.
///
/// # Examples
///
/// ```
/// use red_knot_python_semantic::ModuleName;
///
/// assert_eq!(
/// ModuleName::new_static("foo.bar.baz").unwrap().ancestors().collect::<Vec<_>>(),
/// vec![
/// ModuleName::new_static("foo.bar.baz").unwrap(),
/// ModuleName::new_static("foo.bar").unwrap(),
/// ModuleName::new_static("foo").unwrap(),
/// ],
/// );
/// ```
pub fn ancestors(&self) -> impl Iterator<Item = Self> {
std::iter::successors(Some(self.clone()), Self::parent)
}
}
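For intuition, the same iteration can be sketched in Python over dotted module names (this helper is illustrative, not part of the crate):

```python
def ancestors(name: str):
    """Yield a dotted module name followed by each parent, longest first."""
    parts = name.split(".")
    for end in range(len(parts), 0, -1):
        yield ".".join(parts[:end])

assert list(ancestors("foo.bar.baz")) == ["foo.bar.baz", "foo.bar", "foo"]
```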
impl Deref for ModuleName {


@@ -7,7 +7,7 @@ use super::path::SearchPath;
use crate::module_name::ModuleName;
/// Representation of a Python module.
-#[derive(Clone, PartialEq, Eq)]
+#[derive(Clone, PartialEq, Eq, Hash)]
pub struct Module {
inner: Arc<ModuleInner>,
}
@@ -61,7 +61,7 @@ impl std::fmt::Debug for Module {
}
}
-#[derive(PartialEq, Eq)]
+#[derive(PartialEq, Eq, Hash)]
struct ModuleInner {
name: ModuleName,
kind: ModuleKind,


@@ -73,6 +73,15 @@ enum SystemOrVendoredPathRef<'a> {
Vendored(&'a VendoredPath),
}
impl std::fmt::Display for SystemOrVendoredPathRef<'_> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
SystemOrVendoredPathRef::System(system) => system.fmt(f),
SystemOrVendoredPathRef::Vendored(vendored) => vendored.fmt(f),
}
}
}
/// Resolves the module for the file with the given id.
///
/// Returns `None` if the file is not a module locatable via any of the known search paths.


@@ -1,13 +1,14 @@
use std::iter::FusedIterator;
use std::sync::Arc;
-use rustc_hash::{FxBuildHasher, FxHashMap};
+use rustc_hash::{FxBuildHasher, FxHashMap, FxHashSet};
use salsa::plumbing::AsId;
use ruff_db::files::File;
use ruff_db::parsed::parsed_module;
use ruff_index::{IndexSlice, IndexVec};
use crate::module_name::ModuleName;
use crate::semantic_index::ast_ids::node_key::ExpressionNodeKey;
use crate::semantic_index::ast_ids::AstIds;
use crate::semantic_index::builder::SemanticIndexBuilder;
@@ -60,6 +61,22 @@ pub(crate) fn symbol_table<'db>(db: &'db dyn Db, scope: ScopeId<'db>) -> Arc<Sym
index.symbol_table(scope.file_scope_id(db))
}
/// Returns the set of modules that are imported anywhere in `file`.
///
/// This set only considers `import` statements, not `from...import` statements, because:
///
/// - In `from foo import bar`, we cannot determine whether `foo.bar` is a submodule (and is
/// therefore imported) without looking outside the content of this file. (We could turn this
/// into a _potentially_ imported modules set, but that would change how it's used in our type
/// inference logic.)
///
/// - We cannot resolve relative imports (which aren't allowed in `import` statements) without
/// knowing the name of the current module, and whether it's a package.
#[salsa::tracked]
pub(crate) fn imported_modules<'db>(db: &'db dyn Db, file: File) -> Arc<FxHashSet<ModuleName>> {
semantic_index(db, file).imported_modules.clone()
}
/// Returns the use-def map for a specific `scope`.
///
/// Using [`use_def_map`] over [`semantic_index`] has the advantage that
@@ -116,6 +133,9 @@ pub(crate) struct SemanticIndex<'db> {
/// changing a file invalidates all dependents.
ast_ids: IndexVec<FileScopeId, AstIds>,
/// The set of modules that are imported anywhere within this file.
imported_modules: Arc<FxHashSet<ModuleName>>,
/// Flags about the global scope (code usage impacting inference)
has_future_annotations: bool,
}


@@ -1,7 +1,7 @@
use std::sync::Arc;
use except_handlers::TryNodeContextStackManager;
-use rustc_hash::FxHashMap;
+use rustc_hash::{FxHashMap, FxHashSet};
use ruff_db::files::File;
use ruff_db::parsed::ParsedModule;
@@ -12,6 +12,7 @@ use ruff_python_ast::visitor::{walk_expr, walk_pattern, walk_stmt, Visitor};
use ruff_python_ast::{BoolOp, Expr};
use crate::ast_node_ref::AstNodeRef;
use crate::module_name::ModuleName;
use crate::semantic_index::ast_ids::node_key::ExpressionNodeKey;
use crate::semantic_index::ast_ids::AstIdsBuilder;
use crate::semantic_index::definition::{
@@ -79,6 +80,7 @@ pub(super) struct SemanticIndexBuilder<'db> {
scopes_by_expression: FxHashMap<ExpressionNodeKey, FileScopeId>,
definitions_by_node: FxHashMap<DefinitionNodeKey, Definition<'db>>,
expressions_by_node: FxHashMap<ExpressionNodeKey, Expression<'db>>,
imported_modules: FxHashSet<ModuleName>,
}
impl<'db> SemanticIndexBuilder<'db> {
@@ -105,6 +107,8 @@ impl<'db> SemanticIndexBuilder<'db> {
scopes_by_node: FxHashMap::default(),
definitions_by_node: FxHashMap::default(),
expressions_by_node: FxHashMap::default(),
imported_modules: FxHashSet::default(),
};
builder.push_scope_with_parent(NodeWithScopeRef::Module, None);
@@ -558,6 +562,7 @@ impl<'db> SemanticIndexBuilder<'db> {
scopes_by_expression: self.scopes_by_expression,
scopes_by_node: self.scopes_by_node,
use_def_maps,
imported_modules: Arc::new(self.imported_modules),
has_future_annotations: self.has_future_annotations,
}
}
@@ -661,6 +666,12 @@ where
}
ast::Stmt::Import(node) => {
for alias in &node.names {
// Mark the imported module, and all of its parents, as being imported in this
// file.
if let Some(module_name) = ModuleName::new(&alias.name) {
self.imported_modules.extend(module_name.ancestors());
}
let symbol_name = if let Some(asname) = &alias.asname {
asname.id.clone()
} else {


@@ -17,6 +17,7 @@ pub(crate) enum CoreStdlibModule {
Sys,
#[allow(dead_code)]
Abc, // currently only used in tests
Collections,
}
impl CoreStdlibModule {
@@ -29,6 +30,7 @@ impl CoreStdlibModule {
Self::TypingExtensions => "typing_extensions",
Self::Sys => "sys",
Self::Abc => "abc",
Self::Collections => "collections",
}
}


@@ -0,0 +1,50 @@
use salsa;
use ruff_db::{files::File, parsed::comment_ranges, source::source_text};
use ruff_index::{newtype_index, IndexVec};
use crate::{lint::LintId, Db};
#[salsa::tracked(return_ref)]
pub(crate) fn suppressions(db: &dyn Db, file: File) -> IndexVec<SuppressionIndex, Suppression> {
let comments = comment_ranges(db.upcast(), file);
let source = source_text(db.upcast(), file);
let mut suppressions = IndexVec::default();
for range in comments {
let text = &source[range];
if text.starts_with("# type: ignore") {
suppressions.push(Suppression {
target: None,
kind: SuppressionKind::TypeIgnore,
});
} else if text.starts_with("# knot: ignore") {
suppressions.push(Suppression {
target: None,
kind: SuppressionKind::KnotIgnore,
});
}
}
suppressions
}
#[newtype_index]
pub(crate) struct SuppressionIndex;
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
pub(crate) struct Suppression {
target: Option<LintId>,
kind: SuppressionKind,
}
#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
pub(crate) enum SuppressionKind {
/// A `type: ignore` comment
TypeIgnore,
/// A `knot: ignore` comment
KnotIgnore,
}
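The comment classification above can be sketched in Python; `classify_suppressions` is a hypothetical helper that mirrors the prefix checks in the tracked query:

```python
def classify_suppressions(comments):
    """Keep comments that start with a recognized suppression prefix,
    tagging each with its kind, in source order."""
    kinds = []
    for text in comments:
        if text.startswith("# type: ignore"):
            kinds.append("type-ignore")
        elif text.startswith("# knot: ignore"):
            kinds.append("knot-ignore")
    return kinds

assert classify_suppressions(
    ["# type: ignore", "# a normal comment", "# knot: ignore"]
) == ["type-ignore", "knot-ignore"]
```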


@@ -1,5 +1,7 @@
use std::hash::Hash;
use context::InferContext;
use diagnostic::{report_not_iterable, report_not_iterable_possibly_unbound};
use indexmap::IndexSet;
use itertools::Itertools;
use ruff_db::diagnostic::Severity;
@@ -14,13 +16,14 @@ pub(crate) use self::infer::{
infer_deferred_types, infer_definition_types, infer_expression_types, infer_scope_types,
};
pub(crate) use self::signatures::Signature;
-use crate::module_resolver::file_to_module;
+use crate::module_name::ModuleName;
+use crate::module_resolver::{file_to_module, resolve_module};
use crate::semantic_index::ast_ids::HasScopedExpressionId;
use crate::semantic_index::definition::Definition;
use crate::semantic_index::symbol::{self as symbol, ScopeId, ScopedSymbolId};
use crate::semantic_index::{
-    global_scope, semantic_index, symbol_table, use_def_map, BindingWithConstraints,
-    BindingWithConstraintsIterator, DeclarationsIterator,
+    global_scope, imported_modules, semantic_index, symbol_table, use_def_map,
+    BindingWithConstraints, BindingWithConstraintsIterator, DeclarationsIterator,
};
use crate::stdlib::{
builtins_symbol, core_module_symbol, typing_extensions_symbol, CoreStdlibModule,
@@ -28,7 +31,7 @@ use crate::stdlib::{
use crate::symbol::{Boundness, Symbol};
use crate::types::call::{CallDunderResult, CallOutcome};
use crate::types::class_base::ClassBase;
-use crate::types::diagnostic::{TypeCheckDiagnosticsBuilder, INVALID_TYPE_FORM};
+use crate::types::diagnostic::INVALID_TYPE_FORM;
use crate::types::mro::{Mro, MroError, MroIterator};
use crate::types::narrow::narrowing_constraint;
use crate::{Db, FxOrderSet, Module, Program, PythonVersion};
@@ -36,6 +39,7 @@ use crate::{Db, FxOrderSet, Module, Program, PythonVersion};
mod builder;
mod call;
mod class_base;
mod context;
mod diagnostic;
mod display;
mod infer;
@@ -291,8 +295,8 @@ type DeclaredTypeResult<'db> = Result<Type<'db>, (Type<'db>, Box<[Type<'db>]>)>;
/// `Ok(declared_type)`. If there are conflicting declarations, returns
/// `Err((union_of_declared_types, conflicting_declared_types))`.
///
-/// If undeclared is a possibility, `undeclared_ty` type will be part of the return type (and may
-/// conflict with other declarations.)
+/// If undeclared is a possibility, `undeclared_ty` type will be part of the return type but it
+/// will not be considered to be conflicting with any other types.
///
/// # Panics
/// Will panic if there are no declarations and no `undeclared_ty` is provided. This is a logic
@@ -303,27 +307,31 @@ fn declarations_ty<'db>(
declarations: DeclarationsIterator<'_, 'db>,
undeclared_ty: Option<Type<'db>>,
) -> DeclaredTypeResult<'db> {
let decl_types = declarations.map(|declaration| declaration_ty(db, declaration));
let mut declaration_types = declarations.map(|declaration| declaration_ty(db, declaration));
let mut all_types = undeclared_ty.into_iter().chain(decl_types);
let first = all_types.next().expect(
"declarations_ty must not be called with zero declarations and no may-be-undeclared",
);
let Some(first) = declaration_types.next() else {
if let Some(undeclared_ty) = undeclared_ty {
// Short-circuit to return the undeclared type if there are no declarations.
return Ok(undeclared_ty);
}
panic!("declarations_ty must not be called with zero declarations and no undeclared_ty");
};
let mut conflicting: Vec<Type<'db>> = vec![];
let declared_ty = if let Some(second) = all_types.next() {
let mut builder = UnionBuilder::new(db).add(first);
for other in [second].into_iter().chain(all_types) {
if !first.is_equivalent_to(db, other) {
conflicting.push(other);
}
builder = builder.add(other);
let mut builder = UnionBuilder::new(db).add(first);
for other in declaration_types {
if !first.is_equivalent_to(db, other) {
conflicting.push(other);
}
builder.build()
} else {
first
};
builder = builder.add(other);
}
// Avoid considering the undeclared type for the conflicting declaration diagnostics. It
// should still be part of the declared type.
if let Some(undeclared_ty) = undeclared_ty {
builder = builder.add(undeclared_ty);
}
let declared_ty = builder.build();
if conflicting.is_empty() {
Ok(declared_ty)
} else {
@@ -413,7 +421,7 @@ pub enum Type<'db> {
/// A specific function object
FunctionLiteral(FunctionType<'db>),
/// A specific module object
-    ModuleLiteral(File),
+    ModuleLiteral(ModuleLiteralType<'db>),
/// A specific class object
ClassLiteral(ClassLiteralType<'db>),
// The set of all class objects that are subclasses of the given class (C), spelled `type[C]`.
@@ -426,6 +434,11 @@ pub enum Type<'db> {
Union(UnionType<'db>),
/// The set of objects in all of the types in the intersection
Intersection(IntersectionType<'db>),
/// Represents objects whose `__bool__` method is deterministic:
/// - `AlwaysTruthy`: `__bool__` always returns `True`
/// - `AlwaysFalsy`: `__bool__` always returns `False`
AlwaysTruthy,
AlwaysFalsy,
/// An integer literal
IntLiteral(i64),
/// A boolean literal, either `True` or `False`.
@@ -446,6 +459,10 @@ pub enum Type<'db> {
}
impl<'db> Type<'db> {
pub const fn is_unknown(&self) -> bool {
matches!(self, Type::Unknown)
}
pub const fn is_never(&self) -> bool {
matches!(self, Type::Never)
}
@@ -475,15 +492,19 @@ impl<'db> Type<'db> {
matches!(self, Type::ClassLiteral(..))
}
pub const fn into_module_literal(self) -> Option<File> {
pub fn module_literal(db: &'db dyn Db, importing_file: File, submodule: Module) -> Self {
Self::ModuleLiteral(ModuleLiteralType::new(db, importing_file, submodule))
}
pub const fn into_module_literal(self) -> Option<ModuleLiteralType<'db>> {
match self {
Type::ModuleLiteral(file) => Some(file),
Type::ModuleLiteral(module) => Some(module),
_ => None,
}
}
#[track_caller]
-    pub fn expect_module_literal(self) -> File {
+    pub fn expect_module_literal(self) -> ModuleLiteralType<'db> {
self.into_module_literal()
.expect("Expected a Type::ModuleLiteral variant")
}
@@ -704,6 +725,15 @@ impl<'db> Type<'db> {
.all(|&neg_ty| self.is_disjoint_from(db, neg_ty))
}
// Note that the definition of `Type::AlwaysFalsy` depends on the return value of `__bool__`.
// If `__bool__` always returns True or False, it can be treated as a subtype of `AlwaysTruthy` or `AlwaysFalsy`, respectively.
(left, Type::AlwaysFalsy) => matches!(left.bool(db), Truthiness::AlwaysFalse),
(left, Type::AlwaysTruthy) => matches!(left.bool(db), Truthiness::AlwaysTrue),
// Currently, the only supertype of `AlwaysFalsy` and `AlwaysTruthy` is the universal set (object instance).
(Type::AlwaysFalsy | Type::AlwaysTruthy, _) => {
target.is_equivalent_to(db, KnownClass::Object.to_instance(db))
}
// All `StringLiteral` types are a subtype of `LiteralString`.
(Type::StringLiteral(_), Type::LiteralString) => true,
@@ -1092,6 +1122,16 @@ impl<'db> Type<'db> {
false
}
(Type::AlwaysTruthy, ty) | (ty, Type::AlwaysTruthy) => {
// `Truthiness::Ambiguous` may include `AlwaysTrue` as a subset, so it's not guaranteed to be disjoint.
// Thus, they are only disjoint if `ty.bool() == AlwaysFalse`.
matches!(ty.bool(db), Truthiness::AlwaysFalse)
}
(Type::AlwaysFalsy, ty) | (ty, Type::AlwaysFalsy) => {
// Similarly, they are only disjoint if `ty.bool() == AlwaysTrue`.
matches!(ty.bool(db), Truthiness::AlwaysTrue)
}
(Type::KnownInstance(left), right) => {
left.instance_fallback(db).is_disjoint_from(db, right)
}
@@ -1225,7 +1265,9 @@ impl<'db> Type<'db> {
| Type::LiteralString
| Type::BytesLiteral(_)
| Type::SliceLiteral(_)
-            | Type::KnownInstance(_) => true,
+            | Type::KnownInstance(_)
+            | Type::AlwaysFalsy
+            | Type::AlwaysTruthy => true,
Type::SubclassOf(SubclassOfType { base }) => matches!(base, ClassBase::Class(_)),
Type::ClassLiteral(_) | Type::Instance(_) => {
// TODO: Ideally, we would iterate over the MRO of the class, check if all
@@ -1327,6 +1369,7 @@ impl<'db> Type<'db> {
//
false
}
Type::AlwaysTruthy | Type::AlwaysFalsy => false,
}
}
@@ -1380,6 +1423,11 @@ impl<'db> Type<'db> {
| KnownClass::ModuleType
| KnownClass::FunctionType
| KnownClass::SpecialForm
| KnownClass::ChainMap
| KnownClass::Counter
| KnownClass::DefaultDict
| KnownClass::Deque
| KnownClass::OrderedDict
| KnownClass::StdlibAlias
| KnownClass::TypeVar,
) => false,
@@ -1392,7 +1440,9 @@ impl<'db> Type<'db> {
| Type::Todo(_)
| Type::Union(..)
| Type::Intersection(..)
-            | Type::LiteralString => false,
+            | Type::LiteralString
+            | Type::AlwaysTruthy
+            | Type::AlwaysFalsy => false,
}
}
@@ -1408,17 +1458,12 @@ impl<'db> Type<'db> {
}
match self {
-            Type::Any => Type::Any.into(),
-            Type::Never => {
-                // TODO: attribute lookup on Never type
-                todo_type!().into()
-            }
-            Type::Unknown => Type::Unknown.into(),
+            Type::Any | Type::Unknown | Type::Todo(_) => self.into(),
+            Type::Never => todo_type!("attribute lookup on Never").into(),
Type::FunctionLiteral(_) => {
// TODO: attribute lookup on function type
-                todo_type!().into()
+                todo_type!("Attribute access on `FunctionLiteral` types").into()
}
Type::ModuleLiteral(file) => {
Type::ModuleLiteral(module_ref) => {
// `__dict__` is a very special member that is never overridden by module globals;
// we should always look it up directly as an attribute on `types.ModuleType`,
// never in the global scope of the module.
@@ -1428,7 +1473,30 @@ impl<'db> Type<'db> {
.member(db, "__dict__");
}
let global_lookup = symbol(db, global_scope(db, *file), name);
// If the file that originally imported the module has also imported a submodule
// named [name], then the result is (usually) that submodule, even if the module
// also defines a (non-module) symbol with that name.
//
// Note that technically, either the submodule or the non-module symbol could take
// priority, depending on the ordering of when the submodule is loaded relative to
// the parent module's `__init__.py` file being evaluated. That said, we have
// chosen to always have the submodule take priority. (This matches pyright's
                // current behavior, and is the opposite of mypy's current behavior.)
if let Some(submodule_name) = ModuleName::new(name) {
let importing_file = module_ref.importing_file(db);
let imported_submodules = imported_modules(db, importing_file);
let mut full_submodule_name = module_ref.module(db).name().clone();
full_submodule_name.extend(&submodule_name);
if imported_submodules.contains(&full_submodule_name) {
if let Some(submodule) = resolve_module(db, &full_submodule_name) {
let submodule_ty = Type::module_literal(db, importing_file, submodule);
return Symbol::Type(submodule_ty, Boundness::Bound);
}
}
}
let global_lookup =
symbol(db, global_scope(db, module_ref.module(db).file()), name);
// If it's unbound, check if it's present as an instance on `types.ModuleType`
// or `builtins.object`.
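The submodule-takes-priority behavior described in the comment above is observable in CPython: importing a submodule rebinds the attribute of the same name on the parent package. A runnable sketch with a hypothetical `pkg` layout built in a temporary directory:

```python
import importlib
import sys
import tempfile
import types
from pathlib import Path

# Hypothetical layout: `pkg/__init__.py` binds a plain value named `sub`,
# and `pkg/sub.py` is a real submodule with the same name.
root = Path(tempfile.mkdtemp())
(root / "pkg").mkdir()
(root / "pkg" / "__init__.py").write_text("sub = 1\n")
(root / "pkg" / "sub.py").write_text("value = 42\n")

sys.path.insert(0, str(root))
importlib.invalidate_caches()

import pkg      # `__init__.py` runs: pkg.sub == 1
import pkg.sub  # importing the submodule rebinds pkg.sub to the module

# The submodule won the attribute lookup, as in the rule chosen above.
assert isinstance(pkg.sub, types.ModuleType)
assert pkg.sub.value == 42
```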
@@ -1508,36 +1576,38 @@ impl<'db> Type<'db> {
Type::Intersection(_) => {
// TODO perform the get_member on each type in the intersection
// TODO return the intersection of those results
todo_type!().into()
todo_type!("Attribute access on `Intersection` types").into()
}
Type::IntLiteral(_) => {
// TODO raise error
todo_type!().into()
Type::IntLiteral(_) => todo_type!("Attribute access on `IntLiteral` types").into(),
Type::BooleanLiteral(_) => {
todo_type!("Attribute access on `BooleanLiteral` types").into()
}
Type::BooleanLiteral(_) => todo_type!().into(),
Type::StringLiteral(_) => {
// TODO defer to `typing.LiteralString`/`builtins.str` methods
// from typeshed's stubs
todo_type!().into()
todo_type!("Attribute access on `StringLiteral` types").into()
}
Type::LiteralString => {
// TODO defer to `typing.LiteralString`/`builtins.str` methods
// from typeshed's stubs
todo_type!().into()
todo_type!("Attribute access on `LiteralString` types").into()
}
Type::BytesLiteral(_) => {
// TODO defer to Type::Instance(<bytes from typeshed>).member
todo_type!().into()
todo_type!("Attribute access on `BytesLiteral` types").into()
}
Type::SliceLiteral(_) => {
// TODO defer to `builtins.slice` methods
todo_type!().into()
todo_type!("Attribute access on `SliceLiteral` types").into()
}
Type::Tuple(_) => {
// TODO: implement tuple methods
todo_type!().into()
todo_type!("Attribute access on heterogeneous tuple types").into()
}
Type::AlwaysTruthy | Type::AlwaysFalsy => {
// TODO return `Callable[[], Literal[True/False]]` for `__bool__` access
KnownClass::Object.to_instance(db).member(db, name)
}
&todo @ Type::Todo(_) => todo.into(),
}
}
@@ -1559,6 +1629,8 @@ impl<'db> Type<'db> {
// TODO: see above
Truthiness::Ambiguous
}
Type::AlwaysTruthy => Truthiness::AlwaysTrue,
Type::AlwaysFalsy => Truthiness::AlwaysFalse,
instance_ty @ Type::Instance(InstanceType { class }) => {
if class.is_known(db, KnownClass::NoneType) {
Truthiness::AlwaysFalse
@@ -1740,12 +1812,8 @@ impl<'db> Type<'db> {
}
}
// `Any` is callable, and its return type is also `Any`.
Type::Any => CallOutcome::callable(Type::Any),
Type::Todo(_) => CallOutcome::callable(todo_type!("call todo")),
Type::Unknown => CallOutcome::callable(Type::Unknown),
// Dynamic types are callable, and the return type is the same dynamic type
Type::Any | Type::Todo(_) | Type::Unknown => CallOutcome::callable(self),
Type::Union(union) => CallOutcome::union(
self,
@@ -1755,8 +1823,7 @@ impl<'db> Type<'db> {
.map(|elem| elem.call(db, arg_types)),
),
// TODO: intersection types
Type::Intersection(_) => CallOutcome::callable(todo_type!()),
Type::Intersection(_) => CallOutcome::callable(todo_type!("Type::Intersection.call()")),
_ => CallOutcome::not_callable(self),
}
@@ -1857,8 +1924,7 @@ impl<'db> Type<'db> {
}) => Type::instance(*class),
Type::SubclassOf(_) => Type::Any,
Type::Union(union) => union.map(db, |element| element.to_instance(db)),
// TODO: we can probably do better here: --Alex
Type::Intersection(_) => todo_type!(),
Type::Intersection(_) => todo_type!("Type::Intersection.to_instance()"),
// TODO: calling `.to_instance()` on any of these should result in a diagnostic,
// since they already indicate that the object is an instance of some kind:
Type::BooleanLiteral(_)
@@ -1871,7 +1937,9 @@ impl<'db> Type<'db> {
| Type::StringLiteral(_)
| Type::SliceLiteral(_)
| Type::Tuple(_)
| Type::LiteralString => Type::Unknown,
| Type::LiteralString
| Type::AlwaysTruthy
| Type::AlwaysFalsy => Type::Unknown,
}
}
@@ -1892,6 +1960,28 @@ impl<'db> Type<'db> {
// We treat `typing.Type` exactly the same as `builtins.type`:
Type::KnownInstance(KnownInstanceType::Type) => Ok(KnownClass::Type.to_instance(db)),
Type::KnownInstance(KnownInstanceType::Tuple) => Ok(KnownClass::Tuple.to_instance(db)),
// Legacy `typing` aliases
Type::KnownInstance(KnownInstanceType::List) => Ok(KnownClass::List.to_instance(db)),
Type::KnownInstance(KnownInstanceType::Dict) => Ok(KnownClass::Dict.to_instance(db)),
Type::KnownInstance(KnownInstanceType::Set) => Ok(KnownClass::Set.to_instance(db)),
Type::KnownInstance(KnownInstanceType::FrozenSet) => {
Ok(KnownClass::FrozenSet.to_instance(db))
}
Type::KnownInstance(KnownInstanceType::ChainMap) => {
Ok(KnownClass::ChainMap.to_instance(db))
}
Type::KnownInstance(KnownInstanceType::Counter) => {
Ok(KnownClass::Counter.to_instance(db))
}
Type::KnownInstance(KnownInstanceType::DefaultDict) => {
Ok(KnownClass::DefaultDict.to_instance(db))
}
Type::KnownInstance(KnownInstanceType::Deque) => Ok(KnownClass::Deque.to_instance(db)),
Type::KnownInstance(KnownInstanceType::OrderedDict) => {
Ok(KnownClass::OrderedDict.to_instance(db))
}
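The alias-to-class mapping in the arms above matches how these legacy `typing` aliases resolve at runtime, where each alias's `__origin__` is the concrete builtin or `collections` class. A quick check:

```python
import collections
import typing

# Each legacy alias deannotates to the concrete class it aliases.
assert typing.List[int].__origin__ is list
assert typing.FrozenSet[int].__origin__ is frozenset
assert typing.DefaultDict[str, int].__origin__ is collections.defaultdict
assert typing.Deque[int].__origin__ is collections.deque
assert typing.OrderedDict[str, int].__origin__ is collections.OrderedDict
assert typing.ChainMap[str, int].__origin__ is collections.ChainMap
assert typing.Counter[str].__origin__ is collections.Counter
```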
Type::Union(union) => {
let mut builder = UnionBuilder::new(db);
let mut invalid_expressions = smallvec::SmallVec::default();
@@ -2011,6 +2101,7 @@ impl<'db> Type<'db> {
ClassBase::try_from_ty(db, todo_type!("Intersection meta-type"))
.expect("Type::Todo should be a valid ClassBase"),
),
Type::AlwaysTruthy | Type::AlwaysFalsy => KnownClass::Type.to_instance(db),
Type::Todo(todo) => Type::subclass_of_base(ClassBase::Todo(*todo)),
}
}
@@ -2066,6 +2157,12 @@ impl<'db> From<Type<'db>> for Symbol<'db> {
}
}
impl<'db> From<&Type<'db>> for Symbol<'db> {
fn from(value: &Type<'db>) -> Self {
Self::from(*value)
}
}
/// Error struct providing information on type(s) that were deemed to be invalid
/// in a type expression context, and the type we should therefore fallback to
/// for the problematic type expression.
@@ -2076,17 +2173,13 @@ pub struct InvalidTypeExpressionError<'db> {
}
impl<'db> InvalidTypeExpressionError<'db> {
fn into_fallback_type(
self,
diagnostics: &mut TypeCheckDiagnosticsBuilder,
node: &ast::Expr,
) -> Type<'db> {
fn into_fallback_type(self, context: &InferContext, node: &ast::Expr) -> Type<'db> {
let InvalidTypeExpressionError {
fallback_type,
invalid_expressions,
} = self;
for error in invalid_expressions {
diagnostics.add_lint(
context.report_lint(
&INVALID_TYPE_FORM,
node.into(),
format_args!("{}", error.reason()),
@@ -2153,6 +2246,12 @@ pub enum KnownClass {
TypeVar,
TypeAliasType,
NoDefaultType,
// Collections
ChainMap,
Counter,
DefaultDict,
Deque,
OrderedDict,
// sys
VersionInfo,
}
@@ -2183,6 +2282,11 @@ impl<'db> KnownClass {
Self::TypeVar => "TypeVar",
Self::TypeAliasType => "TypeAliasType",
Self::NoDefaultType => "_NoDefaultType",
Self::ChainMap => "ChainMap",
Self::Counter => "Counter",
Self::DefaultDict => "defaultdict",
Self::Deque => "deque",
Self::OrderedDict => "OrderedDict",
// For example, `typing.List` is defined as `List = _Alias()` in typeshed
Self::StdlibAlias => "_Alias",
// This is the name the type of `sys.version_info` has in typeshed,
@@ -2204,10 +2308,11 @@ impl<'db> KnownClass {
.unwrap_or(Type::Unknown)
}
pub fn to_subclass_of(self, db: &'db dyn Db) -> Option<Type<'db>> {
pub fn to_subclass_of(self, db: &'db dyn Db) -> Type<'db> {
self.to_class_literal(db)
.into_class_literal()
.map(|ClassLiteralType { class }| Type::subclass_of(class))
.unwrap_or(Type::subclass_of_base(ClassBase::Unknown))
}
/// Return the module in which we should look up the definition for this class
@@ -2246,6 +2351,11 @@ impl<'db> KnownClass {
CoreStdlibModule::TypingExtensions
}
}
Self::ChainMap
| Self::Counter
| Self::DefaultDict
| Self::Deque
| Self::OrderedDict => CoreStdlibModule::Collections,
}
}
@@ -2273,6 +2383,11 @@ impl<'db> KnownClass {
| Self::ModuleType
| Self::FunctionType
| Self::SpecialForm
| Self::ChainMap
| Self::Counter
| Self::DefaultDict
| Self::Deque
| Self::OrderedDict
| Self::StdlibAlias
| Self::BaseException
| Self::BaseExceptionGroup
@@ -2305,6 +2420,11 @@ impl<'db> KnownClass {
"ModuleType" => Self::ModuleType,
"FunctionType" => Self::FunctionType,
"TypeAliasType" => Self::TypeAliasType,
"ChainMap" => Self::ChainMap,
"Counter" => Self::Counter,
"defaultdict" => Self::DefaultDict,
"deque" => Self::Deque,
"OrderedDict" => Self::OrderedDict,
"_Alias" => Self::StdlibAlias,
"_SpecialForm" => Self::SpecialForm,
"_NoDefaultType" => Self::NoDefaultType,
@@ -2336,6 +2456,11 @@ impl<'db> KnownClass {
| Self::Dict
| Self::Slice
| Self::GenericAlias
| Self::ChainMap
| Self::Counter
| Self::DefaultDict
| Self::Deque
| Self::OrderedDict
| Self::StdlibAlias // no equivalent class exists in typing_extensions, nor ever will
| Self::ModuleType
| Self::VersionInfo
@@ -2371,6 +2496,24 @@ pub enum KnownInstanceType<'db> {
Any,
/// The symbol `typing.Tuple` (which can also be found as `typing_extensions.Tuple`)
Tuple,
/// The symbol `typing.List` (which can also be found as `typing_extensions.List`)
List,
/// The symbol `typing.Dict` (which can also be found as `typing_extensions.Dict`)
Dict,
/// The symbol `typing.Set` (which can also be found as `typing_extensions.Set`)
Set,
/// The symbol `typing.FrozenSet` (which can also be found as `typing_extensions.FrozenSet`)
FrozenSet,
/// The symbol `typing.ChainMap` (which can also be found as `typing_extensions.ChainMap`)
ChainMap,
/// The symbol `typing.Counter` (which can also be found as `typing_extensions.Counter`)
Counter,
/// The symbol `typing.DefaultDict` (which can also be found as `typing_extensions.DefaultDict`)
DefaultDict,
/// The symbol `typing.Deque` (which can also be found as `typing_extensions.Deque`)
Deque,
/// The symbol `typing.OrderedDict` (which can also be found as `typing_extensions.OrderedDict`)
OrderedDict,
/// The symbol `typing.Type` (which can also be found as `typing_extensions.Type`)
Type,
/// A single instance of `typing.TypeVar`
@@ -2391,15 +2534,6 @@ pub enum KnownInstanceType<'db> {
TypeAlias,
TypeGuard,
TypeIs,
List,
Dict,
DefaultDict,
Set,
FrozenSet,
Counter,
Deque,
ChainMap,
OrderedDict,
ReadOnly,
// TODO: fill this enum out with more special forms, etc.
}
@@ -2686,20 +2820,20 @@ enum IterationOutcome<'db> {
impl<'db> IterationOutcome<'db> {
fn unwrap_with_diagnostic(
self,
context: &InferContext<'db>,
iterable_node: ast::AnyNodeRef,
diagnostics: &mut TypeCheckDiagnosticsBuilder<'db>,
) -> Type<'db> {
match self {
Self::Iterable { element_ty } => element_ty,
Self::NotIterable { not_iterable_ty } => {
diagnostics.add_not_iterable(iterable_node, not_iterable_ty);
report_not_iterable(context, iterable_node, not_iterable_ty);
Type::Unknown
}
Self::PossiblyUnboundDunderIter {
iterable_ty,
element_ty,
} => {
diagnostics.add_not_iterable_possibly_unbound(iterable_node, iterable_ty);
report_not_iterable_possibly_unbound(context, iterable_node, iterable_ty);
element_ty
}
}
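The `NotIterable` outcome handled above corresponds to a runtime `TypeError`: iterating an object that defines neither `__iter__` nor `__getitem__` fails, which is what the diagnostic reports statically. A small sketch:

```python
class NotIterable:
    # No __iter__ and no __getitem__, so `for` cannot iterate this.
    __slots__ = ()


failed = False
try:
    for _ in NotIterable():
        pass
except TypeError:
    failed = True

assert failed
```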
@@ -2860,6 +2994,18 @@ impl KnownFunction {
}
}
#[salsa::interned]
pub struct ModuleLiteralType<'db> {
/// The file in which this module was imported.
///
/// We need this in order to know which submodules should be attached to it as attributes
/// (because the submodules were also imported in this file).
pub importing_file: File,
/// The imported module.
pub module: Module,
}
/// Representation of a runtime class object.
///
/// Does not in itself represent a type,
@@ -3443,6 +3589,8 @@ pub(crate) mod tests {
SubclassOfAbcClass(&'static str),
StdlibModule(CoreStdlibModule),
SliceLiteral(i32, i32, i32),
AlwaysTruthy,
AlwaysFalsy,
}
impl Ty {
@@ -3501,7 +3649,8 @@ pub(crate) mod tests {
.class,
),
Ty::StdlibModule(module) => {
Type::ModuleLiteral(resolve_module(db, &module.name()).unwrap().file())
let module = resolve_module(db, &module.name()).unwrap();
Type::module_literal(db, module.file(), module)
}
Ty::SliceLiteral(start, stop, step) => Type::SliceLiteral(SliceLiteralType::new(
db,
@@ -3509,6 +3658,8 @@ pub(crate) mod tests {
Some(stop),
Some(step),
)),
Ty::AlwaysTruthy => Type::AlwaysTruthy,
Ty::AlwaysFalsy => Type::AlwaysFalsy,
}
}
}
@@ -3647,6 +3798,12 @@ pub(crate) mod tests {
)]
#[test_case(Ty::SliceLiteral(1, 2, 3), Ty::BuiltinInstance("slice"))]
#[test_case(Ty::SubclassOfBuiltinClass("str"), Ty::Intersection{pos: vec![], neg: vec![Ty::None]})]
#[test_case(Ty::IntLiteral(1), Ty::AlwaysTruthy)]
#[test_case(Ty::IntLiteral(0), Ty::AlwaysFalsy)]
#[test_case(Ty::AlwaysTruthy, Ty::BuiltinInstance("object"))]
#[test_case(Ty::AlwaysFalsy, Ty::BuiltinInstance("object"))]
#[test_case(Ty::Never, Ty::AlwaysTruthy)]
#[test_case(Ty::Never, Ty::AlwaysFalsy)]
fn is_subtype_of(from: Ty, to: Ty) {
let db = setup_db();
assert!(from.into_type(&db).is_subtype_of(&db, to.into_type(&db)));
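The literal test cases above follow from Python's truth-value testing: `Literal[1]` is always truthy and `Literal[0]` always falsy, while an arbitrary `str` can be either, so `str` is a subtype of neither `AlwaysTruthy` nor `AlwaysFalsy`:

```python
# Int literals have fixed truthiness...
assert bool(1)
assert not bool(0)

# ...but `str` instances do not: truthiness depends on the value.
assert bool("nonempty")
assert not bool("")
```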
@@ -3681,6 +3838,10 @@ pub(crate) mod tests {
#[test_case(Ty::BuiltinClassLiteral("str"), Ty::SubclassOfAny)]
#[test_case(Ty::AbcInstance("ABCMeta"), Ty::SubclassOfBuiltinClass("type"))]
#[test_case(Ty::SubclassOfBuiltinClass("str"), Ty::BuiltinClassLiteral("str"))]
#[test_case(Ty::IntLiteral(1), Ty::AlwaysFalsy)]
#[test_case(Ty::IntLiteral(0), Ty::AlwaysTruthy)]
#[test_case(Ty::BuiltinInstance("str"), Ty::AlwaysTruthy)]
#[test_case(Ty::BuiltinInstance("str"), Ty::AlwaysFalsy)]
fn is_not_subtype_of(from: Ty, to: Ty) {
let db = setup_db();
assert!(!from.into_type(&db).is_subtype_of(&db, to.into_type(&db)));
@@ -3815,6 +3976,7 @@ pub(crate) mod tests {
#[test_case(Ty::Tuple(vec![]), Ty::BuiltinClassLiteral("object"))]
#[test_case(Ty::SubclassOfBuiltinClass("object"), Ty::None)]
#[test_case(Ty::SubclassOfBuiltinClass("str"), Ty::LiteralString)]
#[test_case(Ty::AlwaysFalsy, Ty::AlwaysTruthy)]
fn is_disjoint_from(a: Ty, b: Ty) {
let db = setup_db();
let a = a.into_type(&db);
@@ -3845,6 +4007,8 @@ pub(crate) mod tests {
#[test_case(Ty::BuiltinClassLiteral("str"), Ty::BuiltinInstance("type"))]
#[test_case(Ty::BuiltinClassLiteral("str"), Ty::SubclassOfAny)]
#[test_case(Ty::AbcClassLiteral("ABC"), Ty::AbcInstance("ABCMeta"))]
#[test_case(Ty::BuiltinInstance("str"), Ty::AlwaysTruthy)]
#[test_case(Ty::BuiltinInstance("str"), Ty::AlwaysFalsy)]
fn is_not_disjoint_from(a: Ty, b: Ty) {
let db = setup_db();
let a = a.into_type(&db);

View File

@@ -30,6 +30,8 @@ use crate::types::{InstanceType, IntersectionType, KnownClass, Type, UnionType};
use crate::{Db, FxOrderSet};
use smallvec::SmallVec;
use super::Truthiness;
pub(crate) struct UnionBuilder<'db> {
elements: Vec<Type<'db>>,
db: &'db dyn Db,
@@ -243,15 +245,22 @@ impl<'db> InnerIntersectionBuilder<'db> {
}
} else {
// ~Literal[True] & bool = Literal[False]
// ~AlwaysTruthy & bool = Literal[False]
if let Type::Instance(InstanceType { class }) = new_positive {
if class.is_known(db, KnownClass::Bool) {
if let Some(&Type::BooleanLiteral(value)) = self
if let Some(new_type) = self
.negative
.iter()
.find(|element| element.is_boolean_literal())
.find(|element| {
element.is_boolean_literal()
| matches!(element, Type::AlwaysFalsy | Type::AlwaysTruthy)
})
.map(|element| {
Type::BooleanLiteral(element.bool(db) != Truthiness::AlwaysTrue)
})
{
*self = Self::default();
self.positive.insert(Type::BooleanLiteral(!value));
self.positive.insert(new_type);
return;
}
}
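The simplification applied above rests on `bool` having exactly two inhabitants, so subtracting the truthy one (whether spelled `Literal[True]` or `AlwaysTruthy`) leaves `Literal[False]`. A set-theoretic analogue:

```python
# `bool` has exactly two inhabitants.
bool_values = {True, False}

# bool & ~Literal[True] = Literal[False]
assert bool_values - {True} == {False}

# bool & ~AlwaysTruthy = Literal[False] (only the falsy inhabitant remains)
assert {v for v in bool_values if not v} == {False}

# bool & ~AlwaysFalsy = Literal[True]
assert {v for v in bool_values if v} == {True}
```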
@@ -318,15 +327,15 @@ impl<'db> InnerIntersectionBuilder<'db> {
// simplify the representation.
self.add_positive(db, ty);
}
// ~Literal[True] & bool = Literal[False]
Type::BooleanLiteral(bool)
if self
.positive
.iter()
.any(|pos| *pos == KnownClass::Bool.to_instance(db)) =>
// bool & ~Literal[True] = Literal[False]
// bool & ~AlwaysTruthy = Literal[False]
Type::BooleanLiteral(_) | Type::AlwaysFalsy | Type::AlwaysTruthy
if self.positive.contains(&KnownClass::Bool.to_instance(db)) =>
{
*self = Self::default();
self.positive.insert(Type::BooleanLiteral(!bool));
self.positive.insert(Type::BooleanLiteral(
new_negative.bool(db) != Truthiness::AlwaysTrue,
));
}
_ => {
let mut to_remove = SmallVec::<[usize; 1]>::new();
@@ -380,7 +389,7 @@ mod tests {
use super::{IntersectionBuilder, IntersectionType, Type, UnionType};
use crate::db::tests::{setup_db, TestDb};
use crate::types::{global_symbol, todo_type, KnownClass, UnionBuilder};
use crate::types::{global_symbol, todo_type, KnownClass, Truthiness, UnionBuilder};
use ruff_db::files::system_path_to_file;
use ruff_db::system::DbWithTestSystem;
@@ -997,42 +1006,43 @@ mod tests {
assert_eq!(ty, expected);
}
#[test_case(true)]
#[test_case(false)]
fn build_intersection_simplify_split_bool(bool_value: bool) {
#[test_case(Type::BooleanLiteral(true))]
#[test_case(Type::BooleanLiteral(false))]
#[test_case(Type::AlwaysTruthy)]
#[test_case(Type::AlwaysFalsy)]
fn build_intersection_simplify_split_bool(t_splitter: Type) {
let db = setup_db();
let t_bool = KnownClass::Bool.to_instance(&db);
let t_boolean_literal = Type::BooleanLiteral(bool_value);
let bool_value = t_splitter.bool(&db) == Truthiness::AlwaysTrue;
// We add t_object in various orders (in first or second position) in
// the tests below to ensure that the boolean simplification eliminates
// everything from the intersection, not just `bool`.
let t_object = KnownClass::Object.to_instance(&db);
let t_bool = KnownClass::Bool.to_instance(&db);
let ty = IntersectionBuilder::new(&db)
.add_positive(t_object)
.add_positive(t_bool)
.add_negative(t_boolean_literal)
.add_negative(t_splitter)
.build();
assert_eq!(ty, Type::BooleanLiteral(!bool_value));
let ty = IntersectionBuilder::new(&db)
.add_positive(t_bool)
.add_positive(t_object)
.add_negative(t_boolean_literal)
.add_negative(t_splitter)
.build();
assert_eq!(ty, Type::BooleanLiteral(!bool_value));
let ty = IntersectionBuilder::new(&db)
.add_positive(t_object)
.add_negative(t_boolean_literal)
.add_negative(t_splitter)
.add_positive(t_bool)
.build();
assert_eq!(ty, Type::BooleanLiteral(!bool_value));
let ty = IntersectionBuilder::new(&db)
.add_negative(t_boolean_literal)
.add_negative(t_splitter)
.add_positive(t_object)
.add_positive(t_bool)
.build();

View File

@@ -1,4 +1,5 @@
use super::diagnostic::{TypeCheckDiagnosticsBuilder, CALL_NON_CALLABLE};
use super::context::InferContext;
use super::diagnostic::CALL_NON_CALLABLE;
use super::{Severity, Type, TypeArrayDisplay, UnionBuilder};
use crate::Db;
use ruff_db::diagnostic::DiagnosticId;
@@ -86,24 +87,23 @@ impl<'db> CallOutcome<'db> {
}
/// Get the return type of the call, emitting default diagnostics if needed.
pub(super) fn unwrap_with_diagnostic<'a>(
pub(super) fn unwrap_with_diagnostic(
&self,
db: &'db dyn Db,
context: &InferContext<'db>,
node: ast::AnyNodeRef,
diagnostics: &'a mut TypeCheckDiagnosticsBuilder<'db>,
) -> Type<'db> {
match self.return_ty_result(db, node, diagnostics) {
match self.return_ty_result(context, node) {
Ok(return_ty) => return_ty,
Err(NotCallableError::Type {
not_callable_ty,
return_ty,
}) => {
diagnostics.add_lint(
context.report_lint(
&CALL_NON_CALLABLE,
node,
format_args!(
"Object of type `{}` is not callable",
not_callable_ty.display(db)
not_callable_ty.display(context.db())
),
);
return_ty
@@ -113,13 +113,13 @@ impl<'db> CallOutcome<'db> {
called_ty,
return_ty,
}) => {
diagnostics.add_lint(
context.report_lint(
&CALL_NON_CALLABLE,
node,
format_args!(
"Object of type `{}` is not callable (due to union element `{}`)",
called_ty.display(db),
not_callable_ty.display(db),
called_ty.display(context.db()),
not_callable_ty.display(context.db()),
),
);
return_ty
@@ -129,13 +129,13 @@ impl<'db> CallOutcome<'db> {
called_ty,
return_ty,
}) => {
diagnostics.add_lint(
context.report_lint(
&CALL_NON_CALLABLE,
node,
format_args!(
"Object of type `{}` is not callable (due to union elements {})",
called_ty.display(db),
not_callable_tys.display(db),
called_ty.display(context.db()),
not_callable_tys.display(context.db()),
),
);
return_ty
@@ -144,12 +144,12 @@ impl<'db> CallOutcome<'db> {
callable_ty: called_ty,
return_ty,
}) => {
diagnostics.add_lint(
context.report_lint(
&CALL_NON_CALLABLE,
node,
format_args!(
"Object of type `{}` is not callable (possibly unbound `__call__` method)",
called_ty.display(db)
called_ty.display(context.db())
),
);
return_ty
@@ -158,11 +158,10 @@ impl<'db> CallOutcome<'db> {
}
/// Get the return type of the call as a result.
pub(super) fn return_ty_result<'a>(
pub(super) fn return_ty_result(
&self,
db: &'db dyn Db,
context: &InferContext<'db>,
node: ast::AnyNodeRef,
diagnostics: &'a mut TypeCheckDiagnosticsBuilder<'db>,
) -> Result<Type<'db>, NotCallableError<'db>> {
match self {
Self::Callable { return_ty } => Ok(*return_ty),
@@ -170,11 +169,11 @@ impl<'db> CallOutcome<'db> {
return_ty,
revealed_ty,
} => {
diagnostics.add(
context.report_diagnostic(
node,
DiagnosticId::RevealedType,
Severity::Info,
format_args!("Revealed type is `{}`", revealed_ty.display(db)),
format_args!("Revealed type is `{}`", revealed_ty.display(context.db())),
);
Ok(*return_ty)
}
@@ -187,14 +186,16 @@ impl<'db> CallOutcome<'db> {
call_outcome,
} => Err(NotCallableError::PossiblyUnboundDunderCall {
callable_ty: *called_ty,
return_ty: call_outcome.return_ty(db).unwrap_or(Type::Unknown),
return_ty: call_outcome
.return_ty(context.db())
.unwrap_or(Type::Unknown),
}),
Self::Union {
outcomes,
called_ty,
} => {
let mut not_callable = vec![];
let mut union_builder = UnionBuilder::new(db);
let mut union_builder = UnionBuilder::new(context.db());
let mut revealed = false;
for outcome in outcomes {
let return_ty = match outcome {
@@ -210,10 +211,10 @@ impl<'db> CallOutcome<'db> {
*return_ty
} else {
revealed = true;
outcome.unwrap_with_diagnostic(db, node, diagnostics)
outcome.unwrap_with_diagnostic(context, node)
}
}
_ => outcome.unwrap_with_diagnostic(db, node, diagnostics),
_ => outcome.unwrap_with_diagnostic(context, node),
};
union_builder = union_builder.add(return_ty);
}

View File

@@ -70,7 +70,9 @@ impl<'db> ClassBase<'db> {
| Type::Tuple(_)
| Type::SliceLiteral(_)
| Type::ModuleLiteral(_)
| Type::SubclassOf(_) => None,
| Type::SubclassOf(_)
| Type::AlwaysFalsy
| Type::AlwaysTruthy => None,
Type::KnownInstance(known_instance) => match known_instance {
KnownInstanceType::TypeVar(_)
| KnownInstanceType::TypeAliasType(_)
@@ -112,15 +114,24 @@ impl<'db> ClassBase<'db> {
KnownInstanceType::FrozenSet => {
Self::try_from_ty(db, KnownClass::FrozenSet.to_class_literal(db))
}
KnownInstanceType::Callable
| KnownInstanceType::ChainMap
| KnownInstanceType::Counter
| KnownInstanceType::DefaultDict
| KnownInstanceType::Deque
| KnownInstanceType::OrderedDict => Self::try_from_ty(
db,
todo_type!("Support for more typing aliases as base classes"),
),
KnownInstanceType::ChainMap => {
Self::try_from_ty(db, KnownClass::ChainMap.to_class_literal(db))
}
KnownInstanceType::Counter => {
Self::try_from_ty(db, KnownClass::Counter.to_class_literal(db))
}
KnownInstanceType::DefaultDict => {
Self::try_from_ty(db, KnownClass::DefaultDict.to_class_literal(db))
}
KnownInstanceType::Deque => {
Self::try_from_ty(db, KnownClass::Deque.to_class_literal(db))
}
KnownInstanceType::OrderedDict => {
Self::try_from_ty(db, KnownClass::OrderedDict.to_class_literal(db))
}
KnownInstanceType::Callable => {
Self::try_from_ty(db, todo_type!("Support for Callable as a base class"))
}
},
}
}

View File

@@ -0,0 +1,131 @@
use std::fmt;
use drop_bomb::DebugDropBomb;
use ruff_db::{
diagnostic::{DiagnosticId, Severity},
files::File,
};
use ruff_python_ast::AnyNodeRef;
use ruff_text_size::Ranged;
use crate::{
lint::{LintId, LintMetadata},
Db,
};
use super::{TypeCheckDiagnostic, TypeCheckDiagnostics};
/// Context for inferring the types of a single file.
///
/// At least one context exists for every inferred region, but it's
/// possible that inferring a sub-region, like an unpack assignment, creates
/// a sub-context.
///
/// Tracks the reported diagnostics of the inferred region.
///
/// ## Consuming
/// It's important that the context is explicitly consumed before being dropped,
/// by calling [`InferContext::finish`], and the returned diagnostics must be stored
/// on the current [`TypeInference`](super::infer::TypeInference) result.
pub(crate) struct InferContext<'db> {
db: &'db dyn Db,
file: File,
diagnostics: std::cell::RefCell<TypeCheckDiagnostics>,
bomb: DebugDropBomb,
}
impl<'db> InferContext<'db> {
pub(crate) fn new(db: &'db dyn Db, file: File) -> Self {
Self {
db,
file,
diagnostics: std::cell::RefCell::new(TypeCheckDiagnostics::default()),
bomb: DebugDropBomb::new("`InferContext` needs to be explicitly consumed by calling `::finish` to prevent accidental loss of diagnostics."),
}
}
/// The file for which the types are inferred.
pub(crate) fn file(&self) -> File {
self.file
}
pub(crate) fn db(&self) -> &'db dyn Db {
self.db
}
pub(crate) fn extend<T>(&mut self, other: &T)
where
T: WithDiagnostics,
{
self.diagnostics
.get_mut()
.extend(other.diagnostics().iter().cloned());
}
/// Reports a lint located at `node`.
pub(super) fn report_lint(
&self,
lint: &'static LintMetadata,
node: AnyNodeRef,
message: std::fmt::Arguments,
) {
// Skip over diagnostics if the rule is disabled.
let Some(severity) = self.db.rule_selection().severity(LintId::of(lint)) else {
return;
};
self.report_diagnostic(node, DiagnosticId::Lint(lint.name()), severity, message);
}
/// Adds a new diagnostic.
///
/// The diagnostic does not get added if the rule isn't enabled for this file.
pub(super) fn report_diagnostic(
&self,
node: AnyNodeRef,
id: DiagnosticId,
severity: Severity,
message: std::fmt::Arguments,
) {
if !self.db.is_file_open(self.file) {
return;
}
// TODO: Don't emit the diagnostic if:
// * The enclosing node contains any syntax errors
// * The rule is disabled for this file. We probably want to introduce a new query that
// returns a rule selector for a given file that respects the package's settings,
// any global pragma comments in the file, and any per-file-ignores.
// * Check for suppression comments, bump a counter if the diagnostic is suppressed.
self.diagnostics.borrow_mut().push(TypeCheckDiagnostic {
file: self.file,
id,
message: message.to_string(),
range: node.range(),
severity,
});
}
#[must_use]
pub(crate) fn finish(mut self) -> TypeCheckDiagnostics {
self.bomb.defuse();
let mut diagnostics = self.diagnostics.into_inner();
diagnostics.shrink_to_fit();
diagnostics
}
}
impl fmt::Debug for InferContext<'_> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.debug_struct("TyContext")
.field("file", &self.file)
.field("diagnostics", &self.diagnostics)
.field("defused", &self.bomb)
.finish()
}
}
pub(crate) trait WithDiagnostics {
fn diagnostics(&self) -> &TypeCheckDiagnostics;
}

View File

@@ -1,11 +1,11 @@
use crate::lint::{Level, LintId, LintMetadata, LintRegistryBuilder, LintStatus};
use crate::declare_lint;
use crate::lint::{Level, LintRegistryBuilder, LintStatus};
use crate::types::string_annotation::{
BYTE_STRING_TYPE_ANNOTATION, ESCAPE_CHARACTER_IN_FORWARD_ANNOTATION, FSTRING_TYPE_ANNOTATION,
IMPLICIT_CONCATENATED_STRING_TYPE_ANNOTATION, INVALID_SYNTAX_IN_FORWARD_ANNOTATION,
RAW_STRING_TYPE_ANNOTATION,
};
use crate::types::{ClassLiteralType, Type};
use crate::{declare_lint, Db};
use ruff_db::diagnostic::{Diagnostic, DiagnosticId, Severity};
use ruff_db::files::File;
use ruff_python_ast::{self as ast, AnyNodeRef};
@@ -15,6 +15,8 @@ use std::fmt::Formatter;
use std::ops::Deref;
use std::sync::Arc;
use super::context::InferContext;
/// Registers all known type check lints.
pub(crate) fn register_lints(registry: &mut LintRegistryBuilder) {
registry.register_lint(&CALL_NON_CALLABLE);
@@ -32,6 +34,7 @@ pub(crate) fn register_lints(registry: &mut LintRegistryBuilder) {
registry.register_lint(&INVALID_DECLARATION);
registry.register_lint(&INVALID_EXCEPTION_CAUGHT);
registry.register_lint(&INVALID_PARAMETER_DEFAULT);
registry.register_lint(&INVALID_RAISE);
registry.register_lint(&INVALID_TYPE_FORM);
registry.register_lint(&INVALID_TYPE_VARIABLE_CONSTRAINTS);
registry.register_lint(&NON_SUBSCRIPTABLE);
@@ -246,6 +249,49 @@ declare_lint! {
}
}
declare_lint! {
/// Checks for `raise` statements that raise non-exceptions or use invalid
/// causes for their raised exceptions.
///
/// ## Why is this bad?
/// Only subclasses or instances of `BaseException` can be raised.
/// For an exception's cause, the same rules apply, except that `None` is also
/// permitted. Violating these rules results in a `TypeError` at runtime.
///
/// ## Examples
/// ```python
/// def f():
/// try:
/// something()
/// except NameError:
/// raise "oops!" from f
///
/// def g():
/// raise NotImplemented from 42
/// ```
///
/// Use instead:
/// ```python
/// def f():
/// try:
/// something()
/// except NameError as e:
/// raise RuntimeError("oops!") from e
///
/// def g():
/// raise NotImplementedError from None
/// ```
///
/// ## References
/// - [Python documentation: The `raise` statement](https://docs.python.org/3/reference/simple_stmts.html#raise)
/// - [Python documentation: Built-in Exceptions](https://docs.python.org/3/library/exceptions.html#built-in-exceptions)
pub(crate) static INVALID_RAISE = {
summary: "detects `raise` statements that raise invalid exceptions or use invalid causes",
status: LintStatus::preview("1.0.0"),
default_level: Level::Error,
}
}
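Both invalid forms described in the lint documentation fail with `TypeError` at runtime, which is what `INVALID_RAISE` surfaces statically:

```python
caught_raise = caught_cause = False

try:
    raise "oops!"  # not a BaseException: TypeError at runtime
except TypeError:
    caught_raise = True

try:
    # The `from` cause must be a BaseException (or None): TypeError.
    raise RuntimeError("boom") from 42
except TypeError:
    caught_cause = True

assert caught_raise and caught_cause
```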
declare_lint! {
/// ## What it does
/// Checks for invalid type expressions.
@@ -564,223 +610,181 @@ impl<'a> IntoIterator for &'a TypeCheckDiagnostics {
}
}
pub(super) struct TypeCheckDiagnosticsBuilder<'db> {
db: &'db dyn Db,
file: File,
diagnostics: TypeCheckDiagnostics,
/// Emit a diagnostic declaring that the object represented by `node` is not iterable
pub(super) fn report_not_iterable(context: &InferContext, node: AnyNodeRef, not_iterable_ty: Type) {
context.report_lint(
&NOT_ITERABLE,
node,
format_args!(
"Object of type `{}` is not iterable",
not_iterable_ty.display(context.db())
),
);
}
impl<'db> TypeCheckDiagnosticsBuilder<'db> {
pub(super) fn new(db: &'db dyn Db, file: File) -> Self {
Self {
db,
file,
diagnostics: TypeCheckDiagnostics::default(),
/// Emit a diagnostic declaring that the object represented by `node` is not iterable
/// because its `__iter__` method is possibly unbound.
pub(super) fn report_not_iterable_possibly_unbound(
context: &InferContext,
node: AnyNodeRef,
element_ty: Type,
) {
context.report_lint(
&NOT_ITERABLE,
node,
format_args!(
"Object of type `{}` is not iterable because its `__iter__` method is possibly unbound",
element_ty.display(context.db())
),
);
}
/// Emit a diagnostic declaring that an index is out of bounds for a tuple.
pub(super) fn report_index_out_of_bounds(
context: &InferContext,
kind: &'static str,
node: AnyNodeRef,
tuple_ty: Type,
length: usize,
index: i64,
) {
context.report_lint(
&INDEX_OUT_OF_BOUNDS,
node,
format_args!(
"Index {index} is out of bounds for {kind} `{}` with length {length}",
tuple_ty.display(context.db())
),
);
}
/// Emit a diagnostic declaring that a type does not support subscripting.
pub(super) fn report_non_subscriptable(
context: &InferContext,
node: AnyNodeRef,
non_subscriptable_ty: Type,
method: &str,
) {
context.report_lint(
&NON_SUBSCRIPTABLE,
node,
format_args!(
"Cannot subscript object of type `{}` with no `{method}` method",
non_subscriptable_ty.display(context.db())
),
);
}
pub(super) fn report_unresolved_module<'db>(
context: &InferContext,
import_node: impl Into<AnyNodeRef<'db>>,
level: u32,
module: Option<&str>,
) {
context.report_lint(
&UNRESOLVED_IMPORT,
import_node.into(),
format_args!(
"Cannot resolve import `{}{}`",
".".repeat(level as usize),
module.unwrap_or_default()
),
);
}
pub(super) fn report_slice_step_size_zero(context: &InferContext, node: AnyNodeRef) {
context.report_lint(
&ZERO_STEPSIZE_IN_SLICE,
node,
format_args!("Slice step size cannot be zero"),
);
}
pub(super) fn report_invalid_assignment(
context: &InferContext,
node: AnyNodeRef,
declared_ty: Type,
assigned_ty: Type,
) {
match declared_ty {
Type::ClassLiteral(ClassLiteralType { class }) => {
context.report_lint(&INVALID_ASSIGNMENT, node, format_args!(
"Implicit shadowing of class `{}`; annotate to make it explicit if this is intentional",
class.name(context.db())));
}
}
/// Emit a diagnostic declaring that the object represented by `node` is not iterable
pub(super) fn add_not_iterable(&mut self, node: AnyNodeRef, not_iterable_ty: Type<'db>) {
self.add_lint(
&NOT_ITERABLE,
node,
format_args!(
"Object of type `{}` is not iterable",
not_iterable_ty.display(self.db)
),
);
}
/// Emit a diagnostic declaring that the object represented by `node` is not iterable
/// because its `__iter__` method is possibly unbound.
pub(super) fn add_not_iterable_possibly_unbound(
&mut self,
node: AnyNodeRef,
element_ty: Type<'db>,
) {
self.add_lint(
&NOT_ITERABLE,
node,
format_args!(
"Object of type `{}` is not iterable because its `__iter__` method is possibly unbound",
element_ty.display(self.db)
),
);
}
/// Emit a diagnostic declaring that an index is out of bounds for a tuple.
pub(super) fn add_index_out_of_bounds(
&mut self,
kind: &'static str,
node: AnyNodeRef,
tuple_ty: Type<'db>,
length: usize,
index: i64,
) {
self.add_lint(
&INDEX_OUT_OF_BOUNDS,
node,
format_args!(
"Index {index} is out of bounds for {kind} `{}` with length {length}",
tuple_ty.display(self.db)
),
);
}
/// Emit a diagnostic declaring that a type does not support subscripting.
pub(super) fn add_non_subscriptable(
&mut self,
node: AnyNodeRef,
non_subscriptable_ty: Type<'db>,
method: &str,
) {
self.add_lint(
&NON_SUBSCRIPTABLE,
node,
format_args!(
"Cannot subscript object of type `{}` with no `{method}` method",
non_subscriptable_ty.display(self.db)
),
);
}
pub(super) fn add_unresolved_module(
&mut self,
import_node: impl Into<AnyNodeRef<'db>>,
level: u32,
module: Option<&str>,
) {
self.add_lint(
&UNRESOLVED_IMPORT,
import_node.into(),
format_args!(
"Cannot resolve import `{}{}`",
".".repeat(level as usize),
module.unwrap_or_default()
),
);
}
pub(super) fn add_slice_step_size_zero(&mut self, node: AnyNodeRef) {
self.add_lint(
&ZERO_STEPSIZE_IN_SLICE,
node,
format_args!("Slice step size cannot be zero"),
);
}
pub(super) fn add_invalid_assignment(
&mut self,
node: AnyNodeRef,
declared_ty: Type<'db>,
assigned_ty: Type<'db>,
) {
match declared_ty {
Type::ClassLiteral(ClassLiteralType { class }) => {
self.add_lint(&INVALID_ASSIGNMENT, node, format_args!(
"Implicit shadowing of class `{}`; annotate to make it explicit if this is intentional",
class.name(self.db)));
}
Type::FunctionLiteral(function) => {
self.add_lint(&INVALID_ASSIGNMENT, node, format_args!(
"Implicit shadowing of function `{}`; annotate to make it explicit if this is intentional",
function.name(self.db)));
}
_ => {
self.add_lint(
&INVALID_ASSIGNMENT,
node,
format_args!(
"Object of type `{}` is not assignable to `{}`",
assigned_ty.display(self.db),
declared_ty.display(self.db),
),
);
}
Type::FunctionLiteral(function) => {
context.report_lint(&INVALID_ASSIGNMENT, node, format_args!(
"Implicit shadowing of function `{}`; annotate to make it explicit if this is intentional",
function.name(context.db())));
}
}
pub(super) fn add_possibly_unresolved_reference(&mut self, expr_name_node: &ast::ExprName) {
let ast::ExprName { id, .. } = expr_name_node;
self.add_lint(
&POSSIBLY_UNRESOLVED_REFERENCE,
expr_name_node.into(),
format_args!("Name `{id}` used when possibly not defined"),
);
}
pub(super) fn add_unresolved_reference(&mut self, expr_name_node: &ast::ExprName) {
let ast::ExprName { id, .. } = expr_name_node;
self.add_lint(
&UNRESOLVED_REFERENCE,
expr_name_node.into(),
format_args!("Name `{id}` used when not defined"),
);
}
pub(super) fn add_invalid_exception_caught(&mut self, db: &dyn Db, node: &ast::Expr, ty: Type) {
self.add_lint(
&INVALID_EXCEPTION_CAUGHT,
node.into(),
format_args!(
"Cannot catch object of type `{}` in an exception handler \
(must be a `BaseException` subclass or a tuple of `BaseException` subclasses)",
ty.display(db)
),
);
}
pub(super) fn add_lint(
&mut self,
lint: &'static LintMetadata,
node: AnyNodeRef,
message: std::fmt::Arguments,
) {
// Skip over diagnostics if the rule is disabled.
let Some(severity) = self.db.rule_selection().severity(LintId::of(lint)) else {
return;
};
self.add(node, DiagnosticId::Lint(lint.name()), severity, message);
}
/// Adds a new diagnostic.
///
/// The diagnostic does not get added if the rule isn't enabled for this file.
pub(super) fn add(
&mut self,
node: AnyNodeRef,
id: DiagnosticId,
severity: Severity,
message: std::fmt::Arguments,
) {
if !self.db.is_file_open(self.file) {
return;
_ => {
context.report_lint(
&INVALID_ASSIGNMENT,
node,
format_args!(
"Object of type `{}` is not assignable to `{}`",
assigned_ty.display(context.db()),
declared_ty.display(context.db()),
),
);
}
// TODO: Don't emit the diagnostic if:
// * The enclosing node contains any syntax errors
// * The rule is disabled for this file. We probably want to introduce a new query that
// returns a rule selector for a given file that respects the package's settings,
// any global pragma comments in the file, and any per-file-ignores.
self.diagnostics.push(TypeCheckDiagnostic {
file: self.file,
id,
message: message.to_string(),
range: node.range(),
severity,
});
}
pub(super) fn extend(&mut self, diagnostics: &TypeCheckDiagnostics) {
self.diagnostics.extend(diagnostics);
}
pub(super) fn finish(mut self) -> TypeCheckDiagnostics {
self.diagnostics.shrink_to_fit();
self.diagnostics
}
}
pub(super) fn report_possibly_unresolved_reference(
context: &InferContext,
expr_name_node: &ast::ExprName,
) {
let ast::ExprName { id, .. } = expr_name_node;
context.report_lint(
&POSSIBLY_UNRESOLVED_REFERENCE,
expr_name_node.into(),
format_args!("Name `{id}` used when possibly not defined"),
);
}
pub(super) fn report_unresolved_reference(context: &InferContext, expr_name_node: &ast::ExprName) {
let ast::ExprName { id, .. } = expr_name_node;
context.report_lint(
&UNRESOLVED_REFERENCE,
expr_name_node.into(),
format_args!("Name `{id}` used when not defined"),
);
}
pub(super) fn report_invalid_exception_caught(context: &InferContext, node: &ast::Expr, ty: Type) {
context.report_lint(
&INVALID_EXCEPTION_CAUGHT,
node.into(),
format_args!(
"Cannot catch object of type `{}` in an exception handler \
(must be a `BaseException` subclass or a tuple of `BaseException` subclasses)",
ty.display(context.db())
),
);
}
pub(crate) fn report_invalid_exception_raised(context: &InferContext, node: &ast::Expr, ty: Type) {
context.report_lint(
&INVALID_RAISE,
node.into(),
format_args!(
"Cannot raise object of type `{}` (must be a `BaseException` subclass or instance)",
ty.display(context.db())
),
);
}
pub(crate) fn report_invalid_exception_cause(context: &InferContext, node: &ast::Expr, ty: Type) {
context.report_lint(
&INVALID_RAISE,
node.into(),
format_args!(
"Cannot use object of type `{}` as exception cause \
(must be a `BaseException` subclass or instance or `None`)",
ty.display(context.db())
),
);
}
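All of the `report_*` helpers above funnel into `InferContext::report_lint`, which, mirroring `add_lint`'s severity check, drops the diagnostic when the rule has no enabled severity. A hedged Python sketch of that gate (names are illustrative, not the real API):

```python
# Illustrative sketch of the severity gate in `report_lint`:
# a diagnostic is recorded only if the rule resolves to a severity.
def report_lint(rule_severity, diagnostics, lint, message):
    severity = rule_severity.get(lint)
    if severity is None:
        return  # rule disabled: skip building the diagnostic entirely
    diagnostics.append({"lint": lint, "severity": severity, "message": message})

enabled = {"not-iterable": "error"}
diags = []
report_lint(enabled, diags, "not-iterable", "Object of type `int` is not iterable")
report_lint(enabled, diags, "division-by-zero", "never recorded: rule disabled")
print(len(diags))  # 1
```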


@@ -79,8 +79,8 @@ impl Display for DisplayRepresentation<'_> {
// `[Type::Todo]`'s display should be explicit that it is not a valid display of
// any other type
Type::Todo(todo) => write!(f, "@Todo{todo}"),
Type::ModuleLiteral(file) => {
write!(f, "<module '{:?}'>", file.path(self.db))
Type::ModuleLiteral(module) => {
write!(f, "<module '{}'>", module.module(self.db).name())
}
// TODO functions and classes should display using a fully qualified name
Type::ClassLiteral(ClassLiteralType { class }) => f.write_str(class.name(self.db)),
@@ -140,6 +140,8 @@ impl Display for DisplayRepresentation<'_> {
}
f.write_str("]")
}
Type::AlwaysTruthy => f.write_str("AlwaysTruthy"),
Type::AlwaysFalsy => f.write_str("AlwaysFalsy"),
}
}
}

File diff suppressed because it is too large


@@ -196,6 +196,7 @@ impl<'db> NarrowingConstraintsBuilder<'db> {
is_positive: bool,
) -> Option<NarrowingConstraints<'db>> {
match expression_node {
ast::Expr::Name(name) => Some(self.evaluate_expr_name(name, is_positive)),
ast::Expr::Compare(expr_compare) => {
self.evaluate_expr_compare(expr_compare, expression, is_positive)
}
@@ -254,6 +255,31 @@ impl<'db> NarrowingConstraintsBuilder<'db> {
}
}
fn evaluate_expr_name(
&mut self,
expr_name: &ast::ExprName,
is_positive: bool,
) -> NarrowingConstraints<'db> {
let ast::ExprName { id, .. } = expr_name;
let symbol = self
.symbols()
.symbol_id_by_name(id)
.expect("Should always have a symbol for every Name node");
let mut constraints = NarrowingConstraints::default();
constraints.insert(
symbol,
if is_positive {
Type::AlwaysFalsy.negate(self.db)
} else {
Type::AlwaysTruthy.negate(self.db)
},
);
constraints
}
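The constraint inserted here (negated `AlwaysFalsy` in the positive branch, negated `AlwaysTruthy` otherwise) mirrors Python's runtime truthiness narrowing; a plain-Python illustration, independent of the checker's API:

```python
def classify(x):
    if x:
        # Positive branch: x cannot be falsy, so values like None and 0
        # are excluded -- what negating `AlwaysFalsy` models.
        return "truthy"
    # Negative branch: x is falsy (e.g. None, 0, "").
    return "falsy"

print(classify(3), classify(0), classify(None))  # truthy falsy falsy
```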
fn evaluate_expr_compare(
&mut self,
expr_compare: &ast::ExprCompare,


@@ -75,6 +75,8 @@ fn arbitrary_core_type(g: &mut Gen) -> Ty {
Ty::AbcClassLiteral("ABCMeta"),
Ty::SubclassOfAbcClass("ABC"),
Ty::SubclassOfAbcClass("ABCMeta"),
Ty::AlwaysTruthy,
Ty::AlwaysFalsy,
])
.unwrap()
.clone()


@@ -1,13 +1,13 @@
use ruff_db::files::File;
use ruff_db::source::source_text;
use ruff_python_ast::str::raw_contents;
use ruff_python_ast::{self as ast, ModExpression, StringFlags};
use ruff_python_parser::{parse_expression_range, Parsed};
use ruff_text_size::Ranged;
use crate::declare_lint;
use crate::lint::{Level, LintStatus};
use crate::types::diagnostic::{TypeCheckDiagnostics, TypeCheckDiagnosticsBuilder};
use crate::{declare_lint, Db};
use super::context::InferContext;
declare_lint! {
/// ## What it does
@@ -127,24 +127,23 @@ declare_lint! {
}
}
type AnnotationParseResult = Result<Parsed<ModExpression>, TypeCheckDiagnostics>;
/// Parses the given expression as a string annotation.
pub(crate) fn parse_string_annotation(
db: &dyn Db,
file: File,
context: &InferContext,
string_expr: &ast::ExprStringLiteral,
) -> AnnotationParseResult {
) -> Option<Parsed<ModExpression>> {
let file = context.file();
let db = context.db();
let _span = tracing::trace_span!("parse_string_annotation", string=?string_expr.range(), file=%file.path(db)).entered();
let source = source_text(db.upcast(), file);
let node_text = &source[string_expr.range()];
let mut diagnostics = TypeCheckDiagnosticsBuilder::new(db, file);
if let [string_literal] = string_expr.value.as_slice() {
let prefix = string_literal.flags.prefix();
if prefix.is_raw() {
diagnostics.add_lint(
context.report_lint(
&RAW_STRING_TYPE_ANNOTATION,
string_literal.into(),
format_args!("Type expressions cannot use raw string literal"),
@@ -167,8 +166,8 @@ pub(crate) fn parse_string_annotation(
// """ = 1
// ```
match parse_expression_range(source.as_str(), range_excluding_quotes) {
Ok(parsed) => return Ok(parsed),
Err(parse_error) => diagnostics.add_lint(
Ok(parsed) => return Some(parsed),
Err(parse_error) => context.report_lint(
&INVALID_SYNTAX_IN_FORWARD_ANNOTATION,
string_literal.into(),
format_args!("Syntax error in forward annotation: {}", parse_error.error),
@@ -177,7 +176,7 @@ pub(crate) fn parse_string_annotation(
} else {
// The raw contents of the string doesn't match the parsed content. This could be the
// case for annotations that contain escape sequences.
diagnostics.add_lint(
context.report_lint(
&ESCAPE_CHARACTER_IN_FORWARD_ANNOTATION,
string_expr.into(),
format_args!("Type expressions cannot contain escape characters"),
@@ -185,12 +184,12 @@ pub(crate) fn parse_string_annotation(
}
} else {
// String is implicitly concatenated.
diagnostics.add_lint(
context.report_lint(
&IMPLICIT_CONCATENATED_STRING_TYPE_ANNOTATION,
string_expr.into(),
format_args!("Type expressions cannot span multiple string literals"),
);
}
Err(diagnostics.finish())
None
}


@@ -6,30 +6,34 @@ use rustc_hash::FxHashMap;
use crate::semantic_index::ast_ids::{HasScopedExpressionId, ScopedExpressionId};
use crate::semantic_index::symbol::ScopeId;
use crate::types::{todo_type, Type, TypeCheckDiagnostics, TypeCheckDiagnosticsBuilder};
use crate::types::{todo_type, Type, TypeCheckDiagnostics};
use crate::Db;
use super::context::{InferContext, WithDiagnostics};
/// Unpacks the value expression type to its respective targets.
pub(crate) struct Unpacker<'db> {
db: &'db dyn Db,
context: InferContext<'db>,
targets: FxHashMap<ScopedExpressionId, Type<'db>>,
diagnostics: TypeCheckDiagnosticsBuilder<'db>,
}
impl<'db> Unpacker<'db> {
pub(crate) fn new(db: &'db dyn Db, file: File) -> Self {
Self {
db,
context: InferContext::new(db, file),
targets: FxHashMap::default(),
diagnostics: TypeCheckDiagnosticsBuilder::new(db, file),
}
}
fn db(&self) -> &'db dyn Db {
self.context.db()
}
pub(crate) fn unpack(&mut self, target: &ast::Expr, value_ty: Type<'db>, scope: ScopeId<'db>) {
match target {
ast::Expr::Name(target_name) => {
self.targets
.insert(target_name.scoped_expression_id(self.db, scope), value_ty);
.insert(target_name.scoped_expression_id(self.db(), scope), value_ty);
}
ast::Expr::Starred(ast::ExprStarred { value, .. }) => {
self.unpack(value, value_ty, scope);
@@ -40,11 +44,11 @@ impl<'db> Unpacker<'db> {
let starred_index = elts.iter().position(ast::Expr::is_starred_expr);
let element_types = if let Some(starred_index) = starred_index {
if tuple_ty.len(self.db) >= elts.len() - 1 {
if tuple_ty.len(self.db()) >= elts.len() - 1 {
let mut element_types = Vec::with_capacity(elts.len());
element_types.extend_from_slice(
// SAFETY: Safe because of the length check above.
&tuple_ty.elements(self.db)[..starred_index],
&tuple_ty.elements(self.db())[..starred_index],
);
// E.g., in `(a, *b, c, d) = ...`, the index of starred element `b`
@@ -52,10 +56,10 @@ impl<'db> Unpacker<'db> {
let remaining = elts.len() - (starred_index + 1);
// This index represents the type of the last element that belongs
// to the starred expression, in an exclusive manner.
let starred_end_index = tuple_ty.len(self.db) - remaining;
let starred_end_index = tuple_ty.len(self.db()) - remaining;
// SAFETY: Safe because of the length check above.
let _starred_element_types =
&tuple_ty.elements(self.db)[starred_index..starred_end_index];
&tuple_ty.elements(self.db())[starred_index..starred_end_index];
// TODO: Combine the types into a list type. If the
// starred_element_types is empty, then it should be `List[Any]`.
// combine_types(starred_element_types);
@@ -63,11 +67,11 @@ impl<'db> Unpacker<'db> {
element_types.extend_from_slice(
// SAFETY: Safe because of the length check above.
&tuple_ty.elements(self.db)[starred_end_index..],
&tuple_ty.elements(self.db())[starred_end_index..],
);
Cow::Owned(element_types)
} else {
let mut element_types = tuple_ty.elements(self.db).to_vec();
let mut element_types = tuple_ty.elements(self.db()).to_vec();
// Subtract 1 to insert the starred expression type at the correct
// index.
element_types.resize(elts.len() - 1, Type::Unknown);
@@ -76,7 +80,7 @@ impl<'db> Unpacker<'db> {
Cow::Owned(element_types)
}
} else {
Cow::Borrowed(tuple_ty.elements(self.db).as_ref())
Cow::Borrowed(tuple_ty.elements(self.db()).as_ref())
};
for (index, element) in elts.iter().enumerate() {
@@ -94,9 +98,9 @@ impl<'db> Unpacker<'db> {
// individual character, instead of just an array of `LiteralString`, but
// there would be a cost and it's not clear that it's worth it.
let value_ty = Type::tuple(
self.db,
self.db(),
std::iter::repeat(Type::LiteralString)
.take(string_literal_ty.python_len(self.db)),
.take(string_literal_ty.python_len(self.db())),
);
self.unpack(target, value_ty, scope);
}
@@ -105,8 +109,8 @@ impl<'db> Unpacker<'db> {
Type::LiteralString
} else {
value_ty
.iterate(self.db)
.unwrap_with_diagnostic(AnyNodeRef::from(target), &mut self.diagnostics)
.iterate(self.db())
.unwrap_with_diagnostic(&self.context, AnyNodeRef::from(target))
};
for element in elts {
self.unpack(element, value_ty, scope);
@@ -120,7 +124,7 @@ impl<'db> Unpacker<'db> {
pub(crate) fn finish(mut self) -> UnpackResult<'db> {
self.targets.shrink_to_fit();
UnpackResult {
diagnostics: self.diagnostics.finish(),
diagnostics: self.context.finish(),
targets: self.targets,
}
}
@@ -136,8 +140,10 @@ impl<'db> UnpackResult<'db> {
pub(crate) fn get(&self, expr_id: ScopedExpressionId) -> Option<Type<'db>> {
self.targets.get(&expr_id).copied()
}
}
pub(crate) fn diagnostics(&self) -> &TypeCheckDiagnostics {
impl WithDiagnostics for UnpackResult<'_> {
fn diagnostics(&self) -> &TypeCheckDiagnostics {
&self.diagnostics
}
}
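The starred-index arithmetic in `Unpacker` mirrors how Python itself distributes tuple elements around a starred target at runtime; for reference:

```python
# Runtime analogue of the starred-target bookkeeping above: elements
# before the star, a list for the star, and the remainder after it.
a, *b, c = (1, 2, 3, 4)
print(a, b, c)  # 1 [2, 3] 4
```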


@@ -91,11 +91,11 @@ fn background_request_task<'a, R: traits::BackgroundDocumentRequestHandler>(
let db = match path {
AnySystemPath::System(path) => {
match session.workspace_db_for_path(path.as_std_path()) {
Some(db) => db.snapshot(),
None => session.default_workspace_db().snapshot(),
Some(db) => db.clone(),
None => session.default_workspace_db().clone(),
}
}
AnySystemPath::SystemVirtual(_) => session.default_workspace_db().snapshot(),
AnySystemPath::SystemVirtual(_) => session.default_workspace_db().clone(),
};
let Some(snapshot) = session.take_snapshot(url) else {


@@ -9,6 +9,7 @@ use ruff_db::vendored::VendoredFileSystem;
use ruff_db::{Db as SourceDb, Upcast};
#[salsa::db]
#[derive(Clone)]
pub(crate) struct Db {
workspace_root: SystemPathBuf,
storage: salsa::Storage<Self>,


@@ -21,6 +21,7 @@ pub trait Db: SemanticDb + Upcast<dyn SemanticDb> {
}
#[salsa::db]
#[derive(Clone)]
pub struct RootDatabase {
workspace: Option<Workspace>,
storage: salsa::Storage<RootDatabase>,
@@ -80,17 +81,6 @@ impl RootDatabase {
{
Cancelled::catch(|| f(self))
}
#[must_use]
pub fn snapshot(&self) -> Self {
Self {
workspace: self.workspace,
storage: self.storage.clone(),
files: self.files.snapshot(),
system: Arc::clone(&self.system),
rule_selection: Arc::clone(&self.rule_selection),
}
}
}
impl Upcast<dyn SemanticDb> for RootDatabase {
@@ -184,6 +174,7 @@ pub(crate) mod tests {
use crate::DEFAULT_LINT_REGISTRY;
#[salsa::db]
#[derive(Clone)]
pub(crate) struct TestDb {
storage: salsa::Storage<Self>,
events: Arc<std::sync::Mutex<Vec<Event>>>,


@@ -195,13 +195,13 @@ impl Workspace {
let result = Arc::new(std::sync::Mutex::new(Vec::new()));
let inner_result = Arc::clone(&result);
let db = db.snapshot();
let db = db.clone();
let workspace_span = workspace_span.clone();
rayon::scope(move |scope| {
for file in &files {
let result = inner_result.clone();
let db = db.snapshot();
let db = db.clone();
let workspace_span = workspace_span.clone();
scope.spawn(move |_| {


@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.8.3"
version = "0.8.4"
publish = true
authors = { workspace = true }
edition = { workspace = true }


@@ -81,7 +81,7 @@ pub(crate) fn analyze_graph(
// Collect and resolve the imports for each file.
let result = Arc::new(Mutex::new(Vec::new()));
let inner_result = Arc::clone(&result);
let db = db.snapshot();
let db = db.clone();
rayon::scope(move |scope| {
for resolved_file in paths {
@@ -137,7 +137,7 @@ pub(crate) fn analyze_graph(
continue;
};
let db = db.snapshot();
let db = db.clone();
let glob_resolver = glob_resolver.clone();
let root = root.clone();
let result = inner_result.clone();


@@ -31,15 +31,10 @@ static EXPECTED_DIAGNOSTICS: &[&str] = &[
"warning[lint:possibly-unresolved-reference] /src/tomllib/_parser.py:98:12 Name `char` used when possibly not defined",
"warning[lint:possibly-unresolved-reference] /src/tomllib/_parser.py:101:12 Name `char` used when possibly not defined",
"warning[lint:possibly-unresolved-reference] /src/tomllib/_parser.py:104:14 Name `char` used when possibly not defined",
"error[lint:conflicting-declarations] /src/tomllib/_parser.py:108:17 Conflicting declared types for `second_char`: Unknown, str | None",
"warning[lint:possibly-unresolved-reference] /src/tomllib/_parser.py:115:14 Name `char` used when possibly not defined",
"warning[lint:possibly-unresolved-reference] /src/tomllib/_parser.py:126:12 Name `char` used when possibly not defined",
"error[lint:conflicting-declarations] /src/tomllib/_parser.py:267:9 Conflicting declared types for `char`: Unknown, str | None",
"warning[lint:possibly-unresolved-reference] /src/tomllib/_parser.py:348:20 Name `nest` used when possibly not defined",
"warning[lint:possibly-unresolved-reference] /src/tomllib/_parser.py:353:5 Name `nest` used when possibly not defined",
"error[lint:conflicting-declarations] /src/tomllib/_parser.py:364:9 Conflicting declared types for `char`: Unknown, str | None",
"error[lint:conflicting-declarations] /src/tomllib/_parser.py:381:13 Conflicting declared types for `char`: Unknown, str | None",
"error[lint:conflicting-declarations] /src/tomllib/_parser.py:395:9 Conflicting declared types for `char`: Unknown, str | None",
"warning[lint:possibly-unresolved-reference] /src/tomllib/_parser.py:453:24 Name `nest` used when possibly not defined",
"warning[lint:possibly-unresolved-reference] /src/tomllib/_parser.py:455:9 Name `nest` used when possibly not defined",
"warning[lint:possibly-unresolved-reference] /src/tomllib/_parser.py:482:16 Name `char` used when possibly not defined",
@@ -47,7 +42,6 @@ static EXPECTED_DIAGNOSTICS: &[&str] = &[
"warning[lint:possibly-unresolved-reference] /src/tomllib/_parser.py:573:12 Name `char` used when possibly not defined",
"warning[lint:possibly-unresolved-reference] /src/tomllib/_parser.py:579:12 Name `char` used when possibly not defined",
"warning[lint:possibly-unresolved-reference] /src/tomllib/_parser.py:580:63 Name `char` used when possibly not defined",
"error[lint:conflicting-declarations] /src/tomllib/_parser.py:590:9 Conflicting declared types for `char`: Unknown, str | None",
"warning[lint:possibly-unresolved-reference] /src/tomllib/_parser.py:629:38 Name `datetime_obj` used when possibly not defined",
];


@@ -48,7 +48,7 @@ pub fn vendored_path_to_file(
}
/// Lookup table that maps [file paths](`FilePath`) to salsa interned [`File`] instances.
#[derive(Default)]
#[derive(Default, Clone)]
pub struct Files {
inner: Arc<FilesInner>,
}
@@ -253,13 +253,6 @@ impl Files {
root.set_revision(db).to(FileRevision::now());
}
}
#[must_use]
pub fn snapshot(&self) -> Self {
Self {
inner: Arc::clone(&self.inner),
}
}
}
impl std::fmt::Debug for Files {


@@ -48,13 +48,13 @@ mod tests {
///
/// Uses an in-memory file system and stubs out the vendored files by default.
#[salsa::db]
#[derive(Default)]
#[derive(Default, Clone)]
pub(crate) struct TestDb {
storage: salsa::Storage<Self>,
files: Files,
system: TestSystem,
vendored: VendoredFileSystem,
events: std::sync::Arc<std::sync::Mutex<Vec<salsa::Event>>>,
events: Arc<std::sync::Mutex<Vec<salsa::Event>>>,
}
impl TestDb {


@@ -175,19 +175,26 @@ pub trait DbWithTestSystem: Db + Sized {
///
/// # Panics
/// If the system isn't using the memory file system.
fn write_file(
&mut self,
path: impl AsRef<SystemPath>,
content: impl ToString,
) -> crate::system::Result<()> {
fn write_file(&mut self, path: impl AsRef<SystemPath>, content: impl ToString) -> Result<()> {
let path = path.as_ref();
let result = self
.test_system()
.memory_file_system()
.write_file(path, content);
let memory_fs = self.test_system().memory_file_system();
let sync_ancestors = path
.parent()
.is_some_and(|parent| !memory_fs.exists(parent));
let result = memory_fs.write_file(path, content);
if result.is_ok() {
File::sync_path(self, path);
// Sync the ancestor paths if the path's parent
// directory didn't exist before.
if sync_ancestors {
for ancestor in path.ancestors() {
File::sync_path(self, ancestor);
}
}
}
result


@@ -16,7 +16,7 @@ static EMPTY_VENDORED: std::sync::LazyLock<VendoredFileSystem> = std::sync::Lazy
});
#[salsa::db]
#[derive(Default)]
#[derive(Default, Clone)]
pub struct ModuleDb {
storage: salsa::Storage<Self>,
files: Files,
@@ -55,17 +55,6 @@ impl ModuleDb {
Ok(db)
}
/// Create a snapshot of the current database.
#[must_use]
pub fn snapshot(&self) -> Self {
Self {
storage: self.storage.clone(),
system: self.system.clone(),
files: self.files.snapshot(),
rule_selection: Arc::clone(&self.rule_selection),
}
}
}
impl Upcast<dyn SourceDb> for ModuleDb {


@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.8.3"
version = "0.8.4"
publish = false
authors = { workspace = true }
edition = { workspace = true }


@@ -1,36 +1,80 @@
from airflow import PY36, PY37, PY38, PY39, PY310, PY311, PY312
from airflow.triggers.external_task import TaskStateTrigger
from airflow.api_connexion.security import requires_access
from airflow import (
PY36,
PY37,
PY38,
PY39,
PY310,
PY311,
PY312,
Dataset as DatasetFromRoot,
)
from airflow.api_connexion.security import requires_access, requires_access_dataset
from airflow.auth.managers.base_auth_manager import is_authorized_dataset
from airflow.auth.managers.models.resource_details import DatasetDetails
from airflow.configuration import (
as_dict,
get,
getboolean,
getfloat,
getint,
has_option,
remove_option,
as_dict,
set,
)
from airflow.contrib.aws_athena_hook import AWSAthenaHook
from airflow.metrics.validators import AllowListValidator
from airflow.metrics.validators import BlockListValidator
from airflow.operators.subdag import SubDagOperator
from airflow.sensors.external_task import ExternalTaskSensorLink
from airflow.datasets import (
Dataset,
DatasetAlias,
DatasetAliasEvent,
DatasetAll,
DatasetAny,
expand_alias_to_datasets,
)
from airflow.datasets.metadata import Metadata
from airflow.datasets.manager import (
DatasetManager,
dataset_manager,
resolve_dataset_manager,
)
from airflow.lineage.hook import DatasetLineageInfo
from airflow.listeners.spec.dataset import on_dataset_changed, on_dataset_created
from airflow.metrics.validators import AllowListValidator, BlockListValidator
from airflow.operators import dummy_operator
from airflow.operators.bash_operator import BashOperator
from airflow.operators.branch_operator import BaseBranchOperator
from airflow.operators.dummy import EmptyOperator, DummyOperator
from airflow.operators import dummy_operator
from airflow.operators.dummy import DummyOperator, EmptyOperator
from airflow.operators.email_operator import EmailOperator
from airflow.operators.subdag import SubDagOperator
from airflow.providers.amazon.auth_manager.avp.entities import AvpEntities
from airflow.providers.amazon.aws.datasets import s3
from airflow.providers.common.io.datasets import file as common_io_file
from airflow.providers.fab.auth_manager import fab_auth_manager
from airflow.providers.google.datasets import bigquery, gcs
from airflow.providers.mysql.datasets import mysql
from airflow.providers.openlineage.utils.utils import (
DatasetInfo,
translate_airflow_dataset,
)
from airflow.providers.postgres.datasets import postgres
from airflow.providers.trino.datasets import trino
from airflow.secrets.local_filesystem import get_connection, load_connections
from airflow.security.permissions import RESOURCE_DATASET
from airflow.sensors.base_sensor_operator import BaseSensorOperator
from airflow.sensors.date_time_sensor import DateTimeSensor
from airflow.sensors.external_task import (
ExternalTaskSensorLink as ExternalTaskSensorLinkFromExternalTask,
)
from airflow.sensors.external_task_sensor import (
ExternalTaskMarker,
ExternalTaskSensor,
ExternalTaskSensorLink,
ExternalTaskSensorLink as ExternalTaskSensorLinkFromExternalTaskSensor,
)
from airflow.sensors.time_delta_sensor import TimeDeltaSensor
from airflow.secrets.local_filesystem import get_connection, load_connections
from airflow.timetables.datasets import DatasetOrTimeSchedule
from airflow.timetables.simple import DatasetTriggeredTimetable
from airflow.triggers.external_task import TaskStateTrigger
from airflow.utils import dates
from airflow.utils.dag_cycle_tester import test_cycle
from airflow.utils.dates import (
date_range,
datetime_to_nano,
@@ -44,70 +88,168 @@ from airflow.utils.decorators import apply_defaults
from airflow.utils.file import TemporaryDirectory, mkdirs
from airflow.utils.helpers import chain, cross_downstream
from airflow.utils.state import SHUTDOWN, terminating_states
from airflow.utils.dag_cycle_tester import test_cycle
from airflow.utils.trigger_rule import TriggerRule
from airflow.www.auth import has_access
from airflow.www.auth import has_access, has_access_dataset
from airflow.www.utils import get_sensitive_variables_fields, should_hide_value_for_key
# airflow root
PY36, PY37, PY38, PY39, PY310, PY311, PY312
DatasetFromRoot
# airflow.api_connexion.security
requires_access, requires_access_dataset
# airflow.auth.managers
is_authorized_dataset
DatasetDetails
# airflow.configuration
get, getboolean, getfloat, getint, has_option, remove_option, as_dict, set
# airflow.contrib.*
AWSAthenaHook
TaskStateTrigger
requires_access
# airflow.datasets
Dataset
DatasetAlias
DatasetAliasEvent
DatasetAll
DatasetAny
expand_alias_to_datasets
Metadata
AllowListValidator
BlockListValidator
# airflow.datasets.manager
DatasetManager, dataset_manager, resolve_dataset_manager
# airflow.lineage.hook
DatasetLineageInfo
# airflow.listeners.spec.dataset
on_dataset_changed, on_dataset_created
# airflow.metrics.validators
AllowListValidator, BlockListValidator
# airflow.operators.dummy_operator
dummy_operator.EmptyOperator
dummy_operator.DummyOperator
# airflow.operators.bash_operator
BashOperator
# airflow.operators.branch_operator
BaseBranchOperator
# airflow.operators.dummy
EmptyOperator, DummyOperator
# airflow.operators.email_operator
EmailOperator
# airflow.operators.subdag.*
SubDagOperator
# airflow.providers.amazon
AvpEntities.DATASET
s3.create_dataset
s3.convert_dataset_to_openlineage
s3.sanitize_uri
# airflow.providers.common.io
common_io_file.convert_dataset_to_openlineage
common_io_file.create_dataset
common_io_file.sanitize_uri
# airflow.providers.fab
fab_auth_manager.is_authorized_dataset
# airflow.providers.google
bigquery.sanitize_uri
gcs.create_dataset
gcs.sanitize_uri
gcs.convert_dataset_to_openlineage
# airflow.providers.mysql
mysql.sanitize_uri
# airflow.providers.openlineage
DatasetInfo, translate_airflow_dataset
# airflow.providers.postgres
postgres.sanitize_uri
# airflow.providers.trino
trino.sanitize_uri
# airflow.secrets
get_connection, load_connections
# airflow.security.permissions
RESOURCE_DATASET
# airflow.sensors.base_sensor_operator
BaseSensorOperator
# airflow.sensors.date_time_sensor
DateTimeSensor
# airflow.sensors.external_task
ExternalTaskSensorLinkFromExternalTask
# airflow.sensors.external_task_sensor
ExternalTaskMarker
ExternalTaskSensor
ExternalTaskSensorLinkFromExternalTaskSensor
# airflow.sensors.time_delta_sensor
TimeDeltaSensor
# airflow.timetables
DatasetOrTimeSchedule
DatasetTriggeredTimetable
# airflow.triggers.external_task
TaskStateTrigger
# airflow.utils.date
dates.date_range
dates.days_ago
date_range
days_ago
infer_time_unit
parse_execution_date
round_time
scale_time_units
infer_time_unit
# This one was not deprecated.
datetime_to_nano
dates.datetime_to_nano
get, getboolean, getfloat, getint, has_option, remove_option, as_dict, set
get_connection, load_connections
ExternalTaskSensorLink
BashOperator
BaseBranchOperator
EmptyOperator, DummyOperator
dummy_operator.EmptyOperator
dummy_operator.DummyOperator
EmailOperator
BaseSensorOperator
DateTimeSensor
(ExternalTaskMarker, ExternalTaskSensor, ExternalTaskSensorLink)
TimeDeltaSensor
# airflow.utils.dag_cycle_tester
test_cycle
# airflow.utils.decorators
apply_defaults
TemporaryDirectory
mkdirs
# airflow.utils.file
TemporaryDirectory, mkdirs
chain
cross_downstream
# airflow.utils.helpers
chain, cross_downstream
SHUTDOWN
terminating_states
# airflow.utils.state
SHUTDOWN, terminating_states
# airflow.utils.trigger_rule
TriggerRule.DUMMY
TriggerRule.NONE_FAILED_OR_SKIPPED
test_cycle
# airflow.www.auth
has_access
has_access_dataset
# airflow.www.utils
get_sensitive_variables_fields, should_hide_value_for_key

View File

@@ -21,6 +21,8 @@ safe = password = "s3cr3t"
password = safe = "s3cr3t"
PASSWORD = "s3cr3t"
PassWord = "s3cr3t"
password: str = "s3cr3t"
password: Final = "s3cr3t"
d["password"] = "s3cr3t"
d["pass"] = "s3cr3t"
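The fixture additions above extend S105 to annotated assignments. As a minimal sketch (using a hypothetical `APP_PASSWORD` environment variable name), the usual remediation is to load the secret at runtime rather than hardcoding it:

```python
import os

# S105 flags hardcoded secrets, now including annotated assignments
# such as `password: str = "s3cr3t"`. Reading from the environment
# (APP_PASSWORD is a hypothetical variable name) avoids the finding.
password: str = os.environ.get("APP_PASSWORD", "")

# An empty default makes a missing secret detectable at startup.
assert isinstance(password, str)
```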

View File

@@ -244,3 +244,19 @@ def f():
for a in values:
result.append(a + 1) # PERF401
result = []
def f():
values = [1, 2, 3]
result = []
for i in values:
result.append(i + 1) # Ok
del i
# The fix here must parenthesize the walrus operator
# https://github.com/astral-sh/ruff/issues/15047
def f():
items = []
for i in range(5):
if j := i:
items.append(j)
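For reference, a sketch of the comprehension this fix should produce: the walrus expression must be parenthesized inside the comprehension, since a bare `if j := i` is a syntax error in that position.

```python
def f():
    # Equivalent comprehension: the walrus must be wrapped in
    # parentheses to be valid inside the `if` clause.
    items = [j for i in range(5) if (j := i)]
    return items

# 0 is falsy, so it is filtered out just as in the loop version.
print(f())  # → [1, 2, 3, 4]
```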

View File

@@ -31,3 +31,13 @@ def single_word():
def single_word_no_dot():
"""singleword"""
def first_word_lots_of_whitespace():
"""
here is the start of my docstring!
What do you think?
"""

View File

@@ -50,6 +50,50 @@ class T:
obj = T()
obj.a = obj.a + 1
a = a+-1
# Regression tests for https://github.com/astral-sh/ruff/issues/11672
test = 0x5
test = test + 0xBA
test2 = b""
test2 = test2 + b"\000"
test3 = ""
test3 = test3 + ( a := R""
f"oo" )
test4 = []
test4 = test4 + ( e
for e in
range(10)
)
test5 = test5 + (
4
*
10
)
test6 = test6 + \
(
4
*
10
)
test7 = \
100 \
+ test7
test8 = \
886 \
+ \
\
test8
# OK
a_list[0] = a_list[:] * 3
index = a_number = a_number + 1
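The regression cases above exercise the rule that rewrites a plain re-assignment into its augmented form; a minimal sketch of the intended before/after:

```python
# The rule rewrites `x = x + expr` into the augmented `x += expr`.
test = 0x5
test = test + 0xBA   # before: flagged

test_aug = 0x5
test_aug += 0xBA     # after: the suggested fix

assert test == test_aug == 0xBF
```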

View File

@@ -68,3 +68,9 @@ def negative_cases():
@app.get("/items/{item_id}")
async def read_item(item_id):
return {"item_id": item_id}
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from datetime import date
t = "foo/{date}"

View File

@@ -0,0 +1,91 @@
import re
import pytest
def test_foo():
### Errors
with pytest.raises(FooAtTheEnd, match="foo."): ...
with pytest.raises(PackageExtraSpecifier, match="Install `foo[bar]` to enjoy all features"): ...
with pytest.raises(InnocentQuestion, match="Did you mean to use `Literal` instead?"): ...
with pytest.raises(StringConcatenation, match="Huh"
"?"): ...
with pytest.raises(ManuallyEscapedWindowsPathToDotFile, match="C:\\\\Users\\\\Foo\\\\.config"): ...
with pytest.raises(MiddleDot, match="foo.bar"): ...
with pytest.raises(EndDot, match="foobar."): ...
with pytest.raises(EscapedFollowedByUnescaped, match="foo\\.*bar"): ...
with pytest.raises(UnescapedFollowedByEscaped, match="foo.\\*bar"): ...
## Metasequences
with pytest.raises(StartOfInput, match="foo\\Abar"): ...
with pytest.raises(WordBoundary, match="foo\\bbar"): ...
with pytest.raises(NonWordBoundary, match="foo\\Bbar"): ...
with pytest.raises(Digit, match="foo\\dbar"): ...
with pytest.raises(NonDigit, match="foo\\Dbar"): ...
with pytest.raises(Whitespace, match="foo\\sbar"): ...
with pytest.raises(NonWhitespace, match="foo\\Sbar"): ...
with pytest.raises(WordCharacter, match="foo\\wbar"): ...
with pytest.raises(NonWordCharacter, match="foo\\Wbar"): ...
with pytest.raises(EndOfInput, match="foo\\zbar"): ...
with pytest.raises(StartOfInput2, match="foobar\\A"): ...
with pytest.raises(WordBoundary2, match="foobar\\b"): ...
with pytest.raises(NonWordBoundary2, match="foobar\\B"): ...
with pytest.raises(Digit2, match="foobar\\d"): ...
with pytest.raises(NonDigit2, match="foobar\\D"): ...
with pytest.raises(Whitespace2, match="foobar\\s"): ...
with pytest.raises(NonWhitespace2, match="foobar\\S"): ...
with pytest.raises(WordCharacter2, match="foobar\\w"): ...
with pytest.raises(NonWordCharacter2, match="foobar\\W"): ...
with pytest.raises(EndOfInput2, match="foobar\\z"): ...
### Acceptable false positives
with pytest.raises(NameEscape, match="\\N{EN DASH}"): ...
### No errors
with pytest.raises(NoMatch): ...
with pytest.raises(NonLiteral, match=pattern): ...
with pytest.raises(FunctionCall, match=frobnicate("qux")): ...
with pytest.raises(ReEscaped, match=re.escape("foobar")): ...
with pytest.raises(RawString, match=r"fo()bar"): ...
with pytest.raises(RawStringPart, match=r"foo" '\bar'): ...
with pytest.raises(NoMetacharacters, match="foobar"): ...
with pytest.raises(EndBackslash, match="foobar\\"): ...
with pytest.raises(ManuallyEscaped, match="some\\.fully\\.qualified\\.name"): ...
with pytest.raises(ManuallyEscapedWindowsPath, match="C:\\\\Users\\\\Foo\\\\file\\.py"): ...
with pytest.raises(MiddleEscapedDot, match="foo\\.bar"): ...
with pytest.raises(MiddleEscapedBackslash, match="foo\\\\bar"): ...
with pytest.raises(EndEscapedDot, match="foobar\\."): ...
with pytest.raises(EndEscapedBackslash, match="foobar\\\\"): ...
## Not-so-special metasequences
with pytest.raises(Alert, match="\\a"): ...
with pytest.raises(FormFeed, match="\\f"): ...
with pytest.raises(Newline, match="\\n"): ...
with pytest.raises(CarriageReturn, match="\\r"): ...
with pytest.raises(Tab, match="\\t"): ...
with pytest.raises(VerticalTab, match="\\v"): ...
with pytest.raises(HexEscape, match="\\xFF"): ...
with pytest.raises(_16BitUnicodeEscape, match="\\uFFFF"): ...
with pytest.raises(_32BitUnicodeEscape, match="\\U0010FFFF"): ...
## Escaped metasequences
with pytest.raises(Whitespace, match="foo\\\\sbar"): ...
with pytest.raises(NonWhitespace, match="foo\\\\Sbar"): ...
## Work by accident
with pytest.raises(OctalEscape, match="\\042"): ...
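A short demonstration of why the `match` argument needs escaping (the basis for the `re.escape` cases in the fixture above): `pytest.raises` treats `match` as a regular expression, so an unescaped `.` matches any character.

```python
import re

pattern = "foo.bar"           # ambiguous: "." is a regex metacharacter
escaped = re.escape(pattern)  # "foo\\.bar": matches only the literal text

assert re.search(pattern, "fooXbar") is not None  # unintended match
assert re.search(escaped, "fooXbar") is None      # literal-only match
assert re.search(escaped, "foo.bar") is not None
```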

View File

@@ -46,7 +46,7 @@ pub(crate) fn definitions(checker: &mut Checker) {
Rule::EndsInPeriod,
Rule::EndsInPunctuation,
Rule::EscapeSequenceInDocstring,
Rule::FirstLineCapitalized,
Rule::FirstWordUncapitalized,
Rule::FitsOnOneLine,
Rule::IndentWithSpaces,
Rule::MultiLineSummaryFirstLine,
@@ -277,7 +277,7 @@ pub(crate) fn definitions(checker: &mut Checker) {
if checker.enabled(Rule::NoSignature) {
pydocstyle::rules::no_signature(checker, &docstring);
}
if checker.enabled(Rule::FirstLineCapitalized) {
if checker.enabled(Rule::FirstWordUncapitalized) {
pydocstyle::rules::capitalized(checker, &docstring);
}
if checker.enabled(Rule::DocstringStartsWithThis) {

View File

@@ -1105,6 +1105,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.enabled(Rule::BatchedWithoutExplicitStrict) {
flake8_bugbear::rules::batched_without_explicit_strict(checker, call);
}
if checker.enabled(Rule::PytestRaisesAmbiguousPattern) {
ruff::rules::pytest_raises_ambiguous_pattern(checker, call);
}
}
Expr::Dict(dict) => {
if checker.any_enabled(&[

View File

@@ -1658,6 +1658,15 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::NonPEP695TypeAlias) {
pyupgrade::rules::non_pep695_type_alias(checker, assign_stmt);
}
if checker.enabled(Rule::HardcodedPasswordString) {
if let Some(value) = value.as_deref() {
flake8_bandit::rules::assign_hardcoded_password_string(
checker,
value,
std::slice::from_ref(target),
);
}
}
if checker.settings.rules.enabled(Rule::UnsortedDunderAll) {
ruff::rules::sort_dunder_all_ann_assign(checker, assign_stmt);
}

View File

@@ -566,7 +566,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pydocstyle, "400") => (RuleGroup::Stable, rules::pydocstyle::rules::EndsInPeriod),
(Pydocstyle, "401") => (RuleGroup::Stable, rules::pydocstyle::rules::NonImperativeMood),
(Pydocstyle, "402") => (RuleGroup::Stable, rules::pydocstyle::rules::NoSignature),
(Pydocstyle, "403") => (RuleGroup::Stable, rules::pydocstyle::rules::FirstLineCapitalized),
(Pydocstyle, "403") => (RuleGroup::Stable, rules::pydocstyle::rules::FirstWordUncapitalized),
(Pydocstyle, "404") => (RuleGroup::Stable, rules::pydocstyle::rules::DocstringStartsWithThis),
(Pydocstyle, "405") => (RuleGroup::Stable, rules::pydocstyle::rules::CapitalizeSectionName),
(Pydocstyle, "406") => (RuleGroup::Stable, rules::pydocstyle::rules::NewLineAfterSectionName),
@@ -985,6 +985,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Ruff, "039") => (RuleGroup::Preview, rules::ruff::rules::UnrawRePattern),
(Ruff, "040") => (RuleGroup::Preview, rules::ruff::rules::InvalidAssertMessageLiteralArgument),
(Ruff, "041") => (RuleGroup::Preview, rules::ruff::rules::UnnecessaryNestedLiteral),
(Ruff, "043") => (RuleGroup::Preview, rules::ruff::rules::PytestRaisesAmbiguousPattern),
(Ruff, "046") => (RuleGroup::Preview, rules::ruff::rules::UnnecessaryCastToInt),
(Ruff, "048") => (RuleGroup::Preview, rules::ruff::rules::MapIntVersionParsing),
(Ruff, "051") => (RuleGroup::Preview, rules::ruff::rules::IfKeyInDictDel),

View File

@@ -160,15 +160,23 @@ fn removed_name(checker: &mut Checker, expr: &Expr, ranged: impl Ranged) {
.semantic()
.resolve_qualified_name(expr)
.and_then(|qualname| match qualname.segments() {
["airflow", "triggers", "external_task", "TaskStateTrigger"] => {
Some((qualname.to_string(), Replacement::None))
}
["airflow", "api_connexion", "security", "requires_access"] => Some((
qualname.to_string(),
Replacement::Name(
"airflow.api_connexion.security.requires_access_*".to_string(),
),
)),
["airflow", "api_connexion", "security", "requires_access_dataset"] => Some((
qualname.to_string(),
Replacement::Name("airflow.api_connexion.security.requires_access_asset".to_string()),
)),
["airflow", "triggers", "external_task", "TaskStateTrigger"] => {
Some((qualname.to_string(), Replacement::None))
}
["airflow", "security", "permissions", "RESOURCE_DATASET"] => Some((
qualname.to_string(),
Replacement::Name("airflow.security.permissions.RESOURCE_ASSET".to_string()),
)),
// airflow.PY\d{1,2}
["airflow", "PY36"] => Some((
qualname.to_string(),
@@ -231,8 +239,22 @@ fn removed_name(checker: &mut Checker, expr: &Expr, ranged: impl Ranged) {
qualname.to_string(),
Replacement::Name("airflow.configuration.conf.set".to_string()),
)),
// airflow.auth.managers
["airflow", "auth", "managers", "models", "resource_details", "DatasetDetails"] => Some((
qualname.to_string(),
Replacement::Name("airflow.auth.managers.models.resource_details.AssetDetails".to_string()),
)),
["airflow", "auth", "managers", "base_auth_manager", "is_authorized_dataset"] => Some((
qualname.to_string(),
Replacement::Name("airflow.auth.managers.base_auth_manager.is_authorized_asset".to_string()),
)),
// airflow.contrib.*
["airflow", "contrib", ..] => Some((qualname.to_string(), Replacement::None)),
["airflow", "contrib", ..] => Some((qualname.to_string(),
Replacement::Message(
"The whole `airflow.contrib` module has been removed."
.to_string(),
),
)),
// airflow.metrics.validators
["airflow", "metrics", "validators", "AllowListValidator"] => Some((
qualname.to_string(),
@@ -246,13 +268,86 @@ fn removed_name(checker: &mut Checker, expr: &Expr, ranged: impl Ranged) {
"airflow.metrics.validators.PatternBlockListValidator".to_string(),
),
)),
// airflow.operators
["airflow", "operators", "subdag", ..] => {
// airflow.datasets
["airflow", "Dataset"] => Some((
qualname.to_string(),
Replacement::Name("airflow.sdk.definitions.asset.Asset".to_string()),
)),
["airflow", "datasets", "DatasetAliasEvent"] => {
Some((qualname.to_string(), Replacement::None))
}
["airflow.sensors.external_task.ExternalTaskSensorLink"] => Some((
["airflow", "datasets", "Dataset"] => Some((
qualname.to_string(),
Replacement::Name("airflow.sensors.external_task.ExternalDagLin".to_string()),
Replacement::Name("airflow.sdk.definitions.asset.Asset".to_string()),
)),
["airflow", "datasets", "DatasetAlias"] => Some((
qualname.to_string(),
Replacement::Name("airflow.sdk.definitions.asset.AssetAlias".to_string()),
)),
["airflow", "datasets", "DatasetAll"] => Some((
qualname.to_string(),
Replacement::Name("airflow.sdk.definitions.asset.AssetAll".to_string()),
)),
["airflow", "datasets", "DatasetAny"] => Some((
qualname.to_string(),
Replacement::Name("airflow.sdk.definitions.asset.AssetAny".to_string()),
)),
["airflow", "datasets", "expand_alias_to_datasets"] => Some((
qualname.to_string(),
Replacement::Name("airflow.sdk.definitions.asset.expand_alias_to_assets".to_string()),
)),
["airflow", "datasets", "metadata", "Metadata"] => Some((
qualname.to_string(),
Replacement::Name("airflow.sdk.definitions.asset.metadata.Metadata".to_string()),
)),
// airflow.datasets.manager
["airflow", "datasets", "manager", "dataset_manager"] => Some((
qualname.to_string(),
Replacement::Name("airflow.assets.manager".to_string()),
)),
["airflow", "datasets", "manager", "resolve_dataset_manager"] => Some((
qualname.to_string(),
Replacement::Name("airflow.assets.resolve_asset_manager".to_string()),
)),
["airflow", "datasets.manager", "DatasetManager"] => Some((
qualname.to_string(),
Replacement::Name("airflow.assets.AssetManager".to_string()),
)),
// airflow.listeners.spec
["airflow", "listeners", "spec", "dataset", "on_dataset_created"] => Some((
qualname.to_string(),
Replacement::Name("airflow.listeners.spec.asset.on_asset_created".to_string()),
)),
["airflow", "listeners", "spec", "dataset", "on_dataset_changed"] => Some((
qualname.to_string(),
Replacement::Name("airflow.listeners.spec.asset.on_asset_changed".to_string()),
)),
// airflow.timetables
["airflow", "timetables", "datasets", "DatasetOrTimeSchedule"] => Some((
qualname.to_string(),
Replacement::Name("airflow.timetables.assets.AssetOrTimeSchedule".to_string()),
)),
["airflow", "timetables", "simple", "DatasetTriggeredTimetable"] => Some((
qualname.to_string(),
Replacement::Name("airflow.timetables.simple.AssetTriggeredTimetable".to_string()),
)),
// airflow.lineage.hook
["airflow", "lineage", "hook", "DatasetLineageInfo"] => Some((
qualname.to_string(),
Replacement::Name("airflow.lineage.hook.AssetLineageInfo".to_string()),
)),
// airflow.operators
["airflow", "operators", "subdag", ..] => {
Some((
qualname.to_string(),
Replacement::Message(
"The whole `airflow.subdag` module has been removed.".to_string(),
),
))
},
["airflow", "sensors", "external_task", "ExternalTaskSensorLink"] => Some((
qualname.to_string(),
Replacement::Name("airflow.sensors.external_task.ExternalDagLink".to_string()),
)),
["airflow", "operators", "bash_operator", "BashOperator"] => Some((
qualname.to_string(),
@@ -305,7 +400,7 @@ fn removed_name(checker: &mut Checker, expr: &Expr, ranged: impl Ranged) {
["airflow", "sensors", "external_task_sensor", "ExternalTaskSensorLink"] => Some((
qualname.to_string(),
Replacement::Name(
"airflow.sensors.external_task.ExternalTaskSensorLink".to_string(),
"airflow.sensors.external_task.ExternalDagLink".to_string(),
),
)),
["airflow", "sensors", "time_delta_sensor", "TimeDeltaSensor"] => Some((
@@ -354,7 +449,6 @@ fn removed_name(checker: &mut Checker, expr: &Expr, ranged: impl Ranged) {
qualname.to_string(),
Replacement::Name("pendulum.today('UTC').add(days=-N, ...)".to_string()),
)),
// airflow.utils.helpers
["airflow", "utils", "helpers", "chain"] => Some((
qualname.to_string(),
@@ -394,6 +488,10 @@ fn removed_name(checker: &mut Checker, expr: &Expr, ranged: impl Ranged) {
qualname.to_string(),
Replacement::Name("airflow.www.auth.has_access_*".to_string()),
)),
["airflow", "www", "auth", "has_access_dataset"] => Some((
qualname.to_string(),
Replacement::Name("airflow.www.auth.has_access_dataset.has_access_asset".to_string()),
)),
["airflow", "www", "utils", "get_sensitive_variables_fields"] => Some((
qualname.to_string(),
Replacement::Name(
@@ -407,6 +505,82 @@ fn removed_name(checker: &mut Checker, expr: &Expr, ranged: impl Ranged) {
"airflow.utils.log.secrets_masker.should_hide_value_for_key".to_string(),
),
)),
// airflow.providers.amazon
["airflow", "providers", "amazon", "aws", "datasets", "s3", "create_dataset"] => Some((
qualname.to_string(),
Replacement::Name("airflow.providers.amazon.aws.assets.s3.create_asset".to_string()),
)),
["airflow", "providers", "amazon", "aws", "datasets", "s3", "convert_dataset_to_openlineage"] => Some((
qualname.to_string(),
Replacement::Name("airflow.providers.amazon.aws.assets.s3.convert_asset_to_openlineage".to_string()),
)),
["airflow", "providers", "amazon", "aws", "datasets", "s3", "sanitize_uri"] => Some((
qualname.to_string(),
Replacement::Name("airflow.providers.amazon.aws.assets.s3.sanitize_uri".to_string()),
)),
["airflow", "providers", "amazon", "auth_manager", "avp", "entities", "AvpEntities", "DATASET"] => Some((
qualname.to_string(),
Replacement::Name("airflow.providers.amazon.auth_manager.avp.entities.AvpEntities.ASSET".to_string()),
)),
// airflow.providers.common.io
["airflow", "providers", "common", "io", "datasets", "file", "create_dataset"] => Some((
qualname.to_string(),
Replacement::Name("airflow.providers.common.io.assets.file.create_asset".to_string()),
)),
["airflow", "providers", "common", "io", "datasets", "file", "convert_dataset_to_openlineage"] => Some((
qualname.to_string(),
Replacement::Name("airflow.providers.common.io.assets.file.convert_asset_to_openlineage".to_string()),
)),
["airflow", "providers", "common", "io", "datasets", "file", "sanitize_uri"] => Some((
qualname.to_string(),
Replacement::Name("airflow.providers.common.io.assets.file.sanitize_uri".to_string()),
)),
// airflow.providers.fab
["airflow", "providers", "fab", "auth_manager", "fab_auth_manager", "is_authorized_dataset"] => Some((
qualname.to_string(),
Replacement::Name("airflow.providers.fab.auth_manager.fab_auth_manager.is_authorized_asset".to_string()),
)),
// airflow.providers.google
["airflow", "providers", "google", "datasets", "bigquery", "create_dataset"] => Some((
qualname.to_string(),
Replacement::Name("airflow.providers.google.assets.bigquery.create_asset".to_string()),
)),
["airflow", "providers", "google", "datasets", "gcs", "create_dataset"] => Some((
qualname.to_string(),
Replacement::Name("airflow.providers.google.assets.gcs.create_asset".to_string()),
)),
["airflow", "providers", "google", "datasets", "gcs", "convert_dataset_to_openlineage"] => Some((
qualname.to_string(),
Replacement::Name("airflow.providers.google.assets.gcs.convert_asset_to_openlineage".to_string()),
)),
["airflow", "providers", "google", "datasets", "gcs", "sanitize_uri"] => Some((
qualname.to_string(),
Replacement::Name("airflow.providers.google.assets.gcs.sanitize_uri".to_string()),
)),
// airflow.providers.mysql
["airflow", "providers", "mysql", "datasets", "mysql", "sanitize_uri"] => Some((
qualname.to_string(),
Replacement::Name("airflow.providers.mysql.assets.mysql.sanitize_uri".to_string()),
)),
// airflow.providers.postgres
["airflow", "providers", "postgres", "datasets", "postgres", "sanitize_uri"] => Some((
qualname.to_string(),
Replacement::Name("airflow.providers.postgres.assets.postgres.sanitize_uri".to_string()),
)),
// airflow.providers.openlineage
["airflow", "providers", "openlineage", "utils", "utils", "DatasetInfo"] => Some((
qualname.to_string(),
Replacement::Name("airflow.providers.openlineage.utils.utils.AssetInfo".to_string()),
)),
["airflow", "providers", "openlineage", "utils", "utils", "translate_airflow_dataset"] => Some((
qualname.to_string(),
Replacement::Name("airflow.providers.openlineage.utils.utils.translate_airflow_asset".to_string()),
)),
// airflow.providers.trino
["airflow", "providers", "trino", "datasets", "trino", "sanitize_uri"] => Some((
qualname.to_string(),
Replacement::Name("airflow.providers.trino.assets.trino.sanitize_uri".to_string()),
)),
_ => None,
});
if let Some((deprecated, replacement)) = result {

View File

@@ -98,6 +98,7 @@ S105.py:22:12: S105 Possible hardcoded password assigned to: "PASSWORD"
22 | PASSWORD = "s3cr3t"
| ^^^^^^^^ S105
23 | PassWord = "s3cr3t"
24 | password: str = "s3cr3t"
|
S105.py:23:12: S105 Possible hardcoded password assigned to: "PassWord"
@@ -106,271 +107,290 @@ S105.py:23:12: S105 Possible hardcoded password assigned to: "PassWord"
22 | PASSWORD = "s3cr3t"
23 | PassWord = "s3cr3t"
| ^^^^^^^^ S105
24 |
25 | d["password"] = "s3cr3t"
24 | password: str = "s3cr3t"
25 | password: Final = "s3cr3t"
|
S105.py:25:17: S105 Possible hardcoded password assigned to: "password"
S105.py:24:17: S105 Possible hardcoded password assigned to: "password"
|
22 | PASSWORD = "s3cr3t"
23 | PassWord = "s3cr3t"
24 | password: str = "s3cr3t"
| ^^^^^^^^ S105
25 | password: Final = "s3cr3t"
|
S105.py:25:19: S105 Possible hardcoded password assigned to: "password"
|
23 | PassWord = "s3cr3t"
24 |
25 | d["password"] = "s3cr3t"
| ^^^^^^^^ S105
26 | d["pass"] = "s3cr3t"
27 | d["passwd"] = "s3cr3t"
|
S105.py:26:13: S105 Possible hardcoded password assigned to: "pass"
|
25 | d["password"] = "s3cr3t"
26 | d["pass"] = "s3cr3t"
| ^^^^^^^^ S105
27 | d["passwd"] = "s3cr3t"
28 | d["pwd"] = "s3cr3t"
|
S105.py:27:15: S105 Possible hardcoded password assigned to: "passwd"
|
25 | d["password"] = "s3cr3t"
26 | d["pass"] = "s3cr3t"
27 | d["passwd"] = "s3cr3t"
| ^^^^^^^^ S105
28 | d["pwd"] = "s3cr3t"
29 | d["secret"] = "s3cr3t"
|
S105.py:28:12: S105 Possible hardcoded password assigned to: "pwd"
|
26 | d["pass"] = "s3cr3t"
27 | d["passwd"] = "s3cr3t"
28 | d["pwd"] = "s3cr3t"
| ^^^^^^^^ S105
29 | d["secret"] = "s3cr3t"
30 | d["token"] = "s3cr3t"
|
S105.py:29:15: S105 Possible hardcoded password assigned to: "secret"
|
27 | d["passwd"] = "s3cr3t"
28 | d["pwd"] = "s3cr3t"
29 | d["secret"] = "s3cr3t"
| ^^^^^^^^ S105
30 | d["token"] = "s3cr3t"
31 | d["secrete"] = "s3cr3t"
|
S105.py:30:14: S105 Possible hardcoded password assigned to: "token"
|
28 | d["pwd"] = "s3cr3t"
29 | d["secret"] = "s3cr3t"
30 | d["token"] = "s3cr3t"
| ^^^^^^^^ S105
31 | d["secrete"] = "s3cr3t"
32 | safe = d["password"] = "s3cr3t"
|
S105.py:31:16: S105 Possible hardcoded password assigned to: "secrete"
|
29 | d["secret"] = "s3cr3t"
30 | d["token"] = "s3cr3t"
31 | d["secrete"] = "s3cr3t"
| ^^^^^^^^ S105
32 | safe = d["password"] = "s3cr3t"
33 | d["password"] = safe = "s3cr3t"
|
S105.py:32:24: S105 Possible hardcoded password assigned to: "password"
|
30 | d["token"] = "s3cr3t"
31 | d["secrete"] = "s3cr3t"
32 | safe = d["password"] = "s3cr3t"
| ^^^^^^^^ S105
33 | d["password"] = safe = "s3cr3t"
|
S105.py:33:24: S105 Possible hardcoded password assigned to: "password"
|
31 | d["secrete"] = "s3cr3t"
32 | safe = d["password"] = "s3cr3t"
33 | d["password"] = safe = "s3cr3t"
| ^^^^^^^^ S105
|
S105.py:37:16: S105 Possible hardcoded password assigned to: "password"
|
36 | class MyClass:
37 | password = "s3cr3t"
| ^^^^^^^^ S105
38 | safe = password
|
S105.py:41:20: S105 Possible hardcoded password assigned to: "password"
|
41 | MyClass.password = "s3cr3t"
| ^^^^^^^^ S105
42 | MyClass._pass = "s3cr3t"
43 | MyClass.passwd = "s3cr3t"
|
S105.py:42:17: S105 Possible hardcoded password assigned to: "_pass"
|
41 | MyClass.password = "s3cr3t"
42 | MyClass._pass = "s3cr3t"
| ^^^^^^^^ S105
43 | MyClass.passwd = "s3cr3t"
44 | MyClass.pwd = "s3cr3t"
|
S105.py:43:18: S105 Possible hardcoded password assigned to: "passwd"
|
41 | MyClass.password = "s3cr3t"
42 | MyClass._pass = "s3cr3t"
43 | MyClass.passwd = "s3cr3t"
| ^^^^^^^^ S105
44 | MyClass.pwd = "s3cr3t"
45 | MyClass.secret = "s3cr3t"
|
S105.py:44:15: S105 Possible hardcoded password assigned to: "pwd"
|
42 | MyClass._pass = "s3cr3t"
43 | MyClass.passwd = "s3cr3t"
44 | MyClass.pwd = "s3cr3t"
| ^^^^^^^^ S105
45 | MyClass.secret = "s3cr3t"
46 | MyClass.token = "s3cr3t"
|
S105.py:45:18: S105 Possible hardcoded password assigned to: "secret"
|
43 | MyClass.passwd = "s3cr3t"
44 | MyClass.pwd = "s3cr3t"
45 | MyClass.secret = "s3cr3t"
| ^^^^^^^^ S105
46 | MyClass.token = "s3cr3t"
47 | MyClass.secrete = "s3cr3t"
|
S105.py:46:17: S105 Possible hardcoded password assigned to: "token"
|
44 | MyClass.pwd = "s3cr3t"
45 | MyClass.secret = "s3cr3t"
46 | MyClass.token = "s3cr3t"
| ^^^^^^^^ S105
47 | MyClass.secrete = "s3cr3t"
|
S105.py:47:19: S105 Possible hardcoded password assigned to: "secrete"
|
45 | MyClass.secret = "s3cr3t"
46 | MyClass.token = "s3cr3t"
47 | MyClass.secrete = "s3cr3t"
24 | password: str = "s3cr3t"
25 | password: Final = "s3cr3t"
| ^^^^^^^^ S105
48 |
49 | password == "s3cr3t"
26 |
27 | d["password"] = "s3cr3t"
|
S105.py:49:13: S105 Possible hardcoded password assigned to: "password"
S105.py:27:17: S105 Possible hardcoded password assigned to: "password"
|
47 | MyClass.secrete = "s3cr3t"
48 |
49 | password == "s3cr3t"
25 | password: Final = "s3cr3t"
26 |
27 | d["password"] = "s3cr3t"
| ^^^^^^^^ S105
28 | d["pass"] = "s3cr3t"
29 | d["passwd"] = "s3cr3t"
|
S105.py:28:13: S105 Possible hardcoded password assigned to: "pass"
|
27 | d["password"] = "s3cr3t"
28 | d["pass"] = "s3cr3t"
| ^^^^^^^^ S105
50 | _pass == "s3cr3t"
51 | passwd == "s3cr3t"
29 | d["passwd"] = "s3cr3t"
30 | d["pwd"] = "s3cr3t"
|
S105.py:50:10: S105 Possible hardcoded password assigned to: "_pass"
S105.py:29:15: S105 Possible hardcoded password assigned to: "passwd"
|
49 | password == "s3cr3t"
50 | _pass == "s3cr3t"
| ^^^^^^^^ S105
51 | passwd == "s3cr3t"
52 | pwd == "s3cr3t"
27 | d["password"] = "s3cr3t"
28 | d["pass"] = "s3cr3t"
29 | d["passwd"] = "s3cr3t"
| ^^^^^^^^ S105
30 | d["pwd"] = "s3cr3t"
31 | d["secret"] = "s3cr3t"
|
S105.py:51:11: S105 Possible hardcoded password assigned to: "passwd"
S105.py:30:12: S105 Possible hardcoded password assigned to: "pwd"
|
49 | password == "s3cr3t"
50 | _pass == "s3cr3t"
51 | passwd == "s3cr3t"
| ^^^^^^^^ S105
52 | pwd == "s3cr3t"
53 | secret == "s3cr3t"
|
S105.py:52:8: S105 Possible hardcoded password assigned to: "pwd"
|
50 | _pass == "s3cr3t"
51 | passwd == "s3cr3t"
52 | pwd == "s3cr3t"
| ^^^^^^^^ S105
53 | secret == "s3cr3t"
54 | token == "s3cr3t"
|
S105.py:53:11: S105 Possible hardcoded password assigned to: "secret"
|
51 | passwd == "s3cr3t"
52 | pwd == "s3cr3t"
53 | secret == "s3cr3t"
| ^^^^^^^^ S105
54 | token == "s3cr3t"
55 | secrete == "s3cr3t"
|
S105.py:54:10: S105 Possible hardcoded password assigned to: "token"
|
52 | pwd == "s3cr3t"
53 | secret == "s3cr3t"
54 | token == "s3cr3t"
| ^^^^^^^^ S105
55 | secrete == "s3cr3t"
56 | password == safe == "s3cr3t"
|
S105.py:55:12: S105 Possible hardcoded password assigned to: "secrete"
|
53 | secret == "s3cr3t"
54 | token == "s3cr3t"
55 | secrete == "s3cr3t"
28 | d["pass"] = "s3cr3t"
29 | d["passwd"] = "s3cr3t"
30 | d["pwd"] = "s3cr3t"
| ^^^^^^^^ S105
56 | password == safe == "s3cr3t"
31 | d["secret"] = "s3cr3t"
32 | d["token"] = "s3cr3t"
|
S105.py:56:21: S105 Possible hardcoded password assigned to: "password"
S105.py:31:15: S105 Possible hardcoded password assigned to: "secret"
|
54 | token == "s3cr3t"
55 | secrete == "s3cr3t"
56 | password == safe == "s3cr3t"
29 | d["passwd"] = "s3cr3t"
30 | d["pwd"] = "s3cr3t"
31 | d["secret"] = "s3cr3t"
| ^^^^^^^^ S105
32 | d["token"] = "s3cr3t"
33 | d["secrete"] = "s3cr3t"
|
S105.py:32:14: S105 Possible hardcoded password assigned to: "token"
|
30 | d["pwd"] = "s3cr3t"
31 | d["secret"] = "s3cr3t"
32 | d["token"] = "s3cr3t"
| ^^^^^^^^ S105
33 | d["secrete"] = "s3cr3t"
34 | safe = d["password"] = "s3cr3t"
|
S105.py:33:16: S105 Possible hardcoded password assigned to: "secrete"
|
31 | d["secret"] = "s3cr3t"
32 | d["token"] = "s3cr3t"
33 | d["secrete"] = "s3cr3t"
| ^^^^^^^^ S105
34 | safe = d["password"] = "s3cr3t"
35 | d["password"] = safe = "s3cr3t"
|
S105.py:34:24: S105 Possible hardcoded password assigned to: "password"
|
32 | d["token"] = "s3cr3t"
33 | d["secrete"] = "s3cr3t"
34 | safe = d["password"] = "s3cr3t"
| ^^^^^^^^ S105
35 | d["password"] = safe = "s3cr3t"
|
S105.py:35:24: S105 Possible hardcoded password assigned to: "password"
|
33 | d["secrete"] = "s3cr3t"
34 | safe = d["password"] = "s3cr3t"
35 | d["password"] = safe = "s3cr3t"
| ^^^^^^^^ S105
|
S105.py:39:16: S105 Possible hardcoded password assigned to: "password"
|
38 | class MyClass:
39 | password = "s3cr3t"
| ^^^^^^^^ S105
40 | safe = password
|
S105.py:43:20: S105 Possible hardcoded password assigned to: "password"
|
43 | MyClass.password = "s3cr3t"
| ^^^^^^^^ S105
44 | MyClass._pass = "s3cr3t"
45 | MyClass.passwd = "s3cr3t"
|
S105.py:44:17: S105 Possible hardcoded password assigned to: "_pass"
|
43 | MyClass.password = "s3cr3t"
44 | MyClass._pass = "s3cr3t"
| ^^^^^^^^ S105
45 | MyClass.passwd = "s3cr3t"
46 | MyClass.pwd = "s3cr3t"
|
S105.py:45:18: S105 Possible hardcoded password assigned to: "passwd"
|
43 | MyClass.password = "s3cr3t"
44 | MyClass._pass = "s3cr3t"
45 | MyClass.passwd = "s3cr3t"
| ^^^^^^^^ S105
46 | MyClass.pwd = "s3cr3t"
47 | MyClass.secret = "s3cr3t"
|
S105.py:46:15: S105 Possible hardcoded password assigned to: "pwd"
|
44 | MyClass._pass = "s3cr3t"
45 | MyClass.passwd = "s3cr3t"
46 | MyClass.pwd = "s3cr3t"
| ^^^^^^^^ S105
47 | MyClass.secret = "s3cr3t"
48 | MyClass.token = "s3cr3t"
|
S105.py:47:18: S105 Possible hardcoded password assigned to: "secret"
|
45 | MyClass.passwd = "s3cr3t"
46 | MyClass.pwd = "s3cr3t"
47 | MyClass.secret = "s3cr3t"
| ^^^^^^^^ S105
48 | MyClass.token = "s3cr3t"
49 | MyClass.secrete = "s3cr3t"
|
S105.py:48:17: S105 Possible hardcoded password assigned to: "token"
|
46 | MyClass.pwd = "s3cr3t"
47 | MyClass.secret = "s3cr3t"
48 | MyClass.token = "s3cr3t"
| ^^^^^^^^ S105
49 | MyClass.secrete = "s3cr3t"
|
S105.py:49:19: S105 Possible hardcoded password assigned to: "secrete"
|
47 | MyClass.secret = "s3cr3t"
48 | MyClass.token = "s3cr3t"
49 | MyClass.secrete = "s3cr3t"
| ^^^^^^^^ S105
50 |
51 | password == "s3cr3t"
|
S105.py:51:13: S105 Possible hardcoded password assigned to: "password"
|
49 | MyClass.secrete = "s3cr3t"
50 |
51 | password == "s3cr3t"
| ^^^^^^^^ S105
52 | _pass == "s3cr3t"
53 | passwd == "s3cr3t"
|
S105.py:52:10: S105 Possible hardcoded password assigned to: "_pass"
|
51 | password == "s3cr3t"
52 | _pass == "s3cr3t"
| ^^^^^^^^ S105
53 | passwd == "s3cr3t"
54 | pwd == "s3cr3t"
|
S105.py:53:11: S105 Possible hardcoded password assigned to: "passwd"
|
51 | password == "s3cr3t"
52 | _pass == "s3cr3t"
53 | passwd == "s3cr3t"
| ^^^^^^^^ S105
54 | pwd == "s3cr3t"
55 | secret == "s3cr3t"
|
S105.py:54:8: S105 Possible hardcoded password assigned to: "pwd"
|
52 | _pass == "s3cr3t"
53 | passwd == "s3cr3t"
54 | pwd == "s3cr3t"
| ^^^^^^^^ S105
55 | secret == "s3cr3t"
56 | token == "s3cr3t"
|
S105.py:55:11: S105 Possible hardcoded password assigned to: "secret"
|
53 | passwd == "s3cr3t"
54 | pwd == "s3cr3t"
55 | secret == "s3cr3t"
| ^^^^^^^^ S105
56 | token == "s3cr3t"
57 | secrete == "s3cr3t"
|
S105.py:56:10: S105 Possible hardcoded password assigned to: "token"
|
54 | pwd == "s3cr3t"
55 | secret == "s3cr3t"
56 | token == "s3cr3t"
| ^^^^^^^^ S105
57 | secrete == "s3cr3t"
58 | password == safe == "s3cr3t"
|
S105.py:57:12: S105 Possible hardcoded password assigned to: "secrete"
|
55 | secret == "s3cr3t"
56 | token == "s3cr3t"
57 | secrete == "s3cr3t"
| ^^^^^^^^ S105
58 | password == safe == "s3cr3t"
|
S105.py:58:21: S105 Possible hardcoded password assigned to: "password"
|
56 | token == "s3cr3t"
57 | secrete == "s3cr3t"
58 | password == safe == "s3cr3t"
| ^^^^^^^^ S105
57 |
58 | if token == "1\n2":
59 |
60 | if token == "1\n2":
|
S105.py:58:13: S105 Possible hardcoded password assigned to: "token"
S105.py:60:13: S105 Possible hardcoded password assigned to: "token"
|
56 | password == safe == "s3cr3t"
57 |
58 | if token == "1\n2":
58 | password == safe == "s3cr3t"
59 |
60 | if token == "1\n2":
| ^^^^^^ S105
59 | pass
61 | pass
|
S105.py:61:13: S105 Possible hardcoded password assigned to: "token"
S105.py:63:13: S105 Possible hardcoded password assigned to: "token"
|
59 | pass
60 |
61 | if token == "3\t4":
61 | pass
62 |
63 | if token == "3\t4":
| ^^^^^^ S105
62 | pass
64 | pass
|
S105.py:64:13: S105 Possible hardcoded password assigned to: "token"
S105.py:66:13: S105 Possible hardcoded password assigned to: "token"
|
62 | pass
63 |
64 | if token == "5\r6":
64 | pass
65 |
66 | if token == "5\r6":
| ^^^^^^ S105
65 | pass
67 | pass
|

View File

@@ -144,7 +144,7 @@ impl Violation for PytestAssertInExcept {
/// ...
/// ```
///
/// References
/// ## References
/// - [`pytest` documentation: `pytest.fail`](https://docs.pytest.org/en/latest/reference/reference.html#pytest-fail)
#[derive(ViolationMetadata)]
pub(crate) struct PytestAssertAlwaysFalse;

View File

@@ -104,7 +104,6 @@ impl AlwaysFixableViolation for PytestIncorrectMarkParenthesesStyle {
///
/// ## References
/// - [`pytest` documentation: `pytest.mark.usefixtures`](https://docs.pytest.org/en/latest/reference/reference.html#pytest-mark-usefixtures)
#[derive(ViolationMetadata)]
pub(crate) struct PytestUseFixturesWithoutParameters;


@@ -151,7 +151,7 @@ impl Violation for PytestRaisesWithoutException {
}
}
fn is_pytest_raises(func: &Expr, semantic: &SemanticModel) -> bool {
pub(crate) fn is_pytest_raises(func: &Expr, semantic: &SemanticModel) -> bool {
semantic
.resolve_qualified_name(func)
.is_some_and(|qualified_name| matches!(qualified_name.segments(), ["pytest", "raises"]))


@@ -46,6 +46,11 @@ use ruff_text_size::{Ranged, TextRange};
/// original = list(range(10000))
/// filtered.extend(x for x in original if x % 2)
/// ```
///
/// Take care that if the original for-loop uses an assignment expression
/// as a conditional, such as `if match := re.match(r"\d+", "123"):`, then
/// the corresponding comprehension must wrap the assignment
/// expression in parentheses to avoid a syntax error.
#[derive(ViolationMetadata)]
pub(crate) struct ManualListComprehension {
is_async: bool,
@@ -237,31 +242,12 @@ pub(crate) fn manual_list_comprehension(checker: &mut Checker, for_stmt: &ast::S
// filtered = [x for x in y]
// print(x)
// ```
let last_target_binding = checker
let target_binding = checker
.semantic()
.lookup_symbol(for_stmt_target_id)
.expect("for loop target must exist");
let target_binding = {
let mut bindings = [last_target_binding].into_iter().chain(
checker
.semantic()
.shadowed_bindings(checker.semantic().scope_id, last_target_binding)
.filter_map(|shadowed| shadowed.same_scope().then_some(shadowed.shadowed_id())),
);
bindings
.find_map(|binding_id| {
let binding = checker.semantic().binding(binding_id);
binding
.statement(checker.semantic())
.and_then(ast::Stmt::as_for_stmt)
.is_some_and(|stmt| stmt.range == for_stmt.range)
.then_some(binding)
})
.expect("for target binding must exist")
};
.bindings
.iter()
.find(|binding| for_stmt.target.range() == binding.range)
.unwrap();
// If any references to the loop target variable are after the loop,
// then converting it into a comprehension would cause a NameError
if target_binding
@@ -366,7 +352,21 @@ fn convert_to_list_extend(
let semantic = checker.semantic();
let locator = checker.locator();
let if_str = match if_test {
Some(test) => format!(" if {}", locator.slice(test.range())),
Some(test) => {
// If the test is an assignment expression,
// we must parenthesize it when it appears
// inside the comprehension to avoid a syntax error.
//
// Notice that we do not need `any_over_expr` here,
// since if the assignment expression appears
// internally (e.g. as an operand in a boolean
// operation) then it will already be parenthesized.
if test.is_named_expr() {
format!(" if ({})", locator.slice(test.range()))
} else {
format!(" if {}", locator.slice(test.range()))
}
}
None => String::new(),
};
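The parenthesization requirement this fix handles can be demonstrated directly in Python: an unparenthesized walrus is a syntax error in a comprehension's `if` clause. A minimal sketch (the `lines` data is hypothetical):

```python
import re

lines = ["123", "abc", "45"]  # hypothetical input data

# Unparenthesized, the walrus is a syntax error inside a comprehension's
# `if` clause:
#     [m.group() for s in lines if m := re.match(r"\d+", s)]  # SyntaxError
# Parenthesized, it is valid, and the bound name is usable in the element:
matches = [m.group() for s in lines if (m := re.match(r"\d+", s))]
print(matches)  # ['123', '45']
```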


@@ -202,3 +202,12 @@ PERF401.py:245:13: PERF401 Use `list.extend` to create a transformed list
246 | result = []
|
= help: Replace for loop with list.extend
PERF401.py:262:13: PERF401 Use a list comprehension to create a transformed list
|
260 | for i in range(5):
261 | if j := i:
262 | items.append(j)
| ^^^^^^^^^^^^^^^ PERF401
|
= help: Replace for loop with list comprehension


@@ -484,3 +484,25 @@ PERF401.py:245:13: PERF401 [*] Use `list.extend` to create a transformed list
245 |- result.append(a + 1) # PERF401
244 |+ result.extend(a + 1 for a in values) # PERF401
246 245 | result = []
247 246 |
248 247 | def f():
PERF401.py:262:13: PERF401 [*] Use a list comprehension to create a transformed list
|
260 | for i in range(5):
261 | if j := i:
262 | items.append(j)
| ^^^^^^^^^^^^^^^ PERF401
|
= help: Replace for loop with list comprehension
Unsafe fix
255 255 | # The fix here must parenthesize the walrus operator
256 256 | # https://github.com/astral-sh/ruff/issues/15047
257 257 | def f():
258 |- items = []
259 258 |
260 |- for i in range(5):
261 |- if j := i:
262 |- items.append(j)
259 |+ items = [j for i in range(5) if (j := i)]


@@ -32,8 +32,8 @@ mod tests {
#[test_case(Rule::EndsInPeriod, Path::new("D400_415.py"))]
#[test_case(Rule::EndsInPunctuation, Path::new("D.py"))]
#[test_case(Rule::EndsInPunctuation, Path::new("D400_415.py"))]
#[test_case(Rule::FirstLineCapitalized, Path::new("D.py"))]
#[test_case(Rule::FirstLineCapitalized, Path::new("D403.py"))]
#[test_case(Rule::FirstWordUncapitalized, Path::new("D.py"))]
#[test_case(Rule::FirstWordUncapitalized, Path::new("D403.py"))]
#[test_case(Rule::FitsOnOneLine, Path::new("D.py"))]
#[test_case(Rule::IndentWithSpaces, Path::new("D.py"))]
#[test_case(Rule::UndocumentedMagicMethod, Path::new("D.py"))]


@@ -10,8 +10,8 @@ use crate::docstrings::Docstring;
/// Checks for docstrings that do not start with a capital letter.
///
/// ## Why is this bad?
/// The first character in a docstring should be capitalized for, grammatical
/// correctness and consistency.
/// The first non-whitespace character in a docstring should be
/// capitalized for grammatical correctness and consistency.
///
/// ## Example
/// ```python
@@ -30,16 +30,16 @@ use crate::docstrings::Docstring;
/// - [NumPy Style Guide](https://numpydoc.readthedocs.io/en/latest/format.html)
/// - [Google Python Style Guide - Docstrings](https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings)
#[derive(ViolationMetadata)]
pub(crate) struct FirstLineCapitalized {
pub(crate) struct FirstWordUncapitalized {
first_word: String,
capitalized_word: String,
}
impl AlwaysFixableViolation for FirstLineCapitalized {
impl AlwaysFixableViolation for FirstWordUncapitalized {
#[derive_message_formats]
fn message(&self) -> String {
format!(
"First word of the first line should be capitalized: `{}` -> `{}`",
"First word of the docstring should be capitalized: `{}` -> `{}`",
self.first_word, self.capitalized_word
)
}
@@ -59,7 +59,8 @@ pub(crate) fn capitalized(checker: &mut Checker, docstring: &Docstring) {
}
let body = docstring.body();
let first_word = body.split_once(' ').map_or_else(
let trim_start_body = body.trim_start();
let first_word = trim_start_body.split_once(' ').map_or_else(
|| {
// If the docstring is a single word, trim the punctuation marks because
// it makes the ASCII test below fail.
@@ -91,8 +92,10 @@ pub(crate) fn capitalized(checker: &mut Checker, docstring: &Docstring) {
let capitalized_word = uppercase_first_char.to_string() + first_word_chars.as_str();
let leading_whitespace_len = body.text_len() - trim_start_body.text_len();
let mut diagnostic = Diagnostic::new(
FirstLineCapitalized {
FirstWordUncapitalized {
first_word: first_word.to_string(),
capitalized_word: capitalized_word.to_string(),
},
@@ -101,7 +104,7 @@ pub(crate) fn capitalized(checker: &mut Checker, docstring: &Docstring) {
diagnostic.set_fix(Fix::safe_edit(Edit::range_replacement(
capitalized_word,
TextRange::at(body.start(), first_word.text_len()),
TextRange::at(body.start() + leading_whitespace_len, first_word.text_len()),
)));
checker.diagnostics.push(diagnostic);
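The new `leading_whitespace_len` offset amounts to the following string transformation; a minimal Python sketch (the function name is illustrative, not ruff's API):

```python
def capitalize_first_word(body: str) -> str:
    # Skip over leading whitespace, capitalize only the first word,
    # and keep the whitespace and the rest of the body unchanged.
    stripped = body.lstrip()
    leading = body[: len(body) - len(stripped)]
    first, sep, rest = stripped.partition(" ")
    return leading + first[:1].upper() + first[1:] + sep + rest

print(capitalize_first_word("\n    here is the start of my docstring!"))
# '\n    Here is the start of my docstring!' (leading whitespace preserved)
```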


@@ -1,8 +1,7 @@
---
source: crates/ruff_linter/src/rules/pydocstyle/mod.rs
snapshot_kind: text
---
D403.py:2:5: D403 [*] First word of the first line should be capitalized: `this` -> `This`
D403.py:2:5: D403 [*] First word of the docstring should be capitalized: `this` -> `This`
|
1 | def bad_function():
2 | """this docstring is not capitalized"""
@@ -20,7 +19,7 @@ D403.py:2:5: D403 [*] First word of the first line should be capitalized: `this`
4 4 | def good_function():
5 5 | """This docstring is capitalized."""
D403.py:30:5: D403 [*] First word of the first line should be capitalized: `singleword` -> `Singleword`
D403.py:30:5: D403 [*] First word of the docstring should be capitalized: `singleword` -> `Singleword`
|
29 | def single_word():
30 | """singleword."""
@@ -40,11 +39,13 @@ D403.py:30:5: D403 [*] First word of the first line should be capitalized: `sing
32 32 | def single_word_no_dot():
33 33 | """singleword"""
D403.py:33:5: D403 [*] First word of the first line should be capitalized: `singleword` -> `Singleword`
D403.py:33:5: D403 [*] First word of the docstring should be capitalized: `singleword` -> `Singleword`
|
32 | def single_word_no_dot():
33 | """singleword"""
| ^^^^^^^^^^^^^^^^ D403
34 |
35 | def first_word_lots_of_whitespace():
|
= help: Capitalize `singleword` to `Singleword`
@@ -54,3 +55,32 @@ D403.py:33:5: D403 [*] First word of the first line should be capitalized: `sing
32 32 | def single_word_no_dot():
33 |- """singleword"""
33 |+ """Singleword"""
34 34 |
35 35 | def first_word_lots_of_whitespace():
36 36 | """
D403.py:36:5: D403 [*] First word of the docstring should be capitalized: `here` -> `Here`
|
35 | def first_word_lots_of_whitespace():
36 | """
| _____^
37 | |
38 | |
39 | |
40 | | here is the start of my docstring!
41 | |
42 | | What do you think?
43 | | """
| |_______^ D403
|
= help: Capitalize `here` to `Here`
Safe fix
37 37 |
38 38 |
39 39 |
40 |- here is the start of my docstring!
40 |+ Here is the start of my docstring!
41 41 |
42 42 | What do you think?
43 43 | """


@@ -1,10 +1,10 @@
use ast::{Expr, StmtAugAssign};
use ast::Expr;
use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, ViolationMetadata};
use ruff_python_ast as ast;
use ruff_python_ast::comparable::ComparableExpr;
use ruff_python_ast::Operator;
use ruff_python_codegen::Generator;
use ruff_python_ast::parenthesize::parenthesized_range;
use ruff_python_ast::{AstNode, ExprBinOp, ExpressionRef, Operator};
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;
@@ -103,11 +103,12 @@ pub(crate) fn non_augmented_assignment(checker: &mut Checker, assign: &ast::Stmt
if ComparableExpr::from(target) == ComparableExpr::from(&value.left) {
let mut diagnostic = Diagnostic::new(NonAugmentedAssignment { operator }, assign.range());
diagnostic.set_fix(Fix::unsafe_edit(augmented_assignment(
checker.generator(),
checker,
target,
value.op,
operator,
&value.right,
assign.range(),
value,
assign.range,
)));
checker.diagnostics.push(diagnostic);
return;
@@ -121,11 +122,12 @@ pub(crate) fn non_augmented_assignment(checker: &mut Checker, assign: &ast::Stmt
{
let mut diagnostic = Diagnostic::new(NonAugmentedAssignment { operator }, assign.range());
diagnostic.set_fix(Fix::unsafe_edit(augmented_assignment(
checker.generator(),
checker,
target,
value.op,
operator,
&value.left,
assign.range(),
value,
assign.range,
)));
checker.diagnostics.push(diagnostic);
}
@@ -135,21 +137,30 @@ pub(crate) fn non_augmented_assignment(checker: &mut Checker, assign: &ast::Stmt
///
/// For example, given `x = x + 1`, the fix would be `x += 1`.
fn augmented_assignment(
generator: Generator,
checker: &Checker,
target: &Expr,
operator: Operator,
operator: AugmentedOperator,
right_operand: &Expr,
original_expr: &ExprBinOp,
range: TextRange,
) -> Edit {
Edit::range_replacement(
generator.stmt(&ast::Stmt::AugAssign(StmtAugAssign {
range: TextRange::default(),
target: Box::new(target.clone()),
op: operator,
value: Box::new(right_operand.clone()),
})),
range,
)
let locator = checker.locator();
let right_operand_ref = ExpressionRef::from(right_operand);
let parent = original_expr.as_any_node_ref();
let comment_ranges = checker.comment_ranges();
let source = checker.source();
let right_operand_range =
parenthesized_range(right_operand_ref, parent, comment_ranges, source)
.unwrap_or(right_operand.range());
let right_operand_expr = locator.slice(right_operand_range);
let target_expr = locator.slice(target);
let new_content = format!("{target_expr} {operator} {right_operand_expr}");
Edit::range_replacement(new_content, range)
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
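Switching from regenerating the statement to slicing the original source is what preserves a literal's exact spelling, e.g. keeping `0x1` instead of normalizing it to `1`. A rough Python analogue of the idea (a sketch using the stdlib `ast` module, not ruff's implementation; it skips the target-equals-operand check):

```python
import ast

# Operators this sketch knows how to rewrite.
OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*",
       ast.BitAnd: "&", ast.BitOr: "|", ast.BitXor: "^"}

def to_augmented(source: str) -> str:
    # Rewrite `target = target <op> rhs` as `target <op>= rhs` by slicing
    # the original source text rather than regenerating it, so literals
    # such as `0x1` keep their exact spelling.
    stmt = ast.parse(source).body[0]
    assert isinstance(stmt, ast.Assign) and isinstance(stmt.value, ast.BinOp)
    target = ast.get_source_segment(source, stmt.targets[0])
    rhs = ast.get_source_segment(source, stmt.value.right)
    return f"{target} {OPS[type(stmt.value.op)]}= {rhs}"

print(to_augmented("flags = flags & 0x1"))  # flags &= 0x1
```

Note that `ast` node ranges exclude surrounding parentheses, which is exactly why the ruff fix also needs `parenthesized_range` to widen the slice.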


@@ -269,7 +269,7 @@ non_augmented_assignment.py:29:1: PLR6104 [*] Use `&=` to perform an augmented a
27 27 | to_cube = to_cube**to_cube
28 28 | timeDiffSeconds = timeDiffSeconds % 60
29 |-flags = flags & 0x1
29 |+flags &= 1
29 |+flags &= 0x1
30 30 | flags = flags | 0x1
31 31 | flags = flags ^ 0x1
32 32 | flags = flags << 1
@@ -290,7 +290,7 @@ non_augmented_assignment.py:30:1: PLR6104 [*] Use `|=` to perform an augmented a
28 28 | timeDiffSeconds = timeDiffSeconds % 60
29 29 | flags = flags & 0x1
30 |-flags = flags | 0x1
30 |+flags |= 1
30 |+flags |= 0x1
31 31 | flags = flags ^ 0x1
32 32 | flags = flags << 1
33 33 | flags = flags >> 1
@@ -311,7 +311,7 @@ non_augmented_assignment.py:31:1: PLR6104 [*] Use `^=` to perform an augmented a
29 29 | flags = flags & 0x1
30 30 | flags = flags | 0x1
31 |-flags = flags ^ 0x1
31 |+flags ^= 1
31 |+flags ^= 0x1
32 32 | flags = flags << 1
33 33 | flags = flags >> 1
34 34 | mat1 = mat1 @ mat2
@@ -495,7 +495,7 @@ non_augmented_assignment.py:42:1: PLR6104 [*] Use `*=` to perform an augmented a
40 40 | a_list[:] = a_list[:] * 3
41 41 |
42 |-index = index * (index + 10)
42 |+index *= index + 10
42 |+index *= (index + 10)
43 43 |
44 44 |
45 45 | class T:
@@ -524,8 +524,6 @@ non_augmented_assignment.py:51:1: PLR6104 [*] Use `+=` to perform an augmented a
50 | obj = T()
51 | obj.a = obj.a + 1
| ^^^^^^^^^^^^^^^^^ PLR6104
52 |
53 | # OK
|
= help: Replace with augmented assignment
@@ -536,5 +534,213 @@ non_augmented_assignment.py:51:1: PLR6104 [*] Use `+=` to perform an augmented a
51 |-obj.a = obj.a + 1
51 |+obj.a += 1
52 52 |
53 53 | # OK
54 54 | a_list[0] = a_list[:] * 3
53 53 |
54 54 | a = a+-1
non_augmented_assignment.py:54:1: PLR6104 [*] Use `+=` to perform an augmented assignment directly
|
54 | a = a+-1
| ^^^^^^^^ PLR6104
55 |
56 | # Regression tests for https://github.com/astral-sh/ruff/issues/11672
|
= help: Replace with augmented assignment
Unsafe fix
51 51 | obj.a = obj.a + 1
52 52 |
53 53 |
54 |-a = a+-1
54 |+a += -1
55 55 |
56 56 | # Regression tests for https://github.com/astral-sh/ruff/issues/11672
57 57 | test = 0x5
non_augmented_assignment.py:58:1: PLR6104 [*] Use `+=` to perform an augmented assignment directly
|
56 | # Regression tests for https://github.com/astral-sh/ruff/issues/11672
57 | test = 0x5
58 | test = test + 0xBA
| ^^^^^^^^^^^^^^^^^^ PLR6104
59 |
60 | test2 = b""
|
= help: Replace with augmented assignment
Unsafe fix
55 55 |
56 56 | # Regression tests for https://github.com/astral-sh/ruff/issues/11672
57 57 | test = 0x5
58 |-test = test + 0xBA
58 |+test += 0xBA
59 59 |
60 60 | test2 = b""
61 61 | test2 = test2 + b"\000"
non_augmented_assignment.py:61:1: PLR6104 [*] Use `+=` to perform an augmented assignment directly
|
60 | test2 = b""
61 | test2 = test2 + b"\000"
| ^^^^^^^^^^^^^^^^^^^^^^^ PLR6104
62 |
63 | test3 = ""
|
= help: Replace with augmented assignment
Unsafe fix
58 58 | test = test + 0xBA
59 59 |
60 60 | test2 = b""
61 |-test2 = test2 + b"\000"
61 |+test2 += b"\000"
62 62 |
63 63 | test3 = ""
64 64 | test3 = test3 + ( a := R""
non_augmented_assignment.py:64:1: PLR6104 [*] Use `+=` to perform an augmented assignment directly
|
63 | test3 = ""
64 | / test3 = test3 + ( a := R""
65 | | f"oo" )
| |__________________________________^ PLR6104
66 |
67 | test4 = []
|
= help: Replace with augmented assignment
Unsafe fix
61 61 | test2 = test2 + b"\000"
62 62 |
63 63 | test3 = ""
64 |-test3 = test3 + ( a := R""
64 |+test3 += ( a := R""
65 65 | f"oo" )
66 66 |
67 67 | test4 = []
non_augmented_assignment.py:68:1: PLR6104 [*] Use `+=` to perform an augmented assignment directly
|
67 | test4 = []
68 | / test4 = test4 + ( e
69 | | for e in
70 | | range(10)
71 | | )
| |___________________^ PLR6104
72 |
73 | test5 = test5 + (
|
= help: Replace with augmented assignment
Unsafe fix
65 65 | f"oo" )
66 66 |
67 67 | test4 = []
68 |-test4 = test4 + ( e
68 |+test4 += ( e
69 69 | for e in
70 70 | range(10)
71 71 | )
non_augmented_assignment.py:73:1: PLR6104 [*] Use `+=` to perform an augmented assignment directly
|
71 | )
72 |
73 | / test5 = test5 + (
74 | | 4
75 | | *
76 | | 10
77 | | )
| |_^ PLR6104
78 |
79 | test6 = test6 + \
|
= help: Replace with augmented assignment
Unsafe fix
70 70 | range(10)
71 71 | )
72 72 |
73 |-test5 = test5 + (
73 |+test5 += (
74 74 | 4
75 75 | *
76 76 | 10
non_augmented_assignment.py:79:1: PLR6104 [*] Use `+=` to perform an augmented assignment directly
|
77 | )
78 |
79 | / test6 = test6 + \
80 | | (
81 | | 4
82 | | *
83 | | 10
84 | | )
| |_________^ PLR6104
85 |
86 | test7 = \
|
= help: Replace with augmented assignment
Unsafe fix
76 76 | 10
77 77 | )
78 78 |
79 |-test6 = test6 + \
80 |- (
79 |+test6 += (
81 80 | 4
82 81 | *
83 82 | 10
non_augmented_assignment.py:86:1: PLR6104 [*] Use `+=` to perform an augmented assignment directly
|
84 | )
85 |
86 | / test7 = \
87 | | 100 \
88 | | + test7
| |___________^ PLR6104
89 |
90 | test8 = \
|
= help: Replace with augmented assignment
Unsafe fix
83 83 | 10
84 84 | )
85 85 |
86 |-test7 = \
87 |- 100 \
88 |- + test7
86 |+test7 += 100
89 87 |
90 88 | test8 = \
91 89 | 886 \
non_augmented_assignment.py:90:1: PLR6104 [*] Use `+=` to perform an augmented assignment directly
|
88 | + test7
89 |
90 | / test8 = \
91 | | 886 \
92 | | + \
93 | | \
94 | | test8
| |_________^ PLR6104
|
= help: Replace with augmented assignment
Unsafe fix
87 87 | 100 \
88 88 | + test7
89 89 |
90 |-test8 = \
91 |- 886 \
92 |- + \
93 |- \
94 |- test8
90 |+test8 += 886
95 91 |
96 92 |
97 93 | # OK


@@ -415,6 +415,7 @@ mod tests {
#[test_case(Rule::UnnecessaryRegularExpression, Path::new("RUF055_0.py"))]
#[test_case(Rule::UnnecessaryRegularExpression, Path::new("RUF055_1.py"))]
#[test_case(Rule::UnnecessaryCastToInt, Path::new("RUF046.py"))]
#[test_case(Rule::PytestRaisesAmbiguousPattern, Path::new("RUF043.py"))]
fn preview_rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!(
"preview__{}_{}",


@@ -209,7 +209,14 @@ fn should_be_fstring(
return false;
}
if semantic
.lookup_symbol(id)
// the parsed expression nodes have incorrect ranges
// so we need to use the range of the literal for the
// lookup in order to get reasonable results.
.simulate_runtime_load_at_location_in_scope(
id,
literal.range(),
semantic.scope_id,
)
.map_or(true, |id| semantic.binding(id).kind.is_builtin())
{
return false;


@@ -23,6 +23,7 @@ pub(crate) use never_union::*;
pub(crate) use none_not_at_end_of_union::*;
pub(crate) use parenthesize_chained_operators::*;
pub(crate) use post_init_default::*;
pub(crate) use pytest_raises_ambiguous_pattern::*;
pub(crate) use quadratic_list_summation::*;
pub(crate) use redirected_noqa::*;
pub(crate) use redundant_bool_literal::*;
@@ -71,6 +72,7 @@ mod never_union;
mod none_not_at_end_of_union;
mod parenthesize_chained_operators;
mod post_init_default;
mod pytest_raises_ambiguous_pattern;
mod quadratic_list_summation;
mod redirected_noqa;
mod redundant_bool_literal;


@@ -0,0 +1,155 @@
use crate::checkers::ast::Checker;
use crate::rules::flake8_pytest_style::rules::is_pytest_raises;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, ViolationMetadata};
use ruff_python_ast as ast;
/// ## What it does
/// Checks for non-raw literal string arguments passed to the `match` parameter
/// of `pytest.raises()` where the string contains at least one unescaped
/// regex metacharacter.
///
/// ## Why is this bad?
/// The `match` argument is implicitly converted to a regex under the hood.
/// It should be made explicit whether the string is meant to be a regex or a "plain" pattern
/// by giving the string an `r` prefix, escaping the metacharacter(s)
/// in the string using backslashes, or wrapping the entire string in a call to
/// `re.escape()`.
///
/// ## Example
///
/// ```python
/// import pytest
///
///
/// with pytest.raises(Exception, match="A full sentence."):
/// do_thing_that_raises()
/// ```
///
/// Use instead:
///
/// ```python
/// import pytest
///
///
/// with pytest.raises(Exception, match=r"A full sentence."):
/// do_thing_that_raises()
/// ```
///
/// Alternatively:
///
/// ```python
/// import pytest
/// import re
///
///
/// with pytest.raises(Exception, match=re.escape("A full sentence.")):
/// do_thing_that_raises()
/// ```
///
/// or:
///
/// ```python
/// import pytest
/// import re
///
///
/// with pytest.raises(Exception, match="A full sentence\\."):
/// do_thing_that_raises()
/// ```
///
/// ## References
/// - [Python documentation: `re.escape`](https://docs.python.org/3/library/re.html#re.escape)
/// - [`pytest` documentation: `pytest.raises`](https://docs.pytest.org/en/latest/reference/reference.html#pytest-raises)
#[derive(ViolationMetadata)]
pub(crate) struct PytestRaisesAmbiguousPattern;
impl Violation for PytestRaisesAmbiguousPattern {
#[derive_message_formats]
fn message(&self) -> String {
"Pattern passed to `match=` contains metacharacters but is neither escaped nor raw"
.to_string()
}
fn fix_title(&self) -> Option<String> {
Some("Use a raw string or `re.escape()` to make the intention explicit".to_string())
}
}
/// RUF043
pub(crate) fn pytest_raises_ambiguous_pattern(checker: &mut Checker, call: &ast::ExprCall) {
if !is_pytest_raises(&call.func, checker.semantic()) {
return;
}
// It *can* be passed as a positional argument if you try very hard,
// but pytest only documents it as a keyword argument, and it's quite hard to pass it positionally
let Some(ast::Keyword { value, .. }) = call.arguments.find_keyword("match") else {
return;
};
let ast::Expr::StringLiteral(string) = value else {
return;
};
let any_part_is_raw = string.value.iter().any(|part| part.flags.prefix().is_raw());
if any_part_is_raw || !string_has_unescaped_metacharacters(&string.value) {
return;
}
let diagnostic = Diagnostic::new(PytestRaisesAmbiguousPattern, string.range);
checker.diagnostics.push(diagnostic);
}
fn string_has_unescaped_metacharacters(value: &ast::StringLiteralValue) -> bool {
let mut escaped = false;
for character in value.chars() {
if escaped {
if escaped_char_is_regex_metasequence(character) {
return true;
}
escaped = false;
continue;
}
if character == '\\' {
escaped = true;
continue;
}
if char_is_regex_metacharacter(character) {
return true;
}
}
false
}
/// Whether the sequence `\<c>` means anything special:
///
/// * `\A`: Start of input
/// * `\b`, `\B`: Word boundary and non-word-boundary
/// * `\d`, `\D`: Digit and non-digit
/// * `\s`, `\S`: Whitespace and non-whitespace
/// * `\w`, `\W`: Word and non-word character
/// * `\z`: End of input
///
/// `\u`, `\U`, `\N`, `\x`, `\a`, `\f`, `\n`, `\r`, `\t`, `\v`
/// are also valid in normal strings and thus do not count.
/// `\b` means backspace only in character sets,
/// while backreferences (e.g., `\1`) are not valid without groups,
/// both of which should be caught in [`string_has_unescaped_metacharacters`].
const fn escaped_char_is_regex_metasequence(c: char) -> bool {
matches!(c, 'A' | 'b' | 'B' | 'd' | 'D' | 's' | 'S' | 'w' | 'W' | 'z')
}
const fn char_is_regex_metacharacter(c: char) -> bool {
matches!(
c,
'.' | '^' | '$' | '*' | '+' | '?' | '{' | '[' | '\\' | '|' | '('
)
}
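The scan above can be sketched as a simplified Python port (an illustration, not ruff's code; it assumes Python-level string escapes have already been resolved in `pattern`):

```python
# Track backslash escapes and flag either an unescaped regex
# metacharacter or an escaped metasequence such as `\d`.
METASEQUENCES = set("AbBdDsSwWz")
METACHARACTERS = set(".^$*+?{[\\|(")

def has_ambiguous_pattern(pattern: str) -> bool:
    escaped = False
    for ch in pattern:
        if escaped:
            if ch in METASEQUENCES:
                return True
            escaped = False
        elif ch == "\\":
            escaped = True
        elif ch in METACHARACTERS:
            return True
    return False

print(has_ambiguous_pattern("A full sentence."))  # True  (unescaped `.`)
print(has_ambiguous_pattern("foo\\.bar"))         # False (dot is escaped)
print(has_ambiguous_pattern("foo\\dbar"))         # True  (`\d` metasequence)
```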


@@ -0,0 +1,319 @@
---
source: crates/ruff_linter/src/rules/ruff/mod.rs
---
RUF043.py:8:43: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
6 | ### Errors
7 |
8 | with pytest.raises(FooAtTheEnd, match="foo."): ...
| ^^^^^^ RUF043
9 | with pytest.raises(PackageExtraSpecifier, match="Install `foo[bar]` to enjoy all features"): ...
10 | with pytest.raises(InnocentQuestion, match="Did you mean to use `Literal` instead?"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:9:53: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
8 | with pytest.raises(FooAtTheEnd, match="foo."): ...
9 | with pytest.raises(PackageExtraSpecifier, match="Install `foo[bar]` to enjoy all features"): ...
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RUF043
10 | with pytest.raises(InnocentQuestion, match="Did you mean to use `Literal` instead?"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:10:48: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
8 | with pytest.raises(FooAtTheEnd, match="foo."): ...
9 | with pytest.raises(PackageExtraSpecifier, match="Install `foo[bar]` to enjoy all features"): ...
10 | with pytest.raises(InnocentQuestion, match="Did you mean to use `Literal` instead?"): ...
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RUF043
11 |
12 | with pytest.raises(StringConcatenation, match="Huh"
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:12:51: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
10 | with pytest.raises(InnocentQuestion, match="Did you mean to use `Literal` instead?"): ...
11 |
12 | with pytest.raises(StringConcatenation, match="Huh"
| ___________________________________________________^
13 | | "?"): ...
| |_____________________________________________________^ RUF043
14 | with pytest.raises(ManuallyEscapedWindowsPathToDotFile, match="C:\\\\Users\\\\Foo\\\\.config"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:14:67: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
12 | with pytest.raises(StringConcatenation, match="Huh"
13 | "?"): ...
14 | with pytest.raises(ManuallyEscapedWindowsPathToDotFile, match="C:\\\\Users\\\\Foo\\\\.config"): ...
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RUF043
15 |
16 | with pytest.raises(MiddleDot, match="foo.bar"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:16:41: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
14 | with pytest.raises(ManuallyEscapedWindowsPathToDotFile, match="C:\\\\Users\\\\Foo\\\\.config"): ...
15 |
16 | with pytest.raises(MiddleDot, match="foo.bar"): ...
| ^^^^^^^^^ RUF043
17 | with pytest.raises(EndDot, match="foobar."): ...
18 | with pytest.raises(EscapedFollowedByUnescaped, match="foo\\.*bar"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:17:38: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
16 | with pytest.raises(MiddleDot, match="foo.bar"): ...
17 | with pytest.raises(EndDot, match="foobar."): ...
| ^^^^^^^^^ RUF043
18 | with pytest.raises(EscapedFollowedByUnescaped, match="foo\\.*bar"): ...
19 | with pytest.raises(UnescapedFollowedByEscaped, match="foo.\\*bar"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:18:58: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
16 | with pytest.raises(MiddleDot, match="foo.bar"): ...
17 | with pytest.raises(EndDot, match="foobar."): ...
18 | with pytest.raises(EscapedFollowedByUnescaped, match="foo\\.*bar"): ...
| ^^^^^^^^^^^^ RUF043
19 | with pytest.raises(UnescapedFollowedByEscaped, match="foo.\\*bar"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:19:58: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
17 | with pytest.raises(EndDot, match="foobar."): ...
18 | with pytest.raises(EscapedFollowedByUnescaped, match="foo\\.*bar"): ...
19 | with pytest.raises(UnescapedFollowedByEscaped, match="foo.\\*bar"): ...
| ^^^^^^^^^^^^ RUF043
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:24:44: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
22 | ## Metasequences
23 |
24 | with pytest.raises(StartOfInput, match="foo\\Abar"): ...
| ^^^^^^^^^^^ RUF043
25 | with pytest.raises(WordBoundary, match="foo\\bbar"): ...
26 | with pytest.raises(NonWordBoundary, match="foo\\Bbar"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:25:44: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
24 | with pytest.raises(StartOfInput, match="foo\\Abar"): ...
25 | with pytest.raises(WordBoundary, match="foo\\bbar"): ...
| ^^^^^^^^^^^ RUF043
26 | with pytest.raises(NonWordBoundary, match="foo\\Bbar"): ...
27 | with pytest.raises(Digit, match="foo\\dbar"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:26:47: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
24 | with pytest.raises(StartOfInput, match="foo\\Abar"): ...
25 | with pytest.raises(WordBoundary, match="foo\\bbar"): ...
26 | with pytest.raises(NonWordBoundary, match="foo\\Bbar"): ...
| ^^^^^^^^^^^ RUF043
27 | with pytest.raises(Digit, match="foo\\dbar"): ...
28 | with pytest.raises(NonDigit, match="foo\\Dbar"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:27:37: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
25 | with pytest.raises(WordBoundary, match="foo\\bbar"): ...
26 | with pytest.raises(NonWordBoundary, match="foo\\Bbar"): ...
27 | with pytest.raises(Digit, match="foo\\dbar"): ...
| ^^^^^^^^^^^ RUF043
28 | with pytest.raises(NonDigit, match="foo\\Dbar"): ...
29 | with pytest.raises(Whitespace, match="foo\\sbar"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:28:40: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
26 | with pytest.raises(NonWordBoundary, match="foo\\Bbar"): ...
27 | with pytest.raises(Digit, match="foo\\dbar"): ...
28 | with pytest.raises(NonDigit, match="foo\\Dbar"): ...
| ^^^^^^^^^^^ RUF043
29 | with pytest.raises(Whitespace, match="foo\\sbar"): ...
30 | with pytest.raises(NonWhitespace, match="foo\\Sbar"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:29:42: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
27 | with pytest.raises(Digit, match="foo\\dbar"): ...
28 | with pytest.raises(NonDigit, match="foo\\Dbar"): ...
29 | with pytest.raises(Whitespace, match="foo\\sbar"): ...
| ^^^^^^^^^^^ RUF043
30 | with pytest.raises(NonWhitespace, match="foo\\Sbar"): ...
31 | with pytest.raises(WordCharacter, match="foo\\wbar"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:30:45: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
28 | with pytest.raises(NonDigit, match="foo\\Dbar"): ...
29 | with pytest.raises(Whitespace, match="foo\\sbar"): ...
30 | with pytest.raises(NonWhitespace, match="foo\\Sbar"): ...
| ^^^^^^^^^^^ RUF043
31 | with pytest.raises(WordCharacter, match="foo\\wbar"): ...
32 | with pytest.raises(NonWordCharacter, match="foo\\Wbar"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:31:45: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
29 | with pytest.raises(Whitespace, match="foo\\sbar"): ...
30 | with pytest.raises(NonWhitespace, match="foo\\Sbar"): ...
31 | with pytest.raises(WordCharacter, match="foo\\wbar"): ...
| ^^^^^^^^^^^ RUF043
32 | with pytest.raises(NonWordCharacter, match="foo\\Wbar"): ...
33 | with pytest.raises(EndOfInput, match="foo\\zbar"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:32:48: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
30 | with pytest.raises(NonWhitespace, match="foo\\Sbar"): ...
31 | with pytest.raises(WordCharacter, match="foo\\wbar"): ...
32 | with pytest.raises(NonWordCharacter, match="foo\\Wbar"): ...
| ^^^^^^^^^^^ RUF043
33 | with pytest.raises(EndOfInput, match="foo\\zbar"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:33:42: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
31 | with pytest.raises(WordCharacter, match="foo\\wbar"): ...
32 | with pytest.raises(NonWordCharacter, match="foo\\Wbar"): ...
33 | with pytest.raises(EndOfInput, match="foo\\zbar"): ...
| ^^^^^^^^^^^ RUF043
34 |
35 | with pytest.raises(StartOfInput2, match="foobar\\A"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:35:45: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
33 | with pytest.raises(EndOfInput, match="foo\\zbar"): ...
34 |
35 | with pytest.raises(StartOfInput2, match="foobar\\A"): ...
| ^^^^^^^^^^^ RUF043
36 | with pytest.raises(WordBoundary2, match="foobar\\b"): ...
37 | with pytest.raises(NonWordBoundary2, match="foobar\\B"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:36:45: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
35 | with pytest.raises(StartOfInput2, match="foobar\\A"): ...
36 | with pytest.raises(WordBoundary2, match="foobar\\b"): ...
| ^^^^^^^^^^^ RUF043
37 | with pytest.raises(NonWordBoundary2, match="foobar\\B"): ...
38 | with pytest.raises(Digit2, match="foobar\\d"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:37:48: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
35 | with pytest.raises(StartOfInput2, match="foobar\\A"): ...
36 | with pytest.raises(WordBoundary2, match="foobar\\b"): ...
37 | with pytest.raises(NonWordBoundary2, match="foobar\\B"): ...
| ^^^^^^^^^^^ RUF043
38 | with pytest.raises(Digit2, match="foobar\\d"): ...
39 | with pytest.raises(NonDigit2, match="foobar\\D"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:38:38: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
36 | with pytest.raises(WordBoundary2, match="foobar\\b"): ...
37 | with pytest.raises(NonWordBoundary2, match="foobar\\B"): ...
38 | with pytest.raises(Digit2, match="foobar\\d"): ...
| ^^^^^^^^^^^ RUF043
39 | with pytest.raises(NonDigit2, match="foobar\\D"): ...
40 | with pytest.raises(Whitespace2, match="foobar\\s"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:39:41: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
37 | with pytest.raises(NonWordBoundary2, match="foobar\\B"): ...
38 | with pytest.raises(Digit2, match="foobar\\d"): ...
39 | with pytest.raises(NonDigit2, match="foobar\\D"): ...
| ^^^^^^^^^^^ RUF043
40 | with pytest.raises(Whitespace2, match="foobar\\s"): ...
41 | with pytest.raises(NonWhitespace2, match="foobar\\S"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:40:43: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
38 | with pytest.raises(Digit2, match="foobar\\d"): ...
39 | with pytest.raises(NonDigit2, match="foobar\\D"): ...
40 | with pytest.raises(Whitespace2, match="foobar\\s"): ...
| ^^^^^^^^^^^ RUF043
41 | with pytest.raises(NonWhitespace2, match="foobar\\S"): ...
42 | with pytest.raises(WordCharacter2, match="foobar\\w"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:41:46: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
39 | with pytest.raises(NonDigit2, match="foobar\\D"): ...
40 | with pytest.raises(Whitespace2, match="foobar\\s"): ...
41 | with pytest.raises(NonWhitespace2, match="foobar\\S"): ...
| ^^^^^^^^^^^ RUF043
42 | with pytest.raises(WordCharacter2, match="foobar\\w"): ...
43 | with pytest.raises(NonWordCharacter2, match="foobar\\W"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:42:46: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
40 | with pytest.raises(Whitespace2, match="foobar\\s"): ...
41 | with pytest.raises(NonWhitespace2, match="foobar\\S"): ...
42 | with pytest.raises(WordCharacter2, match="foobar\\w"): ...
| ^^^^^^^^^^^ RUF043
43 | with pytest.raises(NonWordCharacter2, match="foobar\\W"): ...
44 | with pytest.raises(EndOfInput2, match="foobar\\z"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:43:49: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
41 | with pytest.raises(NonWhitespace2, match="foobar\\S"): ...
42 | with pytest.raises(WordCharacter2, match="foobar\\w"): ...
43 | with pytest.raises(NonWordCharacter2, match="foobar\\W"): ...
| ^^^^^^^^^^^ RUF043
44 | with pytest.raises(EndOfInput2, match="foobar\\z"): ...
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:44:43: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
42 | with pytest.raises(WordCharacter2, match="foobar\\w"): ...
43 | with pytest.raises(NonWordCharacter2, match="foobar\\W"): ...
44 | with pytest.raises(EndOfInput2, match="foobar\\z"): ...
| ^^^^^^^^^^^ RUF043
|
= help: Use a raw string or `re.escape()` to make the intention explicit
RUF043.py:49:42: RUF043 Pattern passed to `match=` contains metacharacters but is neither escaped nor raw
|
47 | ### Acceptable false positives
48 |
49 | with pytest.raises(NameEscape, match="\\N{EN DASH}"): ...
| ^^^^^^^^^^^^^^ RUF043
|
= help: Use a raw string or `re.escape()` to make the intention explicit
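The help text above suggests two fixes. A minimal illustration of the difference (the `pattern_*` names are hypothetical; note that `pytest.raises(..., match=...)` applies the pattern with `re.search`):

```python
import re

# RUF043 flags a plain string like "foo\\sbar" because it is ambiguous:
# is `\s` meant as the regex whitespace class, or as a literal backslash + "s"?
pattern_regex = r"foo\sbar"               # raw string: clearly a regex metacharacter
pattern_literal = re.escape("foo\\sbar")  # re.escape(): clearly a literal match

assert re.search(pattern_regex, "foo bar")          # `\s` matches the space
assert re.search(pattern_literal, "foo\\sbar")      # matches the literal text only
assert not re.search(pattern_literal, "foo bar")    # escaped pattern ignores regex meaning
```

Either spelling makes the intent explicit, which is all the rule asks for.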

@@ -711,12 +711,43 @@ impl<'a> SemanticModel<'a> {
     /// References from within an [`ast::Comprehension`] can produce incorrect
     /// results when referring to a [`BindingKind::NamedExprAssignment`].
     pub fn simulate_runtime_load(&self, name: &ast::ExprName) -> Option<BindingId> {
-        let symbol = name.id.as_str();
-        let range = name.range;
+        self.simulate_runtime_load_at_location_in_scope(name.id.as_str(), name.range, self.scope_id)
+    }
+
+    /// Simulates a runtime load of the given symbol.
+    ///
+    /// This should not be run until after all the bindings have been visited.
+    ///
+    /// The main purpose of this method and what makes this different from
+    /// [`SemanticModel::lookup_symbol_in_scope`] is that it may be used to
+    /// perform speculative name lookups.
+    ///
+    /// In most cases a load can be accurately modeled simply by calling
+    /// [`SemanticModel::lookup_symbol`] at the right time during semantic
+    /// analysis, however for speculative lookups this is not the case,
+    /// since we're aiming to change the semantic meaning of our load.
+    /// E.g. we want to check what would happen if we changed a forward
+    /// reference to an immediate load or vice versa.
+    ///
+    /// Use caution when utilizing this method, since it was primarily designed
+    /// to work for speculative lookups from within type definitions, which
+    /// happen to share some nice properties, where attaching each binding
+    /// to a range in the source code and ordering those bindings based on
+    /// that range is a good enough approximation of which bindings are
+    /// available at runtime for which reference.
+    ///
+    /// References from within an [`ast::Comprehension`] can produce incorrect
+    /// results when referring to a [`BindingKind::NamedExprAssignment`].
+    pub fn simulate_runtime_load_at_location_in_scope(
+        &self,
+        symbol: &str,
+        symbol_range: TextRange,
+        scope_id: ScopeId,
+    ) -> Option<BindingId> {
         let mut seen_function = false;
         let mut class_variables_visible = true;
         let mut source_order_sensitive_lookup = true;
-        for (index, scope_id) in self.scopes.ancestor_ids(self.scope_id).enumerate() {
+        for (index, scope_id) in self.scopes.ancestor_ids(scope_id).enumerate() {
             let scope = &self.scopes[scope_id];
             // Only once we leave a function scope and its enclosing type scope should
@@ -776,7 +807,7 @@ impl<'a> SemanticModel<'a> {
                 _ => binding.range,
             };
-            if binding_range.ordering(range).is_lt() {
+            if binding_range.ordering(symbol_range).is_lt() {
                 return Some(shadowed_id);
             }
         }
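The doc comment in this hunk describes approximating runtime visibility by ordering bindings on their source ranges, which is what the `binding_range.ordering(symbol_range).is_lt()` check implements. A minimal Python sketch of that idea (the function and data model here are illustrative, not ruff's actual API):

```python
# Each binding is modeled as (symbol, start_offset). A load at `load_offset`
# only sees bindings whose range starts strictly before it, so a forward
# reference resolves to nothing while a later shadowing binding wins.
def simulate_runtime_load(bindings, symbol, load_offset):
    candidates = [off for name, off in bindings if name == symbol and off < load_offset]
    return max(candidates) if candidates else None

bindings = [("x", 10), ("y", 30), ("x", 50)]
assert simulate_runtime_load(bindings, "x", 5) is None   # forward reference: not yet bound
assert simulate_runtime_load(bindings, "x", 40) == 10    # only the first `x` binding precedes
assert simulate_runtime_load(bindings, "x", 90) == 50    # later shadowing binding is visible
```

As the doc comment warns, this is only an approximation: constructs like comprehensions with walrus assignments bind names in ways that source order alone does not capture.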

@@ -249,13 +249,13 @@ pub enum SimpleTokenKind {
     /// `;`
     Semi,
-    /// '/'
+    /// `/`
     Slash,
-    /// '*'
+    /// `*`
     Star,
-    /// `.`.
+    /// `.`
     Dot,
     /// `+`

@@ -37,18 +37,12 @@ impl ResolvedClientCapabilities {
             .and_then(|workspace_edit| workspace_edit.document_changes)
             .unwrap_or_default();
-        let workspace_refresh = true;
-        // TODO(jane): Once the bug involving workspace.diagnostic(s) deserialization has been fixed,
-        // uncomment this.
-        /*
         let workspace_refresh = client_capabilities
             .workspace
             .as_ref()
-            .and_then(|workspace| workspace.diagnostic.as_ref())
+            .and_then(|workspace| workspace.diagnostics.as_ref())
             .and_then(|diagnostic| diagnostic.refresh_support)
             .unwrap_or_default();
-        */
         let pull_diagnostics = client_capabilities
             .text_document

@@ -1,6 +1,6 @@
 [package]
 name = "ruff_wasm"
-version = "0.8.3"
+version = "0.8.4"
 publish = false
 authors = { workspace = true }
 edition = { workspace = true }

@@ -95,6 +95,7 @@ Similar to Ruff's CLI, the Ruff Language Server fully supports Jupyter Notebook
capabilities available to Python files.
!!! note
Unlike [`ruff-lsp`](https://github.com/astral-sh/ruff-lsp) and similar to Ruff's CLI, the
native language server requires users to explicitly include Jupyter Notebook files in the set
of files to lint and format. Refer to the [Jupyter Notebook discovery](https://docs.astral.sh/ruff/configuration/#jupyter-notebook-discovery)

@@ -22,6 +22,7 @@ The Ruff Language Server was available first in Ruff [v0.4.5](https://astral.sh/
in beta and stabilized in Ruff [v0.5.3](https://github.com/astral-sh/ruff/releases/tag/0.5.3).
!!! note
This is the documentation for Ruff's built-in language server written in Rust (`ruff server`).
If you are looking for the documentation for the `ruff-lsp` language server, please refer to the
[README](https://github.com/astral-sh/ruff-lsp) of the `ruff-lsp` repository.
