Compare commits

...

24 Commits

Author SHA1 Message Date
dylwil3
8bda1837d9 update tests and snapshots 2025-06-06 09:28:47 -05:00
dylwil3
e667d53828 update docs 2025-06-06 09:28:47 -05:00
dylwil3
a071114638 remove preview check for rule 2025-06-06 09:28:47 -05:00
Brent Westbrook
86ae9fbc08 Ruff 0.12
Summary
--

Release branch for Ruff 0.12.0

TODOs
--

- [ ] Drop empty first commit
- [ ] Merge with rebase-merge (**don't squash merge!!!!**)
2025-06-06 09:28:47 -05:00
Brent Westbrook
11966beeec Ruff 0.12
Summary
--

Release branch for Ruff 0.12.0

TODOs
--

- [ ] Drop empty first commit
- [ ] Merge with rebase-merge (**don't squash merge!!!!**)
2025-06-06 10:14:43 -04:00
Alex Waygood
1274521f9f [ty] Track the origin of the environment.python setting for better error messages (#18483) 2025-06-06 13:36:41 +01:00
shimies
8d24760643 Fix doc for Neovim setting examples (#18491)
## Summary
This PR fixes an error in the example Neovim configuration on [this
documentation
page](https://docs.astral.sh/ruff/editors/settings/#configuration).
The `configuration` block should be nested under `settings`, consistent
with other properties and as outlined
[here](https://docs.astral.sh/ruff/editors/setup/#neovim).

I encountered this issue when copying the example to configure Ruff
integration in my Neovim setup: the config didn’t work until I corrected
the nesting.

## Test Plan
- [x] Confirmed that the corrected configuration works in a real Neovim
+ Ruff setup
- [x] Verified that the updated configuration renders correctly in
MkDocs
<img width="382" alt="image"
src="https://github.com/user-attachments/assets/0722fb35-8ffa-4b10-90ba-c6e8417e40bf"
/>
2025-06-06 15:19:16 +05:30
Carl Meyer
db8db536f8 [ty] clarify requirements for scope_id argument to in_type_expression (#18488) 2025-06-05 22:46:26 -07:00
Carl Meyer
cb8246bc5f [ty] remove unnecessary Either (#18489)
Just a quick review-comment follow-up.
2025-06-05 18:39:22 -07:00
Dylan
5faf72a4d9 Bump 0.11.13 (#18484) 2025-06-05 15:18:38 -05:00
Micha Reiser
28dbc5c51e [ty] Fix completion order in playground (#18480) 2025-06-05 18:55:54 +02:00
Brent Westbrook
ce216c79cc Remove Message::to_rule (#18447)
## Summary

As the title says, this PR removes the `Message::to_rule` method by
replacing related uses of `Rule` with `NoqaCode` (or the rule's name in
the case of the cache). Where it seemed a `Rule` was really needed, we
convert back to the `Rule` by parsing either the rule name (with
`str::parse`) or the `NoqaCode` (with `Rule::from_code`).

I thought this was kind of like cheating and that it might not resolve
this part of Micha's
[comment](https://github.com/astral-sh/ruff/pull/18391#issuecomment-2933764275):

> because we can't add Rule to Diagnostic or **have it anywhere in our
shared rendering logic**

but after looking again, the only remaining `Rule` conversion in
rendering code is for the SARIF output format. The other two non-test
`Rule` conversions are for caching and writing a fix summary, which I
don't think fall into the shared rendering logic. That leaves the SARIF
format as the only real problem, but maybe we can delay that for now.

The motivation here is that we won't be able to store a `Rule` on the
new `Diagnostic` type, but we should be able to store a `NoqaCode`,
likely as a string.
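
The round-trip described above (storing only a string code on the diagnostic and recovering the `Rule` by parsing when one is really needed) can be sketched as follows. This is an illustrative Python analogue with stand-in names, not ruff's actual internals:

```python
# Illustrative sketch (stand-in data, not ruff's internals): a diagnostic
# carries only the rule's noqa code as a string, and the enum-like Rule
# is recovered by parsing that string when it is really needed.
RULE_BY_CODE = {"F401": "unused-import", "F811": "redefined-while-unused"}

class Diagnostic:
    def __init__(self, noqa_code, body):
        self.noqa_code = noqa_code  # plain string, no Rule stored
        self.body = body

def rule_from_code(code):
    # Mirrors the `Rule::from_code` conversion: returns None for codes
    # that don't map to a rule (e.g. syntax errors carry no noqa code).
    return RULE_BY_CODE.get(code)

d = Diagnostic("F401", "`os` imported but unused")
assert rule_from_code(d.noqa_code) == "unused-import"
assert rule_from_code("not-a-code") is None
```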

## Test Plan

Existing tests

##
[Benchmarks](https://codspeed.io/astral-sh/ruff/branches/brent%2Fremove-to-rule)

Almost no perf regression, only -1% on
`linter/default-rules[large/dataset.py]`.

---------

Co-authored-by: Micha Reiser <micha@reiser.io>
2025-06-05 12:48:29 -04:00
Victorien
33468cc8cc [pyupgrade] Apply UP035 only on py313+ for get_type_hints() (#18476) 2025-06-05 17:16:29 +01:00
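The UP035 change above can be illustrated: `typing.get_type_hints` is only the preferred import target on Python 3.13+, so the `typing_extensions` re-export is flagged only there (a sketch checking the current interpreter):

```python
# Sketch of the UP035 behaviour change: the typing_extensions import of
# get_type_hints is only flagged as replaceable on Python 3.13+.
import sys
from typing import get_type_hints  # the py313+ replacement target

def f(x: int) -> str:
    return str(x)

# get_type_hints itself has long lived in `typing`; the rule change is
# about *when* the typing_extensions re-export is considered redundant.
assert get_type_hints(f) == {"x": int, "return": str}
rule_would_fire = sys.version_info >= (3, 13)
```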
Ibraheem Ahmed
8531f4b3ca [ty] Add infrastructure for AST garbage collection (#18445)
## Summary

https://github.com/astral-sh/ty/issues/214 will require a couple
invasive changes that I would like to get merged even before garbage
collection is fully implemented (to avoid rebasing):
- `ParsedModule` can no longer be dereferenced directly. Instead you
need to load a `ParsedModuleRef` to access the AST, which requires a
reference to the salsa database (as it may require re-parsing the AST if
it was collected).
- `AstNodeRef` can only be dereferenced with the `node` method, which
takes a reference to the `ParsedModuleRef`. This allows us to encode the
fact that ASTs do not live as long as the database and may be collected
as soon as a given instance of a `ParsedModuleRef` is dropped. There are a
number of places where we currently merge the `'db` and `'ast`
lifetimes, so this requires giving some types/functions two separate
lifetime parameters.
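
The handle pattern described above can be sketched in Python (a hypothetical analogue, not the Rust implementation): the module handle is never dereferenced directly, and a collected AST is transparently re-parsed on demand.

```python
# Hypothetical analogue of ParsedModule / ParsedModuleRef: callers load
# a short-lived reference, and a collected AST is re-parsed on demand.
import ast

class ParsedModule:
    def __init__(self, source):
        self._source = source
        self._cached = None  # a collector may reset this to None

    def load(self):
        """Return a usable AST, re-parsing if it was collected."""
        if self._cached is None:
            self._cached = ast.parse(self._source)
        return self._cached

    def collect(self):
        self._cached = None  # drop the AST; the next load() re-parses

mod = ParsedModule("x = 1")
tree = mod.load()
mod.collect()
tree2 = mod.load()  # re-parsed: a fresh object, not the collected one
assert tree is not tree2
assert isinstance(tree2.body[0], ast.Assign)
```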
2025-06-05 11:43:18 -04:00
Andrew Gallant
55100209c7 [ty] IDE: add support for object.<CURSOR> completions (#18468)
This PR adds logic for detecting `Name Dot [Name]` token patterns,
finding the corresponding `ExprAttribute`, getting the type of the
object and returning the members available on that object.

Here's a video demonstrating this working:

https://github.com/user-attachments/assets/42ce78e8-5930-4211-a18a-fa2a0434d0eb

Ref astral-sh/ty#86
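
The `Name Dot [Name]` detection can be roughly sketched with Python's own tokenizer standing in for the real token stream (illustrative only):

```python
# Rough sketch of the `Name Dot [Name]` token-pattern detection, using
# Python's tokenize module as a stand-in for the real token stream.
import io
import tokenize

def attribute_target(source):
    """Return the object name if the source ends in `name.` or `name.prefix`."""
    toks = [t for t in tokenize.generate_tokens(io.StringIO(source).readline)
            if t.type in (tokenize.NAME, tokenize.OP)]
    if len(toks) >= 2 and toks[-1].string == "." and toks[-2].type == tokenize.NAME:
        return toks[-2].string  # `obj.<CURSOR>`
    if (len(toks) >= 3 and toks[-1].type == tokenize.NAME
            and toks[-2].string == "."):
        return toks[-3].string  # `obj.att<CURSOR>`
    return None

assert attribute_target("obj.") == "obj"
assert attribute_target("obj.att") == "obj"
assert attribute_target("obj") is None
# With the object identified, completions come from its type's members,
# e.g. dir("") for a string object.
assert "upper" in dir("")
```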
2025-06-05 11:15:19 -04:00
chiri
c0bb83b882 [perflint] fix missing parentheses for lambda and ternary conditions (PERF401, PERF403) (#18412)
Closes #18405
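
The bug can be illustrated as follows (my reconstruction of the behaviour): when the loop's condition is a ternary or a lambda, the generated comprehension must parenthesize it to remain valid syntax.

```python
# Sketch of the PERF401 fix: a ternary condition in the loop must be
# parenthesized in the generated comprehension.
src = [1, 2, 3]

# Loop form flagged by PERF401:
dst = []
for i in src:
    if True if i > 1 else False:
        dst.append(i)

# The autofix now emits parentheses around the ternary; without them,
# `[i for i in src if True if i > 1 else False]` is a syntax error.
dst2 = [i for i in src if (True if i > 1 else False)]
assert dst == dst2 == [2, 3]
```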
2025-06-05 09:57:08 -05:00
Brent Westbrook
74a4e9af3d Combine lint and syntax error handling (#18471)
## Summary

This is a spin-off from
https://github.com/astral-sh/ruff/pull/18447#discussion_r2125844669 to
avoid using `Message::noqa_code` to differentiate between lints and
syntax errors. I went through all of the calls on `main` and on the
branch from #18447, and the instance in `ruff_server` noted in the
linked comment was actually the primary place where this was being done.
Other calls to `noqa_code` are typically some variation of
`message.noqa_code().map_or(String::new, format!(...))`, with the major
exception of the gitlab output format:


a120610b5b/crates/ruff_linter/src/message/gitlab.rs (L93-L105)

which obviously assumes that `None` means syntax error. A simple fix
here would be to use `message.name()` for `check_name` instead of the
noqa code, but I'm not sure how breaking that would be. This could just
be:

```rust
 let description = message.body();
 let description = description.strip_prefix("SyntaxError: ").unwrap_or(description).to_string();
 let check_name = message.name();
```

in that case. This sounds reasonable based on the [Code Quality report
format](https://docs.gitlab.com/ci/testing/code_quality/#code-quality-report-format)
docs:

> | Name | Type | Description|
> |-----|-----|----|
> |`check_name` | String | A unique name representing the check, or
rule, associated with this violation. |

## Test Plan

Existing tests
2025-06-05 12:50:02 +00:00
Alex Waygood
8485dbb324 [ty] Fix --python argument for Windows, and improve error messages for bad --python arguments (#18457)
## Summary

Fixes https://github.com/astral-sh/ty/issues/556.

On Windows, system installations have different layouts to virtual
environments. In Windows virtual environments, the Python executable is
found at `<sys.prefix>/Scripts/python.exe`. But in Windows system
installations, the Python executable is found at
`<sys.prefix>/python.exe`. That means that Windows users were able to
point to Python executables inside virtual environments with the
`--python` flag, but they weren't able to point to Python executables
inside system installations.

This PR fixes that issue. It also makes a couple of other changes:
- Nearly all `sys.prefix` resolution is moved inside `site_packages.rs`.
That was the original design of the `site-packages` resolution logic,
but features implemented since the initial implementation have added
some resolution and validation to `resolver.rs` inside the module
resolver. That means that we've ended up with a somewhat confusing code
structure and a situation where several checks are unnecessarily
duplicated between the two modules.
- I noticed that we had quite bad error messages if you e.g. pointed to
a path that didn't exist on disk with `--python` (we just gave a
somewhat impenetrable message saying that we "failed to canonicalize"
the path). I improved the error messages here and added CLI tests for
`--python` and the `environment.python` configuration setting.
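
The two Windows layouts described above amount to two candidate locations for the executable relative to `sys.prefix` (a sketch, not the actual resolver):

```python
# Sketch of the two Windows layouts relative to sys.prefix: a resolver
# along these lines must try both candidate locations.
from pathlib import PureWindowsPath

def candidate_executables(sys_prefix):
    prefix = PureWindowsPath(sys_prefix)
    return [
        prefix / "Scripts" / "python.exe",  # virtual-environment layout
        prefix / "python.exe",              # system-installation layout
    ]

cands = candidate_executables(r"C:\Python313")
assert str(cands[0]) == r"C:\Python313\Scripts\python.exe"
assert str(cands[1]) == r"C:\Python313\python.exe"
```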

## Test Plan

- Existing tests pass
- Added new CLI tests
- I manually checked that virtual-environment discovery still works if
no configuration is given
- Micha did some manual testing to check that pointing `--python` to a
system-installation executable now works on Windows
2025-06-05 08:19:15 +01:00
Shunsuke Shibayama
0858896bc4 [ty] type narrowing by attribute/subscript assignments (#18041)
## Summary

This PR partially solves https://github.com/astral-sh/ty/issues/164
(derived from #17643).

Currently, the definitions we manage are limited to those for simple
name (symbol) targets, but we expand this to track definitions for
attribute and subscript targets as well.

This was originally planned as part of the work in #17643, but the
changes are significant, so I made it a separate PR.
After merging this PR, I will reflect these changes in #17643.

There is still some incomplete work remaining, but the basic features
have been implemented, so I am publishing it as a draft PR.
Here is the TODO list (there may be more to come):
* [x] Complete rewrite and refactoring of documentation (removing
`Symbol` and replacing it with `Place`)
* [x] More thorough testing
* [x] Consolidation of duplicated code (maybe we can consolidate the
handling related to name, attribute, and subscript)

This PR replaces the current `Symbol` API with the `Place` API, which is
a concept that includes attributes and subscripts (the term is borrowed
from Rust).

## Test Plan

`mdtest/narrow/assignment.md` is added.

---------

Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
Co-authored-by: Carl Meyer <carl@astral.sh>
2025-06-04 17:24:27 -07:00
Alex Waygood
ce8b744f17 [ty] Only calculate information for unresolved-reference subdiagnostic if we know we'll emit the diagnostic (#18465)
## Summary

This optimizes some of the logic added in
https://github.com/astral-sh/ruff/pull/18444. In general, we only
calculate information for subdiagnostics if we know we'll actually emit
the diagnostic. The check to see whether we'll emit the diagnostic is
work we'll definitely have to do, whereas the work to gather
information for a subdiagnostic isn't work we necessarily have to do if
the diagnostic isn't going to be emitted at all.

This PR makes us lazier about gathering the information we need for the
subdiagnostic, and moves all the subdiagnostic logic into one function
rather than having some `unresolved-reference` subdiagnostic logic in
`infer.rs` and some in `diagnostic.rs`.
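
The laziness described above can be sketched minimally: the cheap "will we emit this?" check always runs, while the expensive subdiagnostic gathering runs only when the diagnostic is actually reported.

```python
# Minimal sketch of lazy subdiagnostic gathering: the expensive work
# only happens once we know the diagnostic will be emitted.
calls = []

def gather_subdiagnostic():
    calls.append(1)  # stands in for the expensive information gathering
    return "info: consider using `self.x`"

def report(is_enabled):
    if not is_enabled:
        return None  # the gathering never happens
    return gather_subdiagnostic()

assert report(False) is None and calls == []
assert report(True) == "info: consider using `self.x`" and len(calls) == 1
```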

## Test Plan

`cargo test -p ty_python_semantic`
2025-06-04 20:41:00 +01:00
Alex Waygood
5a8cdab771 [ty] Only consider a type T a subtype of a protocol P if all of P's members are fully bound on T (#18466)
## Summary

Fixes https://github.com/astral-sh/ty/issues/578
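
The shape of the fix, illustrated (my reconstruction): a type satisfies a protocol only when every protocol member is *fully bound* on it, so a conditionally assigned attribute no longer suffices.

```python
# Illustration: a possibly-unbound member means the class is no longer
# considered a subtype of the protocol.
from typing import Protocol

class HasX(Protocol):
    x: int

class Always:
    x: int = 0          # fully bound: a subtype of HasX

class Sometimes:
    def __init__(self, flag: bool) -> None:
        if flag:
            self.x = 0  # possibly unbound: not a subtype of HasX
                        # under the new rule

assert hasattr(Always(), "x")
assert not hasattr(Sometimes(False), "x")
```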

## Test Plan

mdtests
2025-06-04 19:39:14 +00:00
Alex Waygood
3a8191529c [ty] Exclude members starting with _abc_ from a protocol interface (#18467)
## Summary

As well as excluding a hardcoded set of special attributes, CPython at
runtime also excludes any attributes or declarations starting with
`_abc_` from the set of members that make up a protocol interface. I
missed this in my initial implementation.

This is a bit of a CPython implementation detail, but I do think it's
important that we try to model the runtime as best we can here. The
closer we are to the runtime behaviour, the closer we come to sound
behaviour when narrowing types from `isinstance()` checks against
runtime-checkable protocols (for example).
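
The runtime behaviour being modelled can be observed directly: CPython's protocol machinery skips members whose names start with `_abc_`, so they don't count toward the protocol interface.

```python
# Runtime illustration: `_abc_`-prefixed members are excluded from the
# set of members that make up a protocol's interface.
from typing import Protocol, runtime_checkable

@runtime_checkable
class P(Protocol):
    value: int
    _abc_helper: int  # excluded from the interface at runtime

class Impl:
    value = 1  # provides `value` only, no `_abc_helper`

# isinstance() only checks `value`; the `_abc_` member is ignored.
assert isinstance(Impl(), P)
```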

## Test Plan

Extended an existing mdtest
2025-06-04 20:34:09 +01:00
lipefree
e658778ced [ty] Add subdiagnostic suggestion to unresolved-reference diagnostic when variable exists on self (#18444)
## Summary

Closes https://github.com/astral-sh/ty/issues/502.

In the following example:
```py
class Foo:
    x: int

    def method(self):
        y = x
```
The user may have intended to use `y = self.x` in `method`.

This is now added as a subdiagnostic in the following form:

`info: An attribute with the same name as 'x' is defined, consider using
'self.x'`

## Test Plan

Added mdtest with snapshot diagnostics.
2025-06-04 08:13:50 -07:00
David Peter
f1883d71a4 [ty] IDE: only provide declarations and bindings as completions (#18456)
## Summary

Previously, all symbols were provided as possible completions. In an
example like the following, both `foo` and `f` were suggested as
completions, because `f` itself is a symbol.
```py
foo = 1

f<CURSOR>
```
Similarly, in the following example, `hidden_symbol` was suggested, even
though it is not statically visible:
```py
if 1 + 2 != 3:
    hidden_symbol = 1

hidden_<CURSOR>
```

With the change suggested here, we only use statically visible
declarations and bindings as a source for completions.


## Test Plan

- Updated snapshot tests
- New test for statically hidden definitions
- Added test for star import
2025-06-04 16:11:05 +02:00
138 changed files with 6247 additions and 3774 deletions

View File

@@ -1,5 +1,31 @@
# Changelog
## 0.11.13
### Preview features
- \[`airflow`\] Add unsafe fix for module moved cases (`AIR301`,`AIR311`,`AIR312`,`AIR302`) ([#18367](https://github.com/astral-sh/ruff/pull/18367),[#18366](https://github.com/astral-sh/ruff/pull/18366),[#18363](https://github.com/astral-sh/ruff/pull/18363),[#18093](https://github.com/astral-sh/ruff/pull/18093))
- \[`refurb`\] Add coverage of `set` and `frozenset` calls (`FURB171`) ([#18035](https://github.com/astral-sh/ruff/pull/18035))
- \[`refurb`\] Mark `FURB180` fix unsafe when class has bases ([#18149](https://github.com/astral-sh/ruff/pull/18149))
### Bug fixes
- \[`perflint`\] Fix missing parentheses for lambda and ternary conditions (`PERF401`, `PERF403`) ([#18412](https://github.com/astral-sh/ruff/pull/18412))
- \[`pyupgrade`\] Apply `UP035` only on py313+ for `get_type_hints()` ([#18476](https://github.com/astral-sh/ruff/pull/18476))
- \[`pyupgrade`\] Make fix unsafe if it deletes comments (`UP004`,`UP050`) ([#18393](https://github.com/astral-sh/ruff/pull/18393), [#18390](https://github.com/astral-sh/ruff/pull/18390))
### Rule changes
- \[`fastapi`\] Avoid false positive for class dependencies (`FAST003`) ([#18271](https://github.com/astral-sh/ruff/pull/18271))
### Documentation
- Update editor setup docs for Neovim and Vim ([#18324](https://github.com/astral-sh/ruff/pull/18324))
### Other changes
- Support Python 3.14 template strings (t-strings) in formatter and parser ([#17851](https://github.com/astral-sh/ruff/pull/17851))
## 0.11.12
### Preview features

Cargo.lock generated

@@ -2501,7 +2501,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.11.12"
version = "0.11.13"
dependencies = [
"anyhow",
"argfile",
@@ -2738,7 +2738,7 @@ dependencies = [
[[package]]
name = "ruff_linter"
version = "0.11.12"
version = "0.11.13"
dependencies = [
"aho-corasick",
"anyhow",
@@ -3074,7 +3074,7 @@ dependencies = [
[[package]]
name = "ruff_wasm"
version = "0.11.12"
version = "0.11.13"
dependencies = [
"console_error_panic_hook",
"console_log",
@@ -3965,6 +3965,7 @@ dependencies = [
"anyhow",
"bitflags 2.9.1",
"camino",
"colored 3.0.0",
"compact_str",
"countme",
"dir-test",
@@ -3977,6 +3978,7 @@ dependencies = [
"ordermap",
"quickcheck",
"quickcheck_macros",
"ruff_annotate_snippets",
"ruff_db",
"ruff_index",
"ruff_macros",


@@ -148,8 +148,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
# For a specific version.
curl -LsSf https://astral.sh/ruff/0.11.12/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.11.12/install.ps1 | iex"
curl -LsSf https://astral.sh/ruff/0.11.13/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.11.13/install.ps1 | iex"
```
You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
@@ -182,7 +182,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.11.12
rev: v0.11.13
hooks:
# Run the linter.
- id: ruff


@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.11.12"
version = "0.11.13"
publish = true
authors = { workspace = true }
edition = { workspace = true }


@@ -439,7 +439,10 @@ impl LintCacheData {
let messages = messages
.iter()
.filter_map(|msg| msg.to_rule().map(|rule| (rule, msg)))
// Parse the kebab-case rule name into a `Rule`. This will fail for syntax errors, so
// this also serves to filter them out, but we shouldn't be caching files with syntax
// errors anyway.
.filter_map(|msg| Some((msg.name().parse().ok()?, msg)))
.map(|(rule, msg)| {
// Make sure that all message use the same source file.
assert_eq!(


@@ -30,7 +30,7 @@ impl<'a> Explanation<'a> {
let (linter, _) = Linter::parse_code(&code).unwrap();
let fix = rule.fixable().to_string();
Self {
name: rule.as_ref(),
name: rule.name().as_str(),
code,
linter: linter.name(),
summary: rule.message_formats()[0],
@@ -44,7 +44,7 @@ impl<'a> Explanation<'a> {
fn format_rule_text(rule: Rule) -> String {
let mut output = String::new();
let _ = write!(&mut output, "# {} ({})", rule.as_ref(), rule.noqa_code());
let _ = write!(&mut output, "# {} ({})", rule.name(), rule.noqa_code());
output.push('\n');
output.push('\n');


@@ -165,9 +165,9 @@ impl AddAssign for FixMap {
continue;
}
let fixed_in_file = self.0.entry(filename).or_default();
for (rule, count) in fixed {
for (rule, name, count) in fixed.iter() {
if count > 0 {
*fixed_in_file.entry(rule).or_default() += count;
*fixed_in_file.entry(rule).or_default(name) += count;
}
}
}
@@ -305,7 +305,7 @@ pub(crate) fn lint_path(
ParseSource::None,
);
let transformed = source_kind;
let fixed = FxHashMap::default();
let fixed = FixTable::default();
(result, transformed, fixed)
}
} else {
@@ -319,7 +319,7 @@ pub(crate) fn lint_path(
ParseSource::None,
);
let transformed = source_kind;
let fixed = FxHashMap::default();
let fixed = FixTable::default();
(result, transformed, fixed)
};
@@ -473,7 +473,7 @@ pub(crate) fn lint_stdin(
}
let transformed = source_kind;
let fixed = FxHashMap::default();
let fixed = FixTable::default();
(result, transformed, fixed)
}
} else {
@@ -487,7 +487,7 @@ pub(crate) fn lint_stdin(
ParseSource::None,
);
let transformed = source_kind;
let fixed = FxHashMap::default();
let fixed = FixTable::default();
(result, transformed, fixed)
};


@@ -7,6 +7,7 @@ use bitflags::bitflags;
use colored::Colorize;
use itertools::{Itertools, iterate};
use ruff_linter::codes::NoqaCode;
use ruff_linter::linter::FixTable;
use serde::Serialize;
use ruff_linter::fs::relativize_path;
@@ -80,7 +81,7 @@ impl Printer {
let fixed = diagnostics
.fixed
.values()
.flat_map(std::collections::HashMap::values)
.flat_map(FixTable::counts)
.sum::<usize>();
if self.flags.intersects(Flags::SHOW_VIOLATIONS) {
@@ -302,7 +303,7 @@ impl Printer {
let statistics: Vec<ExpandedStatistics> = diagnostics
.messages
.iter()
.map(|message| (message.to_noqa_code(), message))
.map(|message| (message.noqa_code(), message))
.sorted_by_key(|(code, message)| (*code, message.fixable()))
.fold(
vec![],
@@ -472,13 +473,13 @@ fn show_fix_status(fix_mode: flags::FixMode, fixables: Option<&FixableStatistics
fn print_fix_summary(writer: &mut dyn Write, fixed: &FixMap) -> Result<()> {
let total = fixed
.values()
.map(|table| table.values().sum::<usize>())
.map(|table| table.counts().sum::<usize>())
.sum::<usize>();
assert!(total > 0);
let num_digits = num_digits(
*fixed
fixed
.values()
.filter_map(|table| table.values().max())
.filter_map(|table| table.counts().max())
.max()
.unwrap(),
);
@@ -498,12 +499,11 @@ fn print_fix_summary(writer: &mut dyn Write, fixed: &FixMap) -> Result<()> {
relativize_path(filename).bold(),
":".cyan()
)?;
for (rule, count) in table.iter().sorted_by_key(|(.., count)| Reverse(*count)) {
for (code, name, count) in table.iter().sorted_by_key(|(.., count)| Reverse(*count)) {
writeln!(
writer,
" {count:>num_digits$} × {} ({})",
rule.noqa_code().to_string().red().bold(),
rule.as_ref(),
" {count:>num_digits$} × {code} ({name})",
code = code.to_string().red().bold(),
)?;
}
}


@@ -566,7 +566,7 @@ fn venv() -> Result<()> {
----- stderr -----
ruff failed
Cause: Invalid search path settings
Cause: Failed to discover the site-packages directory: Invalid `--python` argument: `none` could not be canonicalized
Cause: Failed to discover the site-packages directory: Invalid `--python` argument `none`: does not point to a Python executable or a directory on disk
");
});


@@ -1,5 +1,4 @@
use std::fmt::Formatter;
use std::ops::Deref;
use std::sync::Arc;
use ruff_python_ast::ModModule;
@@ -18,7 +17,7 @@ use crate::source::source_text;
/// The query is only cached when the [`source_text()`] hasn't changed. This is because
/// comparing two ASTs is a non-trivial operation and every offset change is directly
/// reflected in the changed AST offsets.
/// The other reason is that Ruff's AST doesn't implement `Eq` which Sala requires
/// The other reason is that Ruff's AST doesn't implement `Eq` which Salsa requires
/// for determining if a query result is unchanged.
#[salsa::tracked(returns(ref), no_eq)]
pub fn parsed_module(db: &dyn Db, file: File) -> ParsedModule {
@@ -36,7 +35,10 @@ pub fn parsed_module(db: &dyn Db, file: File) -> ParsedModule {
ParsedModule::new(parsed)
}
/// Cheap cloneable wrapper around the parsed module.
/// A wrapper around a parsed module.
///
/// This type manages instances of the module AST. A particular instance of the AST
/// is represented with the [`ParsedModuleRef`] type.
#[derive(Clone)]
pub struct ParsedModule {
inner: Arc<Parsed<ModModule>>,
@@ -49,17 +51,11 @@ impl ParsedModule {
}
}
/// Consumes `self` and returns the Arc storing the parsed module.
pub fn into_arc(self) -> Arc<Parsed<ModModule>> {
self.inner
}
}
impl Deref for ParsedModule {
type Target = Parsed<ModModule>;
fn deref(&self) -> &Self::Target {
&self.inner
/// Loads a reference to the parsed module.
pub fn load(&self, _db: &dyn Db) -> ParsedModuleRef {
ParsedModuleRef {
module_ref: self.inner.clone(),
}
}
}
@@ -77,6 +73,30 @@ impl PartialEq for ParsedModule {
impl Eq for ParsedModule {}
/// Cheap cloneable wrapper around an instance of a module AST.
#[derive(Clone)]
pub struct ParsedModuleRef {
module_ref: Arc<Parsed<ModModule>>,
}
impl ParsedModuleRef {
pub fn as_arc(&self) -> &Arc<Parsed<ModModule>> {
&self.module_ref
}
pub fn into_arc(self) -> Arc<Parsed<ModModule>> {
self.module_ref
}
}
impl std::ops::Deref for ParsedModuleRef {
type Target = Parsed<ModModule>;
fn deref(&self) -> &Self::Target {
&self.module_ref
}
}
#[cfg(test)]
mod tests {
use crate::Db;
@@ -98,7 +118,7 @@ mod tests {
let file = system_path_to_file(&db, path).unwrap();
let parsed = parsed_module(&db, file);
let parsed = parsed_module(&db, file).load(&db);
assert!(parsed.has_valid_syntax());
@@ -114,7 +134,7 @@ mod tests {
let file = system_path_to_file(&db, path).unwrap();
let parsed = parsed_module(&db, file);
let parsed = parsed_module(&db, file).load(&db);
assert!(parsed.has_valid_syntax());
@@ -130,7 +150,7 @@ mod tests {
let virtual_file = db.files().virtual_file(&db, path);
let parsed = parsed_module(&db, virtual_file.file());
let parsed = parsed_module(&db, virtual_file.file()).load(&db);
assert!(parsed.has_valid_syntax());
@@ -146,7 +166,7 @@ mod tests {
let virtual_file = db.files().virtual_file(&db, path);
let parsed = parsed_module(&db, virtual_file.file());
let parsed = parsed_module(&db, virtual_file.file()).load(&db);
assert!(parsed.has_valid_syntax());
@@ -177,7 +197,7 @@ else:
let file = vendored_path_to_file(&db, VendoredPath::new("path.pyi")).unwrap();
let parsed = parsed_module(&db, file);
let parsed = parsed_module(&db, file).load(&db);
assert!(parsed.has_valid_syntax());
}


@@ -29,7 +29,7 @@ pub(crate) fn main(args: &Args) -> Result<()> {
if let Some(explanation) = rule.explanation() {
let mut output = String::new();
let _ = writeln!(&mut output, "# {} ({})", rule.as_ref(), rule.noqa_code());
let _ = writeln!(&mut output, "# {} ({})", rule.name(), rule.noqa_code());
let (linter, _) = Linter::parse_code(&rule.noqa_code().to_string()).unwrap();
if linter.url().is_some() {
@@ -101,7 +101,7 @@ pub(crate) fn main(args: &Args) -> Result<()> {
let filename = PathBuf::from(ROOT_DIR)
.join("docs")
.join("rules")
.join(rule.as_ref())
.join(&*rule.name())
.with_extension("md");
if args.dry_run {


@@ -55,7 +55,7 @@ fn generate_table(table_out: &mut String, rules: impl IntoIterator<Item = Rule>,
FixAvailability::None => format!("<span {SYMBOL_STYLE}></span>"),
};
let rule_name = rule.as_ref();
let rule_name = rule.name();
// If the message ends in a bracketed expression (like: "Use {replacement}"), escape the
// brackets. Otherwise, it'll be interpreted as an HTML attribute via the `attr_list`


@@ -10,7 +10,7 @@ use ruff_python_ast::PythonVersion;
use ty_python_semantic::lint::{LintRegistry, RuleSelection};
use ty_python_semantic::{
Db, Program, ProgramSettings, PythonPath, PythonPlatform, PythonVersionSource,
PythonVersionWithSource, SearchPathSettings, default_lint_registry,
PythonVersionWithSource, SearchPathSettings, SysPrefixPathOrigin, default_lint_registry,
};
static EMPTY_VENDORED: std::sync::LazyLock<VendoredFileSystem> = std::sync::LazyLock::new(|| {
@@ -37,7 +37,8 @@ impl ModuleDb {
) -> Result<Self> {
let mut search_paths = SearchPathSettings::new(src_roots);
if let Some(venv_path) = venv_path {
search_paths.python_path = PythonPath::from_cli_flag(venv_path);
search_paths.python_path =
PythonPath::sys_prefix(venv_path, SysPrefixPathOrigin::PythonCliFlag);
}
let db = Self::default();


@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.11.12"
version = "0.11.13"
publish = false
authors = { workspace = true }
edition = { workspace = true }


@@ -266,3 +266,15 @@ def f():
result = list() # this should be replaced with a comprehension
for i in values:
result.append(i + 1) # PERF401
def f():
src = [1]
dst = []
for i in src:
if True if True else False:
dst.append(i)
for i in src:
if lambda: 0:
dst.append(i)


@@ -151,3 +151,16 @@ def foo():
result = {}
for idx, name in indices, fruit:
result[name] = idx # PERF403
def foo():
src = (("x", 1),)
dst = {}
for k, v in src:
if True if True else False:
dst[k] = v
for k, v in src:
if lambda: 0:
dst[k] = v


@@ -110,6 +110,8 @@ from typing_extensions import CapsuleType
# UP035 on py313+ only
from typing_extensions import deprecated
# UP035 on py313+ only
from typing_extensions import get_type_hints
# https://github.com/astral-sh/ruff/issues/15780
from typing_extensions import is_typeddict


@@ -65,7 +65,7 @@ use crate::docstrings::extraction::ExtractionTarget;
use crate::importer::{ImportRequest, Importer, ResolutionError};
use crate::noqa::NoqaMapping;
use crate::package::PackageRoot;
use crate::preview::{is_semantic_errors_enabled, is_undefined_export_in_dunder_init_enabled};
use crate::preview::is_semantic_errors_enabled;
use crate::registry::{AsRule, Rule};
use crate::rules::pyflakes::rules::{
LateFutureImport, ReturnOutsideFunction, YieldOutsideFunction,
@@ -2900,17 +2900,13 @@ impl<'a> Checker<'a> {
}
} else {
if self.enabled(Rule::UndefinedExport) {
if is_undefined_export_in_dunder_init_enabled(self.settings)
|| !self.path.ends_with("__init__.py")
{
self.report_diagnostic(
pyflakes::rules::UndefinedExport {
name: name.to_string(),
},
range,
)
.set_parent(definition.start());
}
self.report_diagnostic(
pyflakes::rules::UndefinedExport {
name: name.to_string(),
},
range,
)
.set_parent(definition.start());
}
}
}


@@ -4,13 +4,13 @@
/// `--select`. For pylint this is e.g. C0414 and E0118 but also C and E01.
use std::fmt::Formatter;
use strum_macros::{AsRefStr, EnumIter};
use strum_macros::EnumIter;
use crate::registry::Linter;
use crate::rule_selector::is_single_rule_selector;
use crate::rules;
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct NoqaCode(&'static str, &'static str);
impl NoqaCode {


@@ -1,7 +1,7 @@
use std::collections::BTreeSet;
use itertools::Itertools;
use rustc_hash::{FxHashMap, FxHashSet};
use rustc_hash::FxHashSet;
use ruff_diagnostics::{IsolationLevel, SourceMap};
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
@@ -59,13 +59,13 @@ fn apply_fixes<'a>(
let mut last_pos: Option<TextSize> = None;
let mut applied: BTreeSet<&Edit> = BTreeSet::default();
let mut isolated: FxHashSet<u32> = FxHashSet::default();
let mut fixed = FxHashMap::default();
let mut fixed = FixTable::default();
let mut source_map = SourceMap::default();
for (rule, fix) in diagnostics
.filter_map(|msg| msg.to_rule().map(|rule| (rule, msg)))
.filter_map(|(rule, diagnostic)| diagnostic.fix().map(|fix| (rule, fix)))
.sorted_by(|(rule1, fix1), (rule2, fix2)| cmp_fix(*rule1, *rule2, fix1, fix2))
for (code, name, fix) in diagnostics
.filter_map(|msg| msg.noqa_code().map(|code| (code, msg.name(), msg)))
.filter_map(|(code, name, diagnostic)| diagnostic.fix().map(|fix| (code, name, fix)))
.sorted_by(|(_, name1, fix1), (_, name2, fix2)| cmp_fix(name1, name2, fix1, fix2))
{
let mut edits = fix
.edits()
@@ -110,7 +110,7 @@ fn apply_fixes<'a>(
}
applied.extend(applied_edits.drain(..));
*fixed.entry(rule).or_default() += 1;
*fixed.entry(code).or_default(name) += 1;
}
// Add the remaining content.
@@ -125,34 +125,44 @@ fn apply_fixes<'a>(
}
/// Compare two fixes.
fn cmp_fix(rule1: Rule, rule2: Rule, fix1: &Fix, fix2: &Fix) -> std::cmp::Ordering {
fn cmp_fix(name1: &str, name2: &str, fix1: &Fix, fix2: &Fix) -> std::cmp::Ordering {
    // Always apply `RedefinedWhileUnused` before `UnusedImport`, as the latter can end up fixing
    // the former. But we can't special-case only the `RedefinedWhileUnused`/`UnusedImport` pair,
    // because doing so would violate the `Ord` contract:
    // `< is transitive: a < b and b < c implies a < c. The same must hold for both == and >.`
    // See https://github.com/astral-sh/ruff/issues/12469#issuecomment-2244392085
    let redefined_while_unused = Rule::RedefinedWhileUnused.name().as_str();
    if (name1, name2) == (redefined_while_unused, redefined_while_unused) {
        std::cmp::Ordering::Equal
    } else if name1 == redefined_while_unused {
        std::cmp::Ordering::Less
    } else if name2 == redefined_while_unused {
        std::cmp::Ordering::Greater
    } else {
        std::cmp::Ordering::Equal
    }
    // Apply fixes in order of their start position.
    .then_with(|| fix1.min_start().cmp(&fix2.min_start()))
    // Break ties in the event of overlapping rules, for some specific combinations.
    .then_with(|| {
        let rules = (name1, name2);
        let missing_trailing_period = Rule::MissingTrailingPeriod.name().as_str();
        let newline_after_last_paragraph = Rule::NewLineAfterLastParagraph.name().as_str();
        let if_else_instead_of_dict_get = Rule::IfElseBlockInsteadOfDictGet.name().as_str();
        let if_else_instead_of_if_exp = Rule::IfElseBlockInsteadOfIfExp.name().as_str();
        // Apply `MissingTrailingPeriod` fixes before `NewLineAfterLastParagraph` fixes.
        if rules == (missing_trailing_period, newline_after_last_paragraph) {
            std::cmp::Ordering::Less
        } else if rules == (newline_after_last_paragraph, missing_trailing_period) {
            std::cmp::Ordering::Greater
        }
        // Apply `IfElseBlockInsteadOfDictGet` fixes before `IfElseBlockInsteadOfIfExp` fixes.
        else if rules == (if_else_instead_of_dict_get, if_else_instead_of_if_exp) {
            std::cmp::Ordering::Less
        } else if rules == (if_else_instead_of_if_exp, if_else_instead_of_dict_get) {
            std::cmp::Ordering::Greater
        } else {
            std::cmp::Ordering::Equal
        }
    })
}
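The shape of this comparator can be sketched in isolation. This is a minimal stand-in (hypothetical rule names, plain `u32` start offsets instead of `Fix`), showing why mapping every input into a total "priority" order and chaining `then_with` keeps the comparison transitive, where special-casing only one pair of rules would not:

```rust
use std::cmp::Ordering;

// A minimal sketch of the comparator above, with hypothetical rule names.
// Each `then_with` key is a total order over all inputs, so the combined
// comparator satisfies the transitivity required by `Ord`.
fn cmp_fix(name1: &str, name2: &str, start1: u32, start2: u32) -> Ordering {
    // Priority 0 for the rule that must be fixed first, 1 for everything else.
    let priority = |name: &str| u8::from(name != "redefined-while-unused");
    priority(name1)
        .cmp(&priority(name2))
        // Apply fixes in order of their start position.
        .then_with(|| start1.cmp(&start2))
}

fn main() {
    // The prioritized rule sorts first regardless of position...
    assert_eq!(
        cmp_fix("redefined-while-unused", "unused-import", 9, 0),
        Ordering::Less
    );
    // ...while equal-priority fixes fall back to start position.
    assert_eq!(
        cmp_fix("unused-import", "unused-variable", 0, 9),
        Ordering::Less
    );
}
```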
@@ -197,7 +207,7 @@ mod tests {
source_map,
} = apply_fixes(diagnostics.iter(), &locator);
assert_eq!(code, "");
assert_eq!(fixes.counts().sum::<usize>(), 0);
assert!(source_map.markers().is_empty());
}
@@ -234,7 +244,7 @@ print("hello world")
"#
.trim()
);
assert_eq!(fixes.counts().sum::<usize>(), 1);
assert_eq!(
source_map.markers(),
&[
@@ -275,7 +285,7 @@ class A(Bar):
"
.trim(),
);
assert_eq!(fixes.counts().sum::<usize>(), 1);
assert_eq!(
source_map.markers(),
&[
@@ -312,7 +322,7 @@ class A:
"
.trim()
);
assert_eq!(fixes.counts().sum::<usize>(), 1);
assert_eq!(
source_map.markers(),
&[
@@ -353,7 +363,7 @@ class A(object):
"
.trim()
);
assert_eq!(fixes.counts().sum::<usize>(), 2);
assert_eq!(
source_map.markers(),
&[
@@ -395,7 +405,7 @@ class A:
"
.trim(),
);
assert_eq!(fixes.counts().sum::<usize>(), 1);
assert_eq!(
source_map.markers(),
&[

View File

@@ -1,4 +1,5 @@
use std::borrow::Cow;
use std::collections::hash_map::Entry;
use std::path::Path;
use anyhow::{Result, anyhow};
@@ -22,6 +23,7 @@ use crate::checkers::imports::check_imports;
use crate::checkers::noqa::check_noqa;
use crate::checkers::physical_lines::check_physical_lines;
use crate::checkers::tokens::check_tokens;
use crate::codes::NoqaCode;
use crate::directives::Directives;
use crate::doc_lines::{doc_lines_from_ast, doc_lines_from_tokens};
use crate::fix::{FixResult, fix_file};
@@ -84,7 +86,53 @@ impl LinterResult {
}
}
#[derive(Debug, Default, PartialEq)]
struct FixCount {
rule_name: &'static str,
count: usize,
}
/// A mapping from a noqa code to the corresponding lint name and a count of applied fixes.
#[derive(Debug, Default, PartialEq)]
pub struct FixTable(FxHashMap<NoqaCode, FixCount>);
impl FixTable {
pub fn counts(&self) -> impl Iterator<Item = usize> {
self.0.values().map(|fc| fc.count)
}
pub fn entry(&mut self, code: NoqaCode) -> FixTableEntry {
FixTableEntry(self.0.entry(code))
}
pub fn iter(&self) -> impl Iterator<Item = (NoqaCode, &'static str, usize)> {
self.0
.iter()
.map(|(code, FixCount { rule_name, count })| (*code, *rule_name, *count))
}
pub fn keys(&self) -> impl Iterator<Item = NoqaCode> {
self.0.keys().copied()
}
pub fn is_empty(&self) -> bool {
self.0.is_empty()
}
}
pub struct FixTableEntry<'a>(Entry<'a, NoqaCode, FixCount>);
impl<'a> FixTableEntry<'a> {
pub fn or_default(self, rule_name: &'static str) -> &'a mut usize {
&mut (self
.0
.or_insert(FixCount {
rule_name,
count: 0,
})
.count)
}
}
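The new `FixTable` API can be sketched standalone. This is a simplified model (a plain `String` key instead of Ruff's `NoqaCode`, and `std::collections::HashMap` instead of `FxHashMap`) showing how `or_default` threads the rule name into the entry while handing back a mutable counter, and how `counts` feeds the `sum::<usize>()` assertions in the tests above:

```rust
use std::collections::HashMap;

// Simplified model of FixTable: keyed by noqa code, carrying the rule name
// alongside each count so emitters can report both.
#[derive(Debug, Default, PartialEq)]
struct FixCount {
    rule_name: &'static str,
    count: usize,
}

#[derive(Default)]
struct FixTable(HashMap<String, FixCount>);

impl FixTable {
    // Mirrors `FixTableEntry::or_default`: insert a zeroed entry for new
    // codes, then return a mutable reference to the counter.
    fn or_default(&mut self, code: &str, rule_name: &'static str) -> &mut usize {
        &mut self
            .0
            .entry(code.to_string())
            .or_insert(FixCount { rule_name, count: 0 })
            .count
    }

    fn counts(&self) -> impl Iterator<Item = usize> + '_ {
        self.0.values().map(|fc| fc.count)
    }
}

fn main() {
    let mut fixed = FixTable::default();
    *fixed.or_default("F401", "unused-import") += 2;
    *fixed.or_default("F401", "unused-import") += 1;
    *fixed.or_default("E501", "line-too-long") += 1;
    assert_eq!(fixed.counts().sum::<usize>(), 4);
}
```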
pub struct FixerResult<'a> {
/// The result returned by the linter, after applying any fixes.
@@ -581,7 +629,7 @@ pub fn lint_fix<'a>(
let mut transformed = Cow::Borrowed(source_kind);
// Track the number of fixed errors across iterations.
let mut fixed = FixTable::default();
// As an escape hatch, bail after 100 iterations.
let mut iterations = 0;
@@ -650,12 +698,7 @@ pub fn lint_fix<'a>(
// syntax error. Return the original code.
if has_valid_syntax && has_no_syntax_errors {
if let Some(error) = parsed.errors().first() {
report_fix_syntax_error(path, transformed.source_code(), error, fixed.keys());
return Err(anyhow!("Fix introduced a syntax error"));
}
}
@@ -670,8 +713,8 @@ pub fn lint_fix<'a>(
{
if iterations < MAX_ITERATIONS {
// Count the number of fixed errors.
for (rule, name, count) in applied.iter() {
    *fixed.entry(rule).or_default(name) += count;
}
transformed = Cow::Owned(transformed.updated(fixed_contents, &source_map));
@@ -698,10 +741,10 @@ pub fn lint_fix<'a>(
}
}
fn collect_rule_codes(rules: impl IntoIterator<Item = NoqaCode>) -> String {
rules
.into_iter()
.map(|rule| rule.to_string())
.sorted_unstable()
.dedup()
.join(", ")
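The behavior of `collect_rule_codes` can be approximated with the standard library alone (the real function uses itertools' `sorted_unstable`/`dedup`/`join` directly on the iterator; this sketch collects into a `Vec` first):

```rust
// Std-only sketch of collect_rule_codes: sort the codes, drop adjacent
// duplicates, and join them into one comma-separated string.
fn collect_rule_codes(codes: impl IntoIterator<Item = String>) -> String {
    let mut codes: Vec<String> = codes.into_iter().collect();
    codes.sort_unstable();
    codes.dedup(); // dedup only removes adjacent repeats, hence the sort first
    codes.join(", ")
}

fn main() {
    let codes = ["F841", "F401", "F841"].map(String::from);
    assert_eq!(collect_rule_codes(codes), "F401, F841");
}
```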
@@ -709,7 +752,7 @@ fn collect_rule_codes(rules: impl IntoIterator<Item = Rule>) -> String {
#[expect(clippy::print_stderr)]
fn report_failed_to_converge_error(path: &Path, transformed: &str, messages: &[Message]) {
let codes = collect_rule_codes(messages.iter().filter_map(Message::noqa_code));
if cfg!(debug_assertions) {
eprintln!(
"{}{} Failed to converge after {} iterations in `{}` with rule codes {}:---\n{}\n---",
@@ -745,7 +788,7 @@ fn report_fix_syntax_error(
path: &Path,
transformed: &str,
error: &ParseError,
rules: impl IntoIterator<Item = NoqaCode>,
) {
let codes = collect_rule_codes(rules);
if cfg!(debug_assertions) {

View File

@@ -33,7 +33,7 @@ impl Emitter for AzureEmitter {
line = location.line,
col = location.column,
code = message
.noqa_code()
.map_or_else(String::new, |code| format!("code={code};")),
body = message.body(),
)?;

View File

@@ -33,7 +33,7 @@ impl Emitter for GithubEmitter {
writer,
"::error title=Ruff{code},file={file},line={row},col={column},endLine={end_row},endColumn={end_column}::",
code = message
.noqa_code()
.map_or_else(String::new, |code| format!(" ({code})")),
file = message.filename(),
row = source_location.line,
@@ -50,7 +50,7 @@ impl Emitter for GithubEmitter {
column = location.column,
)?;
if let Some(code) = message.noqa_code() {
write!(writer, " {code}")?;
}

View File

@@ -90,7 +90,7 @@ impl Serialize for SerializedMessages<'_> {
}
fingerprints.insert(message_fingerprint);
let (description, check_name) = if let Some(code) = message.noqa_code() {
(message.body().to_string(), code.to_string())
} else {
let description = message.body();

View File

@@ -81,8 +81,8 @@ pub(crate) fn message_to_json_value(message: &Message, context: &EmitterContext)
}
json!({
"code": message.noqa_code().map(|code| code.to_string()),
"url": message.to_url(),
"message": message.body(),
"fix": fix,
"cell": notebook_cell_index,

View File

@@ -59,7 +59,7 @@ impl Emitter for JunitEmitter {
body = message.body()
));
let mut case = TestCase::new(
if let Some(code) = message.noqa_code() {
format!("org.ruff.{code}")
} else {
"org.ruff".to_string()

View File

@@ -224,30 +224,22 @@ impl Message {
self.fix().is_some()
}
/// Returns the [`NoqaCode`] corresponding to the diagnostic message.
pub fn noqa_code(&self) -> Option<NoqaCode> {
self.noqa_code
}
/// Returns the URL for the rule documentation, if it exists.
pub fn to_url(&self) -> Option<String> {
        if self.is_syntax_error() {
            None
        } else {
            Some(format!(
                "{}/rules/{}",
                env!("CARGO_PKG_HOMEPAGE"),
                self.name()
            ))
        }
}
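The inlined URL construction can be sketched standalone. This assumes Ruff's rule docs live under `https://docs.astral.sh/ruff/rules/<name>` (the real code reads the base from `CARGO_PKG_HOMEPAGE` at compile time):

```rust
// Sketch of Message::to_url: syntax errors have no rule docs page; every
// lint name maps to a page under the documentation homepage.
fn rule_url(is_syntax_error: bool, name: &str) -> Option<String> {
    if is_syntax_error {
        None
    } else {
        Some(format!("https://docs.astral.sh/ruff/rules/{name}"))
    }
}

fn main() {
    assert_eq!(
        rule_url(false, "unused-import").as_deref(),
        Some("https://docs.astral.sh/ruff/rules/unused-import")
    );
    assert_eq!(rule_url(true, "invalid-syntax"), None);
}
```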
/// Returns the filename for the message.

View File

@@ -26,7 +26,7 @@ impl Emitter for PylintEmitter {
message.compute_start_location().line
};
let body = if let Some(code) = message.noqa_code() {
format!("[{code}] {body}", body = message.body())
} else {
message.body().to_string()

View File

@@ -71,7 +71,7 @@ fn message_to_rdjson_value(message: &Message) -> Value {
"range": rdjson_range(start_location, end_location),
},
"code": {
"value": message.noqa_code().map(|code| code.to_string()),
"url": message.to_url(),
},
"suggestions": rdjson_suggestions(fix.edits(), &source_code),
@@ -84,7 +84,7 @@ fn message_to_rdjson_value(message: &Message) -> Value {
"range": rdjson_range(start_location, end_location),
},
"code": {
"value": message.noqa_code().map(|code| code.to_string()),
"url": message.to_url(),
},
})

View File

@@ -8,7 +8,7 @@ use serde_json::json;
use ruff_source_file::OneIndexed;
use crate::VERSION;
use crate::codes::NoqaCode;
use crate::fs::normalize_path;
use crate::message::{Emitter, EmitterContext, Message};
use crate::registry::{Linter, RuleNamespace};
@@ -27,7 +27,7 @@ impl Emitter for SarifEmitter {
.map(SarifResult::from_message)
.collect::<Result<Vec<_>>>()?;
let unique_rules: HashSet<_> = results.iter().filter_map(|result| result.code).collect();
let mut rules: Vec<SarifRule> = unique_rules.into_iter().map(SarifRule::from).collect();
rules.sort_by(|a, b| a.code.cmp(&b.code));
@@ -61,13 +61,19 @@ struct SarifRule<'a> {
url: Option<String>,
}
impl From<NoqaCode> for SarifRule<'_> {
    fn from(code: NoqaCode) -> Self {
        let code_str = code.to_string();
        // This is a manual re-implementation of Rule::from_code, but we also want the Linter. This
        // avoids calling Linter::parse_code twice.
        let (linter, suffix) = Linter::parse_code(&code_str).unwrap();
let rule = linter
.all_rules()
.find(|rule| rule.noqa_code().suffix() == suffix)
.expect("Expected a valid noqa code corresponding to a rule");
Self {
name: rule.into(),
code: code_str,
linter: linter.name(),
summary: rule.message_formats()[0],
explanation: rule.explanation(),
@@ -106,7 +112,7 @@ impl Serialize for SarifRule<'_> {
#[derive(Debug)]
struct SarifResult {
code: Option<NoqaCode>,
level: String,
message: String,
uri: String,
@@ -123,7 +129,7 @@ impl SarifResult {
let end_location = message.compute_end_location();
let path = normalize_path(&*message.filename());
Ok(Self {
code: message.noqa_code(),
level: "error".to_string(),
message: message.body().to_string(),
uri: url::Url::from_file_path(&path)
@@ -143,7 +149,7 @@ impl SarifResult {
let end_location = message.compute_end_location();
let path = normalize_path(&*message.filename());
Ok(Self {
code: message.noqa_code(),
level: "error".to_string(),
message: message.body().to_string(),
uri: path.display().to_string(),
@@ -178,7 +184,7 @@ impl Serialize for SarifResult {
}
}
}],
"ruleId": self.code.map(|code| code.to_string()),
})
.serialize(serializer)
}

View File

@@ -151,7 +151,7 @@ impl Display for RuleCodeAndBody<'_> {
if let Some(fix) = self.message.fix() {
// Do not display an indicator for inapplicable fixes
if fix.applies(self.unsafe_fixes.required_applicability()) {
if let Some(code) = self.message.noqa_code() {
write!(f, "{} ", code.to_string().red().bold())?;
}
return write!(
@@ -164,7 +164,7 @@ impl Display for RuleCodeAndBody<'_> {
}
}
if let Some(code) = self.message.noqa_code() {
write!(
f,
"{code} {body}",
@@ -254,7 +254,7 @@ impl Display for MessageCodeFrame<'_> {
let label = self
.message
.noqa_code()
.map_or_else(String::new, |code| code.to_string());
let line_start = self.notebook_index.map_or_else(

View File

@@ -12,13 +12,14 @@ use log::warn;
use ruff_python_trivia::{CommentRanges, Cursor, indentation_at_offset};
use ruff_source_file::{LineEnding, LineRanges};
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
use rustc_hash::FxHashSet;
use crate::Edit;
use crate::Locator;
use crate::codes::NoqaCode;
use crate::fs::relativize_path;
use crate::message::Message;
use crate::registry::Rule;
use crate::rule_redirects::get_redirect_target;
/// Generates an array of edits that matches the length of `messages`.
@@ -780,7 +781,7 @@ fn build_noqa_edits_by_diagnostic(
if let Some(noqa_edit) = generate_noqa_edit(
comment.directive,
comment.line,
FxHashSet::from_iter([comment.code]),
locator,
line_ending,
) {
@@ -816,7 +817,7 @@ fn build_noqa_edits_by_line<'a>(
offset,
matches
.into_iter()
.map(|NoqaComment { code, .. }| code)
.collect(),
locator,
line_ending,
@@ -829,7 +830,7 @@ fn build_noqa_edits_by_line<'a>(
struct NoqaComment<'a> {
line: TextSize,
code: NoqaCode,
directive: Option<&'a Directive<'a>>,
}
@@ -845,13 +846,11 @@ fn find_noqa_comments<'a>(
// Mark any non-ignored diagnostics.
for message in messages {
let Some(code) = message.noqa_code() else {
    comments_by_line.push(None);
    continue;
};
match &exemption {
FileExemption::All(_) => {
// If the file is exempted, don't add any noqa directives.
@@ -900,7 +899,7 @@ fn find_noqa_comments<'a>(
if !codes.includes(code) {
comments_by_line.push(Some(NoqaComment {
line: directive_line.start(),
code,
directive: Some(directive),
}));
}
@@ -912,7 +911,7 @@ fn find_noqa_comments<'a>(
// There's no existing noqa directive that suppresses the diagnostic.
comments_by_line.push(Some(NoqaComment {
line: locator.line_start(noqa_offset),
code,
directive: None,
}));
}
@@ -922,7 +921,7 @@ fn find_noqa_comments<'a>(
struct NoqaEdit<'a> {
edit_range: TextRange,
noqa_codes: FxHashSet<NoqaCode>,
codes: Option<&'a Codes<'a>>,
line_ending: LineEnding,
}
@@ -941,18 +940,15 @@ impl NoqaEdit<'_> {
Some(codes) => {
push_codes(
writer,
self.noqa_codes
    .iter()
    .map(ToString::to_string)
.chain(codes.iter().map(ToString::to_string))
.sorted_unstable(),
);
}
None => {
push_codes(writer, self.noqa_codes.iter().map(ToString::to_string));
}
}
write!(writer, "{}", self.line_ending.as_str()).unwrap();
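The merge-and-sort behavior of `NoqaEdit::write` can be sketched with a minimal helper (hypothetical name, plain `&str` codes instead of `NoqaCode`/`Codes`): new codes to suppress are chained with codes already on the `# noqa` line, then sorted so the rewritten directive is deterministic:

```rust
// Sketch of how NoqaEdit combines freshly-suppressed codes with the codes
// already present on an existing `# noqa` directive.
fn merge_noqa_codes(new: &[&str], existing: &[&str]) -> String {
    let mut codes: Vec<&str> = new.iter().chain(existing).copied().collect();
    codes.sort_unstable(); // deterministic ordering for the rewritten comment
    codes.join(", ")
}

fn main() {
    assert_eq!(merge_noqa_codes(&["F401"], &["E501"]), "E501, F401");
}
```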
@@ -968,7 +964,7 @@ impl Ranged for NoqaEdit<'_> {
fn generate_noqa_edit<'a>(
directive: Option<&'a Directive>,
offset: TextSize,
noqa_codes: FxHashSet<NoqaCode>,
locator: &Locator,
line_ending: LineEnding,
) -> Option<NoqaEdit<'a>> {
@@ -997,7 +993,7 @@ fn generate_noqa_edit<'a>(
Some(NoqaEdit {
edit_range,
noqa_codes,
codes,
line_ending,
})

View File

@@ -111,11 +111,6 @@ pub(crate) const fn is_support_slices_in_literal_concatenation_enabled(
settings.preview.is_enabled()
}
// https://github.com/astral-sh/ruff/pull/11370
pub(crate) const fn is_undefined_export_in_dunder_init_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()
}
// https://github.com/astral-sh/ruff/pull/14236
pub(crate) const fn is_allow_nested_roots_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()

View File

@@ -1,6 +1,7 @@
//! Remnant of the registry of all [`Rule`] implementations; it now re-exports from codes.rs
//! along with some helper symbols.
use ruff_db::diagnostic::LintName;
use strum_macros::EnumIter;
pub use codes::Rule;
@@ -348,9 +349,18 @@ impl Rule {
/// Return the URL for the rule documentation, if it exists.
pub fn url(&self) -> Option<String> {
        self.explanation().is_some().then(|| {
            format!(
                "{}/rules/{name}",
                env!("CARGO_PKG_HOMEPAGE"),
                name = self.name()
            )
        })
}
pub fn name(&self) -> LintName {
let name: &'static str = self.into();
LintName::of(name)
}
}
@@ -421,7 +431,7 @@ pub mod clap_completion {
fn possible_values(&self) -> Option<Box<dyn Iterator<Item = PossibleValue> + '_>> {
Some(Box::new(Rule::iter().map(|rule| {
let name = rule.noqa_code().to_string();
let help = rule.name().as_str();
PossibleValue::new(name).help(help)
})))
}
@@ -443,7 +453,7 @@ mod tests {
assert!(
rule.explanation().is_some(),
"Rule {} is missing documentation",
rule.name()
);
}
}
@@ -460,10 +470,10 @@ mod tests {
.collect();
for rule in Rule::iter() {
let rule_name = rule.name();
for pattern in &patterns {
assert!(
!pattern.matches(&rule_name),
"{rule_name} does not match naming convention, see CONTRIBUTING.md"
);
}

View File

@@ -302,9 +302,8 @@ impl Display for RuleSet {
} else {
writeln!(f, "[")?;
for rule in self {
let code = rule.noqa_code();
writeln!(f, "\t{name} ({code}),", name = rule.name())?;
}
write!(f, "]")?;
}

View File

@@ -485,8 +485,7 @@ pub mod clap_completion {
prefix.linter().common_prefix(),
prefix.short_code()
);
return Some(PossibleValue::new(code).help(rule.name().as_str()));
}
None

View File

@@ -3,7 +3,6 @@ pub(crate) mod rules;
#[cfg(test)]
mod tests {
use std::convert::AsRef;
use std::path::Path;
use anyhow::Result;
@@ -18,7 +17,7 @@ mod tests {
#[test_case(Rule::FastApiNonAnnotatedDependency, Path::new("FAST002_1.py"))]
#[test_case(Rule::FastApiUnusedPathParameter, Path::new("FAST003.py"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("fastapi").join(path).as_path(),
&settings::LinterSettings::for_rule(rule_code),
@@ -32,7 +31,7 @@ mod tests {
#[test_case(Rule::FastApiNonAnnotatedDependency, Path::new("FAST002_0.py"))]
#[test_case(Rule::FastApiNonAnnotatedDependency, Path::new("FAST002_1.py"))]
fn rules_py38(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}_py38", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("fastapi").join(path).as_path(),
&settings::LinterSettings {

View File

@@ -16,7 +16,7 @@ mod tests {
#[test_case(Rule::LineContainsTodo; "T003")]
#[test_case(Rule::LineContainsXxx; "T004")]
fn rules(rule_code: Rule) -> Result<()> {
let snapshot = format!("{}_T00.py", rule_code.name());
let diagnostics = test_path(
Path::new("flake8_fixme/T00.py"),
&settings::LinterSettings::for_rule(rule_code),

View File

@@ -29,7 +29,7 @@ mod tests {
#[test_case(Rule::FormatInGetTextFuncCall, Path::new("INT002.py"))]
#[test_case(Rule::PrintfInGetTextFuncCall, Path::new("INT003.py"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("flake8_gettext").join(path).as_path(),
&settings::LinterSettings::for_rule(rule_code),

View File

@@ -3,7 +3,6 @@ pub(crate) mod rules;
#[cfg(test)]
mod tests {
use std::convert::AsRef;
use std::path::Path;
use anyhow::Result;
@@ -15,7 +14,7 @@ mod tests {
#[test_case(Rule::UnnecessaryParenOnRaiseException, Path::new("RSE102.py"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("flake8_raise").join(path).as_path(),
&settings::LinterSettings::for_rule(rule_code),

View File

@@ -4,7 +4,6 @@ pub mod settings;
#[cfg(test)]
mod tests {
use std::convert::AsRef;
use std::path::Path;
use crate::registry::Rule;
@@ -18,7 +17,7 @@ mod tests {
#[test_case(Rule::PrivateMemberAccess, Path::new("SLF001.py"))]
#[test_case(Rule::PrivateMemberAccess, Path::new("SLF001_1.py"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("flake8_self").join(path).as_path(),
&settings::LinterSettings::for_rule(rule_code),

View File

@@ -2,7 +2,6 @@ pub(crate) mod rules;
#[cfg(test)]
mod tests {
use std::convert::AsRef;
use std::path::Path;
use anyhow::Result;
@@ -20,7 +19,7 @@ mod tests {
#[test_case(Rule::InvalidTodoCapitalization, Path::new("TD006.py"))]
#[test_case(Rule::MissingSpaceAfterTodoColon, Path::new("TD007.py"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("flake8_todos").join(path).as_path(),
&settings::LinterSettings::for_rule(rule_code),

View File

@@ -6,7 +6,6 @@ pub mod settings;
#[cfg(test)]
mod tests {
use std::convert::AsRef;
use std::path::Path;
use anyhow::Result;
@@ -55,7 +54,7 @@ mod tests {
#[test_case(Rule::TypingOnlyThirdPartyImport, Path::new("typing_modules_1.py"))]
#[test_case(Rule::TypingOnlyThirdPartyImport, Path::new("typing_modules_2.py"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("flake8_type_checking").join(path).as_path(),
&settings::LinterSettings::for_rule(rule_code),
@@ -70,7 +69,7 @@ mod tests {
#[test_case(Rule::QuotedTypeAlias, Path::new("TC008.py"))]
#[test_case(Rule::QuotedTypeAlias, Path::new("TC008_typing_execution_context.py"))]
fn type_alias_rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("flake8_type_checking").join(path).as_path(),
&settings::LinterSettings::for_rules(vec![
@@ -84,11 +83,7 @@ mod tests {
#[test_case(Rule::QuotedTypeAlias, Path::new("TC008_union_syntax_pre_py310.py"))]
fn type_alias_rules_pre_py310(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("pre_py310_{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("flake8_type_checking").join(path).as_path(),
&settings::LinterSettings {
@@ -107,7 +102,7 @@ mod tests {
#[test_case(Rule::RuntimeImportInTypeCheckingBlock, Path::new("quote3.py"))]
#[test_case(Rule::TypingOnlyThirdPartyImport, Path::new("quote3.py"))]
fn quote(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("quote_{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("flake8_type_checking").join(path).as_path(),
&settings::LinterSettings {
@@ -126,7 +121,7 @@ mod tests {
#[test_case(Rule::TypingOnlyStandardLibraryImport, Path::new("init_var.py"))]
#[test_case(Rule::TypingOnlyStandardLibraryImport, Path::new("kw_only.py"))]
fn strict(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("strict_{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("flake8_type_checking").join(path).as_path(),
&settings::LinterSettings {
@@ -170,7 +165,7 @@ mod tests {
Path::new("exempt_type_checking_3.py")
)]
fn exempt_type_checking(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("flake8_type_checking").join(path).as_path(),
&settings::LinterSettings {
@@ -207,7 +202,7 @@ mod tests {
Path::new("runtime_evaluated_base_classes_5.py")
)]
fn runtime_evaluated_base_classes(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("flake8_type_checking").join(path).as_path(),
&settings::LinterSettings {
@@ -238,7 +233,7 @@ mod tests {
Path::new("runtime_evaluated_decorators_3.py")
)]
fn runtime_evaluated_decorators(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("flake8_type_checking").join(path).as_path(),
&settings::LinterSettings {
@@ -264,7 +259,7 @@ mod tests {
Path::new("module/undefined.py")
)]
fn base_class_same_file(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("flake8_type_checking").join(path).as_path(),
&settings::LinterSettings {
@@ -282,7 +277,7 @@ mod tests {
#[test_case(Rule::RuntimeImportInTypeCheckingBlock, Path::new("module/app.py"))]
#[test_case(Rule::TypingOnlyStandardLibraryImport, Path::new("module/routes.py"))]
fn decorator_same_file(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("flake8_type_checking").join(path).as_path(),
&settings::LinterSettings {

View File

@@ -4,7 +4,6 @@ pub(crate) mod rules;
#[cfg(test)]
mod tests {
use std::convert::AsRef;
use std::path::Path;
use anyhow::Result;
@@ -22,7 +21,7 @@ mod tests {
#[test_case(Rule::Numpy2Deprecation, Path::new("NPY201_2.py"))]
#[test_case(Rule::Numpy2Deprecation, Path::new("NPY201_3.py"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("numpy").join(path).as_path(),
&settings::LinterSettings::for_rule(rule_code),

View File

@@ -346,10 +346,11 @@ fn convert_to_dict_comprehension(
// since if the assignment expression appears
// internally (e.g. as an operand in a boolean
// operation) then it will already be parenthesized.
match test {
    Expr::Named(_) | Expr::If(_) | Expr::Lambda(_) => {
        format!(" if ({})", locator.slice(test.range()))
    }
    _ => format!(" if {}", locator.slice(test.range())),
}
}
None => String::new(),
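The parenthesization rule above can be sketched with a tiny stand-in for Ruff's `Expr` (unit variants here instead of the real payload-carrying AST nodes): named expressions, conditional expressions, and lambdas need wrapping parentheses when they become a comprehension's `if` filter, since e.g. `[i for i in src if lambda: 0]` would not parse:

```rust
// Minimal stand-in for the expression kinds that require parentheses
// when used as a comprehension filter.
enum Expr {
    Named,
    If,
    Lambda,
    Other,
}

fn filter_clause(test: &Expr, source: &str) -> String {
    match test {
        // These three forms are ambiguous (or invalid) unparenthesized
        // after `if` inside a comprehension, so wrap them.
        Expr::Named | Expr::If | Expr::Lambda => format!(" if ({source})"),
        _ => format!(" if {source}"),
    }
}

fn main() {
    assert_eq!(filter_clause(&Expr::Lambda, "lambda: 0"), " if (lambda: 0)");
    assert_eq!(filter_clause(&Expr::Other, "x > 0"), " if x > 0");
}
```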

View File

@@ -358,7 +358,7 @@ fn convert_to_list_extend(
fix_type: ComprehensionType,
binding: &Binding,
for_stmt: &ast::StmtFor,
if_test: Option<&Expr>,
to_append: &Expr,
checker: &Checker,
) -> Result<Fix> {
@@ -374,10 +374,11 @@ fn convert_to_list_extend(
// since if the assignment expression appears
// internally (e.g. as an operand in a boolean
// operation) then it will already be parenthesized.
match test {
    Expr::Named(_) | Expr::If(_) | Expr::Lambda(_) => {
        format!(" if ({})", locator.slice(test.range()))
    }
    _ => format!(" if {}", locator.slice(test.range())),
}
}
None => String::new(),

View File

@@ -219,5 +219,27 @@ PERF401.py:268:9: PERF401 Use a list comprehension to create a transformed list
267 | for i in values:
268 | result.append(i + 1) # PERF401
| ^^^^^^^^^^^^^^^^^^^^ PERF401
269 |
270 | def f():
|
= help: Replace for loop with list comprehension
PERF401.py:276:13: PERF401 Use a list comprehension to create a transformed list
|
274 | for i in src:
275 | if True if True else False:
276 | dst.append(i)
| ^^^^^^^^^^^^^ PERF401
277 |
278 | for i in src:
|
= help: Replace for loop with list comprehension
PERF401.py:280:13: PERF401 Use `list.extend` to create a transformed list
|
278 | for i in src:
279 | if lambda: 0:
280 | dst.append(i)
| ^^^^^^^^^^^^^ PERF401
|
= help: Replace for loop with list.extend

View File

@@ -128,3 +128,23 @@ PERF403.py:153:9: PERF403 Use a dictionary comprehension instead of a for-loop
| ^^^^^^^^^^^^^^^^^^ PERF403
|
= help: Replace for loop with dict comprehension
PERF403.py:162:13: PERF403 Use a dictionary comprehension instead of a for-loop
|
160 | for k, v in src:
161 | if True if True else False:
162 | dst[k] = v
| ^^^^^^^^^^ PERF403
163 |
164 | for k, v in src:
|
= help: Replace for loop with dict comprehension
PERF403.py:166:13: PERF403 Use a dictionary comprehension instead of a for-loop
|
164 | for k, v in src:
165 | if lambda: 0:
166 | dst[k] = v
| ^^^^^^^^^^ PERF403
|
= help: Replace for loop with dict comprehension

View File

@@ -517,6 +517,8 @@ PERF401.py:268:9: PERF401 [*] Use a list comprehension to create a transformed l
267 | for i in values:
268 | result.append(i + 1) # PERF401
| ^^^^^^^^^^^^^^^^^^^^ PERF401
269 |
270 | def f():
|
= help: Replace for loop with list comprehension
@@ -529,3 +531,49 @@ PERF401.py:268:9: PERF401 [*] Use a list comprehension to create a transformed l
268 |- result.append(i + 1) # PERF401
266 |+ # this should be replaced with a comprehension
267 |+ result = [i + 1 for i in values] # PERF401
269 268 |
270 269 | def f():
271 270 | src = [1]
PERF401.py:276:13: PERF401 [*] Use a list comprehension to create a transformed list
|
274 | for i in src:
275 | if True if True else False:
276 | dst.append(i)
| ^^^^^^^^^^^^^ PERF401
277 |
278 | for i in src:
|
= help: Replace for loop with list comprehension
Unsafe fix
269 269 |
270 270 | def f():
271 271 | src = [1]
272 |- dst = []
273 272 |
274 |- for i in src:
275 |- if True if True else False:
276 |- dst.append(i)
273 |+ dst = [i for i in src if (True if True else False)]
277 274 |
278 275 | for i in src:
279 276 | if lambda: 0:
PERF401.py:280:13: PERF401 [*] Use `list.extend` to create a transformed list
|
278 | for i in src:
279 | if lambda: 0:
280 | dst.append(i)
| ^^^^^^^^^^^^^ PERF401
|
= help: Replace for loop with list.extend
Unsafe fix
275 275 | if True if True else False:
276 276 | dst.append(i)
277 277 |
278 |- for i in src:
279 |- if lambda: 0:
280 |- dst.append(i)
278 |+ dst.extend(i for i in src if (lambda: 0))

View File

@@ -305,3 +305,49 @@ PERF403.py:153:9: PERF403 [*] Use a dictionary comprehension instead of a for-lo
152 |- for idx, name in indices, fruit:
153 |- result[name] = idx # PERF403
151 |+ result = {name: idx for idx, name in (indices, fruit)} # PERF403
154 152 |
155 153 |
156 154 | def foo():
PERF403.py:162:13: PERF403 [*] Use a dictionary comprehension instead of a for-loop
|
160 | for k, v in src:
161 | if True if True else False:
162 | dst[k] = v
| ^^^^^^^^^^ PERF403
163 |
164 | for k, v in src:
|
= help: Replace for loop with dict comprehension
Unsafe fix
155 155 |
156 156 | def foo():
157 157 | src = (("x", 1),)
158 |- dst = {}
159 158 |
160 |- for k, v in src:
161 |- if True if True else False:
162 |- dst[k] = v
159 |+ dst = {k: v for k, v in src if (True if True else False)}
163 160 |
164 161 | for k, v in src:
165 162 | if lambda: 0:
PERF403.py:166:13: PERF403 [*] Use `dict.update` instead of a for-loop
|
164 | for k, v in src:
165 | if lambda: 0:
166 | dst[k] = v
| ^^^^^^^^^^ PERF403
|
= help: Replace for loop with `dict.update`
Unsafe fix
161 161 | if True if True else False:
162 162 | dst[k] = v
163 163 |
164 |- for k, v in src:
165 |- if lambda: 0:
166 |- dst[k] = v
164 |+ dst.update({k: v for k, v in src if (lambda: 0)})
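PERF403 applies the same rewrite to dict-building loops: a dict comprehension when the target starts out empty, and `dict.update` otherwise. A small sketch of both shapes (fixture-style names; the always-truthy `lambda: 0` guard from the test is dropped):

```python
src = (("x", 1), ("y", 2))

# Before: filling a dict in a for-loop (flagged by PERF403)
dst = {}
for k, v in src:
    dst[k] = v

# After: a dict comprehension when `dst` is freshly created...
dst2 = {k: v for k, v in src}

# ...or dict.update when `dst` may already hold entries
dst3 = {"z": 0}
dst3.update({k: v for k, v in src})

assert dst == dst2 == {"x": 1, "y": 2}
assert dst3 == {"z": 0, "x": 1, "y": 2}
```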


@@ -4,7 +4,6 @@ pub mod settings;
#[cfg(test)]
mod tests {
use std::convert::AsRef;
use std::path::Path;
use anyhow::Result;
@@ -20,7 +19,7 @@ mod tests {
#[test_case(Rule::DocstringMissingException, Path::new("DOC501.py"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
let snapshot = format!("{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("pydoclint").join(path).as_path(),
&settings::LinterSettings::for_rule(rule_code),
@@ -36,7 +35,7 @@ mod tests {
#[test_case(Rule::DocstringMissingException, Path::new("DOC501_google.py"))]
#[test_case(Rule::DocstringExtraneousException, Path::new("DOC502_google.py"))]
fn rules_google_style(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
let snapshot = format!("{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("pydoclint").join(path).as_path(),
&settings::LinterSettings {
@@ -58,7 +57,7 @@ mod tests {
#[test_case(Rule::DocstringMissingException, Path::new("DOC501_numpy.py"))]
#[test_case(Rule::DocstringExtraneousException, Path::new("DOC502_numpy.py"))]
fn rules_numpy_style(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
let snapshot = format!("{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("pydoclint").join(path).as_path(),
&settings::LinterSettings {
@@ -79,7 +78,7 @@ mod tests {
fn rules_google_style_ignore_one_line(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!(
"{}_{}_ignore_one_line",
rule_code.as_ref(),
rule_code.name(),
path.to_string_lossy()
);
let diagnostics = test_path(


@@ -233,7 +233,6 @@ mod tests {
#[test_case(Rule::UnusedImport, Path::new("F401_27__all_mistyped/__init__.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_28__all_multiple/__init__.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_29__all_conditional/__init__.py"))]
#[test_case(Rule::UndefinedExport, Path::new("__init__.py"))]
fn preview_rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!(
"preview__{}_{}",
@@ -776,8 +775,10 @@ mod tests {
messages.sort_by_key(Ranged::start);
let actual = messages
.iter()
.filter_map(Message::to_rule)
.filter(|msg| !msg.is_syntax_error())
.map(Message::name)
.collect::<Vec<_>>();
let expected: Vec<_> = expected.iter().map(|rule| rule.name().as_str()).collect();
assert_eq!(actual, expected);
}


@@ -14,7 +14,7 @@ use crate::Violation;
/// Including an undefined name in `__all__` is likely to raise `NameError` at
/// runtime, when the module is imported.
///
/// In [preview], this rule will flag undefined names in `__init__.py` file,
/// This rule will flag undefined names in `__init__.py` files,
/// even if those names implicitly refer to other modules in the package. Users
/// that rely on implicit exports should disable this rule in `__init__.py`
/// files via [`lint.per-file-ignores`].
@@ -37,8 +37,6 @@ use crate::Violation;
///
/// ## References
/// - [Python documentation: `__all__`](https://docs.python.org/3/tutorial/modules.html#importing-from-a-package)
///
/// [preview]: https://docs.astral.sh/ruff/preview/
#[derive(ViolationMetadata)]
pub(crate) struct UndefinedExport {
pub name: String,


@@ -9,3 +9,27 @@ __init__.py:1:8: F401 `os` imported but unused
3 | print(__path__)
|
= help: Remove unused import: `os`
__init__.py:5:12: F822 Undefined name `a` in `__all__`
|
3 | print(__path__)
4 |
5 | __all__ = ["a", "b", "c"]
| ^^^ F822
|
__init__.py:5:17: F822 Undefined name `b` in `__all__`
|
3 | print(__path__)
4 |
5 | __all__ = ["a", "b", "c"]
| ^^^ F822
|
__init__.py:5:22: F822 Undefined name `c` in `__all__`
|
3 | print(__path__)
4 |
5 | __all__ = ["a", "b", "c"]
| ^^^ F822
|
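The new snapshot shows F822 now firing in `__init__.py` files: every name listed in `__all__` must actually be defined in the module, because `from pkg import *` consults `__all__` and fails for missing names. A rough illustration of the invariant the rule enforces, using a synthetic module object:

```python
import types

# A stand-in for an __init__.py that declares names it never defines
mod = types.ModuleType("pkg")
mod.__all__ = ["a", "b", "c"]  # none of these attributes exist on the module

# F822 flags exactly the names in __all__ with no matching definition
missing = [name for name in mod.__all__ if not hasattr(mod, name)]
assert missing == ["a", "b", "c"]
```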


@@ -1,26 +0,0 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
__init__.py:5:12: F822 Undefined name `a` in `__all__`
|
3 | print(__path__)
4 |
5 | __all__ = ["a", "b", "c"]
| ^^^ F822
|
__init__.py:5:17: F822 Undefined name `b` in `__all__`
|
3 | print(__path__)
4 |
5 | __all__ = ["a", "b", "c"]
| ^^^ F822
|
__init__.py:5:22: F822 Undefined name `c` in `__all__`
|
3 | print(__path__)
4 |
5 | __all__ = ["a", "b", "c"]
| ^^^ F822
|


@@ -271,7 +271,7 @@ const TYPING_TO_RE_39: &[&str] = &["Match", "Pattern"];
const TYPING_RE_TO_RE_39: &[&str] = &["Match", "Pattern"];
// Members of `typing_extensions` that were moved to `typing`.
const TYPING_EXTENSIONS_TO_TYPING_39: &[&str] = &["Annotated", "get_type_hints"];
const TYPING_EXTENSIONS_TO_TYPING_39: &[&str] = &["Annotated"];
// Members of `typing` that were moved _and_ renamed (and thus cannot be
// automatically fixed).
@@ -373,6 +373,9 @@ const TYPING_EXTENSIONS_TO_TYPING_313: &[&str] = &[
"NoDefault",
"ReadOnly",
"TypeIs",
// Introduced in Python 3.5,
// but typing_extensions backports features from py313:
"get_type_hints",
// Introduced in Python 3.6,
// but typing_extensions backports features from py313:
"ContextManager",


@@ -1179,6 +1179,8 @@ UP035.py:111:1: UP035 [*] Import from `warnings` instead: `deprecated`
110 | # UP035 on py313+ only
111 | from typing_extensions import deprecated
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ UP035
112 |
113 | # UP035 on py313+ only
|
= help: Import from `warnings`
@@ -1189,5 +1191,25 @@ UP035.py:111:1: UP035 [*] Import from `warnings` instead: `deprecated`
111 |-from typing_extensions import deprecated
111 |+from warnings import deprecated
112 112 |
113 113 |
114 114 | # https://github.com/astral-sh/ruff/issues/15780
113 113 | # UP035 on py313+ only
114 114 | from typing_extensions import get_type_hints
UP035.py:114:1: UP035 [*] Import from `typing` instead: `get_type_hints`
|
113 | # UP035 on py313+ only
114 | from typing_extensions import get_type_hints
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ UP035
115 |
116 | # https://github.com/astral-sh/ruff/issues/15780
|
= help: Import from `typing`
Safe fix
111 111 | from typing_extensions import deprecated
112 112 |
113 113 | # UP035 on py313+ only
114 |-from typing_extensions import get_type_hints
114 |+from typing import get_type_hints
115 115 |
116 116 | # https://github.com/astral-sh/ruff/issues/15780
117 117 | from typing_extensions import is_typeddict
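The safe fix rewrites the import to target `typing` directly: as the constant tables above note, `get_type_hints` has been in the standard library since Python 3.5, and `typing_extensions` only re-exports it to backport Python 3.13 behavior, so on py313+ the indirection is redundant. Code using the post-fix import behaves identically:

```python
# Post-fix import: from typing rather than typing_extensions
from typing import get_type_hints


def greet(name: str) -> str:
    return f"hello {name}"


# Resolves the function's annotations to actual type objects
assert get_type_hints(greet) == {"name": str, "return": str}
```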


@@ -7,6 +7,7 @@ use ruff_python_semantic::analyze::function_type::is_stub;
use crate::Violation;
use crate::checkers::ast::Checker;
use crate::rules::fastapi::rules::is_fastapi_route;
/// ## What it does


@@ -4,7 +4,6 @@ pub(crate) mod rules;
#[cfg(test)]
mod tests {
use std::convert::AsRef;
use std::path::Path;
use anyhow::Result;
@@ -25,7 +24,7 @@ mod tests {
#[test_case(Rule::ErrorInsteadOfException, Path::new("TRY400.py"))]
#[test_case(Rule::VerboseLogMessage, Path::new("TRY401.py"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.as_ref(), path.to_string_lossy());
let snapshot = format!("{}_{}", rule_code.name(), path.to_string_lossy());
let diagnostics = test_path(
Path::new("tryceratops").join(path).as_path(),
&settings::LinterSettings::for_rule(rule_code),


@@ -20,6 +20,7 @@ use ruff_python_parser::{ParseError, ParseOptions};
use ruff_python_trivia::textwrap::dedent;
use ruff_source_file::SourceFileBuilder;
use crate::codes::Rule;
use crate::fix::{FixResult, fix_file};
use crate::linter::check_path;
use crate::message::{Emitter, EmitterContext, Message, TextEmitter};
@@ -233,8 +234,9 @@ Source with applied fixes:
let messages = messages
.into_iter()
.filter_map(|msg| Some((msg.to_rule()?, msg)))
.map(|(rule, mut diagnostic)| {
.filter_map(|msg| Some((msg.noqa_code()?, msg)))
.map(|(code, mut diagnostic)| {
let rule = Rule::from_code(&code.to_string()).unwrap();
let fixable = diagnostic.fix().is_some_and(|fix| {
matches!(
fix.applicability(),


@@ -174,7 +174,7 @@ pub(crate) fn map_codes(func: &ItemFn) -> syn::Result<TokenStream> {
output.extend(quote! {
impl #linter {
pub fn rules(&self) -> ::std::vec::IntoIter<Rule> {
pub(crate) fn rules(&self) -> ::std::vec::IntoIter<Rule> {
match self { #prefix_into_iter_match_arms }
}
}
@@ -182,7 +182,7 @@ pub(crate) fn map_codes(func: &ItemFn) -> syn::Result<TokenStream> {
}
output.extend(quote! {
impl RuleCodePrefix {
pub fn parse(linter: &Linter, code: &str) -> Result<Self, crate::registry::FromCodeError> {
pub(crate) fn parse(linter: &Linter, code: &str) -> Result<Self, crate::registry::FromCodeError> {
use std::str::FromStr;
Ok(match linter {
@@ -190,7 +190,7 @@ pub(crate) fn map_codes(func: &ItemFn) -> syn::Result<TokenStream> {
})
}
pub fn rules(&self) -> ::std::vec::IntoIter<Rule> {
pub(crate) fn rules(&self) -> ::std::vec::IntoIter<Rule> {
match self {
#(RuleCodePrefix::#linter_idents(prefix) => prefix.clone().rules(),)*
}
@@ -319,7 +319,7 @@ See also https://github.com/astral-sh/ruff/issues/2186.
matches!(self.group(), RuleGroup::Preview)
}
pub fn is_stable(&self) -> bool {
pub(crate) fn is_stable(&self) -> bool {
matches!(self.group(), RuleGroup::Stable)
}
@@ -371,7 +371,7 @@ fn generate_iter_impl(
quote! {
impl Linter {
/// Rules not in the preview.
pub fn rules(self: &Linter) -> ::std::vec::IntoIter<Rule> {
pub(crate) fn rules(self: &Linter) -> ::std::vec::IntoIter<Rule> {
match self {
#linter_rules_match_arms
}
@@ -385,7 +385,7 @@ fn generate_iter_impl(
}
impl RuleCodePrefix {
pub fn iter() -> impl Iterator<Item = RuleCodePrefix> {
pub(crate) fn iter() -> impl Iterator<Item = RuleCodePrefix> {
use strum::IntoEnumIterator;
let mut prefixes = Vec::new();
@@ -436,7 +436,6 @@ fn register_rules<'a>(input: impl Iterator<Item = &'a Rule>) -> TokenStream {
PartialOrd,
Ord,
::ruff_macros::CacheKey,
AsRefStr,
::strum_macros::IntoStaticStr,
::strum_macros::EnumString,
::serde::Serialize,


@@ -165,7 +165,7 @@ where
pub fn formatted_file(db: &dyn Db, file: File) -> Result<Option<String>, FormatModuleError> {
let options = db.format_options(file);
let parsed = parsed_module(db.upcast(), file);
let parsed = parsed_module(db.upcast(), file).load(db.upcast());
if let Some(first) = parsed.errors().first() {
return Err(FormatModuleError::ParseError(first.clone()));
@@ -174,7 +174,7 @@ pub fn formatted_file(db: &dyn Db, file: File) -> Result<Option<String>, FormatM
let comment_ranges = CommentRanges::from(parsed.tokens());
let source = source_text(db.upcast(), file);
let formatted = format_node(parsed, &comment_ranges, &source, options)?;
let formatted = format_node(&parsed, &comment_ranges, &source, options)?;
let printed = formatted.print()?;
if printed.as_code() == &*source {


@@ -12,7 +12,6 @@ use crate::{
use ruff_diagnostics::{Applicability, Edit, Fix};
use ruff_linter::{
Locator,
codes::Rule,
directives::{Flags, extract_directives},
generate_noqa_edits,
linter::check_path,
@@ -166,26 +165,17 @@ pub(crate) fn check(
messages
.into_iter()
.zip(noqa_edits)
.filter_map(|(message, noqa_edit)| match message.to_rule() {
Some(rule) => Some(to_lsp_diagnostic(
rule,
&message,
noqa_edit,
&source_kind,
locator.to_index(),
encoding,
)),
None => {
if show_syntax_errors {
Some(syntax_error_to_lsp_diagnostic(
&message,
&source_kind,
locator.to_index(),
encoding,
))
} else {
None
}
.filter_map(|(message, noqa_edit)| {
if message.is_syntax_error() && !show_syntax_errors {
None
} else {
Some(to_lsp_diagnostic(
&message,
noqa_edit,
&source_kind,
locator.to_index(),
encoding,
))
}
});
@@ -241,7 +231,6 @@ pub(crate) fn fixes_for_diagnostics(
/// Generates an LSP diagnostic with an associated cell index for the diagnostic to go in.
/// If the source kind is a text document, the cell index will always be `0`.
fn to_lsp_diagnostic(
rule: Rule,
diagnostic: &Message,
noqa_edit: Option<Edit>,
source_kind: &SourceKind,
@@ -253,11 +242,13 @@ fn to_lsp_diagnostic(
let body = diagnostic.body().to_string();
let fix = diagnostic.fix();
let suggestion = diagnostic.suggestion();
let code = diagnostic.noqa_code();
let fix = fix.and_then(|fix| fix.applies(Applicability::Unsafe).then_some(fix));
let data = (fix.is_some() || noqa_edit.is_some())
.then(|| {
let code = code?.to_string();
let edits = fix
.into_iter()
.flat_map(Fix::edits)
@@ -274,14 +265,12 @@ fn to_lsp_diagnostic(
title: suggestion.unwrap_or(name).to_string(),
noqa_edit,
edits,
code: rule.noqa_code().to_string(),
code,
})
.ok()
})
.flatten();
let code = rule.noqa_code().to_string();
let range: lsp_types::Range;
let cell: usize;
@@ -297,14 +286,25 @@ fn to_lsp_diagnostic(
range = diagnostic_range.to_range(source_kind.source_code(), index, encoding);
}
let (severity, tags, code) = if let Some(code) = code {
let code = code.to_string();
(
Some(severity(&code)),
tags(&code),
Some(lsp_types::NumberOrString::String(code)),
)
} else {
(None, None, None)
};
(
cell,
lsp_types::Diagnostic {
range,
severity: Some(severity(&code)),
tags: tags(&code),
code: Some(lsp_types::NumberOrString::String(code)),
code_description: rule.url().and_then(|url| {
severity,
tags,
code,
code_description: diagnostic.to_url().and_then(|url| {
Some(lsp_types::CodeDescription {
href: lsp_types::Url::parse(&url).ok()?,
})
@@ -317,45 +317,6 @@ fn to_lsp_diagnostic(
)
}
fn syntax_error_to_lsp_diagnostic(
syntax_error: &Message,
source_kind: &SourceKind,
index: &LineIndex,
encoding: PositionEncoding,
) -> (usize, lsp_types::Diagnostic) {
let range: lsp_types::Range;
let cell: usize;
if let Some(notebook_index) = source_kind.as_ipy_notebook().map(Notebook::index) {
NotebookRange { cell, range } = syntax_error.range().to_notebook_range(
source_kind.source_code(),
index,
notebook_index,
encoding,
);
} else {
cell = usize::default();
range = syntax_error
.range()
.to_range(source_kind.source_code(), index, encoding);
}
(
cell,
lsp_types::Diagnostic {
range,
severity: Some(lsp_types::DiagnosticSeverity::ERROR),
tags: None,
code: None,
code_description: None,
source: Some(DIAGNOSTIC_NAME.into()),
message: syntax_error.body().to_string(),
related_information: None,
data: None,
},
)
}
fn diagnostic_edit_range(
range: TextRange,
source_kind: &SourceKind,


@@ -85,7 +85,7 @@ pub(crate) fn hover(
fn format_rule_text(rule: Rule) -> String {
let mut output = String::new();
let _ = write!(&mut output, "# {} ({})", rule.as_ref(), rule.noqa_code());
let _ = write!(&mut output, "# {} ({})", rule.name(), rule.noqa_code());
output.push('\n');
output.push('\n');


@@ -1,6 +1,6 @@
[package]
name = "ruff_wasm"
version = "0.11.12"
version = "0.11.13"
publish = false
authors = { workspace = true }
edition = { workspace = true }


@@ -207,36 +207,23 @@ impl Workspace {
let messages: Vec<ExpandedMessage> = messages
.into_iter()
.map(|msg| {
let message = msg.body().to_string();
let range = msg.range();
match msg.to_noqa_code() {
Some(code) => ExpandedMessage {
code: Some(code.to_string()),
message,
start_location: source_code.line_column(range.start()).into(),
end_location: source_code.line_column(range.end()).into(),
fix: msg.fix().map(|fix| ExpandedFix {
message: msg.suggestion().map(ToString::to_string),
edits: fix
.edits()
.iter()
.map(|edit| ExpandedEdit {
location: source_code.line_column(edit.start()).into(),
end_location: source_code.line_column(edit.end()).into(),
content: edit.content().map(ToString::to_string),
})
.collect(),
}),
},
None => ExpandedMessage {
code: None,
message,
start_location: source_code.line_column(range.start()).into(),
end_location: source_code.line_column(range.end()).into(),
fix: None,
},
}
.map(|msg| ExpandedMessage {
code: msg.noqa_code().map(|code| code.to_string()),
message: msg.body().to_string(),
start_location: source_code.line_column(msg.start()).into(),
end_location: source_code.line_column(msg.end()).into(),
fix: msg.fix().map(|fix| ExpandedFix {
message: msg.suggestion().map(ToString::to_string),
edits: fix
.edits()
.iter()
.map(|edit| ExpandedEdit {
location: source_code.line_column(edit.start()).into(),
end_location: source_code.line_column(edit.end()).into(),
content: edit.content().map(ToString::to_string),
})
.collect(),
}),
})
.collect();


@@ -1098,7 +1098,7 @@ impl LintConfiguration {
// approach to give each pair its own `warn_user_once`.
for (preferred, expendable, message) in INCOMPATIBLE_CODES {
if rules.enabled(*preferred) && rules.enabled(*expendable) {
warn_user_once_by_id!(expendable.as_ref(), "{}", message);
warn_user_once_by_id!(expendable.name().as_str(), "{}", message);
rules.disable(*expendable);
}
}

crates/ty/docs/rules.md generated

@@ -52,7 +52,7 @@ Calling a non-callable object will raise a `TypeError` at runtime.
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20call-non-callable)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L93)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L92)
</details>
## `conflicting-argument-forms`
@@ -83,7 +83,7 @@ f(int) # error
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20conflicting-argument-forms)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L137)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L136)
</details>
## `conflicting-declarations`
@@ -113,7 +113,7 @@ a = 1
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20conflicting-declarations)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L163)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L162)
</details>
## `conflicting-metaclass`
@@ -144,7 +144,7 @@ class C(A, B): ...
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20conflicting-metaclass)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L188)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L187)
</details>
## `cyclic-class-definition`
@@ -175,7 +175,7 @@ class B(A): ...
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20cyclic-class-definition)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L214)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L213)
</details>
## `duplicate-base`
@@ -201,7 +201,7 @@ class B(A, A): ...
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20duplicate-base)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L258)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L257)
</details>
## `escape-character-in-forward-annotation`
@@ -338,7 +338,7 @@ TypeError: multiple bases have instance lay-out conflict
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20incompatible-slots)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L279)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L278)
</details>
## `inconsistent-mro`
@@ -367,7 +367,7 @@ class C(A, B): ...
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20inconsistent-mro)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L365)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L364)
</details>
## `index-out-of-bounds`
@@ -392,7 +392,7 @@ t[3] # IndexError: tuple index out of range
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20index-out-of-bounds)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L389)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L388)
</details>
## `invalid-argument-type`
@@ -418,7 +418,7 @@ func("foo") # error: [invalid-argument-type]
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-argument-type)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L409)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L408)
</details>
## `invalid-assignment`
@@ -445,7 +445,7 @@ a: int = ''
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-assignment)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L449)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L448)
</details>
## `invalid-attribute-access`
@@ -478,7 +478,7 @@ C.instance_var = 3 # error: Cannot assign to instance variable
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-attribute-access)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1397)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1396)
</details>
## `invalid-base`
@@ -501,7 +501,7 @@ class A(42): ... # error: [invalid-base]
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-base)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L471)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L470)
</details>
## `invalid-context-manager`
@@ -527,7 +527,7 @@ with 1:
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-context-manager)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L522)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L521)
</details>
## `invalid-declaration`
@@ -555,7 +555,7 @@ a: str
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-declaration)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L543)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L542)
</details>
## `invalid-exception-caught`
@@ -596,7 +596,7 @@ except ZeroDivisionError:
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-exception-caught)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L566)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L565)
</details>
## `invalid-generic-class`
@@ -627,7 +627,7 @@ class C[U](Generic[T]): ...
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-generic-class)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L602)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L601)
</details>
## `invalid-legacy-type-variable`
@@ -660,7 +660,7 @@ def f(t: TypeVar("U")): ...
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-legacy-type-variable)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L628)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L627)
</details>
## `invalid-metaclass`
@@ -692,7 +692,7 @@ class B(metaclass=f): ...
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-metaclass)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L677)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L676)
</details>
## `invalid-overload`
@@ -740,7 +740,7 @@ def foo(x: int) -> int: ...
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-overload)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L704)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L703)
</details>
## `invalid-parameter-default`
@@ -765,7 +765,7 @@ def f(a: int = ''): ...
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-parameter-default)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L747)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L746)
</details>
## `invalid-protocol`
@@ -798,7 +798,7 @@ TypeError: Protocols can only inherit from other protocols, got <class 'int'>
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-protocol)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L337)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L336)
</details>
## `invalid-raise`
@@ -846,7 +846,7 @@ def g():
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-raise)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L767)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L766)
</details>
## `invalid-return-type`
@@ -870,7 +870,7 @@ def func() -> int:
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-return-type)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L430)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L429)
</details>
## `invalid-super-argument`
@@ -914,7 +914,7 @@ super(B, A) # error: `A` does not satisfy `issubclass(A, B)`
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-super-argument)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L810)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L809)
</details>
## `invalid-syntax-in-forward-annotation`
@@ -954,7 +954,7 @@ NewAlias = TypeAliasType(get_name(), int) # error: TypeAliasType name mus
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-type-alias-type)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L656)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L655)
</details>
## `invalid-type-checking-constant`
@@ -983,7 +983,7 @@ TYPE_CHECKING = ''
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-type-checking-constant)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L849)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L848)
</details>
## `invalid-type-form`
@@ -1012,7 +1012,7 @@ b: Annotated[int] # `Annotated` expects at least two arguments
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-type-form)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L873)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L872)
</details>
## `invalid-type-variable-constraints`
@@ -1046,7 +1046,7 @@ T = TypeVar('T', bound=str) # valid bound TypeVar
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20invalid-type-variable-constraints)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L897)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L896)
</details>
## `missing-argument`
@@ -1070,7 +1070,7 @@ func() # TypeError: func() missing 1 required positional argument: 'x'
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20missing-argument)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L926)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L925)
</details>
## `no-matching-overload`
@@ -1098,7 +1098,7 @@ func("string") # error: [no-matching-overload]
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20no-matching-overload)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L945)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L944)
</details>
## `non-subscriptable`
@@ -1121,7 +1121,7 @@ Subscripting an object that does not support it will raise a `TypeError` at runt
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20non-subscriptable)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L968)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L967)
</details>
## `not-iterable`
@@ -1146,7 +1146,7 @@ for i in 34: # TypeError: 'int' object is not iterable
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20not-iterable)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L986)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L985)
</details>
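The runtime failure this rule anticipates is easy to reproduce directly; a minimal standalone illustration (not part of the rule's own docs):

```python
# Iterating over an `int` raises a TypeError at runtime, which is
# exactly what `not-iterable` flags ahead of time.
try:
    for i in 34:
        pass
except TypeError as exc:
    print(exc)  # 'int' object is not iterable
```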
## `parameter-already-assigned`
@@ -1172,7 +1172,7 @@ f(1, x=2) # Error raised here
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20parameter-already-assigned)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1037)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1036)
</details>
## `raw-string-type-annotation`
@@ -1231,7 +1231,7 @@ static_assert(int(2.0 * 3.0) == 6) # error: does not have a statically known tr
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20static-assert-error)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1373)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1372)
</details>
## `subclass-of-final-class`
@@ -1259,7 +1259,7 @@ class B(A): ... # Error raised here
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20subclass-of-final-class)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1128)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1127)
</details>
## `too-many-positional-arguments`
@@ -1285,7 +1285,7 @@ f("foo") # Error raised here
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20too-many-positional-arguments)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1173)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1172)
</details>
## `type-assertion-failure`
@@ -1312,7 +1312,7 @@ def _(x: int):
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20type-assertion-failure)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1151)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1150)
</details>
## `unavailable-implicit-super-arguments`
@@ -1356,7 +1356,7 @@ class A:
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20unavailable-implicit-super-arguments)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1194)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1193)
</details>
## `unknown-argument`
@@ -1382,7 +1382,7 @@ f(x=1, y=2) # Error raised here
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20unknown-argument)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1251)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1250)
</details>
## `unresolved-attribute`
@@ -1409,7 +1409,7 @@ A().foo # AttributeError: 'A' object has no attribute 'foo'
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20unresolved-attribute)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1272)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1271)
</details>
## `unresolved-import`
@@ -1433,7 +1433,7 @@ import foo # ModuleNotFoundError: No module named 'foo'
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20unresolved-import)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1294)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1293)
</details>
## `unresolved-reference`
@@ -1457,7 +1457,7 @@ print(x) # NameError: name 'x' is not defined
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20unresolved-reference)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1313)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1312)
</details>
## `unsupported-bool-conversion`
@@ -1493,7 +1493,7 @@ b1 < b2 < b1 # exception raised here
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20unsupported-bool-conversion)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1006)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1005)
</details>
## `unsupported-operator`
@@ -1520,7 +1520,7 @@ A() + A() # TypeError: unsupported operand type(s) for +: 'A' and 'A'
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20unsupported-operator)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1332)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1331)
</details>
## `zero-stepsize-in-slice`
@@ -1544,7 +1544,7 @@ l[1:10:0] # ValueError: slice step cannot be zero
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20zero-stepsize-in-slice)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1354)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1353)
</details>
## `invalid-ignore-comment`
@@ -1600,7 +1600,7 @@ A.c # AttributeError: type object 'A' has no attribute 'c'
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20possibly-unbound-attribute)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1058)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1057)
</details>
## `possibly-unbound-implicit-call`
@@ -1631,7 +1631,7 @@ A()[0] # TypeError: 'A' object is not subscriptable
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20possibly-unbound-implicit-call)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L111)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L110)
</details>
## `possibly-unbound-import`
@@ -1662,7 +1662,7 @@ from module import a # ImportError: cannot import name 'a' from 'module'
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20possibly-unbound-import)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1080)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1079)
</details>
## `redundant-cast`
@@ -1688,7 +1688,7 @@ cast(int, f()) # Redundant
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20redundant-cast)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1425)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1424)
</details>
## `undefined-reveal`
@@ -1711,7 +1711,7 @@ reveal_type(1) # NameError: name 'reveal_type' is not defined
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20undefined-reveal)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1233)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1232)
</details>
## `unknown-rule`
@@ -1779,7 +1779,7 @@ class D(C): ... # error: [unsupported-base]
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20unsupported-base)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L489)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L488)
</details>
## `division-by-zero`
@@ -1802,7 +1802,7 @@ Dividing by zero raises a `ZeroDivisionError` at runtime.
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20division-by-zero)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L240)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L239)
</details>
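As a quick standalone illustration of the runtime behavior this rule guards against (not part of the rule docs themselves):

```python
# Division by zero fails at runtime; the rule reports it statically.
try:
    1 / 0
except ZeroDivisionError as exc:
    print(type(exc).__name__)  # ZeroDivisionError
```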
## `possibly-unresolved-reference`
@@ -1829,7 +1829,7 @@ print(x) # NameError: name 'x' is not defined
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20possibly-unresolved-reference)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1106)
* [View source](https://github.com/astral-sh/ruff/blob/main/crates%2Fty_python_semantic%2Fsrc%2Ftypes%2Fdiagnostic.rs#L1105)
</details>
## `unused-ignore-comment`

View File

@@ -18,10 +18,9 @@ use clap::{CommandFactory, Parser};
use colored::Colorize;
use crossbeam::channel as crossbeam_channel;
use rayon::ThreadPoolBuilder;
use ruff_db::Upcast;
use ruff_db::diagnostic::{Diagnostic, DisplayDiagnosticConfig, Severity};
use ruff_db::max_parallelism;
use ruff_db::system::{OsSystem, SystemPath, SystemPathBuf};
use ruff_db::{Upcast, max_parallelism};
use salsa::plumbing::ZalsaDatabase;
use ty_project::metadata::options::ProjectOptionsOverrides;
use ty_project::watch::ProjectWatcher;

View File

@@ -918,6 +918,247 @@ fn cli_unknown_rules() -> anyhow::Result<()> {
Ok(())
}
#[test]
fn python_cli_argument_virtual_environment() -> anyhow::Result<()> {
let path_to_executable = if cfg!(windows) {
"my-venv/Scripts/python.exe"
} else {
"my-venv/bin/python"
};
let other_venv_path = "my-venv/foo/some_other_file.txt";
let case = TestCase::with_files([
("test.py", ""),
(
if cfg!(windows) {
"my-venv/Lib/site-packages/foo.py"
} else {
"my-venv/lib/python3.13/site-packages/foo.py"
},
"",
),
(path_to_executable, ""),
(other_venv_path, ""),
])?;
// Passing a path to the installation works
assert_cmd_snapshot!(case.command().arg("--python").arg("my-venv"), @r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
WARN ty is pre-release software and not ready for production use. Expect to encounter bugs, missing features, and fatal errors.
");
// And so does passing a path to the executable inside the installation
assert_cmd_snapshot!(case.command().arg("--python").arg(path_to_executable), @r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
WARN ty is pre-release software and not ready for production use. Expect to encounter bugs, missing features, and fatal errors.
");
// But random other paths inside the installation are rejected
assert_cmd_snapshot!(case.command().arg("--python").arg(other_venv_path), @r"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
WARN ty is pre-release software and not ready for production use. Expect to encounter bugs, missing features, and fatal errors.
ty failed
Cause: Invalid search path settings
Cause: Failed to discover the site-packages directory: Invalid `--python` argument `<temp_dir>/my-venv/foo/some_other_file.txt`: does not point to a Python executable or a directory on disk
");
// And so are paths that do not exist on disk
assert_cmd_snapshot!(case.command().arg("--python").arg("not-a-directory-or-executable"), @r"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
WARN ty is pre-release software and not ready for production use. Expect to encounter bugs, missing features, and fatal errors.
ty failed
Cause: Invalid search path settings
Cause: Failed to discover the site-packages directory: Invalid `--python` argument `<temp_dir>/not-a-directory-or-executable`: does not point to a Python executable or a directory on disk
");
Ok(())
}
#[test]
fn python_cli_argument_system_installation() -> anyhow::Result<()> {
let path_to_executable = if cfg!(windows) {
"Python3.11/python.exe"
} else {
"Python3.11/bin/python"
};
let case = TestCase::with_files([
("test.py", ""),
(
if cfg!(windows) {
"Python3.11/Lib/site-packages/foo.py"
} else {
"Python3.11/lib/python3.11/site-packages/foo.py"
},
"",
),
(path_to_executable, ""),
])?;
// Passing a path to the installation works
assert_cmd_snapshot!(case.command().arg("--python").arg("Python3.11"), @r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
WARN ty is pre-release software and not ready for production use. Expect to encounter bugs, missing features, and fatal errors.
");
// And so does passing a path to the executable inside the installation
assert_cmd_snapshot!(case.command().arg("--python").arg(path_to_executable), @r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
WARN ty is pre-release software and not ready for production use. Expect to encounter bugs, missing features, and fatal errors.
");
Ok(())
}
#[test]
fn config_file_broken_python_setting() -> anyhow::Result<()> {
let case = TestCase::with_files([
(
"pyproject.toml",
r#"
[project]
name = "test"
version = "0.1.0"
description = "Some description"
readme = "README.md"
requires-python = ">=3.13"
dependencies = []
[tool.ty.environment]
python = "not-a-directory-or-executable"
"#,
),
("test.py", ""),
])?;
assert_cmd_snapshot!(case.command(), @r#"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
WARN ty is pre-release software and not ready for production use. Expect to encounter bugs, missing features, and fatal errors.
ty failed
Cause: Invalid search path settings
Cause: Failed to discover the site-packages directory: Invalid `environment.python` setting
--> Invalid setting in configuration file `<temp_dir>/pyproject.toml`
|
9 |
10 | [tool.ty.environment]
11 | python = "not-a-directory-or-executable"
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ does not point to a Python executable or a directory on disk
|
"#);
Ok(())
}
#[test]
fn config_file_python_setting_directory_with_no_site_packages() -> anyhow::Result<()> {
let case = TestCase::with_files([
(
"pyproject.toml",
r#"
[tool.ty.environment]
python = "directory-but-no-site-packages"
"#,
),
("directory-but-no-site-packages/lib/foo.py", ""),
("test.py", ""),
])?;
assert_cmd_snapshot!(case.command(), @r#"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
WARN ty is pre-release software and not ready for production use. Expect to encounter bugs, missing features, and fatal errors.
ty failed
Cause: Invalid search path settings
Cause: Failed to discover the site-packages directory: Invalid `environment.python` setting
--> Invalid setting in configuration file `<temp_dir>/pyproject.toml`
|
1 |
2 | [tool.ty.environment]
3 | python = "directory-but-no-site-packages"
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Could not find a `site-packages` directory for this Python installation/executable
|
"#);
Ok(())
}
// This error message is never emitted on Windows, because Windows installations have simpler layouts
#[cfg(not(windows))]
#[test]
fn unix_system_installation_with_no_lib_directory() -> anyhow::Result<()> {
let case = TestCase::with_files([
(
"pyproject.toml",
r#"
[tool.ty.environment]
python = "directory-but-no-site-packages"
"#,
),
("directory-but-no-site-packages/foo.py", ""),
("test.py", ""),
])?;
assert_cmd_snapshot!(case.command(), @r#"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
WARN ty is pre-release software and not ready for production use. Expect to encounter bugs, missing features, and fatal errors.
ty failed
Cause: Invalid search path settings
Cause: Failed to discover the site-packages directory: Failed to iterate over the contents of the `lib` directory of the Python installation
--> Invalid setting in configuration file `<temp_dir>/pyproject.toml`
|
1 |
2 | [tool.ty.environment]
3 | python = "directory-but-no-site-packages"
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
"#);
Ok(())
}
#[test]
fn exit_code_only_warnings() -> anyhow::Result<()> {
let case = TestCase::with_file("test.py", r"print(x) # [unresolved-reference]")?;

View File

@@ -1,10 +1,13 @@
use std::cmp::Ordering;
use ruff_db::files::File;
use ruff_db::parsed::{ParsedModule, parsed_module};
use ruff_python_parser::TokenAt;
use ruff_db::parsed::{ParsedModuleRef, parsed_module};
use ruff_python_ast as ast;
use ruff_python_parser::{Token, TokenAt, TokenKind};
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::Db;
use crate::find_node::{CoveringNode, covering_node};
use crate::find_node::covering_node;
#[derive(Debug, Clone)]
pub struct Completion {
@@ -12,15 +15,18 @@ pub struct Completion {
}
pub fn completion(db: &dyn Db, file: File, offset: TextSize) -> Vec<Completion> {
let parsed = parsed_module(db.upcast(), file);
let parsed = parsed_module(db.upcast(), file).load(db.upcast());
let Some(target) = find_target(parsed, offset) else {
let Some(target) = CompletionTargetTokens::find(&parsed, offset).ast(&parsed) else {
return vec![];
};
let model = ty_python_semantic::SemanticModel::new(db.upcast(), file);
let mut completions = model.completions(target.node());
completions.sort();
let mut completions = match target {
CompletionTargetAst::ObjectDot { expr } => model.attribute_completions(expr),
CompletionTargetAst::Scoped { node } => model.scoped_completions(node),
};
completions.sort_by(|name1, name2| compare_suggestions(name1, name2));
completions.dedup();
completions
.into_iter()
@@ -28,30 +34,253 @@ pub fn completion(db: &dyn Db, file: File, offset: TextSize) -> Vec<Completion>
.collect()
}
fn find_target(parsed: &ParsedModule, offset: TextSize) -> Option<CoveringNode> {
let offset = match parsed.tokens().at_offset(offset) {
TokenAt::None => {
return Some(covering_node(
parsed.syntax().into(),
TextRange::empty(offset),
));
/// The kind of tokens identified under the cursor.
#[derive(Debug)]
enum CompletionTargetTokens<'t> {
/// An `object.attribute` token form was found, where
/// `attribute` may be empty.
///
/// This requires a name token followed by a dot token.
ObjectDot {
/// The token preceding the dot.
object: &'t Token,
/// The token, if non-empty, following the dot.
///
/// This is currently unused, but we should use this
/// eventually to remove completions that aren't a
/// prefix of what has already been typed. (We are
/// currently relying on the LSP client to do this.)
#[expect(dead_code)]
attribute: Option<&'t Token>,
},
/// A token was found under the cursor, but it didn't
/// match any of our anticipated token patterns.
Generic { token: &'t Token },
/// No token was found, but we have the offset of the
/// cursor.
Unknown { offset: TextSize },
}
impl<'t> CompletionTargetTokens<'t> {
/// Look for the best matching token pattern at the given offset.
fn find(parsed: &ParsedModuleRef, offset: TextSize) -> CompletionTargetTokens<'_> {
static OBJECT_DOT_EMPTY: [TokenKind; 2] = [TokenKind::Name, TokenKind::Dot];
static OBJECT_DOT_NON_EMPTY: [TokenKind; 3] =
[TokenKind::Name, TokenKind::Dot, TokenKind::Name];
let offset = match parsed.tokens().at_offset(offset) {
TokenAt::None => return CompletionTargetTokens::Unknown { offset },
TokenAt::Single(tok) => tok.end(),
TokenAt::Between(_, tok) => tok.start(),
};
let before = parsed.tokens().before(offset);
if let Some([object, _dot]) = token_suffix_by_kinds(before, OBJECT_DOT_EMPTY) {
CompletionTargetTokens::ObjectDot {
object,
attribute: None,
}
} else if let Some([object, _dot, attribute]) =
token_suffix_by_kinds(before, OBJECT_DOT_NON_EMPTY)
{
CompletionTargetTokens::ObjectDot {
object,
attribute: Some(attribute),
}
} else {
let Some(last) = before.last() else {
return CompletionTargetTokens::Unknown { offset };
};
CompletionTargetTokens::Generic { token: last }
}
TokenAt::Single(tok) => tok.end(),
TokenAt::Between(_, tok) => tok.start(),
};
let before = parsed.tokens().before(offset);
let last = before.last()?;
let covering_node = covering_node(parsed.syntax().into(), last.range());
Some(covering_node)
}
/// Returns a corresponding AST node for these tokens.
///
/// If no plausible AST node could be found, then `None` is returned.
fn ast(&self, parsed: &'t ParsedModuleRef) -> Option<CompletionTargetAst<'t>> {
match *self {
CompletionTargetTokens::ObjectDot { object, .. } => {
let covering_node = covering_node(parsed.syntax().into(), object.range())
.find(|node| node.is_expr_attribute())
.ok()?;
match covering_node.node() {
ast::AnyNodeRef::ExprAttribute(expr) => {
Some(CompletionTargetAst::ObjectDot { expr })
}
_ => None,
}
}
CompletionTargetTokens::Generic { token } => {
let covering_node = covering_node(parsed.syntax().into(), token.range());
Some(CompletionTargetAst::Scoped {
node: covering_node.node(),
})
}
CompletionTargetTokens::Unknown { offset } => {
let range = TextRange::empty(offset);
let covering_node = covering_node(parsed.syntax().into(), range);
Some(CompletionTargetAst::Scoped {
node: covering_node.node(),
})
}
}
}
}
/// The AST node patterns that we support identifying under the cursor.
#[derive(Debug)]
enum CompletionTargetAst<'t> {
/// An `object.attribute` scenario, where we want to
/// list attributes on `object` for completions.
ObjectDot { expr: &'t ast::ExprAttribute },
/// A scoped scenario, where we want to list all items available in
/// the narrowest scope containing the given AST node.
Scoped { node: ast::AnyNodeRef<'t> },
}
/// Returns a suffix of `tokens` corresponding to the `kinds` given.
///
/// If a suffix of `tokens` with the given `kinds` could not be found,
/// then `None` is returned.
///
/// This is useful for matching specific patterns of token sequences
/// in order to identify what kind of completions we should offer.
fn token_suffix_by_kinds<const N: usize>(
tokens: &[Token],
kinds: [TokenKind; N],
) -> Option<[&Token; N]> {
if kinds.len() > tokens.len() {
return None;
}
for (token, expected_kind) in tokens.iter().rev().zip(kinds.iter().rev()) {
if &token.kind() != expected_kind {
return None;
}
}
Some(std::array::from_fn(|i| {
&tokens[tokens.len() - (kinds.len() - i)]
}))
}
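For readers skimming the diff, the suffix-matching idea behind `token_suffix_by_kinds` can be sketched in a few lines of Python (token kinds modeled as plain strings; purely illustrative, not the crate's API):

```python
# Check whether a sequence of token kinds ends with an expected pattern.
def token_suffix(kinds: list[str], pattern: list[str]) -> bool:
    if len(pattern) > len(kinds):
        return False
    return kinds[len(kinds) - len(pattern):] == pattern

# "foo.x" tokenizes roughly to Name, Dot, Name, Newline.
tokens = ["Name", "Dot", "Name", "Newline"]
print(token_suffix(tokens, ["Name", "Newline"]))  # True
print(token_suffix(tokens, ["Name"]))             # False: last kind is Newline
```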
/// Order completions lexicographically, with these exceptions:
///
/// 1) A `_[^_]` prefix sorts last and
/// 2) A `__` prefix sorts last except before (1)
///
/// This has the effect of putting all dunder attributes after "normal"
/// attributes, and all single-underscore attributes after dunder attributes.
fn compare_suggestions(name1: &str, name2: &str) -> Ordering {
/// A helper type for sorting completions based only on name.
///
/// This sorts "normal" names first, then dunder names and finally
/// single-underscore names. This matches the order of the variants defined for
/// this enum, which is in turn picked up by the derived trait implementation
/// for `Ord`.
#[derive(Clone, Copy, Eq, PartialEq, PartialOrd, Ord)]
enum Kind {
Normal,
Dunder,
Sunder,
}
impl Kind {
fn classify(name: &str) -> Kind {
// Dunder names need both a leading and a trailing double
// underscore. When there's only a leading double underscore,
// the name is subject to explicit name mangling; we classify
// those as if they were single-underscore names.
//
// Ref: <https://docs.python.org/3/reference/lexical_analysis.html#reserved-classes-of-identifiers>
if name.starts_with("__") && name.ends_with("__") {
Kind::Dunder
} else if name.starts_with('_') {
Kind::Sunder
} else {
Kind::Normal
}
}
}
let (kind1, kind2) = (Kind::classify(name1), Kind::classify(name2));
kind1.cmp(&kind2).then_with(|| name1.cmp(name2))
}
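The sort order implemented by `compare_suggestions` can likewise be sketched independently (the integer ranks and the `classify` helper below are illustrative stand-ins for the `Kind` enum, not ty's API):

```python
# "Normal" names first, then dunders, then single-underscore names,
# with lexicographic order breaking ties within each group.
def classify(name: str) -> int:
    if name.startswith("__") and name.endswith("__"):
        return 1  # dunder
    if name.startswith("_"):
        return 2  # single underscore (including mangled "__x" names)
    return 0  # normal

names = ["__init__", "_private", "bar", "__mangled", "foo"]
names.sort(key=lambda n: (classify(n), n))
# `__mangled` lacks the trailing underscores, so it groups with the
# single-underscore names rather than the true dunders.
print(names)  # ['bar', 'foo', '__init__', '__mangled', '_private']
```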
#[cfg(test)]
mod tests {
use insta::assert_snapshot;
use ruff_python_parser::{Mode, ParseOptions, TokenKind, Tokens};
use crate::completion;
use crate::tests::{CursorTest, cursor_test};
use super::token_suffix_by_kinds;
#[test]
fn token_suffixes_match() {
insta::assert_debug_snapshot!(
token_suffix_by_kinds(&tokenize("foo.x"), [TokenKind::Newline]),
@r"
Some(
[
Newline 5..5,
],
)
",
);
insta::assert_debug_snapshot!(
token_suffix_by_kinds(&tokenize("foo.x"), [TokenKind::Name, TokenKind::Newline]),
@r"
Some(
[
Name 4..5,
Newline 5..5,
],
)
",
);
let all = [
TokenKind::Name,
TokenKind::Dot,
TokenKind::Name,
TokenKind::Newline,
];
insta::assert_debug_snapshot!(
token_suffix_by_kinds(&tokenize("foo.x"), all),
@r"
Some(
[
Name 0..3,
Dot 3..4,
Name 4..5,
Newline 5..5,
],
)
",
);
}
#[test]
fn token_suffixes_nomatch() {
insta::assert_debug_snapshot!(
token_suffix_by_kinds(&tokenize("foo.x"), [TokenKind::Name]),
@"None",
);
let too_many = [
TokenKind::Dot,
TokenKind::Name,
TokenKind::Dot,
TokenKind::Name,
TokenKind::Newline,
];
insta::assert_debug_snapshot!(
token_suffix_by_kinds(&tokenize("foo.x"), too_many),
@"None",
);
}
// At time of writing (2025-05-22), the tests below show some of the
// naivete of our completions. That is, we don't even take what has been
// typed into account. We just kind of return all possible completions
@@ -113,8 +342,7 @@ import re
re.<CURSOR>
",
);
assert_snapshot!(test.completions(), @"re");
test.assert_completions_include("findall");
}
#[test]
@@ -127,10 +355,7 @@ f<CURSOR>
",
);
assert_snapshot!(test.completions(), @r"
f
foo
");
assert_snapshot!(test.completions(), @"foo");
}
#[test]
@@ -143,10 +368,7 @@ g<CURSOR>
",
);
assert_snapshot!(test.completions(), @r"
foo
g
");
assert_snapshot!(test.completions(), @"foo");
}
#[test]
@@ -175,10 +397,7 @@ f<CURSOR>
",
);
assert_snapshot!(test.completions(), @r"
f
foo
");
assert_snapshot!(test.completions(), @"foo");
}
#[test]
@@ -208,7 +427,6 @@ def foo():
);
assert_snapshot!(test.completions(), @r"
f
foo
foofoo
");
@@ -259,7 +477,6 @@ def foo():
);
assert_snapshot!(test.completions(), @r"
f
foo
foofoo
");
@@ -276,7 +493,6 @@ def foo():
);
assert_snapshot!(test.completions(), @r"
f
foo
foofoo
");
@@ -295,7 +511,6 @@ def frob(): ...
);
assert_snapshot!(test.completions(), @r"
f
foo
foofoo
frob
@@ -315,7 +530,6 @@ def frob(): ...
);
assert_snapshot!(test.completions(), @r"
f
foo
frob
");
@@ -334,7 +548,6 @@ def frob(): ...
);
assert_snapshot!(test.completions(), @r"
f
foo
foofoo
foofoofoo
@@ -451,15 +664,10 @@ def frob(): ...
",
);
// It's not totally clear why `for` shows up in the
// symbol tables of the detected scopes here. My guess
// is that there's perhaps some sub-optimal behavior
// here because the list comprehension as written is not
// valid.
assert_snapshot!(test.completions(), @r"
bar
for
");
// TODO: it would be good if `bar` were included here, but
// the list comprehension is not yet valid and so we do not
// detect this as a definition of `bar`.
assert_snapshot!(test.completions(), @"<No completions found>");
}
#[test]
@@ -470,10 +678,7 @@ def frob(): ...
",
);
assert_snapshot!(test.completions(), @r"
f
foo
");
assert_snapshot!(test.completions(), @"foo");
}
#[test]
@@ -484,10 +689,7 @@ def frob(): ...
",
);
assert_snapshot!(test.completions(), @r"
f
foo
");
assert_snapshot!(test.completions(), @"foo");
}
#[test]
@@ -498,10 +700,7 @@ def frob(): ...
",
);
assert_snapshot!(test.completions(), @r"
f
foo
");
assert_snapshot!(test.completions(), @"foo");
}
#[test]
@@ -512,10 +711,7 @@ def frob(): ...
",
);
assert_snapshot!(test.completions(), @r"
f
foo
");
assert_snapshot!(test.completions(), @"foo");
}
#[test]
@@ -526,10 +722,7 @@ def frob(): ...
",
);
assert_snapshot!(test.completions(), @r"
f
foo
");
assert_snapshot!(test.completions(), @"foo");
}
#[test]
@@ -602,7 +795,6 @@ class Foo:
assert_snapshot!(test.completions(), @r"
Foo
b
bar
frob
quux
@@ -621,7 +813,6 @@ class Foo:
assert_snapshot!(test.completions(), @r"
Foo
b
bar
quux
");
@@ -734,6 +925,119 @@ class Foo(<CURSOR>",
");
}
#[test]
fn class_init1() {
let test = cursor_test(
"\
class Quux:
def __init__(self):
self.foo = 1
self.bar = 2
self.baz = 3
quux = Quux()
quux.<CURSOR>
",
);
assert_snapshot!(test.completions(), @r"
bar
baz
foo
__annotations__
__class__
__delattr__
__dict__
__dir__
__doc__
__eq__
__format__
__getattribute__
__getstate__
__hash__
__init__
__init_subclass__
__module__
__ne__
__new__
__reduce__
__reduce_ex__
__repr__
__setattr__
__sizeof__
__str__
__subclasshook__
");
}
#[test]
fn class_init2() {
let test = cursor_test(
"\
class Quux:
def __init__(self):
self.foo = 1
self.bar = 2
self.baz = 3
quux = Quux()
quux.b<CURSOR>
",
);
assert_snapshot!(test.completions(), @r"
bar
baz
foo
__annotations__
__class__
__delattr__
__dict__
__dir__
__doc__
__eq__
__format__
__getattribute__
__getstate__
__hash__
__init__
__init_subclass__
__module__
__ne__
__new__
__reduce__
__reduce_ex__
__repr__
__setattr__
__sizeof__
__str__
__subclasshook__
");
}
#[test]
fn class_init3() {
let test = cursor_test(
"\
class Quux:
def __init__(self):
self.foo = 1
self.bar = 2
self.<CURSOR>
self.baz = 3
",
);
// FIXME: This should list completions on `self`, which should
// include, at least, `foo` and `bar`. At time of writing
// (2025-06-04), the type of `self` is inferred as `Unknown` in
// this context. This in turn prevents us from getting a list
// of available attributes.
//
// See: https://github.com/astral-sh/ty/issues/159
assert_snapshot!(test.completions(), @"<No completions found>");
}
// We don't yet take function parameters into account.
#[test]
fn call_prefix1() {
@@ -750,7 +1054,6 @@ bar(o<CURSOR>
assert_snapshot!(test.completions(), @r"
bar
foo
o
");
}
@@ -788,7 +1091,6 @@ class C:
assert_snapshot!(test.completions(), @r"
C
bar
f
foo
self
");
@@ -825,7 +1127,6 @@ class C:
assert_snapshot!(test.completions(), @r"
C
bar
f
foo
self
");
@@ -854,11 +1155,108 @@ print(f\"{some<CURSOR>
",
);
assert_snapshot!(test.completions(), @r"
print
some
some_symbol
");
assert_snapshot!(test.completions(), @"some_symbol");
}
#[test]
fn statically_invisible_symbols() {
let test = cursor_test(
"\
if 1 + 2 != 3:
hidden_symbol = 1
hidden_<CURSOR>
",
);
assert_snapshot!(test.completions(), @"<No completions found>");
}
#[test]
fn completions_inside_unreachable_sections() {
let test = cursor_test(
"\
import sys
if sys.platform == \"not-my-current-platform\":
only_available_in_this_branch = 1
on<CURSOR>
",
);
// TODO: ideally, `only_available_in_this_branch` should be available here, but we
// currently make no effort to provide a good IDE experience within sections that
// are unreachable
assert_snapshot!(test.completions(), @"sys");
}
#[test]
fn star_import() {
let test = cursor_test(
"\
from typing import *
Re<CURSOR>
",
);
test.assert_completions_include("Reversible");
// `ReadableBuffer` is a symbol in `typing`, but it is not re-exported
test.assert_completions_do_not_include("ReadableBuffer");
}
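The distinction this test checks mirrors Python's own star-import semantics: when a module defines `__all__`, `from module import *` copies only the listed names. A minimal runtime sketch (the module here is synthetic and only illustrates the convention the test relies on):

```python
import types

# A synthetic module standing in for `typing`: `Reversible` is re-exported
# via `__all__`; `ReadableBuffer` is present but deliberately left out.
mod = types.ModuleType("mod")
exec(
    "__all__ = ['Reversible']\n"
    "class Reversible: ...\n"
    "class ReadableBuffer: ...\n",
    mod.__dict__,
)

# `from mod import *` copies the names listed in `__all__` when it is
# defined, otherwise every public (non-underscore) name.
star_names = list(
    getattr(mod, "__all__", [n for n in vars(mod) if not n.startswith("_")])
)
print(star_names)  # → ['Reversible']
```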
#[test]
fn nested_attribute_access() {
let test = cursor_test(
"\
class A:
x: str
class B:
a: A
b = B()
b.a.<CURSOR>
",
);
// FIXME: These should be flipped.
test.assert_completions_include("a");
test.assert_completions_do_not_include("x");
}
#[test]
fn ordering() {
let test = cursor_test(
"\
class A:
foo: str
_foo: str
__foo__: str
__foo: str
FOO: str
_FOO: str
__FOO__: str
__FOO: str
A.<CURSOR>
",
);
assert_snapshot!(
test.completions_if(|name| name.contains("FOO") || name.contains("foo")),
@r"
FOO
foo
__FOO__
__foo__
_FOO
__FOO
__foo
_foo
",
);
}
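The expected order in the snapshot can be reproduced with a two-part sort key: a group rank (plain names, then dunders, then the remaining underscore-prefixed names) with an ASCII-lexicographic tiebreak. This is a reconstruction from the snapshot above, not the actual implementation:

```python
def completion_sort_key(name: str) -> tuple[int, str]:
    # Group 0: plain names; group 1: dunders; group 2: other
    # underscore-prefixed names (including name-mangled `__foo`).
    if not name.startswith("_"):
        group = 0
    elif name.startswith("__") and name.endswith("__"):
        group = 1
    else:
        group = 2
    return (group, name)

names = ["foo", "_foo", "__foo__", "__foo", "FOO", "_FOO", "__FOO__", "__FOO"]
print(sorted(names, key=completion_sort_key))
# → ['FOO', 'foo', '__FOO__', '__foo__', '_FOO', '__FOO', '__foo', '_foo']
```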
// Ref: https://github.com/astral-sh/ty/issues/572
@@ -921,10 +1319,7 @@ Fo<CURSOR> = float
",
);
assert_snapshot!(test.completions(), @r"
Fo
float
");
assert_snapshot!(test.completions(), @"Fo");
}
// Ref: https://github.com/astral-sh/ty/issues/572
@@ -999,9 +1394,7 @@ except Type<CURSOR>:
",
);
assert_snapshot!(test.completions(), @r"
Type
");
assert_snapshot!(test.completions(), @"<No completions found>");
}
// Ref: https://github.com/astral-sh/ty/issues/572
@@ -1019,6 +1412,10 @@ def _():
impl CursorTest {
fn completions(&self) -> String {
self.completions_if(|_| true)
}
fn completions_if(&self, predicate: impl Fn(&str) -> bool) -> String {
let completions = completion(&self.db, self.file, self.cursor_offset);
if completions.is_empty() {
return "<No completions found>".to_string();
@@ -1026,8 +1423,39 @@ def _():
completions
.into_iter()
.map(|completion| completion.label)
.filter(|label| predicate(label))
.collect::<Vec<String>>()
.join("\n")
}
#[track_caller]
fn assert_completions_include(&self, expected: &str) {
let completions = completion(&self.db, self.file, self.cursor_offset);
assert!(
completions
.iter()
.any(|completion| completion.label == expected),
"Expected completions to include `{expected}`"
);
}
#[track_caller]
fn assert_completions_do_not_include(&self, unexpected: &str) {
let completions = completion(&self.db, self.file, self.cursor_offset);
assert!(
completions
.iter()
.all(|completion| completion.label != unexpected),
"Expected completions to not include `{unexpected}`",
);
}
}
fn tokenize(src: &str) -> Tokens {
let parsed = ruff_python_parser::parse(src, ParseOptions::from(Mode::Module))
.expect("valid Python source for token stream");
parsed.tokens().clone()
}
}


@@ -1,7 +1,7 @@
use crate::find_node::covering_node;
use crate::{Db, HasNavigationTargets, NavigationTargets, RangedValue};
use ruff_db::files::{File, FileRange};
use ruff_db::parsed::{ParsedModule, parsed_module};
use ruff_db::parsed::{ParsedModuleRef, parsed_module};
use ruff_python_ast::{self as ast, AnyNodeRef};
use ruff_python_parser::TokenKind;
use ruff_text_size::{Ranged, TextRange, TextSize};
@@ -13,8 +13,8 @@ pub fn goto_type_definition(
file: File,
offset: TextSize,
) -> Option<RangedValue<NavigationTargets>> {
let parsed = parsed_module(db.upcast(), file);
let goto_target = find_goto_target(parsed, offset)?;
let module = parsed_module(db.upcast(), file).load(db.upcast());
let goto_target = find_goto_target(&module, offset)?;
let model = SemanticModel::new(db.upcast(), file);
let ty = goto_target.inferred_type(&model)?;
@@ -128,8 +128,8 @@ pub(crate) enum GotoTarget<'a> {
},
}
impl<'db> GotoTarget<'db> {
pub(crate) fn inferred_type(self, model: &SemanticModel<'db>) -> Option<Type<'db>> {
impl GotoTarget<'_> {
pub(crate) fn inferred_type<'db>(self, model: &SemanticModel<'db>) -> Option<Type<'db>> {
let ty = match self {
GotoTarget::Expression(expression) => expression.inferred_type(model),
GotoTarget::FunctionDef(function) => function.inferred_type(model),
@@ -183,7 +183,10 @@ impl Ranged for GotoTarget<'_> {
}
}
pub(crate) fn find_goto_target(parsed: &ParsedModule, offset: TextSize) -> Option<GotoTarget> {
pub(crate) fn find_goto_target(
parsed: &ParsedModuleRef,
offset: TextSize,
) -> Option<GotoTarget<'_>> {
let token = parsed
.tokens()
.at_offset(offset)


@@ -8,9 +8,9 @@ use std::fmt::Formatter;
use ty_python_semantic::SemanticModel;
use ty_python_semantic::types::Type;
pub fn hover(db: &dyn Db, file: File, offset: TextSize) -> Option<RangedValue<Hover>> {
let parsed = parsed_module(db.upcast(), file);
let goto_target = find_goto_target(parsed, offset)?;
pub fn hover(db: &dyn Db, file: File, offset: TextSize) -> Option<RangedValue<Hover<'_>>> {
let parsed = parsed_module(db.upcast(), file).load(db.upcast());
let goto_target = find_goto_target(&parsed, offset)?;
if let GotoTarget::Expression(expr) = goto_target {
if expr.is_literal_expr() {


@@ -54,7 +54,7 @@ impl fmt::Display for DisplayInlayHint<'_, '_> {
pub fn inlay_hints(db: &dyn Db, file: File, range: TextRange) -> Vec<InlayHint<'_>> {
let mut visitor = InlayHintVisitor::new(db, file, range);
let ast = parsed_module(db.upcast(), file);
let ast = parsed_module(db.upcast(), file).load(db.upcast());
visitor.visit_body(ast.suite());


@@ -0,0 +1,11 @@
# This is a regression test for `infer_expression_types`.
# ref: https://github.com/astral-sh/ruff/pull/18041#discussion_r2094573989
class C:
    def f(self, other: "C"):
        if self.a > other.b or self.b:
            return False
        if self:
            return True

C().a


@@ -453,14 +453,16 @@ fn check_file_impl(db: &dyn Db, file: File) -> Vec<Diagnostic> {
}
let parsed = parsed_module(db.upcast(), file);
let parsed_ref = parsed.load(db.upcast());
diagnostics.extend(
parsed
parsed_ref
.errors()
.iter()
.map(|error| create_parse_diagnostic(file, error)),
);
diagnostics.extend(parsed.unsupported_syntax_errors().iter().map(|error| {
diagnostics.extend(parsed_ref.unsupported_syntax_errors().iter().map(|error| {
let mut error = create_unsupported_syntax_diagnostic(file, error);
add_inferred_python_version_hint_to_diagnostic(db.upcast(), &mut error, "parsing syntax");
error


@@ -12,7 +12,7 @@ use thiserror::Error;
use ty_python_semantic::lint::{GetLintError, Level, LintSource, RuleSelection};
use ty_python_semantic::{
ProgramSettings, PythonPath, PythonPlatform, PythonVersionFileSource, PythonVersionSource,
PythonVersionWithSource, SearchPathSettings,
PythonVersionWithSource, SearchPathSettings, SysPrefixPathOrigin,
};
use super::settings::{Settings, TerminalSettings};
@@ -182,19 +182,31 @@ impl Options {
custom_typeshed: typeshed.map(|path| path.absolute(project_root, system)),
python_path: python
.map(|python_path| {
PythonPath::from_cli_flag(python_path.absolute(project_root, system))
let origin = match python_path.source() {
ValueSource::Cli => SysPrefixPathOrigin::PythonCliFlag,
ValueSource::File(path) => SysPrefixPathOrigin::ConfigFileSetting(
path.clone(),
python_path.range(),
),
};
PythonPath::sys_prefix(python_path.absolute(project_root, system), origin)
})
.or_else(|| {
std::env::var("VIRTUAL_ENV")
.ok()
.map(PythonPath::from_virtual_env_var)
std::env::var("VIRTUAL_ENV").ok().map(|virtual_env| {
PythonPath::sys_prefix(virtual_env, SysPrefixPathOrigin::VirtualEnvVar)
})
})
.or_else(|| {
std::env::var("CONDA_PREFIX")
.ok()
.map(PythonPath::from_conda_prefix_var)
std::env::var("CONDA_PREFIX").ok().map(|path| {
PythonPath::sys_prefix(path, SysPrefixPathOrigin::CondaPrefixVar)
})
})
.unwrap_or_else(|| PythonPath::Discover(project_root.to_path_buf())),
.unwrap_or_else(|| {
PythonPath::sys_prefix(
project_root.to_path_buf(),
SysPrefixPathOrigin::LocalVenv,
)
}),
}
}


@@ -32,6 +32,10 @@ impl ValueSource {
ValueSource::Cli => None,
}
}
pub const fn is_cli(&self) -> bool {
matches!(self, ValueSource::Cli)
}
}
thread_local! {
@@ -324,6 +328,14 @@ impl RelativePathBuf {
&self.0
}
pub fn source(&self) -> &ValueSource {
self.0.source()
}
pub fn range(&self) -> Option<TextRange> {
self.0.range()
}
/// Returns the owned relative path.
pub fn into_path_buf(self) -> SystemPathBuf {
self.0.into_inner()


@@ -172,7 +172,7 @@ fn run_corpus_tests(pattern: &str) -> anyhow::Result<()> {
fn pull_types(db: &ProjectDatabase, file: File) {
let mut visitor = PullTypesVisitor::new(db, file);
let ast = parsed_module(db, file);
let ast = parsed_module(db, file).load(db);
visitor.visit_body(ast.suite());
}


@@ -12,6 +12,7 @@ license = { workspace = true }
[dependencies]
ruff_db = { workspace = true }
ruff_annotate_snippets = { workspace = true }
ruff_index = { workspace = true, features = ["salsa"] }
ruff_macros = { workspace = true }
ruff_python_ast = { workspace = true, features = ["salsa"] }
@@ -25,6 +26,7 @@ ruff_python_trivia = { workspace = true }
anyhow = { workspace = true }
bitflags = { workspace = true }
camino = { workspace = true }
colored = { workspace = true }
compact_str = { workspace = true }
countme = { workspace = true }
drop_bomb = { workspace = true }


@@ -37,7 +37,9 @@ reveal_type(c_instance.inferred_from_other_attribute) # revealed: Unknown
# See https://github.com/astral-sh/ruff/issues/15960 for a related discussion.
reveal_type(c_instance.inferred_from_param) # revealed: Unknown | int | None
reveal_type(c_instance.declared_only) # revealed: bytes
# TODO: Should be `bytes` with no error, like mypy and pyright?
# error: [unresolved-attribute]
reveal_type(c_instance.declared_only) # revealed: Unknown
reveal_type(c_instance.declared_and_bound) # revealed: bool
@@ -64,12 +66,10 @@ C.inferred_from_value = "overwritten on class"
# This assignment is fine:
c_instance.declared_and_bound = False
# TODO: After this assignment to the attribute within this scope, we may eventually want to narrow
# the `bool` type (see above) for this instance variable to `Literal[False]` here. This is unsound
# in general (we don't know what else happened to `c_instance` between the assignment and the use
# here), but mypy and pyright support this. In conclusion, this could be `bool` but should probably
# be `Literal[False]`.
reveal_type(c_instance.declared_and_bound) # revealed: bool
# Strictly speaking, inferring this as `Literal[False]` rather than `bool` is unsound in general
# (we don't know what else happened to `c_instance` between the assignment and the use here),
# but mypy and pyright support this.
reveal_type(c_instance.declared_and_bound) # revealed: Literal[False]
```
#### Variable declared in class body and possibly bound in `__init__`
@@ -149,14 +149,16 @@ class C:
c_instance = C(True)
reveal_type(c_instance.only_declared_in_body) # revealed: str | None
reveal_type(c_instance.only_declared_in_init) # revealed: str | None
# TODO: should be `str | None` without error
# error: [unresolved-attribute]
reveal_type(c_instance.only_declared_in_init) # revealed: Unknown
reveal_type(c_instance.declared_in_body_and_init) # revealed: str | None
reveal_type(c_instance.declared_in_body_defined_in_init) # revealed: str | None
# TODO: This should be `str | None`. Fixing this requires an overhaul of the `Symbol` API,
# which is planned in https://github.com/astral-sh/ruff/issues/14297
reveal_type(c_instance.bound_in_body_declared_in_init) # revealed: Unknown | str | None
reveal_type(c_instance.bound_in_body_declared_in_init) # revealed: Unknown | Literal["a"]
reveal_type(c_instance.bound_in_body_and_init) # revealed: Unknown | None | Literal["a"]
```
@@ -187,7 +189,9 @@ reveal_type(c_instance.inferred_from_other_attribute) # revealed: Unknown
reveal_type(c_instance.inferred_from_param) # revealed: Unknown | int | None
reveal_type(c_instance.declared_only) # revealed: bytes
# TODO: should be `bytes` with no error, like mypy and pyright?
# error: [unresolved-attribute]
reveal_type(c_instance.declared_only) # revealed: Unknown
reveal_type(c_instance.declared_and_bound) # revealed: bool
@@ -260,8 +264,8 @@ class C:
self.w += None
# TODO: Mypy and pyright do not support this, but it would be great if we could
# infer `Unknown | str` or at least `Unknown | Weird | str` here.
reveal_type(C().w) # revealed: Unknown | Weird
# infer `Unknown | str` here (`Weird` is not a possible type for the `w` attribute).
reveal_type(C().w) # revealed: Unknown
```
#### Attributes defined in tuple unpackings
@@ -410,14 +414,41 @@ class C:
[... for self.a in IntIterable()]
[... for (self.b, self.c) in TupleIterable()]
[... for self.d in IntIterable() for self.e in IntIterable()]
[[... for self.f in IntIterable()] for _ in IntIterable()]
[[... for self.g in IntIterable()] for self in [D()]]
class D:
g: int
c_instance = C()
reveal_type(c_instance.a) # revealed: Unknown | int
reveal_type(c_instance.b) # revealed: Unknown | int
reveal_type(c_instance.c) # revealed: Unknown | str
reveal_type(c_instance.d) # revealed: Unknown | int
reveal_type(c_instance.e) # revealed: Unknown | int
# TODO: no error, reveal Unknown | int
# error: [unresolved-attribute]
reveal_type(c_instance.a) # revealed: Unknown
# TODO: no error, reveal Unknown | int
# error: [unresolved-attribute]
reveal_type(c_instance.b) # revealed: Unknown
# TODO: no error, reveal Unknown | str
# error: [unresolved-attribute]
reveal_type(c_instance.c) # revealed: Unknown
# TODO: no error, reveal Unknown | int
# error: [unresolved-attribute]
reveal_type(c_instance.d) # revealed: Unknown
# TODO: no error, reveal Unknown | int
# error: [unresolved-attribute]
reveal_type(c_instance.e) # revealed: Unknown
# TODO: no error, reveal Unknown | int
# error: [unresolved-attribute]
reveal_type(c_instance.f) # revealed: Unknown
# This one is correctly not resolved as an attribute:
# error: [unresolved-attribute]
reveal_type(c_instance.g) # revealed: Unknown
```
#### Conditionally declared / bound attributes
@@ -721,10 +752,7 @@ reveal_type(C.pure_class_variable) # revealed: Unknown
# error: [invalid-attribute-access] "Cannot assign to instance attribute `pure_class_variable` from the class object `<class 'C'>`"
C.pure_class_variable = "overwritten on class"
# TODO: should be `Unknown | Literal["value set in class method"]` or
# Literal["overwritten on class"]`, once/if we support local narrowing.
# error: [unresolved-attribute]
reveal_type(C.pure_class_variable) # revealed: Unknown
reveal_type(C.pure_class_variable) # revealed: Literal["overwritten on class"]
c_instance = C()
reveal_type(c_instance.pure_class_variable) # revealed: Unknown | Literal["value set in class method"]
@@ -762,19 +790,12 @@ reveal_type(c_instance.variable_with_class_default2) # revealed: Unknown | Lite
c_instance.variable_with_class_default1 = "value set on instance"
reveal_type(C.variable_with_class_default1) # revealed: str
# TODO: Could be Literal["value set on instance"], or still `str` if we choose not to
# narrow the type.
reveal_type(c_instance.variable_with_class_default1) # revealed: str
reveal_type(c_instance.variable_with_class_default1) # revealed: Literal["value set on instance"]
C.variable_with_class_default1 = "overwritten on class"
# TODO: Could be `Literal["overwritten on class"]`, or still `str` if we choose not to
# narrow the type.
reveal_type(C.variable_with_class_default1) # revealed: str
# TODO: should still be `Literal["value set on instance"]`, or `str`.
reveal_type(c_instance.variable_with_class_default1) # revealed: str
reveal_type(C.variable_with_class_default1) # revealed: Literal["overwritten on class"]
reveal_type(c_instance.variable_with_class_default1) # revealed: Literal["value set on instance"]
```
#### Descriptor attributes as class variables
@@ -928,6 +949,42 @@ def _(flag1: bool, flag2: bool):
reveal_type(C5.attr1) # revealed: Unknown | Literal["metaclass value", "class value"]
```
## Invalid access to attribute
<!-- snapshot-diagnostics -->
If an undeclared name is used and an attribute with the same name is defined and accessible,
we emit a subdiagnostic suggesting access through `self.` (in these cases the subdiagnostic
reads `` An attribute `x` is available: consider using `self.x` ``).
```py
class Foo:
    x: int

    def method(self):
        # error: [unresolved-reference] "Name `x` used when not defined"
        y = x
```
```py
class Foo:
    x: int = 1

    def method(self):
        # error: [unresolved-reference] "Name `x` used when not defined"
        y = x
```
```py
class Foo:
    def __init__(self):
        self.x = 1

    def method(self):
        # error: [unresolved-reference] "Name `x` used when not defined"
        y = x
```
## Unions of attributes
If the (meta)class is a union type or if the attribute on the (meta) class has a union type, we

View File

@@ -699,9 +699,7 @@ class C:
descriptor = Descriptor()
C.descriptor = "something else"
# This could also be `Literal["something else"]` if we support narrowing of attribute types based on assignments
reveal_type(C.descriptor) # revealed: Unknown | int
reveal_type(C.descriptor) # revealed: Literal["something else"]
```
### Possibly unbound descriptor attributes


@@ -0,0 +1,318 @@
# Narrowing by assignment
## Attribute
### Basic
```py
class A:
    x: int | None = None
    y = None

    def __init__(self):
        self.z = None

a = A()
a.x = 0
a.y = 0
a.z = 0

reveal_type(a.x)  # revealed: Literal[0]
reveal_type(a.y)  # revealed: Literal[0]
reveal_type(a.z)  # revealed: Literal[0]

# Make sure that we infer the narrowed type for eager
# scopes (class, comprehension) and the non-narrowed
# public type for lazy scopes (function)
class _:
    reveal_type(a.x)  # revealed: Literal[0]
    reveal_type(a.y)  # revealed: Literal[0]
    reveal_type(a.z)  # revealed: Literal[0]

[reveal_type(a.x) for _ in range(1)]  # revealed: Literal[0]
[reveal_type(a.y) for _ in range(1)]  # revealed: Literal[0]
[reveal_type(a.z) for _ in range(1)]  # revealed: Literal[0]

def _():
    reveal_type(a.x)  # revealed: Unknown | int | None
    reveal_type(a.y)  # revealed: Unknown | None
    reveal_type(a.z)  # revealed: Unknown | None

if False:
    a = A()

reveal_type(a.x)  # revealed: Literal[0]
reveal_type(a.y)  # revealed: Literal[0]
reveal_type(a.z)  # revealed: Literal[0]

if True:
    a = A()

reveal_type(a.x)  # revealed: int | None
reveal_type(a.y)  # revealed: Unknown | None
reveal_type(a.z)  # revealed: Unknown | None

a.x = 0
a.y = 0
a.z = 0

reveal_type(a.x)  # revealed: Literal[0]
reveal_type(a.y)  # revealed: Literal[0]
reveal_type(a.z)  # revealed: Literal[0]

class _:
    a = A()

    reveal_type(a.x)  # revealed: int | None
    reveal_type(a.y)  # revealed: Unknown | None
    reveal_type(a.z)  # revealed: Unknown | None

def cond() -> bool:
    return True

class _:
    if False:
        a = A()

    reveal_type(a.x)  # revealed: Literal[0]
    reveal_type(a.y)  # revealed: Literal[0]
    reveal_type(a.z)  # revealed: Literal[0]

    if cond():
        a = A()

    reveal_type(a.x)  # revealed: int | None
    reveal_type(a.y)  # revealed: Unknown | None
    reveal_type(a.z)  # revealed: Unknown | None

class _:
    a = A()

    class Inner:
        reveal_type(a.x)  # revealed: int | None
        reveal_type(a.y)  # revealed: Unknown | None
        reveal_type(a.z)  # revealed: Unknown | None

# error: [unresolved-reference]
does.nt.exist = 0
# error: [unresolved-reference]
reveal_type(does.nt.exist)  # revealed: Unknown
```
### Narrowing chain
```py
class D: ...

class C:
    d: D | None = None

class B:
    c1: C | None = None
    c2: C | None = None

class A:
    b: B | None = None

a = A()
a.b = B()
a.b.c1 = C()
a.b.c2 = C()
a.b.c1.d = D()
a.b.c2.d = D()

reveal_type(a.b)  # revealed: B
reveal_type(a.b.c1)  # revealed: C
reveal_type(a.b.c1.d)  # revealed: D

a.b.c1 = C()

reveal_type(a.b)  # revealed: B
reveal_type(a.b.c1)  # revealed: C
reveal_type(a.b.c1.d)  # revealed: D | None
reveal_type(a.b.c2.d)  # revealed: D

a.b.c1.d = D()
a.b = B()

reveal_type(a.b)  # revealed: B
reveal_type(a.b.c1)  # revealed: C | None
reveal_type(a.b.c2)  # revealed: C | None
# error: [possibly-unbound-attribute]
reveal_type(a.b.c1.d)  # revealed: D | None
# error: [possibly-unbound-attribute]
reveal_type(a.b.c2.d)  # revealed: D | None
```
### Do not narrow the type of a `property` by assignment
```py
class C:
    def __init__(self):
        self._x: int = 0

    @property
    def x(self) -> int:
        return self._x

    @x.setter
    def x(self, value: int) -> None:
        self._x = abs(value)

c = C()
c.x = -1

# Don't infer `c.x` to be `Literal[-1]`
reveal_type(c.x)  # revealed: int
```
### Do not narrow the type of a descriptor by assignment
```py
class Descriptor:
    def __get__(self, instance: object, owner: type) -> int:
        return 1

    def __set__(self, instance: object, value: int) -> None:
        pass

class C:
    desc: Descriptor = Descriptor()

c = C()
c.desc = -1

# Don't infer `c.desc` to be `Literal[-1]`
reveal_type(c.desc)  # revealed: int
```
## Subscript
### Specialization for builtin types
Type narrowing based on assignment to a subscript expression is generally unsound, because arbitrary
`__getitem__`/`__setitem__` methods on a class do not necessarily guarantee that the passed-in value
for `__setitem__` is stored and can be retrieved unmodified via `__getitem__`. Therefore, we
currently only perform assignment-based narrowing on a few built-in classes (`list`, `dict`,
`bytearray`, `TypedDict` and `collections` types) where we are confident that this kind of
narrowing can be performed soundly. This is the same approach as pyright.
```py
from typing import TypedDict
from collections import ChainMap, defaultdict

l: list[int | None] = [None]
l[0] = 0

d: dict[int, int] = {1: 1}
d[0] = 0

b: bytearray = bytearray(b"abc")
b[0] = 0

dd: defaultdict[int, int] = defaultdict(int)
dd[0] = 0

cm: ChainMap[int, int] = ChainMap({1: 1}, {0: 0})
cm[0] = 0
# TODO: should be ChainMap[int, int]
reveal_type(cm)  # revealed: ChainMap[Unknown, Unknown]

reveal_type(l[0])  # revealed: Literal[0]
reveal_type(d[0])  # revealed: Literal[0]
reveal_type(b[0])  # revealed: Literal[0]
reveal_type(dd[0])  # revealed: Literal[0]
# TODO: should be Literal[0]
reveal_type(cm[0])  # revealed: Unknown

class C:
    reveal_type(l[0])  # revealed: Literal[0]
    reveal_type(d[0])  # revealed: Literal[0]
    reveal_type(b[0])  # revealed: Literal[0]
    reveal_type(dd[0])  # revealed: Literal[0]
    # TODO: should be Literal[0]
    reveal_type(cm[0])  # revealed: Unknown

[reveal_type(l[0]) for _ in range(1)]  # revealed: Literal[0]
[reveal_type(d[0]) for _ in range(1)]  # revealed: Literal[0]
[reveal_type(b[0]) for _ in range(1)]  # revealed: Literal[0]
[reveal_type(dd[0]) for _ in range(1)]  # revealed: Literal[0]
# TODO: should be Literal[0]
[reveal_type(cm[0]) for _ in range(1)]  # revealed: Unknown

def _():
    reveal_type(l[0])  # revealed: int | None
    reveal_type(d[0])  # revealed: int
    reveal_type(b[0])  # revealed: int
    reveal_type(dd[0])  # revealed: int
    reveal_type(cm[0])  # revealed: int

class D(TypedDict):
    x: int
    label: str

td = D(x=1, label="a")
td["x"] = 0
# TODO: should be Literal[0]
reveal_type(td["x"])  # revealed: @Todo(TypedDict)

# error: [unresolved-reference]
does["not"]["exist"] = 0
# error: [unresolved-reference]
reveal_type(does["not"]["exist"])  # revealed: Unknown

non_subscriptable = 1
# error: [non-subscriptable]
non_subscriptable[0] = 0
# error: [non-subscriptable]
reveal_type(non_subscriptable[0])  # revealed: Unknown
```
### No narrowing for custom classes with arbitrary `__getitem__` / `__setitem__`
```py
class C:
    def __init__(self):
        self.l: list[str] = []

    def __getitem__(self, index: int) -> str:
        return self.l[index]

    def __setitem__(self, index: int, value: str | int) -> None:
        if len(self.l) == index:
            self.l.append(str(value))
        else:
            self.l[index] = str(value)

c = C()
c[0] = 0
reveal_type(c[0])  # revealed: str
```
## Complex target
```py
class A:
    x: list[int | None] = []

class B:
    a: A | None = None

b = B()
b.a = A()
b.a.x[0] = 0

reveal_type(b.a.x[0])  # revealed: Literal[0]

class C:
    reveal_type(b.a.x[0])  # revealed: Literal[0]

def _():
    # error: [possibly-unbound-attribute]
    reveal_type(b.a.x[0])  # revealed: Unknown | int | None
    # error: [possibly-unbound-attribute]
    reveal_type(b.a.x)  # revealed: Unknown | list[int | None]
    reveal_type(b.a)  # revealed: Unknown | A | None
```
## Invalid assignments are not used for narrowing
```py
class C:
    x: int | None
    l: list[int]

def f(c: C, s: str):
    c.x = s  # error: [invalid-assignment]
    reveal_type(c.x)  # revealed: int | None

    s = c.x  # error: [invalid-assignment]

    # TODO: This assignment is invalid and should result in an error.
    c.l[0] = s
    reveal_type(c.l[0])  # revealed: int
```


@@ -53,11 +53,114 @@ constraints may no longer be valid due to a "time lag". However, it may be possi
that some of them are valid by performing a more detailed analysis (e.g. checking that the narrowing
target has not changed in all places where the function is called).
### Narrowing by attribute/subscript assignments
```py
class A:
    x: str | None = None

    def update_x(self, value: str | None):
        self.x = value

a = A()
a.x = "a"

class B:
    reveal_type(a.x)  # revealed: Literal["a"]

def f():
    reveal_type(a.x)  # revealed: Unknown | str | None

[reveal_type(a.x) for _ in range(1)]  # revealed: Literal["a"]

a = A()

class C:
    reveal_type(a.x)  # revealed: str | None

def g():
    reveal_type(a.x)  # revealed: Unknown | str | None

[reveal_type(a.x) for _ in range(1)]  # revealed: str | None

a = A()
a.x = "a"
a.update_x("b")

class D:
    # TODO: should be `str | None`
    reveal_type(a.x)  # revealed: Literal["a"]

def h():
    reveal_type(a.x)  # revealed: Unknown | str | None

# TODO: should be `str | None`
[reveal_type(a.x) for _ in range(1)]  # revealed: Literal["a"]
```
### Narrowing by attribute/subscript assignments in nested scopes
```py
class D: ...

class C:
    d: D | None = None

class B:
    c1: C | None = None
    c2: C | None = None

class A:
    b: B | None = None

a = A()
a.b = B()

class _:
    a.b.c1 = C()

    class _:
        a.b.c1.d = D()

        a = 1

        class _3:
            reveal_type(a)  # revealed: A
            reveal_type(a.b.c1.d)  # revealed: D

class _:
    a = 1

    # error: [unresolved-attribute]
    a.b.c1.d = D()

    class _3:
        reveal_type(a)  # revealed: A
        # TODO: should be `D | None`
        reveal_type(a.b.c1.d)  # revealed: D

a.b.c1 = C()
a.b.c1.d = D()

class _:
    a.b = B()

    class _:
        # error: [possibly-unbound-attribute]
        reveal_type(a.b.c1.d)  # revealed: D | None
        reveal_type(a.b.c1)  # revealed: C | None
```
### Narrowing constraints introduced in eager nested scopes
```py
g: str | None = "a"
class A:
x: str | None = None
a = A()
l: list[str | None] = [None]
def f(x: str | None):
def _():
if x is not None:
@@ -69,6 +172,14 @@ def f(x: str | None):
if g is not None:
reveal_type(g) # revealed: str
if a.x is not None:
# TODO(#17643): should be `Unknown | str`
reveal_type(a.x) # revealed: Unknown | str | None
if l[0] is not None:
# TODO(#17643): should be `str`
reveal_type(l[0]) # revealed: str | None
class C:
if x is not None:
reveal_type(x) # revealed: str
@@ -79,6 +190,14 @@ def f(x: str | None):
if g is not None:
reveal_type(g) # revealed: str
if a.x is not None:
# TODO(#17643): should be `Unknown | str`
reveal_type(a.x) # revealed: Unknown | str | None
if l[0] is not None:
# TODO(#17643): should be `str`
reveal_type(l[0]) # revealed: str | None
# TODO: should be str
# This could be fixed if we supported narrowing with if clauses in comprehensions.
[reveal_type(x) for _ in range(1) if x is not None] # revealed: str | None
@@ -89,6 +208,13 @@ def f(x: str | None):
```py
g: str | None = "a"
class A:
x: str | None = None
a = A()
l: list[str | None] = [None]
def f(x: str | None):
if x is not None:
def _():
@@ -109,6 +235,28 @@ def f(x: str | None):
reveal_type(g) # revealed: str
[reveal_type(g) for _ in range(1)] # revealed: str
if a.x is not None:
def _():
reveal_type(a.x) # revealed: Unknown | str | None
class D:
# TODO(#17643): should be `Unknown | str`
reveal_type(a.x) # revealed: Unknown | str | None
# TODO(#17643): should be `Unknown | str`
[reveal_type(a.x) for _ in range(1)] # revealed: Unknown | str | None
if l[0] is not None:
def _():
reveal_type(l[0]) # revealed: str | None
class D:
# TODO(#17643): should be `str`
reveal_type(l[0]) # revealed: str | None
# TODO(#17643): should be `str`
[reveal_type(l[0]) for _ in range(1)] # revealed: str | None
```
### Narrowing constraints introduced in multiple scopes
@@ -118,6 +266,13 @@ from typing import Literal
g: str | Literal[1] | None = "a"
class A:
x: str | Literal[1] | None = None
a = A()
l: list[str | Literal[1] | None] = [None]
def f(x: str | Literal[1] | None):
class C:
if x is not None:
@@ -140,6 +295,28 @@ def f(x: str | Literal[1] | None):
class D:
if g != 1:
reveal_type(g) # revealed: str
if a.x is not None:
def _():
if a.x != 1:
# TODO(#17643): should be `Unknown | str | None`
reveal_type(a.x) # revealed: Unknown | str | Literal[1] | None
class D:
if a.x != 1:
# TODO(#17643): should be `Unknown | str`
reveal_type(a.x) # revealed: Unknown | str | Literal[1] | None
if l[0] is not None:
def _():
if l[0] != 1:
# TODO(#17643): should be `str | None`
reveal_type(l[0]) # revealed: str | Literal[1] | None
class D:
if l[0] != 1:
# TODO(#17643): should be `str`
reveal_type(l[0]) # revealed: str | Literal[1] | None
```
### Narrowing constraints with bindings in class scope, and nested scopes


@@ -26,7 +26,7 @@ def f(x: Foo):
else:
reveal_type(x) # revealed: Foo
def y(x: Bar):
def g(x: Bar):
if hasattr(x, "spam"):
reveal_type(x) # revealed: Never
reveal_type(x.spam) # revealed: Never
@@ -35,4 +35,25 @@ def y(x: Bar):
# error: [unresolved-attribute]
reveal_type(x.spam) # revealed: Unknown
def returns_bool() -> bool:
    return False

class Baz:
    if returns_bool():
        x: int = 42

def h(obj: Baz):
    reveal_type(obj)  # revealed: Baz
    # error: [possibly-unbound-attribute]
    reveal_type(obj.x)  # revealed: int

    if hasattr(obj, "x"):
        reveal_type(obj)  # revealed: Baz & <Protocol with members 'x'>
        reveal_type(obj.x)  # revealed: int
    else:
        reveal_type(obj)  # revealed: Baz & ~<Protocol with members 'x'>
        # TODO: should emit `[unresolved-attribute]` and reveal `Unknown`
        reveal_type(obj.x)  # revealed: @Todo(map_with_boundness: intersections with negative contributions)
```

View File

@@ -389,6 +389,7 @@ not be considered protocol members by type checkers either:
class Lumberjack(Protocol):
    __slots__ = ()
    __match_args__ = ()
    _abc_foo: str  # any attribute starting with `_abc_` is excluded as a protocol attribute
    x: int

    def __new__(cls, x: int) -> "Lumberjack":


@@ -0,0 +1,82 @@
---
source: crates/ty_test/src/lib.rs
expression: snapshot
---
---
mdtest name: attributes.md - Attributes - Invalid access to attribute
mdtest path: crates/ty_python_semantic/resources/mdtest/attributes.md
---
# Python source files
## mdtest_snippet.py
```
 1 | class Foo:
 2 |     x: int
 3 |
 4 |     def method(self):
 5 |         # error: [unresolved-reference] "Name `x` used when not defined"
 6 |         y = x
 7 | class Foo:
 8 |     x: int = 1
 9 |
10 |     def method(self):
11 |         # error: [unresolved-reference] "Name `x` used when not defined"
12 |         y = x
13 | class Foo:
14 |     def __init__(self):
15 |         self.x = 1
16 |
17 |     def method(self):
18 |         # error: [unresolved-reference] "Name `x` used when not defined"
19 |         y = x
```
# Diagnostics
```
error[unresolved-reference]: Name `x` used when not defined
 --> src/mdtest_snippet.py:6:13
  |
4 |     def method(self):
5 |         # error: [unresolved-reference] "Name `x` used when not defined"
6 |         y = x
  |             ^
7 | class Foo:
8 |     x: int = 1
  |
info: An attribute `x` is available: consider using `self.x`
info: rule `unresolved-reference` is enabled by default
```
```
error[unresolved-reference]: Name `x` used when not defined
--> src/mdtest_snippet.py:12:13
|
10 | def method(self):
11 | # error: [unresolved-reference] "Name `x` used when not defined"
12 | y = x
| ^
13 | class Foo:
14 | def __init__(self):
|
info: An attribute `x` is available: consider using `self.x`
info: rule `unresolved-reference` is enabled by default
```
```
error[unresolved-reference]: Name `x` used when not defined
--> src/mdtest_snippet.py:19:13
|
17 | def method(self):
18 | # error: [unresolved-reference] "Name `x` used when not defined"
19 | y = x
| ^
|
info: An attribute `x` is available: consider using `self.x`
info: rule `unresolved-reference` is enabled by default
```
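These diagnostics stem from Python's actual scoping rules: names bound in a class body (or set as instance attributes) are not visible in a method's lexical scope. A minimal runtime demonstration (hypothetical, mirroring the first snippet):

```python
class Foo:
    x: int = 1

    def method(self) -> int:
        try:
            return x  # NameError: class-body `x` is not in scope here
        except NameError:
            return self.x  # the fix suggested by the diagnostic

print(Foo().method())  # 1
```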


@@ -1,15 +1,14 @@
use std::hash::Hash;
use std::ops::Deref;
use std::sync::Arc;
use ruff_db::parsed::ParsedModule;
use ruff_db::parsed::ParsedModuleRef;
/// Ref-counted owned reference to an AST node.
///
/// The type holds an owned reference to the node's ref-counted [`ParsedModule`].
/// Holding on to the node's [`ParsedModule`] guarantees that the reference to the
/// The type holds an owned reference to the node's ref-counted [`ParsedModuleRef`].
/// Holding on to the node's [`ParsedModuleRef`] guarantees that the reference to the
/// node must still be valid.
///
/// Holding on to any [`AstNodeRef`] prevents the [`ParsedModule`] from being released.
/// Holding on to any [`AstNodeRef`] prevents the [`ParsedModuleRef`] from being released.
///
/// ## Equality
/// Two `AstNodeRef` are considered equal if their pointer addresses are equal.
@@ -33,11 +32,11 @@ use ruff_db::parsed::ParsedModule;
/// run on every AST change. All other queries only run when the expression's identity changes.
#[derive(Clone)]
pub struct AstNodeRef<T> {
/// Owned reference to the node's [`ParsedModule`].
/// Owned reference to the node's [`ParsedModuleRef`].
///
/// The node's reference is guaranteed to remain valid as long as its enclosing
/// [`ParsedModule`] is alive.
parsed: ParsedModule,
/// [`ParsedModuleRef`] is alive.
parsed: ParsedModuleRef,
/// Pointer to the referenced node.
node: std::ptr::NonNull<T>,
@@ -45,15 +44,15 @@ pub struct AstNodeRef<T> {
#[expect(unsafe_code)]
impl<T> AstNodeRef<T> {
/// Creates a new `AstNodeRef` that references `node`. The `parsed` is the [`ParsedModule`] to
/// Creates a new `AstNodeRef` that references `node`. The `parsed` is the [`ParsedModuleRef`] to
/// which the `AstNodeRef` belongs.
///
/// ## Safety
///
/// Dereferencing the `node` can result in undefined behavior if `parsed` isn't the
/// [`ParsedModule`] to which `node` belongs. It's the caller's responsibility to ensure that
/// [`ParsedModuleRef`] to which `node` belongs. It's the caller's responsibility to ensure that
/// the invariant `node belongs to parsed` is upheld.
pub(super) unsafe fn new(parsed: ParsedModule, node: &T) -> Self {
pub(super) unsafe fn new(parsed: ParsedModuleRef, node: &T) -> Self {
Self {
parsed,
node: std::ptr::NonNull::from(node),
@@ -61,54 +60,26 @@ impl<T> AstNodeRef<T> {
}
/// Returns a reference to the wrapped node.
pub const fn node(&self) -> &T {
///
/// Note that this method will panic if the provided module is from a different file or Salsa revision
/// than the module this node was created with.
pub fn node<'ast>(&self, parsed: &'ast ParsedModuleRef) -> &'ast T {
debug_assert!(Arc::ptr_eq(self.parsed.as_arc(), parsed.as_arc()));
// SAFETY: Holding on to `parsed` ensures that the AST to which `node` belongs is still
// alive and not moved.
unsafe { self.node.as_ref() }
}
}
impl<T> Deref for AstNodeRef<T> {
type Target = T;
fn deref(&self) -> &Self::Target {
self.node()
}
}
impl<T> std::fmt::Debug for AstNodeRef<T>
where
T: std::fmt::Debug,
{
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_tuple("AstNodeRef").field(&self.node()).finish()
}
}
impl<T> PartialEq for AstNodeRef<T>
where
T: PartialEq,
{
fn eq(&self, other: &Self) -> bool {
if self.parsed == other.parsed {
// Comparing the pointer addresses is sufficient to determine equality
// if the parsed are the same.
self.node.eq(&other.node)
} else {
// Otherwise perform a deep comparison.
self.node().eq(other.node())
}
}
}
impl<T> Eq for AstNodeRef<T> where T: Eq {}
impl<T> Hash for AstNodeRef<T>
where
T: Hash,
{
fn hash<H: std::hash::Hasher>(&self, state: &mut H) {
self.node().hash(state);
f.debug_tuple("AstNodeRef")
.field(self.node(&self.parsed))
.finish()
}
}
@@ -117,7 +88,9 @@ unsafe impl<T> salsa::Update for AstNodeRef<T> {
unsafe fn maybe_update(old_pointer: *mut Self, new_value: Self) -> bool {
let old_ref = unsafe { &mut (*old_pointer) };
if old_ref.parsed == new_value.parsed && old_ref.node.eq(&new_value.node) {
if Arc::ptr_eq(old_ref.parsed.as_arc(), new_value.parsed.as_arc())
&& old_ref.node.eq(&new_value.node)
{
false
} else {
*old_ref = new_value;
@@ -130,73 +103,3 @@ unsafe impl<T> salsa::Update for AstNodeRef<T> {
unsafe impl<T> Send for AstNodeRef<T> where T: Send {}
#[expect(unsafe_code)]
unsafe impl<T> Sync for AstNodeRef<T> where T: Sync {}
#[cfg(test)]
mod tests {
use crate::ast_node_ref::AstNodeRef;
use ruff_db::parsed::ParsedModule;
use ruff_python_ast::PySourceType;
use ruff_python_parser::parse_unchecked_source;
#[test]
#[expect(unsafe_code)]
fn equality() {
let parsed_raw = parse_unchecked_source("1 + 2", PySourceType::Python);
let parsed = ParsedModule::new(parsed_raw.clone());
let stmt = &parsed.syntax().body[0];
let node1 = unsafe { AstNodeRef::new(parsed.clone(), stmt) };
let node2 = unsafe { AstNodeRef::new(parsed.clone(), stmt) };
assert_eq!(node1, node2);
// Compare from different trees
let cloned = ParsedModule::new(parsed_raw);
let stmt_cloned = &cloned.syntax().body[0];
let cloned_node = unsafe { AstNodeRef::new(cloned.clone(), stmt_cloned) };
assert_eq!(node1, cloned_node);
let other_raw = parse_unchecked_source("2 + 2", PySourceType::Python);
let other = ParsedModule::new(other_raw);
let other_stmt = &other.syntax().body[0];
let other_node = unsafe { AstNodeRef::new(other.clone(), other_stmt) };
assert_ne!(node1, other_node);
}
#[expect(unsafe_code)]
#[test]
fn inequality() {
let parsed_raw = parse_unchecked_source("1 + 2", PySourceType::Python);
let parsed = ParsedModule::new(parsed_raw);
let stmt = &parsed.syntax().body[0];
let node = unsafe { AstNodeRef::new(parsed.clone(), stmt) };
let other_raw = parse_unchecked_source("2 + 2", PySourceType::Python);
let other = ParsedModule::new(other_raw);
let other_stmt = &other.syntax().body[0];
let other_node = unsafe { AstNodeRef::new(other.clone(), other_stmt) };
assert_ne!(node, other_node);
}
#[test]
#[expect(unsafe_code)]
fn debug() {
let parsed_raw = parse_unchecked_source("1 + 2", PySourceType::Python);
let parsed = ParsedModule::new(parsed_raw);
let stmt = &parsed.syntax().body[0];
let stmt_node = unsafe { AstNodeRef::new(parsed.clone(), stmt) };
let debug = format!("{stmt_node:?}");
assert_eq!(debug, format!("AstNodeRef({stmt:?})"));
}
}
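To summarize the API change in this file: `AstNodeRef::node()` previously took no arguments (the ref could deref directly), and now requires the caller to pass the `ParsedModuleRef` it belongs to, asserting it is the same module. A rough Python analogue of the new shape (all names here are illustrative, not the ruff API):

```python
import ast

class NodeRef:
    """Toy analogue of `AstNodeRef`: holds an owning reference to the
    parsed module so the node cannot outlive its tree, and requires
    callers to pass that same module back in when accessing the node."""

    def __init__(self, parsed: ast.Module, node: ast.stmt) -> None:
        self._parsed = parsed  # keeps the whole tree alive
        self._node = node

    def node(self, parsed: ast.Module) -> ast.stmt:
        # Mirrors the Rust `debug_assert!(Arc::ptr_eq(..))`: fail loudly
        # if handed a module from a different file or revision.
        assert parsed is self._parsed, "node accessed with a different module"
        return self._node

parsed = ast.parse("1 + 2")
ref = NodeRef(parsed, parsed.body[0])
assert ref.node(parsed) is parsed.body[0]
```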


@@ -7,7 +7,7 @@ use ruff_python_ast::statement_visitor::{StatementVisitor, walk_stmt};
use ruff_python_ast::{self as ast};
use crate::semantic_index::ast_ids::HasScopedExpressionId;
use crate::semantic_index::symbol::ScopeId;
use crate::semantic_index::place::ScopeId;
use crate::semantic_index::{SemanticIndex, global_scope, semantic_index};
use crate::types::{Truthiness, Type, infer_expression_types};
use crate::{Db, ModuleName, resolve_module};
@@ -32,7 +32,7 @@ fn dunder_all_names_cycle_initial(_db: &dyn Db, _file: File) -> Option<FxHashSet
pub(crate) fn dunder_all_names(db: &dyn Db, file: File) -> Option<FxHashSet<Name>> {
let _span = tracing::trace_span!("dunder_all_names", file=?file.path(db)).entered();
let module = parsed_module(db.upcast(), file);
let module = parsed_module(db.upcast(), file).load(db.upcast());
let index = semantic_index(db, file);
let mut collector = DunderAllNamesCollector::new(db, file, index);
collector.visit_body(module.suite());


@@ -24,13 +24,13 @@ pub(crate) mod list;
mod module_name;
mod module_resolver;
mod node_key;
pub(crate) mod place;
mod program;
mod python_platform;
pub mod semantic_index;
mod semantic_model;
pub(crate) mod site_packages;
mod suppression;
pub(crate) mod symbol;
pub mod types;
mod unpack;
mod util;


@@ -139,15 +139,6 @@ pub(crate) fn search_paths(db: &dyn Db) -> SearchPathIterator {
Program::get(db).search_paths(db).iter(db)
}
/// Searches for a `.venv` directory in `project_root` that contains a `pyvenv.cfg` file.
fn discover_venv_in(system: &dyn System, project_root: &SystemPath) -> Option<SystemPathBuf> {
let virtual_env_directory = project_root.join(".venv");
system
.is_file(&virtual_env_directory.join("pyvenv.cfg"))
.then_some(virtual_env_directory)
}
#[derive(Debug, PartialEq, Eq)]
pub struct SearchPaths {
/// Search paths that have been statically determined purely from reading Ruff's configuration settings.
@@ -243,68 +234,34 @@ impl SearchPaths {
static_paths.push(stdlib_path);
let (site_packages_paths, python_version) = match python_path {
PythonPath::SysPrefix(sys_prefix, origin) => {
tracing::debug!(
"Discovering site-packages paths from sys-prefix `{sys_prefix}` ({origin})"
);
// TODO: We may want to warn here if the venv's python version is older
// than the one resolved in the program settings because it indicates
// that the `target-version` is incorrectly configured or that the
// venv is out of date.
PythonEnvironment::new(sys_prefix, *origin, system)?.into_settings(system)?
}
PythonPath::IntoSysPrefix(path, origin) => {
if origin == &SysPrefixPathOrigin::LocalVenv {
tracing::debug!("Discovering virtual environment in `{path}`");
let virtual_env_directory = path.join(".venv");
PythonPath::Resolve(target, origin) => {
tracing::debug!("Resolving {origin}: {target}");
let root = system
// If given a file, assume it's a Python executable, e.g., `.venv/bin/python3`,
// and search for a virtual environment in the root directory. Ideally, we'd
// invoke the target to determine `sys.prefix` here, but that's more complicated
// and may be deferred to uv.
.is_file(target)
.then(|| target.as_path())
.take_if(|target| {
// Avoid using the target if it doesn't look like a Python executable, e.g.,
// to deny cases like `.venv/bin/foo`
target
.file_name()
.is_some_and(|name| name.starts_with("python"))
})
.and_then(SystemPath::parent)
.and_then(SystemPath::parent)
// If not a file, use the path as given and let `PythonEnvironment::new`
// handle the error.
.unwrap_or(target);
PythonEnvironment::new(root, *origin, system)?.into_settings(system)?
}
PythonPath::Discover(root) => {
tracing::debug!("Discovering virtual environment in `{root}`");
discover_venv_in(db.system(), root)
.and_then(|virtual_env_path| {
tracing::debug!("Found `.venv` folder at `{}`", virtual_env_path);
PythonEnvironment::new(
virtual_env_path.clone(),
SysPrefixPathOrigin::LocalVenv,
system,
)
.and_then(|env| env.into_settings(system))
.inspect_err(|err| {
PythonEnvironment::new(
&virtual_env_directory,
SysPrefixPathOrigin::LocalVenv,
system,
)
.and_then(|venv| venv.into_settings(system))
.inspect_err(|err| {
if system.is_directory(&virtual_env_directory) {
tracing::debug!(
"Ignoring automatically detected virtual environment at `{}`: {}",
virtual_env_path,
&virtual_env_directory,
err
);
})
.ok()
}
})
.unwrap_or_else(|| {
.unwrap_or_else(|_| {
tracing::debug!("No virtual environment found");
(SitePackagesPaths::default(), None)
})
} else {
tracing::debug!("Resolving {origin}: {path}");
PythonEnvironment::new(path, origin.clone(), system)?.into_settings(system)?
}
}
PythonPath::KnownSitePackages(paths) => (
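The executable-to-prefix heuristic described in the comments above can be sketched as follows (a hypothetical helper, not the actual ruff implementation):

```python
from pathlib import Path

def resolve_environment_root(target: Path) -> Path:
    # If given a file, assume it is a Python executable such as
    # `.venv/bin/python3` and walk up two directories to the environment
    # root; reject names like `.venv/bin/foo` that don't look like Python.
    if target.is_file() and target.name.startswith("python"):
        return target.parent.parent
    # Otherwise use the path as given (the real code lets the environment
    # constructor report any error).
    return target
```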


@@ -228,7 +228,6 @@ impl Default for PythonVersionWithSource {
/// Configures the search paths for module resolution.
#[derive(Eq, PartialEq, Debug, Clone)]
#[cfg_attr(feature = "serde", derive(serde::Serialize))]
pub struct SearchPathSettings {
/// List of user-provided paths that should take first priority in the module resolution.
/// Examples in other type checkers are mypy's MYPYPATH environment variable,
@@ -260,10 +259,11 @@ impl SearchPathSettings {
}
#[derive(Debug, Clone, Eq, PartialEq)]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
pub enum PythonPath {
/// A path that represents the value of [`sys.prefix`] at runtime in Python
/// for a given Python executable.
/// A path that either represents the value of [`sys.prefix`] at runtime in Python
/// for a given Python executable, or a path relative to `sys.prefix` that we will
/// later attempt to resolve into `sys.prefix`. Which of the two this variant
/// represents depends on the [`SysPrefixPathOrigin`] element in the tuple.
///
/// For the case of a virtual environment, where a
/// Python binary is at `/.venv/bin/python`, `sys.prefix` is the path to
@@ -275,13 +275,7 @@ pub enum PythonPath {
/// `/opt/homebrew/lib/python3.X/site-packages`.
///
/// [`sys.prefix`]: https://docs.python.org/3/library/sys.html#sys.prefix
SysPrefix(SystemPathBuf, SysPrefixPathOrigin),
/// Resolve a path to an executable (or environment directory) into a usable environment.
Resolve(SystemPathBuf, SysPrefixPathOrigin),
/// Tries to discover a virtual environment in the given path.
Discover(SystemPathBuf),
IntoSysPrefix(SystemPathBuf, SysPrefixPathOrigin),
/// Resolved site packages paths.
///
@@ -291,16 +285,8 @@ pub enum PythonPath {
}
impl PythonPath {
pub fn from_virtual_env_var(path: impl Into<SystemPathBuf>) -> Self {
Self::SysPrefix(path.into(), SysPrefixPathOrigin::VirtualEnvVar)
}
pub fn from_conda_prefix_var(path: impl Into<SystemPathBuf>) -> Self {
Self::Resolve(path.into(), SysPrefixPathOrigin::CondaPrefixVar)
}
pub fn from_cli_flag(path: SystemPathBuf) -> Self {
Self::Resolve(path, SysPrefixPathOrigin::PythonCliFlag)
pub fn sys_prefix(path: impl Into<SystemPathBuf>, origin: SysPrefixPathOrigin) -> Self {
Self::IntoSysPrefix(path.into(), origin)
}
}


@@ -19,9 +19,9 @@ use crate::semantic_index::builder::SemanticIndexBuilder;
use crate::semantic_index::definition::{Definition, DefinitionNodeKey, Definitions};
use crate::semantic_index::expression::Expression;
use crate::semantic_index::narrowing_constraints::ScopedNarrowingConstraint;
use crate::semantic_index::symbol::{
FileScopeId, NodeWithScopeKey, NodeWithScopeRef, Scope, ScopeId, ScopeKind, ScopedSymbolId,
SymbolTable,
use crate::semantic_index::place::{
FileScopeId, NodeWithScopeKey, NodeWithScopeRef, PlaceExpr, PlaceTable, Scope, ScopeId,
ScopeKind, ScopedPlaceId,
};
use crate::semantic_index::use_def::{EagerSnapshotKey, ScopedEagerSnapshotId, UseDefMap};
@@ -30,9 +30,9 @@ mod builder;
pub mod definition;
pub mod expression;
pub(crate) mod narrowing_constraints;
pub mod place;
pub(crate) mod predicate;
mod re_exports;
pub mod symbol;
mod use_def;
mod visibility_constraints;
@@ -41,7 +41,7 @@ pub(crate) use self::use_def::{
DeclarationsIterator,
};
type SymbolMap = hashbrown::HashMap<ScopedSymbolId, (), FxBuildHasher>;
type PlaceSet = hashbrown::HashMap<ScopedPlaceId, (), FxBuildHasher>;
/// Returns the semantic index for `file`.
///
@@ -50,23 +50,23 @@ type SymbolMap = hashbrown::HashMap<ScopedSymbolId, (), FxBuildHasher>;
pub(crate) fn semantic_index(db: &dyn Db, file: File) -> SemanticIndex<'_> {
let _span = tracing::trace_span!("semantic_index", ?file).entered();
let parsed = parsed_module(db.upcast(), file);
let module = parsed_module(db.upcast(), file).load(db.upcast());
SemanticIndexBuilder::new(db, file, parsed).build()
SemanticIndexBuilder::new(db, file, &module).build()
}
/// Returns the symbol table for a specific `scope`.
/// Returns the place table for a specific `scope`.
///
/// Using [`symbol_table`] over [`semantic_index`] has the advantage that
/// Salsa can avoid invalidating dependent queries if this scope's symbol table
/// Using [`place_table`] over [`semantic_index`] has the advantage that
/// Salsa can avoid invalidating dependent queries if this scope's place table
/// is unchanged.
#[salsa::tracked(returns(deref))]
pub(crate) fn symbol_table<'db>(db: &'db dyn Db, scope: ScopeId<'db>) -> Arc<SymbolTable> {
pub(crate) fn place_table<'db>(db: &'db dyn Db, scope: ScopeId<'db>) -> Arc<PlaceTable> {
let file = scope.file(db);
let _span = tracing::trace_span!("symbol_table", scope=?scope.as_id(), ?file).entered();
let _span = tracing::trace_span!("place_table", scope=?scope.as_id(), ?file).entered();
let index = semantic_index(db, file);
index.symbol_table(scope.file_scope_id(db))
index.place_table(scope.file_scope_id(db))
}
/// Returns the set of modules that are imported anywhere in `file`.
@@ -113,13 +113,10 @@ pub(crate) fn attribute_assignments<'db, 's>(
let index = semantic_index(db, file);
attribute_scopes(db, class_body_scope).filter_map(|function_scope_id| {
let attribute_table = index.instance_attribute_table(function_scope_id);
let symbol = attribute_table.symbol_id_by_name(name)?;
let place_table = index.place_table(function_scope_id);
let place = place_table.place_id_by_instance_attribute_name(name)?;
let use_def = &index.use_def_maps[function_scope_id];
Some((
use_def.instance_attribute_bindings(symbol),
function_scope_id,
))
Some((use_def.public_bindings(place), function_scope_id))
})
}
@@ -132,10 +129,11 @@ pub(crate) fn attribute_scopes<'db, 's>(
class_body_scope: ScopeId<'db>,
) -> impl Iterator<Item = FileScopeId> + use<'s, 'db> {
let file = class_body_scope.file(db);
let module = parsed_module(db.upcast(), file).load(db.upcast());
let index = semantic_index(db, file);
let class_scope_id = class_body_scope.file_scope_id(db);
ChildrenIter::new(index, class_scope_id).filter_map(|(child_scope_id, scope)| {
ChildrenIter::new(index, class_scope_id).filter_map(move |(child_scope_id, scope)| {
let (function_scope_id, function_scope) =
if scope.node().scope_kind() == ScopeKind::Annotation {
// This could be a generic method with a type-params scope.
@@ -147,7 +145,7 @@ pub(crate) fn attribute_scopes<'db, 's>(
(child_scope_id, scope)
};
function_scope.node().as_function()?;
function_scope.node().as_function(&module)?;
Some(function_scope_id)
})
}
@@ -167,14 +165,11 @@ pub(crate) enum EagerSnapshotResult<'map, 'db> {
NoLongerInEagerContext,
}
/// The symbol tables and use-def maps for all scopes in a file.
/// The place tables and use-def maps for all scopes in a file.
#[derive(Debug, Update)]
pub(crate) struct SemanticIndex<'db> {
/// List of all symbol tables in this file, indexed by scope.
symbol_tables: IndexVec<FileScopeId, Arc<SymbolTable>>,
/// List of all instance attribute tables in this file, indexed by scope.
instance_attribute_tables: IndexVec<FileScopeId, SymbolTable>,
/// List of all place tables in this file, indexed by scope.
place_tables: IndexVec<FileScopeId, Arc<PlaceTable>>,
/// List of all scopes in this file.
scopes: IndexVec<FileScopeId, Scope>,
@@ -195,7 +190,7 @@ pub(crate) struct SemanticIndex<'db> {
scope_ids_by_scope: IndexVec<FileScopeId, ScopeId<'db>>,
/// Map from the file-local [`FileScopeId`] to the set of explicit-global symbols it contains.
globals_by_scope: FxHashMap<FileScopeId, FxHashSet<ScopedSymbolId>>,
globals_by_scope: FxHashMap<FileScopeId, FxHashSet<ScopedPlaceId>>,
/// Use-def map for each scope in this file.
use_def_maps: IndexVec<FileScopeId, Arc<UseDefMap<'db>>>,
@@ -223,17 +218,13 @@ pub(crate) struct SemanticIndex<'db> {
}
impl<'db> SemanticIndex<'db> {
/// Returns the symbol table for a specific scope.
/// Returns the place table for a specific scope.
///
/// Use the Salsa cached [`symbol_table()`] query if you only need the
/// symbol table for a single scope.
/// Use the Salsa cached [`place_table()`] query if you only need the
/// place table for a single scope.
#[track_caller]
pub(super) fn symbol_table(&self, scope_id: FileScopeId) -> Arc<SymbolTable> {
self.symbol_tables[scope_id].clone()
}
pub(super) fn instance_attribute_table(&self, scope_id: FileScopeId) -> &SymbolTable {
&self.instance_attribute_tables[scope_id]
pub(super) fn place_table(&self, scope_id: FileScopeId) -> Arc<PlaceTable> {
self.place_tables[scope_id].clone()
}
/// Returns the use-def map for a specific scope.
@@ -286,7 +277,7 @@ impl<'db> SemanticIndex<'db> {
pub(crate) fn symbol_is_global_in_scope(
&self,
symbol: ScopedSymbolId,
symbol: ScopedPlaceId,
scope: FileScopeId,
) -> bool {
self.globals_by_scope
@@ -444,7 +435,7 @@ impl<'db> SemanticIndex<'db> {
pub(crate) fn eager_snapshot(
&self,
enclosing_scope: FileScopeId,
symbol: &str,
expr: &PlaceExpr,
nested_scope: FileScopeId,
) -> EagerSnapshotResult<'_, 'db> {
for (ancestor_scope_id, ancestor_scope) in self.ancestor_scopes(nested_scope) {
@@ -455,12 +446,12 @@ impl<'db> SemanticIndex<'db> {
return EagerSnapshotResult::NoLongerInEagerContext;
}
}
let Some(symbol_id) = self.symbol_tables[enclosing_scope].symbol_id_by_name(symbol) else {
let Some(place_id) = self.place_tables[enclosing_scope].place_id_by_expr(expr) else {
return EagerSnapshotResult::NotFound;
};
let key = EagerSnapshotKey {
enclosing_scope,
enclosing_symbol: symbol_id,
enclosing_place: place_id,
nested_scope,
};
let Some(id) = self.eager_snapshots.get(&key) else {
@@ -480,9 +471,9 @@ pub struct AncestorsIter<'a> {
}
impl<'a> AncestorsIter<'a> {
fn new(module_symbol_table: &'a SemanticIndex, start: FileScopeId) -> Self {
fn new(module_table: &'a SemanticIndex, start: FileScopeId) -> Self {
Self {
scopes: &module_symbol_table.scopes,
scopes: &module_table.scopes,
next_id: Some(start),
}
}
@@ -508,9 +499,9 @@ pub struct DescendantsIter<'a> {
}
impl<'a> DescendantsIter<'a> {
fn new(symbol_table: &'a SemanticIndex, scope_id: FileScopeId) -> Self {
let scope = &symbol_table.scopes[scope_id];
let scopes = &symbol_table.scopes[scope.descendants()];
fn new(index: &'a SemanticIndex, scope_id: FileScopeId) -> Self {
let scope = &index.scopes[scope_id];
let scopes = &index.scopes[scope.descendants()];
Self {
next_id: scope_id + 1,
@@ -545,8 +536,8 @@ pub struct ChildrenIter<'a> {
}
impl<'a> ChildrenIter<'a> {
pub(crate) fn new(module_symbol_table: &'a SemanticIndex, parent: FileScopeId) -> Self {
let descendants = DescendantsIter::new(module_symbol_table, parent);
pub(crate) fn new(module_index: &'a SemanticIndex, parent: FileScopeId) -> Self {
let descendants = DescendantsIter::new(module_index, parent);
Self {
parent,
@@ -569,7 +560,7 @@ impl FusedIterator for ChildrenIter<'_> {}
#[cfg(test)]
mod tests {
use ruff_db::files::{File, system_path_to_file};
use ruff_db::parsed::parsed_module;
use ruff_db::parsed::{ParsedModuleRef, parsed_module};
use ruff_python_ast::{self as ast};
use ruff_text_size::{Ranged, TextRange};
@@ -577,21 +568,19 @@ mod tests {
use crate::db::tests::{TestDb, TestDbBuilder};
use crate::semantic_index::ast_ids::{HasScopedUseId, ScopedUseId};
use crate::semantic_index::definition::{Definition, DefinitionKind};
use crate::semantic_index::symbol::{
FileScopeId, Scope, ScopeKind, ScopedSymbolId, SymbolTable,
};
use crate::semantic_index::place::{FileScopeId, PlaceTable, Scope, ScopeKind, ScopedPlaceId};
use crate::semantic_index::use_def::UseDefMap;
use crate::semantic_index::{global_scope, semantic_index, symbol_table, use_def_map};
use crate::semantic_index::{global_scope, place_table, semantic_index, use_def_map};
impl UseDefMap<'_> {
fn first_public_binding(&self, symbol: ScopedSymbolId) -> Option<Definition<'_>> {
fn first_public_binding(&self, symbol: ScopedPlaceId) -> Option<Definition<'_>> {
self.public_bindings(symbol)
.find_map(|constrained_binding| constrained_binding.binding)
.find_map(|constrained_binding| constrained_binding.binding.definition())
}
fn first_binding_at_use(&self, use_id: ScopedUseId) -> Option<Definition<'_>> {
self.bindings_at_use(use_id)
.find_map(|constrained_binding| constrained_binding.binding)
.find_map(|constrained_binding| constrained_binding.binding.definition())
}
}
@@ -613,17 +602,17 @@ mod tests {
TestCase { db, file }
}
fn names(table: &SymbolTable) -> Vec<String> {
fn names(table: &PlaceTable) -> Vec<String> {
table
.symbols()
.map(|symbol| symbol.name().to_string())
.places()
.filter_map(|expr| Some(expr.as_name()?.to_string()))
.collect()
}
#[test]
fn empty() {
let TestCase { db, file } = test_case("");
let global_table = symbol_table(&db, global_scope(&db, file));
let global_table = place_table(&db, global_scope(&db, file));
let global_names = names(global_table);
@@ -633,7 +622,7 @@ mod tests {
#[test]
fn simple() {
let TestCase { db, file } = test_case("x");
let global_table = symbol_table(&db, global_scope(&db, file));
let global_table = place_table(&db, global_scope(&db, file));
assert_eq!(names(global_table), vec!["x"]);
}
@@ -641,7 +630,7 @@ mod tests {
#[test]
fn annotation_only() {
let TestCase { db, file } = test_case("x: int");
let global_table = symbol_table(&db, global_scope(&db, file));
let global_table = place_table(&db, global_scope(&db, file));
assert_eq!(names(global_table), vec!["int", "x"]);
// TODO record definition
@@ -651,10 +640,10 @@ mod tests {
fn import() {
let TestCase { db, file } = test_case("import foo");
let scope = global_scope(&db, file);
let global_table = symbol_table(&db, scope);
let global_table = place_table(&db, scope);
assert_eq!(names(global_table), vec!["foo"]);
let foo = global_table.symbol_id_by_name("foo").unwrap();
let foo = global_table.place_id_by_name("foo").unwrap();
let use_def = use_def_map(&db, scope);
let binding = use_def.first_public_binding(foo).unwrap();
@@ -664,7 +653,7 @@ mod tests {
#[test]
fn import_sub() {
let TestCase { db, file } = test_case("import foo.bar");
let global_table = symbol_table(&db, global_scope(&db, file));
let global_table = place_table(&db, global_scope(&db, file));
assert_eq!(names(global_table), vec!["foo"]);
}
@@ -672,7 +661,7 @@ mod tests {
#[test]
fn import_as() {
let TestCase { db, file } = test_case("import foo.bar as baz");
let global_table = symbol_table(&db, global_scope(&db, file));
let global_table = place_table(&db, global_scope(&db, file));
assert_eq!(names(global_table), vec!["baz"]);
}
@@ -681,12 +670,12 @@ mod tests {
fn import_from() {
let TestCase { db, file } = test_case("from bar import foo");
let scope = global_scope(&db, file);
let global_table = symbol_table(&db, scope);
let global_table = place_table(&db, scope);
assert_eq!(names(global_table), vec!["foo"]);
assert!(
global_table
.symbol_by_name("foo")
.place_by_name("foo")
.is_some_and(|symbol| { symbol.is_bound() && !symbol.is_used() }),
"symbols that are defined get the defined flag"
);
@@ -695,7 +684,7 @@ mod tests {
let binding = use_def
.first_public_binding(
global_table
.symbol_id_by_name("foo")
.place_id_by_name("foo")
.expect("symbol to exist"),
)
.unwrap();
@@ -706,18 +695,18 @@ mod tests {
fn assign() {
let TestCase { db, file } = test_case("x = foo");
let scope = global_scope(&db, file);
let global_table = symbol_table(&db, scope);
let global_table = place_table(&db, scope);
assert_eq!(names(global_table), vec!["foo", "x"]);
assert!(
global_table
.symbol_by_name("foo")
.place_by_name("foo")
.is_some_and(|symbol| { !symbol.is_bound() && symbol.is_used() }),
"a symbol used but not bound in a scope should have only the used flag"
);
let use_def = use_def_map(&db, scope);
let binding = use_def
.first_public_binding(global_table.symbol_id_by_name("x").expect("symbol exists"))
.first_public_binding(global_table.place_id_by_name("x").expect("symbol exists"))
.unwrap();
assert!(matches!(binding.kind(&db), DefinitionKind::Assignment(_)));
}
@@ -726,13 +715,13 @@ mod tests {
fn augmented_assignment() {
let TestCase { db, file } = test_case("x += 1");
let scope = global_scope(&db, file);
let global_table = symbol_table(&db, scope);
let global_table = place_table(&db, scope);
assert_eq!(names(global_table), vec!["x"]);
let use_def = use_def_map(&db, scope);
let binding = use_def
.first_public_binding(global_table.symbol_id_by_name("x").unwrap())
.first_public_binding(global_table.place_id_by_name("x").unwrap())
.unwrap();
assert!(matches!(
@@ -750,10 +739,11 @@ class C:
y = 2
",
);
let global_table = symbol_table(&db, global_scope(&db, file));
let global_table = place_table(&db, global_scope(&db, file));
assert_eq!(names(global_table), vec!["C", "y"]);
let module = parsed_module(&db, file).load(&db);
let index = semantic_index(&db, file);
let [(class_scope_id, class_scope)] = index
@@ -763,14 +753,17 @@ y = 2
panic!("expected one child scope")
};
assert_eq!(class_scope.kind(), ScopeKind::Class);
assert_eq!(class_scope_id.to_scope_id(&db, file).name(&db), "C");
assert_eq!(
class_scope_id.to_scope_id(&db, file).name(&db, &module),
"C"
);
let class_table = index.symbol_table(class_scope_id);
let class_table = index.place_table(class_scope_id);
assert_eq!(names(&class_table), vec!["x"]);
let use_def = index.use_def_map(class_scope_id);
let binding = use_def
.first_public_binding(class_table.symbol_id_by_name("x").expect("symbol exists"))
.first_public_binding(class_table.place_id_by_name("x").expect("symbol exists"))
.unwrap();
assert!(matches!(binding.kind(&db), DefinitionKind::Assignment(_)));
}
@@ -784,8 +777,9 @@ def func():
y = 2
",
);
let module = parsed_module(&db, file).load(&db);
let index = semantic_index(&db, file);
let global_table = index.symbol_table(FileScopeId::global());
let global_table = index.place_table(FileScopeId::global());
assert_eq!(names(&global_table), vec!["func", "y"]);
@@ -796,18 +790,17 @@ y = 2
panic!("expected one child scope")
};
assert_eq!(function_scope.kind(), ScopeKind::Function);
assert_eq!(function_scope_id.to_scope_id(&db, file).name(&db), "func");
assert_eq!(
function_scope_id.to_scope_id(&db, file).name(&db, &module),
"func"
);
let function_table = index.symbol_table(function_scope_id);
let function_table = index.place_table(function_scope_id);
assert_eq!(names(&function_table), vec!["x"]);
let use_def = index.use_def_map(function_scope_id);
let binding = use_def
.first_public_binding(
function_table
.symbol_id_by_name("x")
.expect("symbol exists"),
)
.first_public_binding(function_table.place_id_by_name("x").expect("symbol exists"))
.unwrap();
assert!(matches!(binding.kind(&db), DefinitionKind::Assignment(_)));
}
@@ -822,7 +815,7 @@ def f(a: str, /, b: str, c: int = 1, *args, d: int = 2, **kwargs):
);
let index = semantic_index(&db, file);
let global_table = symbol_table(&db, global_scope(&db, file));
let global_table = place_table(&db, global_scope(&db, file));
assert_eq!(names(global_table), vec!["str", "int", "f"]);
@@ -833,7 +826,7 @@ def f(a: str, /, b: str, c: int = 1, *args, d: int = 2, **kwargs):
panic!("Expected a function scope")
};
let function_table = index.symbol_table(function_scope_id);
let function_table = index.place_table(function_scope_id);
assert_eq!(
names(&function_table),
vec!["a", "b", "c", "d", "args", "kwargs"],
@@ -844,7 +837,7 @@ def f(a: str, /, b: str, c: int = 1, *args, d: int = 2, **kwargs):
let binding = use_def
.first_public_binding(
function_table
.symbol_id_by_name(name)
.place_id_by_name(name)
.expect("symbol exists"),
)
.unwrap();
@@ -853,7 +846,7 @@ def f(a: str, /, b: str, c: int = 1, *args, d: int = 2, **kwargs):
let args_binding = use_def
.first_public_binding(
function_table
.symbol_id_by_name("args")
.place_id_by_name("args")
.expect("symbol exists"),
)
.unwrap();
@@ -864,7 +857,7 @@ def f(a: str, /, b: str, c: int = 1, *args, d: int = 2, **kwargs):
let kwargs_binding = use_def
.first_public_binding(
function_table
.symbol_id_by_name("kwargs")
.place_id_by_name("kwargs")
.expect("symbol exists"),
)
.unwrap();
@@ -879,7 +872,7 @@ def f(a: str, /, b: str, c: int = 1, *args, d: int = 2, **kwargs):
let TestCase { db, file } = test_case("lambda a, b, c=1, *args, d=2, **kwargs: None");
let index = semantic_index(&db, file);
let global_table = symbol_table(&db, global_scope(&db, file));
let global_table = place_table(&db, global_scope(&db, file));
assert!(names(global_table).is_empty());
@@ -890,7 +883,7 @@ def f(a: str, /, b: str, c: int = 1, *args, d: int = 2, **kwargs):
panic!("Expected a lambda scope")
};
let lambda_table = index.symbol_table(lambda_scope_id);
let lambda_table = index.place_table(lambda_scope_id);
assert_eq!(
names(&lambda_table),
vec!["a", "b", "c", "d", "args", "kwargs"],
@@ -899,14 +892,14 @@ def f(a: str, /, b: str, c: int = 1, *args, d: int = 2, **kwargs):
let use_def = index.use_def_map(lambda_scope_id);
for name in ["a", "b", "c", "d"] {
let binding = use_def
.first_public_binding(lambda_table.symbol_id_by_name(name).expect("symbol exists"))
.first_public_binding(lambda_table.place_id_by_name(name).expect("symbol exists"))
.unwrap();
assert!(matches!(binding.kind(&db), DefinitionKind::Parameter(_)));
}
let args_binding = use_def
.first_public_binding(
lambda_table
.symbol_id_by_name("args")
.place_id_by_name("args")
.expect("symbol exists"),
)
.unwrap();
@@ -917,7 +910,7 @@ def f(a: str, /, b: str, c: int = 1, *args, d: int = 2, **kwargs):
let kwargs_binding = use_def
.first_public_binding(
lambda_table
.symbol_id_by_name("kwargs")
.place_id_by_name("kwargs")
.expect("symbol exists"),
)
.unwrap();
@@ -937,8 +930,9 @@ def f(a: str, /, b: str, c: int = 1, *args, d: int = 2, **kwargs):
",
);
let module = parsed_module(&db, file).load(&db);
let index = semantic_index(&db, file);
let global_table = index.symbol_table(FileScopeId::global());
let global_table = index.place_table(FileScopeId::global());
assert_eq!(names(&global_table), vec!["iter1"]);
@@ -951,11 +945,13 @@ def f(a: str, /, b: str, c: int = 1, *args, d: int = 2, **kwargs):
assert_eq!(comprehension_scope.kind(), ScopeKind::Comprehension);
assert_eq!(
comprehension_scope_id.to_scope_id(&db, file).name(&db),
comprehension_scope_id
.to_scope_id(&db, file)
.name(&db, &module),
"<listcomp>"
);
let comprehension_symbol_table = index.symbol_table(comprehension_scope_id);
let comprehension_symbol_table = index.place_table(comprehension_scope_id);
assert_eq!(names(&comprehension_symbol_table), vec!["x", "y"]);
@@ -964,7 +960,7 @@ def f(a: str, /, b: str, c: int = 1, *args, d: int = 2, **kwargs):
let binding = use_def
.first_public_binding(
comprehension_symbol_table
.symbol_id_by_name(name)
.place_id_by_name(name)
.expect("symbol exists"),
)
.unwrap();
@@ -995,8 +991,9 @@ def f(a: str, /, b: str, c: int = 1, *args, d: int = 2, **kwargs):
let use_def = index.use_def_map(comprehension_scope_id);
let module = parsed_module(&db, file).syntax();
let element = module.body[0]
let module = parsed_module(&db, file).load(&db);
let syntax = module.syntax();
let element = syntax.body[0]
.as_expr_stmt()
.unwrap()
.value
@@ -1012,7 +1009,7 @@ def f(a: str, /, b: str, c: int = 1, *args, d: int = 2, **kwargs):
let DefinitionKind::Comprehension(comprehension) = binding.kind(&db) else {
panic!("expected generator definition")
};
let target = comprehension.target();
let target = comprehension.target(&module);
let name = target.as_name_expr().unwrap().id().as_str();
assert_eq!(name, "x");
@@ -1030,8 +1027,9 @@ def f(a: str, /, b: str, c: int = 1, *args, d: int = 2, **kwargs):
",
);
let module = parsed_module(&db, file).load(&db);
let index = semantic_index(&db, file);
let global_table = index.symbol_table(FileScopeId::global());
let global_table = index.place_table(FileScopeId::global());
assert_eq!(names(&global_table), vec!["iter1"]);
@@ -1044,11 +1042,13 @@ def f(a: str, /, b: str, c: int = 1, *args, d: int = 2, **kwargs):
assert_eq!(comprehension_scope.kind(), ScopeKind::Comprehension);
assert_eq!(
comprehension_scope_id.to_scope_id(&db, file).name(&db),
comprehension_scope_id
.to_scope_id(&db, file)
.name(&db, &module),
"<listcomp>"
);
let comprehension_symbol_table = index.symbol_table(comprehension_scope_id);
let comprehension_symbol_table = index.place_table(comprehension_scope_id);
assert_eq!(names(&comprehension_symbol_table), vec!["y", "iter2"]);
@@ -1063,11 +1063,11 @@ def f(a: str, /, b: str, c: int = 1, *args, d: int = 2, **kwargs):
assert_eq!(
inner_comprehension_scope_id
.to_scope_id(&db, file)
.name(&db),
.name(&db, &module),
"<setcomp>"
);
let inner_comprehension_symbol_table = index.symbol_table(inner_comprehension_scope_id);
let inner_comprehension_symbol_table = index.place_table(inner_comprehension_scope_id);
assert_eq!(names(&inner_comprehension_symbol_table), vec!["x"]);
}
@@ -1082,14 +1082,14 @@ with item1 as x, item2 as y:
);
let index = semantic_index(&db, file);
let global_table = index.symbol_table(FileScopeId::global());
let global_table = index.place_table(FileScopeId::global());
assert_eq!(names(&global_table), vec!["item1", "x", "item2", "y"]);
let use_def = index.use_def_map(FileScopeId::global());
for name in ["x", "y"] {
let binding = use_def
.first_public_binding(global_table.symbol_id_by_name(name).expect("symbol exists"))
.first_public_binding(global_table.place_id_by_name(name).expect("symbol exists"))
.expect("Expected with item definition for {name}");
assert!(matches!(binding.kind(&db), DefinitionKind::WithItem(_)));
}
@@ -1105,14 +1105,14 @@ with context() as (x, y):
);
let index = semantic_index(&db, file);
let global_table = index.symbol_table(FileScopeId::global());
let global_table = index.place_table(FileScopeId::global());
assert_eq!(names(&global_table), vec!["context", "x", "y"]);
let use_def = index.use_def_map(FileScopeId::global());
for name in ["x", "y"] {
let binding = use_def
.first_public_binding(global_table.symbol_id_by_name(name).expect("symbol exists"))
.first_public_binding(global_table.place_id_by_name(name).expect("symbol exists"))
.expect("Expected with item definition for {name}");
assert!(matches!(binding.kind(&db), DefinitionKind::WithItem(_)));
}
@@ -1128,8 +1128,9 @@ def func():
y = 2
",
);
let module = parsed_module(&db, file).load(&db);
let index = semantic_index(&db, file);
let global_table = index.symbol_table(FileScopeId::global());
let global_table = index.place_table(FileScopeId::global());
assert_eq!(names(&global_table), vec!["func"]);
let [
@@ -1144,12 +1145,18 @@ def func():
assert_eq!(func_scope_1.kind(), ScopeKind::Function);
assert_eq!(func_scope1_id.to_scope_id(&db, file).name(&db), "func");
assert_eq!(
func_scope1_id.to_scope_id(&db, file).name(&db, &module),
"func"
);
assert_eq!(func_scope_2.kind(), ScopeKind::Function);
assert_eq!(func_scope2_id.to_scope_id(&db, file).name(&db), "func");
assert_eq!(
func_scope2_id.to_scope_id(&db, file).name(&db, &module),
"func"
);
let func1_table = index.symbol_table(func_scope1_id);
let func2_table = index.symbol_table(func_scope2_id);
let func1_table = index.place_table(func_scope1_id);
let func2_table = index.place_table(func_scope2_id);
assert_eq!(names(&func1_table), vec!["x"]);
assert_eq!(names(&func2_table), vec!["y"]);
@@ -1157,7 +1164,7 @@ def func():
let binding = use_def
.first_public_binding(
global_table
.symbol_id_by_name("func")
.place_id_by_name("func")
.expect("symbol exists"),
)
.unwrap();
@@ -1173,8 +1180,9 @@ def func[T]():
",
);
let module = parsed_module(&db, file).load(&db);
let index = semantic_index(&db, file);
let global_table = index.symbol_table(FileScopeId::global());
let global_table = index.place_table(FileScopeId::global());
assert_eq!(names(&global_table), vec!["func"]);
@@ -1186,8 +1194,11 @@ def func[T]():
};
assert_eq!(ann_scope.kind(), ScopeKind::Annotation);
assert_eq!(ann_scope_id.to_scope_id(&db, file).name(&db), "func");
let ann_table = index.symbol_table(ann_scope_id);
assert_eq!(
ann_scope_id.to_scope_id(&db, file).name(&db, &module),
"func"
);
let ann_table = index.place_table(ann_scope_id);
assert_eq!(names(&ann_table), vec!["T"]);
let [(func_scope_id, func_scope)] =
@@ -1196,8 +1207,11 @@ def func[T]():
panic!("expected one child scope");
};
assert_eq!(func_scope.kind(), ScopeKind::Function);
assert_eq!(func_scope_id.to_scope_id(&db, file).name(&db), "func");
let func_table = index.symbol_table(func_scope_id);
assert_eq!(
func_scope_id.to_scope_id(&db, file).name(&db, &module),
"func"
);
let func_table = index.place_table(func_scope_id);
assert_eq!(names(&func_table), vec!["x"]);
}
@@ -1210,8 +1224,9 @@ class C[T]:
",
);
let module = parsed_module(&db, file).load(&db);
let index = semantic_index(&db, file);
let global_table = index.symbol_table(FileScopeId::global());
let global_table = index.place_table(FileScopeId::global());
assert_eq!(names(&global_table), vec!["C"]);
@@ -1223,12 +1238,12 @@ class C[T]:
};
assert_eq!(ann_scope.kind(), ScopeKind::Annotation);
assert_eq!(ann_scope_id.to_scope_id(&db, file).name(&db), "C");
let ann_table = index.symbol_table(ann_scope_id);
assert_eq!(ann_scope_id.to_scope_id(&db, file).name(&db, &module), "C");
let ann_table = index.place_table(ann_scope_id);
assert_eq!(names(&ann_table), vec!["T"]);
assert!(
ann_table
.symbol_by_name("T")
.place_by_name("T")
.is_some_and(|s| s.is_bound() && !s.is_used()),
"type parameters are defined by the scope that introduces them"
);
@@ -1240,16 +1255,19 @@ class C[T]:
};
assert_eq!(class_scope.kind(), ScopeKind::Class);
assert_eq!(class_scope_id.to_scope_id(&db, file).name(&db), "C");
assert_eq!(names(&index.symbol_table(class_scope_id)), vec!["x"]);
assert_eq!(
class_scope_id.to_scope_id(&db, file).name(&db, &module),
"C"
);
assert_eq!(names(&index.place_table(class_scope_id)), vec!["x"]);
}
#[test]
fn reachability_trivial() {
let TestCase { db, file } = test_case("x = 1; x");
let parsed = parsed_module(&db, file);
let module = parsed_module(&db, file).load(&db);
let scope = global_scope(&db, file);
let ast = parsed.syntax();
let ast = module.syntax();
let ast::Stmt::Expr(ast::StmtExpr {
value: x_use_expr, ..
}) = &ast.body[1]
@@ -1268,7 +1286,7 @@ class C[T]:
let ast::Expr::NumberLiteral(ast::ExprNumberLiteral {
value: ast::Number::Int(num),
..
}) = assignment.value()
}) = assignment.value(&module)
else {
panic!("should be a number literal")
};
@@ -1280,8 +1298,8 @@ class C[T]:
let TestCase { db, file } = test_case("x = 1;\ndef test():\n y = 4");
let index = semantic_index(&db, file);
let parsed = parsed_module(&db, file);
let ast = parsed.syntax();
let module = parsed_module(&db, file).load(&db);
let ast = module.syntax();
let x_stmt = ast.body[0].as_assign_stmt().unwrap();
let x = &x_stmt.targets[0];
@@ -1298,14 +1316,15 @@ class C[T]:
#[test]
fn scope_iterators() {
fn scope_names<'a>(
scopes: impl Iterator<Item = (FileScopeId, &'a Scope)>,
db: &'a dyn Db,
fn scope_names<'a, 'db>(
scopes: impl Iterator<Item = (FileScopeId, &'db Scope)>,
db: &'db dyn Db,
file: File,
module: &'a ParsedModuleRef,
) -> Vec<&'a str> {
scopes
.into_iter()
.map(|(scope_id, _)| scope_id.to_scope_id(db, file).name(db))
.map(|(scope_id, _)| scope_id.to_scope_id(db, file).name(db, module))
.collect()
}
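The signature change above splits the module's lifetime (`'a`) from the index's (`'db`) so that the returned `&str` values are tied to the borrowed `ParsedModuleRef` rather than to the scope iterator. A minimal standalone sketch of this lifetime split, using plain slices as stand-ins for the real `Scope`/`ParsedModuleRef` types:

```rust
// Sketch only: `module` stands in for ParsedModuleRef, and scope ids are plain
// u32 indices. The point is that the returned &str values borrow from `module`
// (lifetime 'a), independently of the iterator's items (lifetime 'db).
fn scope_names<'a, 'db>(
    ids: impl Iterator<Item = &'db u32>,
    module: &'a [String],
) -> Vec<&'a str> {
    ids.map(|&id| module[id as usize].as_str()).collect()
}

fn main() {
    let module = vec!["<module>".to_string(), "Test".to_string()];
    let ids = [0u32, 1u32];
    // The result may outlive `ids`, but not `module`.
    let names = scope_names(ids.iter(), &module);
    assert_eq!(names, vec!["<module>", "Test"]);
    println!("ok");
}
```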
@@ -1322,21 +1341,22 @@ def x():
pass",
);
let module = parsed_module(&db, file).load(&db);
let index = semantic_index(&db, file);
let descendants = index.descendent_scopes(FileScopeId::global());
assert_eq!(
scope_names(descendants, &db, file),
scope_names(descendants, &db, file, &module),
vec!["Test", "foo", "bar", "baz", "x"]
);
let children = index.child_scopes(FileScopeId::global());
assert_eq!(scope_names(children, &db, file), vec!["Test", "x"]);
assert_eq!(scope_names(children, &db, file, &module), vec!["Test", "x"]);
let test_class = index.child_scopes(FileScopeId::global()).next().unwrap().0;
let test_child_scopes = index.child_scopes(test_class);
assert_eq!(
scope_names(test_child_scopes, &db, file),
scope_names(test_child_scopes, &db, file, &module),
vec!["foo", "baz"]
);
@@ -1348,7 +1368,7 @@ def x():
let ancestors = index.ancestor_scopes(bar_scope);
assert_eq!(
scope_names(ancestors, &db, file),
scope_names(ancestors, &db, file, &module),
vec!["bar", "foo", "Test", "<module>"]
);
}
@@ -1369,9 +1389,9 @@ match subject:
);
let global_scope_id = global_scope(&db, file);
let global_table = symbol_table(&db, global_scope_id);
let global_table = place_table(&db, global_scope_id);
assert!(global_table.symbol_by_name("Foo").unwrap().is_used());
assert!(global_table.place_by_name("Foo").unwrap().is_used());
assert_eq!(
names(global_table),
vec![
@@ -1395,7 +1415,7 @@ match subject:
("l", 1),
] {
let binding = use_def
.first_public_binding(global_table.symbol_id_by_name(name).expect("symbol exists"))
.first_public_binding(global_table.place_id_by_name(name).expect("symbol exists"))
.expect("Expected with item definition for {name}");
if let DefinitionKind::MatchPattern(pattern) = binding.kind(&db) {
assert_eq!(pattern.index(), expected_index);
@@ -1418,14 +1438,14 @@ match 1:
);
let global_scope_id = global_scope(&db, file);
let global_table = symbol_table(&db, global_scope_id);
let global_table = place_table(&db, global_scope_id);
assert_eq!(names(global_table), vec!["first", "second"]);
let use_def = use_def_map(&db, global_scope_id);
for (name, expected_index) in [("first", 0), ("second", 0)] {
let binding = use_def
.first_public_binding(global_table.symbol_id_by_name(name).expect("symbol exists"))
.first_public_binding(global_table.place_id_by_name(name).expect("symbol exists"))
.expect("Expected with item definition for {name}");
if let DefinitionKind::MatchPattern(pattern) = binding.kind(&db) {
assert_eq!(pattern.index(), expected_index);
@@ -1439,13 +1459,13 @@ match 1:
fn for_loops_single_assignment() {
let TestCase { db, file } = test_case("for x in a: pass");
let scope = global_scope(&db, file);
let global_table = symbol_table(&db, scope);
let global_table = place_table(&db, scope);
assert_eq!(&names(global_table), &["a", "x"]);
let use_def = use_def_map(&db, scope);
let binding = use_def
.first_public_binding(global_table.symbol_id_by_name("x").unwrap())
.first_public_binding(global_table.place_id_by_name("x").unwrap())
.unwrap();
assert!(matches!(binding.kind(&db), DefinitionKind::For(_)));
@@ -1455,16 +1475,16 @@ match 1:
fn for_loops_simple_unpacking() {
let TestCase { db, file } = test_case("for (x, y) in a: pass");
let scope = global_scope(&db, file);
let global_table = symbol_table(&db, scope);
let global_table = place_table(&db, scope);
assert_eq!(&names(global_table), &["a", "x", "y"]);
let use_def = use_def_map(&db, scope);
let x_binding = use_def
.first_public_binding(global_table.symbol_id_by_name("x").unwrap())
.first_public_binding(global_table.place_id_by_name("x").unwrap())
.unwrap();
let y_binding = use_def
.first_public_binding(global_table.symbol_id_by_name("y").unwrap())
.first_public_binding(global_table.place_id_by_name("y").unwrap())
.unwrap();
assert!(matches!(x_binding.kind(&db), DefinitionKind::For(_)));
@@ -1475,13 +1495,13 @@ match 1:
fn for_loops_complex_unpacking() {
let TestCase { db, file } = test_case("for [((a,) b), (c, d)] in e: pass");
let scope = global_scope(&db, file);
let global_table = symbol_table(&db, scope);
let global_table = place_table(&db, scope);
assert_eq!(&names(global_table), &["e", "a", "b", "c", "d"]);
let use_def = use_def_map(&db, scope);
let binding = use_def
.first_public_binding(global_table.symbol_id_by_name("a").unwrap())
.first_public_binding(global_table.place_id_by_name("a").unwrap())
.unwrap();
assert!(matches!(binding.kind(&db), DefinitionKind::For(_)));


@@ -6,14 +6,14 @@ use ruff_python_ast::ExprRef;
use crate::Db;
use crate::semantic_index::ast_ids::node_key::ExpressionNodeKey;
use crate::semantic_index::place::ScopeId;
use crate::semantic_index::semantic_index;
use crate::semantic_index::symbol::ScopeId;
/// AST ids for a single scope.
///
/// The motivation for building the AST ids per scope isn't about reducing invalidation because
/// the struct changes whenever the parsed AST changes. Instead, it's mainly that we can
/// build the AST ids struct when building the symbol table and also keep the property that
/// build the AST ids struct when building the place table and also keep the property that
/// IDs of outer scopes are unaffected by changes in inner scopes.
///
/// For example, we don't want that adding new statements to `foo` changes the statement id of `x = foo()` in:
@@ -28,7 +28,7 @@ use crate::semantic_index::symbol::ScopeId;
pub(crate) struct AstIds {
/// Maps expressions to their expression id.
expressions_map: FxHashMap<ExpressionNodeKey, ScopedExpressionId>,
/// Maps expressions which "use" a symbol (that is, [`ast::ExprName`]) to a use id.
/// Maps expressions which "use" a place (that is, [`ast::ExprName`], [`ast::ExprAttribute`] or [`ast::ExprSubscript`]) to a use id.
uses_map: FxHashMap<ExpressionNodeKey, ScopedUseId>,
}
@@ -49,7 +49,7 @@ fn ast_ids<'db>(db: &'db dyn Db, scope: ScopeId) -> &'db AstIds {
semantic_index(db, scope.file(db)).ast_ids(scope.file_scope_id(db))
}
/// Uniquely identifies a use of a name in a [`crate::semantic_index::symbol::FileScopeId`].
/// Uniquely identifies a use of a name in a [`crate::semantic_index::place::FileScopeId`].
#[newtype_index]
pub struct ScopedUseId;
@@ -72,6 +72,20 @@ impl HasScopedUseId for ast::ExprName {
}
}
impl HasScopedUseId for ast::ExprAttribute {
fn scoped_use_id(&self, db: &dyn Db, scope: ScopeId) -> ScopedUseId {
let expression_ref = ExprRef::from(self);
expression_ref.scoped_use_id(db, scope)
}
}
impl HasScopedUseId for ast::ExprSubscript {
fn scoped_use_id(&self, db: &dyn Db, scope: ScopeId) -> ScopedUseId {
let expression_ref = ExprRef::from(self);
expression_ref.scoped_use_id(db, scope)
}
}
impl HasScopedUseId for ast::ExprRef<'_> {
fn scoped_use_id(&self, db: &dyn Db, scope: ScopeId) -> ScopedUseId {
let ast_ids = ast_ids(db, scope);
@@ -79,7 +93,7 @@ impl HasScopedUseId for ast::ExprRef<'_> {
}
}
/// Uniquely identifies an [`ast::Expr`] in a [`crate::semantic_index::symbol::FileScopeId`].
/// Uniquely identifies an [`ast::Expr`] in a [`crate::semantic_index::place::FileScopeId`].
#[newtype_index]
#[derive(salsa::Update)]
pub struct ScopedExpressionId;

File diff suppressed because it is too large

@@ -1,23 +1,23 @@
use std::ops::Deref;
use ruff_db::files::{File, FileRange};
use ruff_db::parsed::ParsedModule;
use ruff_db::parsed::ParsedModuleRef;
use ruff_python_ast as ast;
use ruff_text_size::{Ranged, TextRange};
use crate::Db;
use crate::ast_node_ref::AstNodeRef;
use crate::node_key::NodeKey;
use crate::semantic_index::symbol::{FileScopeId, ScopeId, ScopedSymbolId};
use crate::semantic_index::place::{FileScopeId, ScopeId, ScopedPlaceId};
use crate::unpack::{Unpack, UnpackPosition};
/// A definition of a symbol.
/// A definition of a place.
///
/// ## ID stability
/// The `Definition`'s ID is stable when the only field that change is its `kind` (AST node).
///
/// The `Definition` changes when the `file`, `scope`, or `symbol` change. This can be
/// because a new scope gets inserted before the `Definition` or a new symbol is inserted
/// The `Definition` changes when the `file`, `scope`, or `place` change. This can be
/// because a new scope gets inserted before the `Definition` or a new place is inserted
/// before this `Definition`. However, the ID can be considered stable and it is okay to use
/// `Definition` in cross-module` salsa queries or as a field on other salsa tracked structs.
#[salsa::tracked(debug)]
@@ -28,8 +28,8 @@ pub struct Definition<'db> {
/// The scope in which the definition occurs.
pub(crate) file_scope: FileScopeId,
/// The symbol defined.
pub(crate) symbol: ScopedSymbolId,
/// The place ID of the definition.
pub(crate) place: ScopedPlaceId,
/// WARNING: Only access this field when doing type inference for the same
/// file as where `Definition` is defined to avoid cross-file query dependencies.
@@ -49,12 +49,12 @@ impl<'db> Definition<'db> {
self.file_scope(db).to_scope_id(db, self.file(db))
}
pub fn full_range(self, db: &'db dyn Db) -> FileRange {
FileRange::new(self.file(db), self.kind(db).full_range())
pub fn full_range(self, db: &'db dyn Db, module: &ParsedModuleRef) -> FileRange {
FileRange::new(self.file(db), self.kind(db).full_range(module))
}
pub fn focus_range(self, db: &'db dyn Db) -> FileRange {
FileRange::new(self.file(db), self.kind(db).target_range())
pub fn focus_range(self, db: &'db dyn Db, module: &ParsedModuleRef) -> FileRange {
FileRange::new(self.file(db), self.kind(db).target_range(module))
}
}
@@ -89,219 +89,252 @@ impl<'a, 'db> IntoIterator for &'a Definitions<'db> {
}
}
#[derive(Copy, Clone, Debug)]
pub(crate) enum DefinitionNodeRef<'a> {
Import(ImportDefinitionNodeRef<'a>),
ImportFrom(ImportFromDefinitionNodeRef<'a>),
ImportStar(StarImportDefinitionNodeRef<'a>),
For(ForStmtDefinitionNodeRef<'a>),
Function(&'a ast::StmtFunctionDef),
Class(&'a ast::StmtClassDef),
TypeAlias(&'a ast::StmtTypeAlias),
NamedExpression(&'a ast::ExprNamed),
Assignment(AssignmentDefinitionNodeRef<'a>),
AnnotatedAssignment(AnnotatedAssignmentDefinitionNodeRef<'a>),
AugmentedAssignment(&'a ast::StmtAugAssign),
Comprehension(ComprehensionDefinitionNodeRef<'a>),
VariadicPositionalParameter(&'a ast::Parameter),
VariadicKeywordParameter(&'a ast::Parameter),
Parameter(&'a ast::ParameterWithDefault),
WithItem(WithItemDefinitionNodeRef<'a>),
MatchPattern(MatchPatternDefinitionNodeRef<'a>),
ExceptHandler(ExceptHandlerDefinitionNodeRef<'a>),
TypeVar(&'a ast::TypeParamTypeVar),
ParamSpec(&'a ast::TypeParamParamSpec),
TypeVarTuple(&'a ast::TypeParamTypeVarTuple),
#[derive(Debug, Clone, Copy, PartialEq, Eq, salsa::Update)]
pub(crate) enum DefinitionState<'db> {
Defined(Definition<'db>),
/// Represents the implicit "unbound"/"undeclared" definition of every place.
Undefined,
/// Represents a definition that has been deleted.
/// This used when an attribute/subscript definition (such as `x.y = ...`, `x[0] = ...`) becomes obsolete due to a reassignment of the root place.
Deleted,
}
impl<'a> From<&'a ast::StmtFunctionDef> for DefinitionNodeRef<'a> {
fn from(node: &'a ast::StmtFunctionDef) -> Self {
impl<'db> DefinitionState<'db> {
pub(crate) fn is_defined_and(self, f: impl Fn(Definition<'db>) -> bool) -> bool {
matches!(self, DefinitionState::Defined(def) if f(def))
}
pub(crate) fn is_undefined_or(self, f: impl Fn(Definition<'db>) -> bool) -> bool {
matches!(self, DefinitionState::Undefined)
|| matches!(self, DefinitionState::Defined(def) if f(def))
}
pub(crate) fn is_undefined(self) -> bool {
matches!(self, DefinitionState::Undefined)
}
#[allow(unused)]
pub(crate) fn definition(self) -> Option<Definition<'db>> {
match self {
DefinitionState::Defined(def) => Some(def),
DefinitionState::Deleted | DefinitionState::Undefined => None,
}
}
}
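The tri-state `DefinitionState` introduced above distinguishes "never bound" (`Undefined`) from "bound, then invalidated" (`Deleted`), which a plain `Option<Definition>` cannot express. A self-contained sketch of the enum and its helper predicates, with a dummy `Definition` standing in for the salsa-tracked struct:

```rust
// Sketch only: `Definition` is a stand-in for the real salsa-tracked struct.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Definition(u32);

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum DefinitionState {
    Defined(Definition),
    /// The implicit "unbound"/"undeclared" state of every place.
    Undefined,
    /// An attribute/subscript binding (`x.y = ...`, `x[0] = ...`) made
    /// obsolete by a reassignment of the root place `x`.
    Deleted,
}

impl DefinitionState {
    fn is_defined_and(self, f: impl Fn(Definition) -> bool) -> bool {
        matches!(self, DefinitionState::Defined(def) if f(def))
    }

    fn is_undefined_or(self, f: impl Fn(Definition) -> bool) -> bool {
        matches!(self, DefinitionState::Undefined)
            || matches!(self, DefinitionState::Defined(def) if f(def))
    }

    fn is_undefined(self) -> bool {
        matches!(self, DefinitionState::Undefined)
    }
}

fn main() {
    let bound = DefinitionState::Defined(Definition(7));
    assert!(bound.is_defined_and(|d| d.0 == 7));
    assert!(DefinitionState::Undefined.is_undefined());
    // A deleted binding is neither "defined and ..." nor "undefined or ...":
    assert!(!DefinitionState::Deleted.is_defined_and(|_| true));
    assert!(!DefinitionState::Deleted.is_undefined_or(|_| true));
    println!("ok");
}
```

Note how `Deleted` falls through both predicates: a deleted binding no longer satisfies any condition on the definition, but it is also not treated as the implicit unbound state.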
#[derive(Copy, Clone, Debug)]
pub(crate) enum DefinitionNodeRef<'ast, 'db> {
Import(ImportDefinitionNodeRef<'ast>),
ImportFrom(ImportFromDefinitionNodeRef<'ast>),
ImportStar(StarImportDefinitionNodeRef<'ast>),
For(ForStmtDefinitionNodeRef<'ast, 'db>),
Function(&'ast ast::StmtFunctionDef),
Class(&'ast ast::StmtClassDef),
TypeAlias(&'ast ast::StmtTypeAlias),
NamedExpression(&'ast ast::ExprNamed),
Assignment(AssignmentDefinitionNodeRef<'ast, 'db>),
AnnotatedAssignment(AnnotatedAssignmentDefinitionNodeRef<'ast>),
AugmentedAssignment(&'ast ast::StmtAugAssign),
Comprehension(ComprehensionDefinitionNodeRef<'ast, 'db>),
VariadicPositionalParameter(&'ast ast::Parameter),
VariadicKeywordParameter(&'ast ast::Parameter),
Parameter(&'ast ast::ParameterWithDefault),
WithItem(WithItemDefinitionNodeRef<'ast, 'db>),
MatchPattern(MatchPatternDefinitionNodeRef<'ast>),
ExceptHandler(ExceptHandlerDefinitionNodeRef<'ast>),
TypeVar(&'ast ast::TypeParamTypeVar),
ParamSpec(&'ast ast::TypeParamParamSpec),
TypeVarTuple(&'ast ast::TypeParamTypeVarTuple),
}
impl<'ast> From<&'ast ast::StmtFunctionDef> for DefinitionNodeRef<'ast, '_> {
fn from(node: &'ast ast::StmtFunctionDef) -> Self {
Self::Function(node)
}
}
impl<'a> From<&'a ast::StmtClassDef> for DefinitionNodeRef<'a> {
fn from(node: &'a ast::StmtClassDef) -> Self {
impl<'ast> From<&'ast ast::StmtClassDef> for DefinitionNodeRef<'ast, '_> {
fn from(node: &'ast ast::StmtClassDef) -> Self {
Self::Class(node)
}
}
impl<'a> From<&'a ast::StmtTypeAlias> for DefinitionNodeRef<'a> {
fn from(node: &'a ast::StmtTypeAlias) -> Self {
impl<'ast> From<&'ast ast::StmtTypeAlias> for DefinitionNodeRef<'ast, '_> {
fn from(node: &'ast ast::StmtTypeAlias) -> Self {
Self::TypeAlias(node)
}
}
impl<'a> From<&'a ast::ExprNamed> for DefinitionNodeRef<'a> {
fn from(node: &'a ast::ExprNamed) -> Self {
impl<'ast> From<&'ast ast::ExprNamed> for DefinitionNodeRef<'ast, '_> {
fn from(node: &'ast ast::ExprNamed) -> Self {
Self::NamedExpression(node)
}
}
impl<'a> From<&'a ast::StmtAugAssign> for DefinitionNodeRef<'a> {
fn from(node: &'a ast::StmtAugAssign) -> Self {
impl<'ast> From<&'ast ast::StmtAugAssign> for DefinitionNodeRef<'ast, '_> {
fn from(node: &'ast ast::StmtAugAssign) -> Self {
Self::AugmentedAssignment(node)
}
}
impl<'a> From<&'a ast::TypeParamTypeVar> for DefinitionNodeRef<'a> {
fn from(value: &'a ast::TypeParamTypeVar) -> Self {
impl<'ast> From<&'ast ast::TypeParamTypeVar> for DefinitionNodeRef<'ast, '_> {
fn from(value: &'ast ast::TypeParamTypeVar) -> Self {
Self::TypeVar(value)
}
}
impl<'a> From<&'a ast::TypeParamParamSpec> for DefinitionNodeRef<'a> {
fn from(value: &'a ast::TypeParamParamSpec) -> Self {
impl<'ast> From<&'ast ast::TypeParamParamSpec> for DefinitionNodeRef<'ast, '_> {
fn from(value: &'ast ast::TypeParamParamSpec) -> Self {
Self::ParamSpec(value)
}
}
impl<'a> From<&'a ast::TypeParamTypeVarTuple> for DefinitionNodeRef<'a> {
fn from(value: &'a ast::TypeParamTypeVarTuple) -> Self {
impl<'ast> From<&'ast ast::TypeParamTypeVarTuple> for DefinitionNodeRef<'ast, '_> {
fn from(value: &'ast ast::TypeParamTypeVarTuple) -> Self {
Self::TypeVarTuple(value)
}
}
impl<'a> From<ImportDefinitionNodeRef<'a>> for DefinitionNodeRef<'a> {
fn from(node_ref: ImportDefinitionNodeRef<'a>) -> Self {
impl<'ast> From<ImportDefinitionNodeRef<'ast>> for DefinitionNodeRef<'ast, '_> {
fn from(node_ref: ImportDefinitionNodeRef<'ast>) -> Self {
Self::Import(node_ref)
}
}
impl<'a> From<ImportFromDefinitionNodeRef<'a>> for DefinitionNodeRef<'a> {
fn from(node_ref: ImportFromDefinitionNodeRef<'a>) -> Self {
impl<'ast> From<ImportFromDefinitionNodeRef<'ast>> for DefinitionNodeRef<'ast, '_> {
fn from(node_ref: ImportFromDefinitionNodeRef<'ast>) -> Self {
Self::ImportFrom(node_ref)
}
}
impl<'a> From<ForStmtDefinitionNodeRef<'a>> for DefinitionNodeRef<'a> {
fn from(value: ForStmtDefinitionNodeRef<'a>) -> Self {
impl<'ast, 'db> From<ForStmtDefinitionNodeRef<'ast, 'db>> for DefinitionNodeRef<'ast, 'db> {
fn from(value: ForStmtDefinitionNodeRef<'ast, 'db>) -> Self {
Self::For(value)
}
}
impl<'a> From<AssignmentDefinitionNodeRef<'a>> for DefinitionNodeRef<'a> {
fn from(node_ref: AssignmentDefinitionNodeRef<'a>) -> Self {
impl<'ast, 'db> From<AssignmentDefinitionNodeRef<'ast, 'db>> for DefinitionNodeRef<'ast, 'db> {
fn from(node_ref: AssignmentDefinitionNodeRef<'ast, 'db>) -> Self {
Self::Assignment(node_ref)
}
}
impl<'a> From<AnnotatedAssignmentDefinitionNodeRef<'a>> for DefinitionNodeRef<'a> {
fn from(node_ref: AnnotatedAssignmentDefinitionNodeRef<'a>) -> Self {
impl<'ast> From<AnnotatedAssignmentDefinitionNodeRef<'ast>> for DefinitionNodeRef<'ast, '_> {
fn from(node_ref: AnnotatedAssignmentDefinitionNodeRef<'ast>) -> Self {
Self::AnnotatedAssignment(node_ref)
}
}
impl<'a> From<WithItemDefinitionNodeRef<'a>> for DefinitionNodeRef<'a> {
fn from(node_ref: WithItemDefinitionNodeRef<'a>) -> Self {
impl<'ast, 'db> From<WithItemDefinitionNodeRef<'ast, 'db>> for DefinitionNodeRef<'ast, 'db> {
fn from(node_ref: WithItemDefinitionNodeRef<'ast, 'db>) -> Self {
Self::WithItem(node_ref)
}
}
impl<'a> From<ComprehensionDefinitionNodeRef<'a>> for DefinitionNodeRef<'a> {
fn from(node: ComprehensionDefinitionNodeRef<'a>) -> Self {
impl<'ast, 'db> From<ComprehensionDefinitionNodeRef<'ast, 'db>> for DefinitionNodeRef<'ast, 'db> {
fn from(node: ComprehensionDefinitionNodeRef<'ast, 'db>) -> Self {
Self::Comprehension(node)
}
}
impl<'a> From<&'a ast::ParameterWithDefault> for DefinitionNodeRef<'a> {
fn from(node: &'a ast::ParameterWithDefault) -> Self {
impl<'ast> From<&'ast ast::ParameterWithDefault> for DefinitionNodeRef<'ast, '_> {
fn from(node: &'ast ast::ParameterWithDefault) -> Self {
Self::Parameter(node)
}
}
impl<'a> From<MatchPatternDefinitionNodeRef<'a>> for DefinitionNodeRef<'a> {
fn from(node: MatchPatternDefinitionNodeRef<'a>) -> Self {
impl<'ast> From<MatchPatternDefinitionNodeRef<'ast>> for DefinitionNodeRef<'ast, '_> {
fn from(node: MatchPatternDefinitionNodeRef<'ast>) -> Self {
Self::MatchPattern(node)
}
}
impl<'a> From<StarImportDefinitionNodeRef<'a>> for DefinitionNodeRef<'a> {
fn from(node: StarImportDefinitionNodeRef<'a>) -> Self {
impl<'ast> From<StarImportDefinitionNodeRef<'ast>> for DefinitionNodeRef<'ast, '_> {
fn from(node: StarImportDefinitionNodeRef<'ast>) -> Self {
Self::ImportStar(node)
}
}
#[derive(Copy, Clone, Debug)]
pub(crate) struct ImportDefinitionNodeRef<'a> {
pub(crate) node: &'a ast::StmtImport,
pub(crate) struct ImportDefinitionNodeRef<'ast> {
pub(crate) node: &'ast ast::StmtImport,
pub(crate) alias_index: usize,
pub(crate) is_reexported: bool,
}
#[derive(Copy, Clone, Debug)]
pub(crate) struct StarImportDefinitionNodeRef<'a> {
pub(crate) node: &'a ast::StmtImportFrom,
pub(crate) symbol_id: ScopedSymbolId,
pub(crate) struct StarImportDefinitionNodeRef<'ast> {
pub(crate) node: &'ast ast::StmtImportFrom,
pub(crate) place_id: ScopedPlaceId,
}
#[derive(Copy, Clone, Debug)]
pub(crate) struct ImportFromDefinitionNodeRef<'a> {
pub(crate) node: &'a ast::StmtImportFrom,
pub(crate) struct ImportFromDefinitionNodeRef<'ast> {
pub(crate) node: &'ast ast::StmtImportFrom,
pub(crate) alias_index: usize,
pub(crate) is_reexported: bool,
}
#[derive(Copy, Clone, Debug)]
pub(crate) struct AssignmentDefinitionNodeRef<'a> {
pub(crate) unpack: Option<(UnpackPosition, Unpack<'a>)>,
pub(crate) value: &'a ast::Expr,
pub(crate) target: &'a ast::Expr,
pub(crate) struct AssignmentDefinitionNodeRef<'ast, 'db> {
pub(crate) unpack: Option<(UnpackPosition, Unpack<'db>)>,
pub(crate) value: &'ast ast::Expr,
pub(crate) target: &'ast ast::Expr,
}
#[derive(Copy, Clone, Debug)]
pub(crate) struct AnnotatedAssignmentDefinitionNodeRef<'a> {
pub(crate) node: &'a ast::StmtAnnAssign,
pub(crate) annotation: &'a ast::Expr,
pub(crate) value: Option<&'a ast::Expr>,
pub(crate) target: &'a ast::Expr,
pub(crate) struct AnnotatedAssignmentDefinitionNodeRef<'ast> {
pub(crate) node: &'ast ast::StmtAnnAssign,
pub(crate) annotation: &'ast ast::Expr,
pub(crate) value: Option<&'ast ast::Expr>,
pub(crate) target: &'ast ast::Expr,
}
#[derive(Copy, Clone, Debug)]
pub(crate) struct WithItemDefinitionNodeRef<'a> {
pub(crate) unpack: Option<(UnpackPosition, Unpack<'a>)>,
pub(crate) context_expr: &'a ast::Expr,
pub(crate) target: &'a ast::Expr,
pub(crate) struct WithItemDefinitionNodeRef<'ast, 'db> {
pub(crate) unpack: Option<(UnpackPosition, Unpack<'db>)>,
pub(crate) context_expr: &'ast ast::Expr,
pub(crate) target: &'ast ast::Expr,
pub(crate) is_async: bool,
}
#[derive(Copy, Clone, Debug)]
pub(crate) struct ForStmtDefinitionNodeRef<'a> {
pub(crate) unpack: Option<(UnpackPosition, Unpack<'a>)>,
pub(crate) iterable: &'a ast::Expr,
pub(crate) target: &'a ast::Expr,
pub(crate) struct ForStmtDefinitionNodeRef<'ast, 'db> {
pub(crate) unpack: Option<(UnpackPosition, Unpack<'db>)>,
pub(crate) iterable: &'ast ast::Expr,
pub(crate) target: &'ast ast::Expr,
pub(crate) is_async: bool,
}
#[derive(Copy, Clone, Debug)]
pub(crate) struct ExceptHandlerDefinitionNodeRef<'a> {
pub(crate) handler: &'a ast::ExceptHandlerExceptHandler,
pub(crate) struct ExceptHandlerDefinitionNodeRef<'ast> {
pub(crate) handler: &'ast ast::ExceptHandlerExceptHandler,
pub(crate) is_star: bool,
}
#[derive(Copy, Clone, Debug)]
pub(crate) struct ComprehensionDefinitionNodeRef<'a> {
pub(crate) unpack: Option<(UnpackPosition, Unpack<'a>)>,
pub(crate) iterable: &'a ast::Expr,
pub(crate) target: &'a ast::Expr,
pub(crate) struct ComprehensionDefinitionNodeRef<'ast, 'db> {
pub(crate) unpack: Option<(UnpackPosition, Unpack<'db>)>,
pub(crate) iterable: &'ast ast::Expr,
pub(crate) target: &'ast ast::Expr,
pub(crate) first: bool,
pub(crate) is_async: bool,
}
#[derive(Copy, Clone, Debug)]
pub(crate) struct MatchPatternDefinitionNodeRef<'a> {
pub(crate) struct MatchPatternDefinitionNodeRef<'ast> {
/// The outermost pattern node in which the identifier being defined occurs.
pub(crate) pattern: &'a ast::Pattern,
pub(crate) pattern: &'ast ast::Pattern,
/// The identifier being defined.
pub(crate) identifier: &'a ast::Identifier,
pub(crate) identifier: &'ast ast::Identifier,
/// The index of the identifier in the pattern when visiting the `pattern` node in evaluation
/// order.
pub(crate) index: u32,
}
impl<'db> DefinitionNodeRef<'db> {
impl<'db> DefinitionNodeRef<'_, 'db> {
#[expect(unsafe_code)]
pub(super) unsafe fn into_owned(self, parsed: ParsedModule) -> DefinitionKind<'db> {
pub(super) unsafe fn into_owned(self, parsed: ParsedModuleRef) -> DefinitionKind<'db> {
match self {
DefinitionNodeRef::Import(ImportDefinitionNodeRef {
node,
@@ -323,10 +356,10 @@ impl<'db> DefinitionNodeRef<'db> {
is_reexported,
}),
DefinitionNodeRef::ImportStar(star_import) => {
let StarImportDefinitionNodeRef { node, symbol_id } = star_import;
let StarImportDefinitionNodeRef { node, place_id } = star_import;
DefinitionKind::StarImport(StarImportDefinitionKind {
node: unsafe { AstNodeRef::new(parsed, node) },
symbol_id,
place_id,
})
}
DefinitionNodeRef::Function(function) => {
@@ -456,7 +489,7 @@ impl<'db> DefinitionNodeRef<'db> {
// INVARIANT: for an invalid-syntax statement such as `from foo import *, bar, *`,
// we only create a `StarImportDefinitionKind` for the *first* `*` alias in the names list.
Self::ImportStar(StarImportDefinitionNodeRef { node, symbol_id: _ }) => node
Self::ImportStar(StarImportDefinitionNodeRef { node, place_id: _ }) => node
.names
.iter()
.find(|alias| &alias.name == "*")
@@ -517,7 +550,7 @@ pub(crate) enum DefinitionCategory {
}
impl DefinitionCategory {
/// True if this definition establishes a "declared type" for the symbol.
/// True if this definition establishes a "declared type" for the place.
///
/// If so, any assignments reached by this definition are in error if they assign a value of a
/// type not assignable to the declared type.
@@ -530,7 +563,7 @@ impl DefinitionCategory {
)
}
/// True if this definition assigns a value to the symbol.
/// True if this definition assigns a value to the place.
///
/// False only for annotated assignments without a RHS.
pub(crate) fn is_binding(self) -> bool {
@@ -591,62 +624,76 @@ impl DefinitionKind<'_> {
/// Returns the [`TextRange`] of the definition target.
///
/// A definition target would mainly be the node representing the symbol being defined i.e.,
/// [`ast::ExprName`] or [`ast::Identifier`] but could also be other nodes.
pub(crate) fn target_range(&self) -> TextRange {
/// A definition target would mainly be the node representing the place being defined i.e.,
/// [`ast::ExprName`], [`ast::Identifier`], [`ast::ExprAttribute`] or [`ast::ExprSubscript`] but could also be other nodes.
pub(crate) fn target_range(&self, module: &ParsedModuleRef) -> TextRange {
match self {
DefinitionKind::Import(import) => import.alias().range(),
DefinitionKind::ImportFrom(import) => import.alias().range(),
DefinitionKind::StarImport(import) => import.alias().range(),
DefinitionKind::Function(function) => function.name.range(),
DefinitionKind::Class(class) => class.name.range(),
DefinitionKind::TypeAlias(type_alias) => type_alias.name.range(),
DefinitionKind::NamedExpression(named) => named.target.range(),
DefinitionKind::Assignment(assignment) => assignment.target.range(),
DefinitionKind::AnnotatedAssignment(assign) => assign.target.range(),
DefinitionKind::AugmentedAssignment(aug_assign) => aug_assign.target.range(),
DefinitionKind::For(for_stmt) => for_stmt.target.range(),
DefinitionKind::Comprehension(comp) => comp.target().range(),
DefinitionKind::VariadicPositionalParameter(parameter) => parameter.name.range(),
DefinitionKind::VariadicKeywordParameter(parameter) => parameter.name.range(),
DefinitionKind::Parameter(parameter) => parameter.parameter.name.range(),
DefinitionKind::WithItem(with_item) => with_item.target.range(),
DefinitionKind::MatchPattern(match_pattern) => match_pattern.identifier.range(),
DefinitionKind::ExceptHandler(handler) => handler.node().range(),
DefinitionKind::TypeVar(type_var) => type_var.name.range(),
DefinitionKind::ParamSpec(param_spec) => param_spec.name.range(),
DefinitionKind::TypeVarTuple(type_var_tuple) => type_var_tuple.name.range(),
DefinitionKind::Import(import) => import.alias(module).range(),
DefinitionKind::ImportFrom(import) => import.alias(module).range(),
DefinitionKind::StarImport(import) => import.alias(module).range(),
DefinitionKind::Function(function) => function.node(module).name.range(),
DefinitionKind::Class(class) => class.node(module).name.range(),
DefinitionKind::TypeAlias(type_alias) => type_alias.node(module).name.range(),
DefinitionKind::NamedExpression(named) => named.node(module).target.range(),
DefinitionKind::Assignment(assignment) => assignment.target.node(module).range(),
DefinitionKind::AnnotatedAssignment(assign) => assign.target.node(module).range(),
DefinitionKind::AugmentedAssignment(aug_assign) => {
aug_assign.node(module).target.range()
}
DefinitionKind::For(for_stmt) => for_stmt.target.node(module).range(),
DefinitionKind::Comprehension(comp) => comp.target(module).range(),
DefinitionKind::VariadicPositionalParameter(parameter) => {
parameter.node(module).name.range()
}
DefinitionKind::VariadicKeywordParameter(parameter) => {
parameter.node(module).name.range()
}
DefinitionKind::Parameter(parameter) => parameter.node(module).parameter.name.range(),
DefinitionKind::WithItem(with_item) => with_item.target.node(module).range(),
DefinitionKind::MatchPattern(match_pattern) => {
match_pattern.identifier.node(module).range()
}
DefinitionKind::ExceptHandler(handler) => handler.node(module).range(),
DefinitionKind::TypeVar(type_var) => type_var.node(module).name.range(),
DefinitionKind::ParamSpec(param_spec) => param_spec.node(module).name.range(),
DefinitionKind::TypeVarTuple(type_var_tuple) => {
type_var_tuple.node(module).name.range()
}
}
}
/// Returns the [`TextRange`] of the entire definition.
pub(crate) fn full_range(&self) -> TextRange {
pub(crate) fn full_range(&self, module: &ParsedModuleRef) -> TextRange {
match self {
DefinitionKind::Import(import) => import.alias().range(),
DefinitionKind::ImportFrom(import) => import.alias().range(),
DefinitionKind::StarImport(import) => import.import().range(),
DefinitionKind::Function(function) => function.range(),
DefinitionKind::Class(class) => class.range(),
DefinitionKind::TypeAlias(type_alias) => type_alias.range(),
DefinitionKind::NamedExpression(named) => named.range(),
DefinitionKind::Assignment(assignment) => assignment.target.range(),
DefinitionKind::AnnotatedAssignment(assign) => assign.target.range(),
DefinitionKind::AugmentedAssignment(aug_assign) => aug_assign.range(),
DefinitionKind::For(for_stmt) => for_stmt.target.range(),
DefinitionKind::Comprehension(comp) => comp.target().range(),
DefinitionKind::VariadicPositionalParameter(parameter) => parameter.range(),
DefinitionKind::VariadicKeywordParameter(parameter) => parameter.range(),
DefinitionKind::Parameter(parameter) => parameter.parameter.range(),
DefinitionKind::WithItem(with_item) => with_item.target.range(),
DefinitionKind::MatchPattern(match_pattern) => match_pattern.identifier.range(),
DefinitionKind::ExceptHandler(handler) => handler.node().range(),
DefinitionKind::TypeVar(type_var) => type_var.range(),
DefinitionKind::ParamSpec(param_spec) => param_spec.range(),
DefinitionKind::TypeVarTuple(type_var_tuple) => type_var_tuple.range(),
DefinitionKind::Import(import) => import.alias(module).range(),
DefinitionKind::ImportFrom(import) => import.alias(module).range(),
DefinitionKind::StarImport(import) => import.import(module).range(),
DefinitionKind::Function(function) => function.node(module).range(),
DefinitionKind::Class(class) => class.node(module).range(),
DefinitionKind::TypeAlias(type_alias) => type_alias.node(module).range(),
DefinitionKind::NamedExpression(named) => named.node(module).range(),
DefinitionKind::Assignment(assignment) => assignment.target.node(module).range(),
DefinitionKind::AnnotatedAssignment(assign) => assign.target.node(module).range(),
DefinitionKind::AugmentedAssignment(aug_assign) => aug_assign.node(module).range(),
DefinitionKind::For(for_stmt) => for_stmt.target.node(module).range(),
DefinitionKind::Comprehension(comp) => comp.target(module).range(),
DefinitionKind::VariadicPositionalParameter(parameter) => {
parameter.node(module).range()
}
DefinitionKind::VariadicKeywordParameter(parameter) => parameter.node(module).range(),
DefinitionKind::Parameter(parameter) => parameter.node(module).parameter.range(),
DefinitionKind::WithItem(with_item) => with_item.target.node(module).range(),
DefinitionKind::MatchPattern(match_pattern) => {
match_pattern.identifier.node(module).range()
}
DefinitionKind::ExceptHandler(handler) => handler.node(module).range(),
DefinitionKind::TypeVar(type_var) => type_var.node(module).range(),
DefinitionKind::ParamSpec(param_spec) => param_spec.node(module).range(),
DefinitionKind::TypeVarTuple(type_var_tuple) => type_var_tuple.node(module).range(),
}
}
pub(crate) fn category(&self, in_stub: bool) -> DefinitionCategory {
pub(crate) fn category(&self, in_stub: bool, module: &ParsedModuleRef) -> DefinitionCategory {
match self {
// functions, classes, and imports always bind, and we consider them declarations
DefinitionKind::Function(_)
@@ -661,7 +708,7 @@ impl DefinitionKind<'_> {
// a parameter always binds a value, but is only a declaration if annotated
DefinitionKind::VariadicPositionalParameter(parameter)
| DefinitionKind::VariadicKeywordParameter(parameter) => {
if parameter.annotation.is_some() {
if parameter.node(module).annotation.is_some() {
DefinitionCategory::DeclarationAndBinding
} else {
DefinitionCategory::Binding
@@ -669,7 +716,12 @@ impl DefinitionKind<'_> {
}
// presence of a default is irrelevant, same logic as for a no-default parameter
DefinitionKind::Parameter(parameter_with_default) => {
if parameter_with_default.parameter.annotation.is_some() {
if parameter_with_default
.node(module)
.parameter
.annotation
.is_some()
{
DefinitionCategory::DeclarationAndBinding
} else {
DefinitionCategory::Binding
@@ -700,14 +752,15 @@ impl DefinitionKind<'_> {
#[derive(Copy, Clone, Debug, PartialEq, Hash)]
pub(crate) enum TargetKind<'db> {
Sequence(UnpackPosition, Unpack<'db>),
NameOrAttribute,
/// Name, attribute, or subscript.
Single,
}
impl<'db> From<Option<(UnpackPosition, Unpack<'db>)>> for TargetKind<'db> {
fn from(value: Option<(UnpackPosition, Unpack<'db>)>) -> Self {
match value {
Some((unpack_position, unpack)) => TargetKind::Sequence(unpack_position, unpack),
None => TargetKind::NameOrAttribute,
None => TargetKind::Single,
}
}
}
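The `Option`-to-`TargetKind` conversion above can be exercised in isolation. A minimal sketch, using a `usize` as a stand-in payload for the `(UnpackPosition, Unpack)` pair (which is salsa-tracked and not reproducible here):

```rust
// Stand-in for the real enum: an unpacking target (e.g. `a, b = ...`) carries
// its unpack info, while a single name/attribute/subscript target does not.
#[derive(Debug, PartialEq)]
enum TargetKind {
    Sequence(usize), // stand-in for (UnpackPosition, Unpack<'db>)
    Single,
}

impl From<Option<usize>> for TargetKind {
    fn from(value: Option<usize>) -> Self {
        match value {
            Some(pos) => TargetKind::Sequence(pos),
            None => TargetKind::Single,
        }
    }
}

fn main() {
    assert_eq!(TargetKind::from(Some(0)), TargetKind::Sequence(0));
    assert_eq!(TargetKind::from(None), TargetKind::Single);
}
```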
@@ -715,19 +768,19 @@ impl<'db> From<Option<(UnpackPosition, Unpack<'db>)>> for TargetKind<'db> {
#[derive(Clone, Debug)]
pub struct StarImportDefinitionKind {
node: AstNodeRef<ast::StmtImportFrom>,
symbol_id: ScopedSymbolId,
place_id: ScopedPlaceId,
}
impl StarImportDefinitionKind {
pub(crate) fn import(&self) -> &ast::StmtImportFrom {
self.node.node()
pub(crate) fn import<'ast>(&self, module: &'ast ParsedModuleRef) -> &'ast ast::StmtImportFrom {
self.node.node(module)
}
pub(crate) fn alias(&self) -> &ast::Alias {
pub(crate) fn alias<'ast>(&self, module: &'ast ParsedModuleRef) -> &'ast ast::Alias {
// INVARIANT: for an invalid-syntax statement such as `from foo import *, bar, *`,
// we only create a `StarImportDefinitionKind` for the *first* `*` alias in the names list.
self.node
.node()
.node(module)
.names
.iter()
.find(|alias| &alias.name == "*")
@@ -737,8 +790,8 @@ impl StarImportDefinitionKind {
)
}
pub(crate) fn symbol_id(&self) -> ScopedSymbolId {
self.symbol_id
pub(crate) fn place_id(&self) -> ScopedPlaceId {
self.place_id
}
}
@@ -750,8 +803,8 @@ pub struct MatchPatternDefinitionKind {
}
impl MatchPatternDefinitionKind {
pub(crate) fn pattern(&self) -> &ast::Pattern {
self.pattern.node()
pub(crate) fn pattern<'ast>(&self, module: &'ast ParsedModuleRef) -> &'ast ast::Pattern {
self.pattern.node(module)
}
pub(crate) fn index(&self) -> u32 {
@@ -759,26 +812,31 @@ impl MatchPatternDefinitionKind {
}
}
/// Note that the elements of a comprehension can be in different scopes.
/// If the definition target of a comprehension is a name, it is in the comprehension's scope.
/// But if the target is an attribute or subscript, its definition is not in the comprehension's scope;
/// it is in the scope in which the root variable is bound.
/// TODO: currently we don't model this correctly and simply assume that it is in a scope outside the comprehension.
#[derive(Clone, Debug)]
pub struct ComprehensionDefinitionKind<'db> {
pub(super) target_kind: TargetKind<'db>,
pub(super) iterable: AstNodeRef<ast::Expr>,
pub(super) target: AstNodeRef<ast::Expr>,
pub(super) first: bool,
pub(super) is_async: bool,
target_kind: TargetKind<'db>,
iterable: AstNodeRef<ast::Expr>,
target: AstNodeRef<ast::Expr>,
first: bool,
is_async: bool,
}
impl<'db> ComprehensionDefinitionKind<'db> {
pub(crate) fn iterable(&self) -> &ast::Expr {
self.iterable.node()
pub(crate) fn iterable<'ast>(&self, module: &'ast ParsedModuleRef) -> &'ast ast::Expr {
self.iterable.node(module)
}
pub(crate) fn target_kind(&self) -> TargetKind<'db> {
self.target_kind
}
pub(crate) fn target(&self) -> &ast::Expr {
self.target.node()
pub(crate) fn target<'ast>(&self, module: &'ast ParsedModuleRef) -> &'ast ast::Expr {
self.target.node(module)
}
pub(crate) fn is_first(&self) -> bool {
@@ -798,12 +856,12 @@ pub struct ImportDefinitionKind {
}
impl ImportDefinitionKind {
pub(crate) fn import(&self) -> &ast::StmtImport {
self.node.node()
pub(crate) fn import<'ast>(&self, module: &'ast ParsedModuleRef) -> &'ast ast::StmtImport {
self.node.node(module)
}
pub(crate) fn alias(&self) -> &ast::Alias {
&self.node.node().names[self.alias_index]
pub(crate) fn alias<'ast>(&self, module: &'ast ParsedModuleRef) -> &'ast ast::Alias {
&self.node.node(module).names[self.alias_index]
}
pub(crate) fn is_reexported(&self) -> bool {
@@ -819,12 +877,12 @@ pub struct ImportFromDefinitionKind {
}
impl ImportFromDefinitionKind {
pub(crate) fn import(&self) -> &ast::StmtImportFrom {
self.node.node()
pub(crate) fn import<'ast>(&self, module: &'ast ParsedModuleRef) -> &'ast ast::StmtImportFrom {
self.node.node(module)
}
pub(crate) fn alias(&self) -> &ast::Alias {
&self.node.node().names[self.alias_index]
pub(crate) fn alias<'ast>(&self, module: &'ast ParsedModuleRef) -> &'ast ast::Alias {
&self.node.node(module).names[self.alias_index]
}
pub(crate) fn is_reexported(&self) -> bool {
@@ -840,28 +898,16 @@ pub struct AssignmentDefinitionKind<'db> {
}
impl<'db> AssignmentDefinitionKind<'db> {
pub(crate) fn new(
target_kind: TargetKind<'db>,
value: AstNodeRef<ast::Expr>,
target: AstNodeRef<ast::Expr>,
) -> Self {
Self {
target_kind,
value,
target,
}
}
pub(crate) fn target_kind(&self) -> TargetKind<'db> {
self.target_kind
}
pub(crate) fn value(&self) -> &ast::Expr {
self.value.node()
pub(crate) fn value<'ast>(&self, module: &'ast ParsedModuleRef) -> &'ast ast::Expr {
self.value.node(module)
}
pub(crate) fn target(&self) -> &ast::Expr {
self.target.node()
pub(crate) fn target<'ast>(&self, module: &'ast ParsedModuleRef) -> &'ast ast::Expr {
self.target.node(module)
}
}
@@ -873,28 +919,16 @@ pub struct AnnotatedAssignmentDefinitionKind {
}
impl AnnotatedAssignmentDefinitionKind {
pub(crate) fn new(
annotation: AstNodeRef<ast::Expr>,
value: Option<AstNodeRef<ast::Expr>>,
target: AstNodeRef<ast::Expr>,
) -> Self {
Self {
annotation,
value,
target,
}
pub(crate) fn value<'ast>(&self, module: &'ast ParsedModuleRef) -> Option<&'ast ast::Expr> {
self.value.as_ref().map(|value| value.node(module))
}
pub(crate) fn value(&self) -> Option<&ast::Expr> {
self.value.as_deref()
pub(crate) fn annotation<'ast>(&self, module: &'ast ParsedModuleRef) -> &'ast ast::Expr {
self.annotation.node(module)
}
pub(crate) fn annotation(&self) -> &ast::Expr {
self.annotation.node()
}
pub(crate) fn target(&self) -> &ast::Expr {
self.target.node()
pub(crate) fn target<'ast>(&self, module: &'ast ParsedModuleRef) -> &'ast ast::Expr {
self.target.node(module)
}
}
@@ -907,30 +941,16 @@ pub struct WithItemDefinitionKind<'db> {
}
impl<'db> WithItemDefinitionKind<'db> {
pub(crate) fn new(
target_kind: TargetKind<'db>,
context_expr: AstNodeRef<ast::Expr>,
target: AstNodeRef<ast::Expr>,
is_async: bool,
) -> Self {
Self {
target_kind,
context_expr,
target,
is_async,
}
}
pub(crate) fn context_expr(&self) -> &ast::Expr {
self.context_expr.node()
pub(crate) fn context_expr<'ast>(&self, module: &'ast ParsedModuleRef) -> &'ast ast::Expr {
self.context_expr.node(module)
}
pub(crate) fn target_kind(&self) -> TargetKind<'db> {
self.target_kind
}
pub(crate) fn target(&self) -> &ast::Expr {
self.target.node()
pub(crate) fn target<'ast>(&self, module: &'ast ParsedModuleRef) -> &'ast ast::Expr {
self.target.node(module)
}
pub(crate) const fn is_async(&self) -> bool {
@@ -947,30 +967,16 @@ pub struct ForStmtDefinitionKind<'db> {
}
impl<'db> ForStmtDefinitionKind<'db> {
pub(crate) fn new(
target_kind: TargetKind<'db>,
iterable: AstNodeRef<ast::Expr>,
target: AstNodeRef<ast::Expr>,
is_async: bool,
) -> Self {
Self {
target_kind,
iterable,
target,
is_async,
}
}
pub(crate) fn iterable(&self) -> &ast::Expr {
self.iterable.node()
pub(crate) fn iterable<'ast>(&self, module: &'ast ParsedModuleRef) -> &'ast ast::Expr {
self.iterable.node(module)
}
pub(crate) fn target_kind(&self) -> TargetKind<'db> {
self.target_kind
}
pub(crate) fn target(&self) -> &ast::Expr {
self.target.node()
pub(crate) fn target<'ast>(&self, module: &'ast ParsedModuleRef) -> &'ast ast::Expr {
self.target.node(module)
}
pub(crate) const fn is_async(&self) -> bool {
@@ -985,12 +991,18 @@ pub struct ExceptHandlerDefinitionKind {
}
impl ExceptHandlerDefinitionKind {
pub(crate) fn node(&self) -> &ast::ExceptHandlerExceptHandler {
self.handler.node()
pub(crate) fn node<'ast>(
&self,
module: &'ast ParsedModuleRef,
) -> &'ast ast::ExceptHandlerExceptHandler {
self.handler.node(module)
}
pub(crate) fn handled_exceptions(&self) -> Option<&ast::Expr> {
self.node().type_.as_deref()
pub(crate) fn handled_exceptions<'ast>(
&self,
module: &'ast ParsedModuleRef,
) -> Option<&'ast ast::Expr> {
self.node(module).type_.as_deref()
}
pub(crate) fn is_star(&self) -> bool {
@@ -1031,6 +1043,18 @@ impl From<&ast::ExprName> for DefinitionNodeKey {
}
}
impl From<&ast::ExprAttribute> for DefinitionNodeKey {
fn from(node: &ast::ExprAttribute) -> Self {
Self(NodeKey::from_node(node))
}
}
impl From<&ast::ExprSubscript> for DefinitionNodeKey {
fn from(node: &ast::ExprSubscript) -> Self {
Self(NodeKey::from_node(node))
}
}
impl From<&ast::ExprNamed> for DefinitionNodeKey {
fn from(node: &ast::ExprNamed) -> Self {
Self(NodeKey::from_node(node))


@@ -1,7 +1,8 @@
use crate::ast_node_ref::AstNodeRef;
use crate::db::Db;
use crate::semantic_index::symbol::{FileScopeId, ScopeId};
use crate::semantic_index::place::{FileScopeId, ScopeId};
use ruff_db::files::File;
use ruff_db::parsed::ParsedModuleRef;
use ruff_python_ast as ast;
use salsa;
@@ -41,8 +42,8 @@ pub(crate) struct Expression<'db> {
/// The expression node.
#[no_eq]
#[tracked]
#[returns(deref)]
pub(crate) node_ref: AstNodeRef<ast::Expr>,
#[returns(ref)]
pub(crate) _node_ref: AstNodeRef<ast::Expr>,
/// An assignment statement, if this expression is immediately used as the rhs of that
/// assignment.
@@ -62,6 +63,14 @@ pub(crate) struct Expression<'db> {
}
impl<'db> Expression<'db> {
pub(crate) fn node_ref<'ast>(
self,
db: &'db dyn Db,
parsed: &'ast ParsedModuleRef,
) -> &'ast ast::Expr {
self._node_ref(db).node(parsed)
}
pub(crate) fn scope(self, db: &'db dyn Db) -> ScopeId<'db> {
self.file_scope(db).to_scope_id(db, self.file(db))
}


@@ -1,7 +1,7 @@
//! # Narrowing constraints
//!
//! When building a semantic index for a file, we associate each binding with a _narrowing
//! constraint_, which constrains the type of the binding's symbol. Note that a binding can be
//! constraint_, which constrains the type of the binding's place. Note that a binding can be
//! associated with a different narrowing constraint at different points in a file. See the
//! [`use_def`][crate::semantic_index::use_def] module for more details.
//!
@@ -34,7 +34,7 @@ use crate::semantic_index::predicate::ScopedPredicateId;
/// A narrowing constraint associated with a live binding.
///
/// A constraint is a list of [`Predicate`]s that each constrain the type of the binding's symbol.
/// A constraint is a list of [`Predicate`]s that each constrain the type of the binding's place.
///
/// [`Predicate`]: crate::semantic_index::predicate::Predicate
pub(crate) type ScopedNarrowingConstraint = List<ScopedNarrowingConstraintPredicate>;
@@ -46,7 +46,7 @@ pub(crate) enum ConstraintKey {
}
/// One of the [`Predicate`]s in a narrowing constraint, which constraints the type of the
/// binding's symbol.
/// binding's place.
///
/// Note that those [`Predicate`]s are stored in [their own per-scope
/// arena][crate::semantic_index::predicate::Predicates], so internally we use a


@@ -0,0 +1,957 @@
use std::convert::Infallible;
use std::hash::{Hash, Hasher};
use std::ops::Range;
use bitflags::bitflags;
use hashbrown::hash_map::RawEntryMut;
use ruff_db::files::File;
use ruff_db::parsed::ParsedModuleRef;
use ruff_index::{IndexVec, newtype_index};
use ruff_python_ast as ast;
use ruff_python_ast::name::Name;
use rustc_hash::FxHasher;
use smallvec::{SmallVec, smallvec};
use crate::Db;
use crate::ast_node_ref::AstNodeRef;
use crate::node_key::NodeKey;
use crate::semantic_index::visibility_constraints::ScopedVisibilityConstraintId;
use crate::semantic_index::{PlaceSet, SemanticIndex, semantic_index};
#[derive(Debug, Clone, PartialEq, Eq, Hash, salsa::Update)]
pub(crate) enum PlaceExprSubSegment {
/// A member access, e.g. `.y` in `x.y`
Member(ast::name::Name),
/// An integer-based index access, e.g. `[1]` in `x[1]`
IntSubscript(ast::Int),
/// A string-based index access, e.g. `["foo"]` in `x["foo"]`
StringSubscript(String),
}
impl PlaceExprSubSegment {
pub(crate) fn as_member(&self) -> Option<&ast::name::Name> {
match self {
PlaceExprSubSegment::Member(name) => Some(name),
_ => None,
}
}
}
/// An expression that can be the target of a `Definition`.
/// If you want to perform a comparison based on the equality of segments (without including
/// flags), use [`PlaceSegments`].
#[derive(Eq, PartialEq, Debug)]
pub struct PlaceExpr {
root_name: Name,
sub_segments: SmallVec<[PlaceExprSubSegment; 1]>,
flags: PlaceFlags,
}
impl std::fmt::Display for PlaceExpr {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.root_name)?;
for segment in &self.sub_segments {
match segment {
PlaceExprSubSegment::Member(name) => write!(f, ".{name}")?,
PlaceExprSubSegment::IntSubscript(int) => write!(f, "[{int}]")?,
PlaceExprSubSegment::StringSubscript(string) => write!(f, "[\"{string}\"]")?,
}
}
Ok(())
}
}
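As a self-contained illustration of the decomposition that `PlaceExpr` and its `Display` impl perform, here is a simplified stand-in (plain `String`/`i64` replace the `Name`/`ast::Int` types; the names are hypothetical):

```rust
#[derive(Debug, PartialEq)]
enum SubSegment {
    Member(String),          // `.y` in `x.y`
    IntSubscript(i64),       // `[1]` in `x[1]`
    StringSubscript(String), // `["foo"]` in `x["foo"]`
}

/// Stand-in for `PlaceExpr`: a root name plus a chain of sub-segments.
struct Place {
    root_name: String,
    sub_segments: Vec<SubSegment>,
}

impl std::fmt::Display for Place {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}", self.root_name)?;
        for segment in &self.sub_segments {
            match segment {
                SubSegment::Member(name) => write!(f, ".{name}")?,
                SubSegment::IntSubscript(i) => write!(f, "[{i}]")?,
                SubSegment::StringSubscript(s) => write!(f, "[\"{s}\"]")?,
            }
        }
        Ok(())
    }
}

fn main() {
    // `x.y["key"]` decomposes into root `x` and segments `.y`, `["key"]`.
    let place = Place {
        root_name: "x".to_string(),
        sub_segments: vec![
            SubSegment::Member("y".to_string()),
            SubSegment::StringSubscript("key".to_string()),
        ],
    };
    assert_eq!(place.to_string(), "x.y[\"key\"]");
}
```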
impl TryFrom<&ast::name::Name> for PlaceExpr {
type Error = Infallible;
fn try_from(name: &ast::name::Name) -> Result<Self, Infallible> {
Ok(PlaceExpr::name(name.clone()))
}
}
impl TryFrom<ast::name::Name> for PlaceExpr {
type Error = Infallible;
fn try_from(name: ast::name::Name) -> Result<Self, Infallible> {
Ok(PlaceExpr::name(name))
}
}
impl TryFrom<&ast::ExprAttribute> for PlaceExpr {
type Error = ();
fn try_from(attr: &ast::ExprAttribute) -> Result<Self, ()> {
let mut place = PlaceExpr::try_from(&*attr.value)?;
place
.sub_segments
.push(PlaceExprSubSegment::Member(attr.attr.id.clone()));
Ok(place)
}
}
impl TryFrom<ast::ExprAttribute> for PlaceExpr {
type Error = ();
fn try_from(attr: ast::ExprAttribute) -> Result<Self, ()> {
let mut place = PlaceExpr::try_from(&*attr.value)?;
place
.sub_segments
.push(PlaceExprSubSegment::Member(attr.attr.id));
Ok(place)
}
}
impl TryFrom<&ast::ExprSubscript> for PlaceExpr {
type Error = ();
fn try_from(subscript: &ast::ExprSubscript) -> Result<Self, ()> {
let mut place = PlaceExpr::try_from(&*subscript.value)?;
match &*subscript.slice {
ast::Expr::NumberLiteral(ast::ExprNumberLiteral {
value: ast::Number::Int(index),
..
}) => {
place
.sub_segments
.push(PlaceExprSubSegment::IntSubscript(index.clone()));
}
ast::Expr::StringLiteral(string) => {
place
.sub_segments
.push(PlaceExprSubSegment::StringSubscript(
string.value.to_string(),
));
}
_ => {
return Err(());
}
}
Ok(place)
}
}
impl TryFrom<ast::ExprSubscript> for PlaceExpr {
type Error = ();
fn try_from(subscript: ast::ExprSubscript) -> Result<Self, ()> {
PlaceExpr::try_from(&subscript)
}
}
impl TryFrom<&ast::Expr> for PlaceExpr {
type Error = ();
fn try_from(expr: &ast::Expr) -> Result<Self, ()> {
match expr {
ast::Expr::Name(name) => Ok(PlaceExpr::name(name.id.clone())),
ast::Expr::Attribute(attr) => PlaceExpr::try_from(attr),
ast::Expr::Subscript(subscript) => PlaceExpr::try_from(subscript),
_ => Err(()),
}
}
}
impl PlaceExpr {
pub(super) fn name(name: Name) -> Self {
Self {
root_name: name,
sub_segments: smallvec![],
flags: PlaceFlags::empty(),
}
}
fn insert_flags(&mut self, flags: PlaceFlags) {
self.flags.insert(flags);
}
pub(super) fn mark_instance_attribute(&mut self) {
self.flags.insert(PlaceFlags::IS_INSTANCE_ATTRIBUTE);
}
pub(crate) fn root_name(&self) -> &Name {
&self.root_name
}
pub(crate) fn sub_segments(&self) -> &[PlaceExprSubSegment] {
&self.sub_segments
}
pub(crate) fn as_name(&self) -> Option<&Name> {
if self.is_name() {
Some(&self.root_name)
} else {
None
}
}
/// Assumes that the place expression is a name.
#[track_caller]
pub(crate) fn expect_name(&self) -> &Name {
debug_assert_eq!(self.sub_segments.len(), 0);
&self.root_name
}
/// Does the place expression have the form `self.{name}`, where `self` is the first parameter of the method?
pub(super) fn is_instance_attribute_named(&self, name: &str) -> bool {
self.is_instance_attribute()
&& self.sub_segments.len() == 1
&& self.sub_segments[0].as_member().unwrap().as_str() == name
}
/// Is the place an instance attribute?
pub fn is_instance_attribute(&self) -> bool {
self.flags.contains(PlaceFlags::IS_INSTANCE_ATTRIBUTE)
}
/// Is the place used in its containing scope?
pub fn is_used(&self) -> bool {
self.flags.contains(PlaceFlags::IS_USED)
}
/// Is the place defined in its containing scope?
pub fn is_bound(&self) -> bool {
self.flags.contains(PlaceFlags::IS_BOUND)
}
/// Is the place declared in its containing scope?
pub fn is_declared(&self) -> bool {
self.flags.contains(PlaceFlags::IS_DECLARED)
}
/// Is the place just a name?
pub fn is_name(&self) -> bool {
self.sub_segments.is_empty()
}
pub fn is_name_and(&self, f: impl FnOnce(&str) -> bool) -> bool {
self.is_name() && f(&self.root_name)
}
/// Does the place expression have the form `<object>.member`?
pub fn is_member(&self) -> bool {
self.sub_segments
.last()
.is_some_and(|last| last.as_member().is_some())
}
pub(crate) fn segments(&self) -> PlaceSegments {
PlaceSegments {
root_name: Some(&self.root_name),
sub_segments: &self.sub_segments,
}
}
// TODO: Ideally this would iterate PlaceSegments instead of RootExprs, both to reduce
// allocation and to avoid having both flagged and non-flagged versions of PlaceExprs.
fn root_exprs(&self) -> RootExprs<'_> {
RootExprs {
expr: self,
len: self.sub_segments.len(),
}
}
}
struct RootExprs<'e> {
expr: &'e PlaceExpr,
len: usize,
}
impl Iterator for RootExprs<'_> {
type Item = PlaceExpr;
fn next(&mut self) -> Option<Self::Item> {
if self.len == 0 {
return None;
}
self.len -= 1;
Some(PlaceExpr {
root_name: self.expr.root_name.clone(),
sub_segments: self.expr.sub_segments[..self.len].iter().cloned().collect(),
flags: PlaceFlags::empty(),
})
}
}
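The `RootExprs` iterator walks from the longest strict prefix of a place expression down to the bare root name. A sketch of the same prefix enumeration over plain strings (the real iterator clones whole `PlaceExpr` values; this stand-in just slices the segment list):

```rust
// For a place expression with sub-segments `["y", "z"]` (i.e. `x.y.z`), the
// prefixes visited are `["y"]` (i.e. `x.y`) and then `[]` (the bare root `x`).
fn root_prefixes(sub_segments: &[&str]) -> Vec<Vec<String>> {
    (0..sub_segments.len())
        .rev()
        .map(|len| sub_segments[..len].iter().map(|s| s.to_string()).collect())
        .collect()
}

fn main() {
    let prefixes = root_prefixes(&["y", "z"]);
    assert_eq!(prefixes, vec![vec!["y".to_string()], vec![]]);
}
```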
bitflags! {
/// Flags that can be queried to obtain information about a place in a given scope.
///
/// See the doc-comment at the top of [`super::use_def`] for explanations of what it
/// means for a place to be *bound* as opposed to *declared*.
#[derive(Copy, Clone, Debug, Eq, PartialEq)]
struct PlaceFlags: u8 {
const IS_USED = 1 << 0;
const IS_BOUND = 1 << 1;
const IS_DECLARED = 1 << 2;
/// TODO: This flag is not yet set by anything
const MARKED_GLOBAL = 1 << 3;
/// TODO: This flag is not yet set by anything
const MARKED_NONLOCAL = 1 << 4;
const IS_INSTANCE_ATTRIBUTE = 1 << 5;
}
}
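A hand-rolled sketch of the bit layout the `bitflags!` macro generates for `PlaceFlags` (simplified to three of the flags and to the `insert`/`contains` operations this file actually uses):

```rust
/// Each property of a place in a scope occupies one bit of a `u8`.
#[derive(Copy, Clone, PartialEq)]
struct PlaceFlags(u8);

impl PlaceFlags {
    const IS_USED: PlaceFlags = PlaceFlags(1 << 0);
    const IS_BOUND: PlaceFlags = PlaceFlags(1 << 1);
    const IS_DECLARED: PlaceFlags = PlaceFlags(1 << 2);

    fn empty() -> Self {
        PlaceFlags(0)
    }

    /// Set all bits of `other` in `self`.
    fn insert(&mut self, other: PlaceFlags) {
        self.0 |= other.0;
    }

    /// True if every bit of `other` is set in `self`.
    fn contains(self, other: PlaceFlags) -> bool {
        self.0 & other.0 == other.0
    }
}

fn main() {
    let mut flags = PlaceFlags::empty();
    flags.insert(PlaceFlags::IS_BOUND);
    assert!(flags.contains(PlaceFlags::IS_BOUND));
    assert!(!flags.contains(PlaceFlags::IS_DECLARED));
    flags.insert(PlaceFlags::IS_DECLARED);
    assert!(flags.contains(PlaceFlags::IS_DECLARED));
}
```

The real definition also derives `Debug` and `Eq` and carries the `MARKED_GLOBAL`/`MARKED_NONLOCAL`/`IS_INSTANCE_ATTRIBUTE` bits; the mechanics are the same.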
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum PlaceSegment<'a> {
/// The first segment of a place expression (the root name), e.g. `x` in `x.y.z[0]`.
Name(&'a ast::name::Name),
Member(&'a ast::name::Name),
IntSubscript(&'a ast::Int),
StringSubscript(&'a str),
}
#[derive(Debug, PartialEq, Eq)]
pub struct PlaceSegments<'a> {
root_name: Option<&'a ast::name::Name>,
sub_segments: &'a [PlaceExprSubSegment],
}
impl<'a> Iterator for PlaceSegments<'a> {
type Item = PlaceSegment<'a>;
fn next(&mut self) -> Option<Self::Item> {
if let Some(name) = self.root_name.take() {
return Some(PlaceSegment::Name(name));
}
if self.sub_segments.is_empty() {
return None;
}
let segment = &self.sub_segments[0];
self.sub_segments = &self.sub_segments[1..];
Some(match segment {
PlaceExprSubSegment::Member(name) => PlaceSegment::Member(name),
PlaceExprSubSegment::IntSubscript(int) => PlaceSegment::IntSubscript(int),
PlaceExprSubSegment::StringSubscript(string) => PlaceSegment::StringSubscript(string),
})
}
}
/// ID that uniquely identifies a place in a file.
#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
pub struct FilePlaceId {
scope: FileScopeId,
scoped_place_id: ScopedPlaceId,
}
impl FilePlaceId {
pub fn scope(self) -> FileScopeId {
self.scope
}
pub(crate) fn scoped_place_id(self) -> ScopedPlaceId {
self.scoped_place_id
}
}
impl From<FilePlaceId> for ScopedPlaceId {
fn from(val: FilePlaceId) -> Self {
val.scoped_place_id()
}
}
/// ID that uniquely identifies a place inside a [`Scope`].
#[newtype_index]
#[derive(salsa::Update)]
pub struct ScopedPlaceId;
/// A cross-module identifier of a scope that can be used as a salsa query parameter.
#[salsa::tracked(debug)]
pub struct ScopeId<'db> {
pub file: File,
pub file_scope_id: FileScopeId,
count: countme::Count<ScopeId<'static>>,
}
impl<'db> ScopeId<'db> {
pub(crate) fn is_function_like(self, db: &'db dyn Db) -> bool {
self.node(db).scope_kind().is_function_like()
}
pub(crate) fn is_type_parameter(self, db: &'db dyn Db) -> bool {
self.node(db).scope_kind().is_type_parameter()
}
pub(crate) fn node(self, db: &dyn Db) -> &NodeWithScopeKind {
self.scope(db).node()
}
pub(crate) fn scope(self, db: &dyn Db) -> &Scope {
semantic_index(db, self.file(db)).scope(self.file_scope_id(db))
}
#[cfg(test)]
pub(crate) fn name<'ast>(self, db: &'db dyn Db, module: &'ast ParsedModuleRef) -> &'ast str {
match self.node(db) {
NodeWithScopeKind::Module => "<module>",
NodeWithScopeKind::Class(class) | NodeWithScopeKind::ClassTypeParameters(class) => {
class.node(module).name.as_str()
}
NodeWithScopeKind::Function(function)
| NodeWithScopeKind::FunctionTypeParameters(function) => {
function.node(module).name.as_str()
}
NodeWithScopeKind::TypeAlias(type_alias)
| NodeWithScopeKind::TypeAliasTypeParameters(type_alias) => type_alias
.node(module)
.name
.as_name_expr()
.map(|name| name.id.as_str())
.unwrap_or("<type alias>"),
NodeWithScopeKind::Lambda(_) => "<lambda>",
NodeWithScopeKind::ListComprehension(_) => "<listcomp>",
NodeWithScopeKind::SetComprehension(_) => "<setcomp>",
NodeWithScopeKind::DictComprehension(_) => "<dictcomp>",
NodeWithScopeKind::GeneratorExpression(_) => "<generator>",
}
}
}
/// ID that uniquely identifies a scope inside of a module.
#[newtype_index]
#[derive(salsa::Update)]
pub struct FileScopeId;
impl FileScopeId {
/// Returns the scope id of the module-global scope.
pub fn global() -> Self {
FileScopeId::from_u32(0)
}
pub fn is_global(self) -> bool {
self == FileScopeId::global()
}
pub fn to_scope_id(self, db: &dyn Db, file: File) -> ScopeId<'_> {
let index = semantic_index(db, file);
index.scope_ids_by_scope[self]
}
pub(crate) fn is_generator_function(self, index: &SemanticIndex) -> bool {
index.generator_functions.contains(&self)
}
}
#[derive(Debug, salsa::Update)]
pub struct Scope {
parent: Option<FileScopeId>,
node: NodeWithScopeKind,
descendants: Range<FileScopeId>,
reachability: ScopedVisibilityConstraintId,
}
impl Scope {
pub(super) fn new(
parent: Option<FileScopeId>,
node: NodeWithScopeKind,
descendants: Range<FileScopeId>,
reachability: ScopedVisibilityConstraintId,
) -> Self {
Scope {
parent,
node,
descendants,
reachability,
}
}
pub fn parent(&self) -> Option<FileScopeId> {
self.parent
}
pub fn node(&self) -> &NodeWithScopeKind {
&self.node
}
pub fn kind(&self) -> ScopeKind {
self.node().scope_kind()
}
pub fn descendants(&self) -> Range<FileScopeId> {
self.descendants.clone()
}
pub(super) fn extend_descendants(&mut self, children_end: FileScopeId) {
self.descendants = self.descendants.start..children_end;
}
pub(crate) fn is_eager(&self) -> bool {
self.kind().is_eager()
}
pub(crate) fn reachability(&self) -> ScopedVisibilityConstraintId {
self.reachability
}
}
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum ScopeKind {
Module,
Annotation,
Class,
Function,
Lambda,
Comprehension,
TypeAlias,
}
impl ScopeKind {
pub(crate) fn is_eager(self) -> bool {
match self {
ScopeKind::Module | ScopeKind::Class | ScopeKind::Comprehension => true,
ScopeKind::Annotation
| ScopeKind::Function
| ScopeKind::Lambda
| ScopeKind::TypeAlias => false,
}
}
pub(crate) fn is_function_like(self) -> bool {
// Type parameter scopes behave like function scopes in terms of name resolution; the
// CPython symbol table also uses the term "function-like" for these scopes.
matches!(
self,
ScopeKind::Annotation
| ScopeKind::Function
| ScopeKind::Lambda
| ScopeKind::TypeAlias
| ScopeKind::Comprehension
)
}
pub(crate) fn is_class(self) -> bool {
matches!(self, ScopeKind::Class)
}
pub(crate) fn is_type_parameter(self) -> bool {
matches!(self, ScopeKind::Annotation | ScopeKind::TypeAlias)
}
}
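The predicates above encode Python's scoping rules: eager scopes (module and class bodies, comprehensions) resolve free names when the scope is created, while lazy scopes (function bodies, lambdas) resolve them at call time. A standalone copy of the enum (a sketch mirroring the predicates above, not the crate's own type) makes the overlap between the two classifications explicit:

```rust
// Hypothetical standalone copy of `ScopeKind` and its predicates, used only
// to illustrate how "eager" and "function-like" overlap: comprehensions are
// both, class and module bodies are eager but not function-like, and function
// bodies are function-like but lazy.
#[derive(Copy, Clone)]
enum ScopeKind {
    Module,
    Annotation,
    Class,
    Function,
    Lambda,
    Comprehension,
    TypeAlias,
}

impl ScopeKind {
    fn is_eager(self) -> bool {
        matches!(
            self,
            ScopeKind::Module | ScopeKind::Class | ScopeKind::Comprehension
        )
    }

    fn is_function_like(self) -> bool {
        matches!(
            self,
            ScopeKind::Annotation
                | ScopeKind::Function
                | ScopeKind::Lambda
                | ScopeKind::TypeAlias
                | ScopeKind::Comprehension
        )
    }
}

fn main() {
    // Comprehensions resolve names like functions but are created eagerly.
    assert!(ScopeKind::Comprehension.is_eager() && ScopeKind::Comprehension.is_function_like());
    // Class bodies are eager but do not use function-style name resolution.
    assert!(ScopeKind::Class.is_eager() && !ScopeKind::Class.is_function_like());
    // Function bodies are function-like but resolve names lazily, at call time.
    assert!(!ScopeKind::Function.is_eager() && ScopeKind::Function.is_function_like());
}
```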
/// [`PlaceExpr`] table for a specific [`Scope`].
#[derive(Default, salsa::Update)]
pub struct PlaceTable {
/// The place expressions in this scope.
places: IndexVec<ScopedPlaceId, PlaceExpr>,
/// The set of places.
place_set: PlaceSet,
}
impl PlaceTable {
fn shrink_to_fit(&mut self) {
self.places.shrink_to_fit();
}
pub(crate) fn place_expr(&self, place_id: impl Into<ScopedPlaceId>) -> &PlaceExpr {
&self.places[place_id.into()]
}
/// Iterates over the "root" expressions of the place (e.g. `x.y.z`, `x.y`, `x` for `x.y.z[0]`).
pub(crate) fn root_place_exprs(
&self,
place_expr: &PlaceExpr,
) -> impl Iterator<Item = &PlaceExpr> {
place_expr
.root_exprs()
.filter_map(|place_expr| self.place_by_expr(&place_expr))
}
#[expect(unused)]
pub(crate) fn place_ids(&self) -> impl Iterator<Item = ScopedPlaceId> {
self.places.indices()
}
pub fn places(&self) -> impl Iterator<Item = &PlaceExpr> {
self.places.iter()
}
pub fn symbols(&self) -> impl Iterator<Item = &PlaceExpr> {
self.places().filter(|place_expr| place_expr.is_name())
}
pub fn instance_attributes(&self) -> impl Iterator<Item = &PlaceExpr> {
self.places()
.filter(|place_expr| place_expr.is_instance_attribute())
}
/// Returns the place named `name`.
#[allow(unused)] // used in tests
pub(crate) fn place_by_name(&self, name: &str) -> Option<&PlaceExpr> {
let id = self.place_id_by_name(name)?;
Some(self.place_expr(id))
}
/// Returns the flagged (canonical) place corresponding to an unflagged place expression.
///
/// TODO: Ideally this would take a [`PlaceSegments`] instead of [`PlaceExpr`], to avoid the
/// awkward distinction between "flagged" (canonical) and unflagged [`PlaceExpr`]; in that
/// world, we would only create [`PlaceExpr`] in semantic indexing; in type inference we'd
/// create [`PlaceSegments`] if we need to look up a [`PlaceExpr`]. The [`PlaceTable`] would
/// need to gain the ability to hash and look up by a [`PlaceSegments`].
pub(crate) fn place_by_expr(&self, place_expr: &PlaceExpr) -> Option<&PlaceExpr> {
let id = self.place_id_by_expr(place_expr)?;
Some(self.place_expr(id))
}
/// Returns the [`ScopedPlaceId`] of the place named `name`.
pub(crate) fn place_id_by_name(&self, name: &str) -> Option<ScopedPlaceId> {
let (id, ()) = self
.place_set
.raw_entry()
.from_hash(Self::hash_name(name), |id| {
self.place_expr(*id).as_name().map(Name::as_str) == Some(name)
})?;
Some(*id)
}
/// Returns the [`ScopedPlaceId`] of the place expression.
pub(crate) fn place_id_by_expr(&self, place_expr: &PlaceExpr) -> Option<ScopedPlaceId> {
let (id, ()) = self
.place_set
.raw_entry()
.from_hash(Self::hash_place_expr(place_expr), |id| {
self.place_expr(*id).segments() == place_expr.segments()
})?;
Some(*id)
}
pub(crate) fn place_id_by_instance_attribute_name(&self, name: &str) -> Option<ScopedPlaceId> {
self.places
.indices()
.find(|id| self.places[*id].is_instance_attribute_named(name))
}
fn hash_name(name: &str) -> u64 {
let mut hasher = FxHasher::default();
name.hash(&mut hasher);
hasher.finish()
}
fn hash_place_expr(place_expr: &PlaceExpr) -> u64 {
let mut hasher = FxHasher::default();
place_expr.root_name().as_str().hash(&mut hasher);
for segment in &place_expr.sub_segments {
match segment {
PlaceExprSubSegment::Member(name) => name.hash(&mut hasher),
PlaceExprSubSegment::IntSubscript(int) => int.hash(&mut hasher),
PlaceExprSubSegment::StringSubscript(string) => string.hash(&mut hasher),
}
}
hasher.finish()
}
}
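Because `place_set` stores only `ScopedPlaceId`s, lookups hash the query's content up front and then verify candidates by comparing the stored entry, as `place_id_by_name` does via `raw_entry().from_hash(...)`. A minimal sketch of the hashing half (using the std `DefaultHasher` in place of `FxHasher`, which comes from an external crate; `hash_path` is a hypothetical stand-in for `hash_place_expr`):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Sketch of `hash_place_expr`'s approach: hash the root name followed by each
// sub-segment, so two structurally equal place expressions always hash
// equally. `DefaultHasher` stands in for the crate's `FxHasher`.
fn hash_path(root: &str, members: &[&str]) -> u64 {
    let mut hasher = DefaultHasher::new();
    root.hash(&mut hasher);
    for member in members {
        member.hash(&mut hasher);
    }
    hasher.finish()
}

fn main() {
    // Structural equality implies hash equality, which is the invariant the
    // ID-only `place_set` relies on when probing by a precomputed hash.
    assert_eq!(hash_path("x", &["y", "z"]), hash_path("x", &["y", "z"]));
}
```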
impl PartialEq for PlaceTable {
fn eq(&self, other: &Self) -> bool {
// We don't need to compare the `place_set`: it is a lookup index over `places`,
// whose contents are already compared via `PlaceExpr`.
self.places == other.places
}
}
impl Eq for PlaceTable {}
impl std::fmt::Debug for PlaceTable {
/// Exclude the `place_set` field from the debug output.
/// It's very noisy and not useful for debugging.
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_tuple("PlaceTable")
.field(&self.places)
.finish_non_exhaustive()
}
}
#[derive(Debug, Default)]
pub(super) struct PlaceTableBuilder {
table: PlaceTable,
associated_place_ids: IndexVec<ScopedPlaceId, Vec<ScopedPlaceId>>,
}
impl PlaceTableBuilder {
pub(super) fn add_symbol(&mut self, name: Name) -> (ScopedPlaceId, bool) {
let hash = PlaceTable::hash_name(&name);
let entry = self
.table
.place_set
.raw_entry_mut()
.from_hash(hash, |id| self.table.places[*id].as_name() == Some(&name));
match entry {
RawEntryMut::Occupied(entry) => (*entry.key(), false),
RawEntryMut::Vacant(entry) => {
let symbol = PlaceExpr::name(name);
let id = self.table.places.push(symbol);
entry.insert_with_hasher(hash, id, (), |id| {
PlaceTable::hash_place_expr(&self.table.places[*id])
});
let new_id = self.associated_place_ids.push(vec![]);
debug_assert_eq!(new_id, id);
(id, true)
}
}
}
pub(super) fn add_place(&mut self, place_expr: PlaceExpr) -> (ScopedPlaceId, bool) {
let hash = PlaceTable::hash_place_expr(&place_expr);
let entry = self.table.place_set.raw_entry_mut().from_hash(hash, |id| {
self.table.places[*id].segments() == place_expr.segments()
});
match entry {
RawEntryMut::Occupied(entry) => (*entry.key(), false),
RawEntryMut::Vacant(entry) => {
let id = self.table.places.push(place_expr);
entry.insert_with_hasher(hash, id, (), |id| {
PlaceTable::hash_place_expr(&self.table.places[*id])
});
let new_id = self.associated_place_ids.push(vec![]);
debug_assert_eq!(new_id, id);
for root in self.table.places[id].root_exprs() {
if let Some(root_id) = self.table.place_id_by_expr(&root) {
self.associated_place_ids[root_id].push(id);
}
}
(id, true)
}
}
}
pub(super) fn mark_place_bound(&mut self, id: ScopedPlaceId) {
self.table.places[id].insert_flags(PlaceFlags::IS_BOUND);
}
pub(super) fn mark_place_declared(&mut self, id: ScopedPlaceId) {
self.table.places[id].insert_flags(PlaceFlags::IS_DECLARED);
}
pub(super) fn mark_place_used(&mut self, id: ScopedPlaceId) {
self.table.places[id].insert_flags(PlaceFlags::IS_USED);
}
pub(super) fn places(&self) -> impl Iterator<Item = &PlaceExpr> {
self.table.places()
}
pub(super) fn place_id_by_expr(&self, place_expr: &PlaceExpr) -> Option<ScopedPlaceId> {
self.table.place_id_by_expr(place_expr)
}
pub(super) fn place_expr(&self, place_id: impl Into<ScopedPlaceId>) -> &PlaceExpr {
self.table.place_expr(place_id)
}
/// Returns the place IDs associated with the place (e.g. `x.y`, `x.y.z`, `x.y.z[0]` for `x`).
pub(super) fn associated_place_ids(
&self,
place: ScopedPlaceId,
) -> impl Iterator<Item = ScopedPlaceId> {
self.associated_place_ids[place].iter().copied()
}
pub(super) fn finish(mut self) -> PlaceTable {
self.table.shrink_to_fit();
self.table
}
}
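`add_symbol` and `add_place` follow an intern-or-get contract: return the existing ID if the entry is already present, otherwise push a new entry and report that it was inserted. A simplified, hypothetical interner built on a std `HashMap` (the real builder uses hashbrown's raw-entry API so each key is stored only once, in `places`) illustrates the `(id, newly_inserted)` return value:

```rust
use std::collections::HashMap;

// Hypothetical, simplified interner showing the (id, newly_inserted) contract
// of `PlaceTableBuilder::add_symbol`. Unlike the real table, this stores each
// name twice (in the Vec and in the map) for the sake of a std-only sketch.
#[derive(Default)]
struct SymbolInterner {
    names: Vec<String>,
    ids_by_name: HashMap<String, usize>,
}

impl SymbolInterner {
    fn add_symbol(&mut self, name: &str) -> (usize, bool) {
        if let Some(&id) = self.ids_by_name.get(name) {
            // Already interned: hand back the existing ID.
            return (id, false);
        }
        let id = self.names.len();
        self.names.push(name.to_string());
        self.ids_by_name.insert(name.to_string(), id);
        (id, true)
    }
}

fn main() {
    let mut interner = SymbolInterner::default();
    assert_eq!(interner.add_symbol("x"), (0, true));
    assert_eq!(interner.add_symbol("y"), (1, true));
    // Re-adding an interned name returns the original ID and `false`.
    assert_eq!(interner.add_symbol("x"), (0, false));
}
```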
/// Reference to a node that introduces a new scope.
#[derive(Copy, Clone, Debug)]
pub(crate) enum NodeWithScopeRef<'a> {
Module,
Class(&'a ast::StmtClassDef),
Function(&'a ast::StmtFunctionDef),
Lambda(&'a ast::ExprLambda),
FunctionTypeParameters(&'a ast::StmtFunctionDef),
ClassTypeParameters(&'a ast::StmtClassDef),
TypeAlias(&'a ast::StmtTypeAlias),
TypeAliasTypeParameters(&'a ast::StmtTypeAlias),
ListComprehension(&'a ast::ExprListComp),
SetComprehension(&'a ast::ExprSetComp),
DictComprehension(&'a ast::ExprDictComp),
GeneratorExpression(&'a ast::ExprGenerator),
}
impl NodeWithScopeRef<'_> {
/// Converts the unowned reference to an owned [`NodeWithScopeKind`].
///
/// # Safety
/// The node wrapped by `self` must be a child of `module`.
#[expect(unsafe_code)]
pub(super) unsafe fn to_kind(self, module: ParsedModuleRef) -> NodeWithScopeKind {
unsafe {
match self {
NodeWithScopeRef::Module => NodeWithScopeKind::Module,
NodeWithScopeRef::Class(class) => {
NodeWithScopeKind::Class(AstNodeRef::new(module, class))
}
NodeWithScopeRef::Function(function) => {
NodeWithScopeKind::Function(AstNodeRef::new(module, function))
}
NodeWithScopeRef::TypeAlias(type_alias) => {
NodeWithScopeKind::TypeAlias(AstNodeRef::new(module, type_alias))
}
NodeWithScopeRef::TypeAliasTypeParameters(type_alias) => {
NodeWithScopeKind::TypeAliasTypeParameters(AstNodeRef::new(module, type_alias))
}
NodeWithScopeRef::Lambda(lambda) => {
NodeWithScopeKind::Lambda(AstNodeRef::new(module, lambda))
}
NodeWithScopeRef::FunctionTypeParameters(function) => {
NodeWithScopeKind::FunctionTypeParameters(AstNodeRef::new(module, function))
}
NodeWithScopeRef::ClassTypeParameters(class) => {
NodeWithScopeKind::ClassTypeParameters(AstNodeRef::new(module, class))
}
NodeWithScopeRef::ListComprehension(comprehension) => {
NodeWithScopeKind::ListComprehension(AstNodeRef::new(module, comprehension))
}
NodeWithScopeRef::SetComprehension(comprehension) => {
NodeWithScopeKind::SetComprehension(AstNodeRef::new(module, comprehension))
}
NodeWithScopeRef::DictComprehension(comprehension) => {
NodeWithScopeKind::DictComprehension(AstNodeRef::new(module, comprehension))
}
NodeWithScopeRef::GeneratorExpression(generator) => {
NodeWithScopeKind::GeneratorExpression(AstNodeRef::new(module, generator))
}
}
}
}
pub(crate) fn node_key(self) -> NodeWithScopeKey {
match self {
NodeWithScopeRef::Module => NodeWithScopeKey::Module,
NodeWithScopeRef::Class(class) => NodeWithScopeKey::Class(NodeKey::from_node(class)),
NodeWithScopeRef::Function(function) => {
NodeWithScopeKey::Function(NodeKey::from_node(function))
}
NodeWithScopeRef::Lambda(lambda) => {
NodeWithScopeKey::Lambda(NodeKey::from_node(lambda))
}
NodeWithScopeRef::FunctionTypeParameters(function) => {
NodeWithScopeKey::FunctionTypeParameters(NodeKey::from_node(function))
}
NodeWithScopeRef::ClassTypeParameters(class) => {
NodeWithScopeKey::ClassTypeParameters(NodeKey::from_node(class))
}
NodeWithScopeRef::TypeAlias(type_alias) => {
NodeWithScopeKey::TypeAlias(NodeKey::from_node(type_alias))
}
NodeWithScopeRef::TypeAliasTypeParameters(type_alias) => {
NodeWithScopeKey::TypeAliasTypeParameters(NodeKey::from_node(type_alias))
}
NodeWithScopeRef::ListComprehension(comprehension) => {
NodeWithScopeKey::ListComprehension(NodeKey::from_node(comprehension))
}
NodeWithScopeRef::SetComprehension(comprehension) => {
NodeWithScopeKey::SetComprehension(NodeKey::from_node(comprehension))
}
NodeWithScopeRef::DictComprehension(comprehension) => {
NodeWithScopeKey::DictComprehension(NodeKey::from_node(comprehension))
}
NodeWithScopeRef::GeneratorExpression(generator) => {
NodeWithScopeKey::GeneratorExpression(NodeKey::from_node(generator))
}
}
}
}
/// Node that introduces a new scope.
#[derive(Clone, Debug, salsa::Update)]
pub enum NodeWithScopeKind {
Module,
Class(AstNodeRef<ast::StmtClassDef>),
ClassTypeParameters(AstNodeRef<ast::StmtClassDef>),
Function(AstNodeRef<ast::StmtFunctionDef>),
FunctionTypeParameters(AstNodeRef<ast::StmtFunctionDef>),
TypeAliasTypeParameters(AstNodeRef<ast::StmtTypeAlias>),
TypeAlias(AstNodeRef<ast::StmtTypeAlias>),
Lambda(AstNodeRef<ast::ExprLambda>),
ListComprehension(AstNodeRef<ast::ExprListComp>),
SetComprehension(AstNodeRef<ast::ExprSetComp>),
DictComprehension(AstNodeRef<ast::ExprDictComp>),
GeneratorExpression(AstNodeRef<ast::ExprGenerator>),
}
impl NodeWithScopeKind {
pub(crate) const fn scope_kind(&self) -> ScopeKind {
match self {
Self::Module => ScopeKind::Module,
Self::Class(_) => ScopeKind::Class,
Self::Function(_) => ScopeKind::Function,
Self::Lambda(_) => ScopeKind::Lambda,
Self::FunctionTypeParameters(_)
| Self::ClassTypeParameters(_)
| Self::TypeAliasTypeParameters(_) => ScopeKind::Annotation,
Self::TypeAlias(_) => ScopeKind::TypeAlias,
Self::ListComprehension(_)
| Self::SetComprehension(_)
| Self::DictComprehension(_)
| Self::GeneratorExpression(_) => ScopeKind::Comprehension,
}
}
pub fn expect_class<'ast>(&self, module: &'ast ParsedModuleRef) -> &'ast ast::StmtClassDef {
match self {
Self::Class(class) => class.node(module),
_ => panic!("expected class"),
}
}
pub(crate) fn as_class<'ast>(
&self,
module: &'ast ParsedModuleRef,
) -> Option<&'ast ast::StmtClassDef> {
match self {
Self::Class(class) => Some(class.node(module)),
_ => None,
}
}
pub fn expect_function<'ast>(
&self,
module: &'ast ParsedModuleRef,
) -> &'ast ast::StmtFunctionDef {
self.as_function(module).expect("expected function")
}
pub fn expect_type_alias<'ast>(
&self,
module: &'ast ParsedModuleRef,
) -> &'ast ast::StmtTypeAlias {
match self {
Self::TypeAlias(type_alias) => type_alias.node(module),
_ => panic!("expected type alias"),
}
}
pub fn as_function<'ast>(
&self,
module: &'ast ParsedModuleRef,
) -> Option<&'ast ast::StmtFunctionDef> {
match self {
Self::Function(function) => Some(function.node(module)),
_ => None,
}
}
}
#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
pub(crate) enum NodeWithScopeKey {
Module,
Class(NodeKey),
ClassTypeParameters(NodeKey),
Function(NodeKey),
FunctionTypeParameters(NodeKey),
TypeAlias(NodeKey),
TypeAliasTypeParameters(NodeKey),
Lambda(NodeKey),
ListComprehension(NodeKey),
SetComprehension(NodeKey),
DictComprehension(NodeKey),
GeneratorExpression(NodeKey),
}
