Compare commits


1 Commit

Brent Westbrook
98320690dd convert to structs
Move SARIF rendering to ruff_db

Summary
--

This is another mostly straightforward JSON-based output format. In the first
commit, I converted the `json!`-based serialization to actual structs. I made a
couple of tweaks to the format:

- The `text` field of `SarifRule::full_description` was implicitly an `Option`
before, but serializing it as `null` is invalid according to the [validator]. I
made this an `Option<MessageString>` (`MessageString` is a shortened form of
`multiformatMessageString`, which is what the schema calls this type) and skip
serializing it when it's `None`, which validates against the schema.
- `SarifResult::code` was explicitly an `Option<&'a SecondaryCode>`, which was
also invalid according to the schema. I made it a required field and fell back
on the lint name, as in some of the other recent formats. This currently only
affects syntax errors in Ruff.
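The first tweak can be sketched with a minimal, dependency-free mock-up. The real code uses serde's `#[serde(skip_serializing_if = "Option::is_none")]` on the struct field; the hand-rolled JSON below (with hypothetical rule codes and text) only illustrates why omitting the key, rather than emitting `null`, keeps the output schema-valid:

```rust
// Hypothetical stand-in for `SarifRule` serialization: `full_description`
// maps to the SARIF `fullDescription` multiformatMessageString.
fn render_rule(id: &str, full_description: Option<&str>) -> String {
    let mut fields = vec![format!(r#""id":"{id}""#)];
    // Like `#[serde(skip_serializing_if = "Option::is_none")]`: when there is
    // no description, the key is omitted entirely instead of written as null.
    if let Some(text) = full_description {
        fields.push(format!(r#""fullDescription":{{"text":"{text}"}}"#));
    }
    format!("{{{}}}", fields.join(","))
}

fn main() {
    println!("{}", render_rule("F401", Some("Remove unused import")));
    println!("{}", render_rule("E999", None));
}
```

The second call produces `{"id":"E999"}` with no `fullDescription` key at all, which is the shape the validator accepts.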

In the second commit, I moved the code to `ruff_db` and updated the
Ruff-specific `expect` calls.
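The required-`ruleId` fallback described above can be sketched as follows. This is an assumed, simplified signature — the real code calls `secondary_code().map_or_else(|| message.name(), SecondaryCode::as_str)` on the diagnostic:

```rust
// Hypothetical simplification: use a diagnostic's secondary code (e.g. "F401")
// when present, otherwise its diagnostic name (e.g. "invalid-syntax").
fn rule_id<'a>(secondary_code: Option<&'a str>, name: &'a str) -> &'a str {
    // Syntax errors carry no secondary code, so the name fills in; this keeps
    // `ruleId` a required string instead of a schema-invalid `null`.
    secondary_code.unwrap_or(name)
}

fn main() {
    println!("{}", rule_id(Some("F401"), "unused-import"));
    println!("{}", rule_id(None, "invalid-syntax"));
}
```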

Test Plan
--

Existing tests ported to `ruff_db`.

[validator]: https://www.jsonschemavalidator.net/s/GlhhhHQ7
2025-07-15 22:00:19 -04:00
115 changed files with 1348 additions and 6366 deletions


@@ -1,36 +1,5 @@
# Changelog
## 0.12.4
### Preview features
- \[`flake8-type-checking`, `pyupgrade`, `ruff`\] Add `from __future__ import annotations` when it would allow new fixes (`TC001`, `TC002`, `TC003`, `UP037`, `RUF013`) ([#19100](https://github.com/astral-sh/ruff/pull/19100))
- \[`flake8-use-pathlib`\] Add autofix for `PTH109` ([#19245](https://github.com/astral-sh/ruff/pull/19245))
- \[`pylint`\] Detect indirect `pathlib.Path` usages for `unspecified-encoding` (`PLW1514`) ([#19304](https://github.com/astral-sh/ruff/pull/19304))
### Bug fixes
- \[`flake8-bugbear`\] Fix `B017` false negatives for keyword exception arguments ([#19217](https://github.com/astral-sh/ruff/pull/19217))
- \[`flake8-use-pathlib`\] Fix false negative on direct `Path()` instantiation (`PTH210`) ([#19388](https://github.com/astral-sh/ruff/pull/19388))
- \[`flake8-django`\] Fix `DJ008` false positive for abstract models with type-annotated `abstract` field ([#19221](https://github.com/astral-sh/ruff/pull/19221))
- \[`isort`\] Fix `I002` import insertion after docstring with multiple string statements ([#19222](https://github.com/astral-sh/ruff/pull/19222))
- \[`isort`\] Treat form feed as valid whitespace before a semicolon ([#19343](https://github.com/astral-sh/ruff/pull/19343))
- \[`pydoclint`\] Fix `SyntaxError` from fixes with line continuations (`D201`, `D202`) ([#19246](https://github.com/astral-sh/ruff/pull/19246))
- \[`refurb`\] `FURB164` fix should validate arguments and should usually be marked unsafe ([#19136](https://github.com/astral-sh/ruff/pull/19136))
### Rule changes
- \[`flake8-use-pathlib`\] Skip single dots for `invalid-pathlib-with-suffix` (`PTH210`) on versions >= 3.14 ([#19331](https://github.com/astral-sh/ruff/pull/19331))
- \[`pep8_naming`\] Avoid false positives on standard library functions with uppercase names (`N802`) ([#18907](https://github.com/astral-sh/ruff/pull/18907))
- \[`pycodestyle`\] Handle brace escapes for t-strings in logical lines ([#19358](https://github.com/astral-sh/ruff/pull/19358))
- \[`pylint`\] Extend invalid string character rules to include t-strings ([#19355](https://github.com/astral-sh/ruff/pull/19355))
- \[`ruff`\] Allow `strict` kwarg when checking for `starmap-zip` (`RUF058`) in Python 3.14+ ([#19333](https://github.com/astral-sh/ruff/pull/19333))
### Documentation
- \[`flake8-type-checking`\] Make `TC010` docs example more realistic ([#19356](https://github.com/astral-sh/ruff/pull/19356))
- Make more documentation examples error out-of-the-box ([#19288](https://github.com/astral-sh/ruff/pull/19288),[#19272](https://github.com/astral-sh/ruff/pull/19272),[#19291](https://github.com/astral-sh/ruff/pull/19291),[#19296](https://github.com/astral-sh/ruff/pull/19296),[#19292](https://github.com/astral-sh/ruff/pull/19292),[#19295](https://github.com/astral-sh/ruff/pull/19295),[#19297](https://github.com/astral-sh/ruff/pull/19297),[#19309](https://github.com/astral-sh/ruff/pull/19309))
## 0.12.3
### Preview features


@@ -266,13 +266,6 @@ Finally, regenerate the documentation and generated code with `cargo dev generat
## MkDocs
> [!NOTE]
>
> The documentation uses Material for MkDocs Insiders, which is closed-source software.
> This means only members of the Astral organization can preview the documentation exactly as it
> will appear in production.
> Outside contributors can still preview the documentation, but there will be some differences. Consult [the Material for MkDocs documentation](https://squidfunk.github.io/mkdocs-material/insiders/benefits/#features) for which features are exclusively available in the insiders version.
To preview any changes to the documentation locally:
1. Install the [Rust toolchain](https://www.rust-lang.org/tools/install).

Cargo.lock generated

@@ -2711,7 +2711,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.12.4"
version = "0.12.3"
dependencies = [
"anyhow",
"argfile",
@@ -2962,7 +2962,7 @@ dependencies = [
[[package]]
name = "ruff_linter"
version = "0.12.4"
version = "0.12.3"
dependencies = [
"aho-corasick",
"anyhow",
@@ -3295,7 +3295,7 @@ dependencies = [
[[package]]
name = "ruff_wasm"
version = "0.12.4"
version = "0.12.3"
dependencies = [
"console_error_panic_hook",
"console_log",
@@ -4287,7 +4287,6 @@ dependencies = [
"strum_macros",
"tempfile",
"test-case",
"thin-vec",
"thiserror 2.0.12",
"tracing",
"ty_python_semantic",
@@ -4301,7 +4300,6 @@ name = "ty_server"
version = "0.0.0"
dependencies = [
"anyhow",
"bitflags 2.9.1",
"crossbeam",
"jod-thread",
"libc",


@@ -163,7 +163,6 @@ strum_macros = { version = "0.27.0" }
syn = { version = "2.0.55" }
tempfile = { version = "3.9.0" }
test-case = { version = "3.3.1" }
thin-vec = { version = "0.2.14" }
thiserror = { version = "2.0.0" }
tikv-jemallocator = { version = "0.6.0" }
toml = { version = "0.9.0" }


@@ -148,8 +148,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
# For a specific version.
curl -LsSf https://astral.sh/ruff/0.12.4/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.12.4/install.ps1 | iex"
curl -LsSf https://astral.sh/ruff/0.12.3/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.12.3/install.ps1 | iex"
```
You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
@@ -182,7 +182,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.12.4
rev: v0.12.3
hooks:
# Run the linter.
- id: ruff-check


@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.12.4"
version = "0.12.3"
publish = true
authors = { workspace = true }
edition = { workspace = true }


@@ -993,7 +993,6 @@ fn value_given_to_table_key_is_not_inline_table_2() {
- `lint.exclude`
- `lint.preview`
- `lint.typing-extensions`
- `lint.future-annotations`
For more information, try '--help'.
");
@@ -5718,11 +5717,8 @@ match 42: # invalid-syntax
let snapshot = format!("output_format_{output_format}");
let project_dir = dunce::canonicalize(tempdir.path())?;
insta::with_settings!({
filters => vec![
(tempdir_filter(&project_dir).as_str(), "[TMP]/"),
(tempdir_filter(&tempdir).as_str(), "[TMP]/"),
(r#""[^"]+\\?/?input.py"#, r#""[TMP]/input.py"#),
(ruff_linter::VERSION, "[VERSION]"),
@@ -5748,25 +5744,3 @@ match 42: # invalid-syntax
Ok(())
}
#[test]
fn future_annotations_preview_warning() {
assert_cmd_snapshot!(
Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args(["--config", "lint.future-annotations = true"])
.args(["--select", "F"])
.arg("--no-preview")
.arg("-")
.pass_stdin("1"),
@r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
warning: The `lint.future-annotations` setting will have no effect because `preview` is disabled
",
);
}


@@ -85,7 +85,7 @@ exit_code: 1
"message": {
"text": "SyntaxError: Cannot use `match` statement on Python 3.9 (syntax was added in Python 3.10)"
},
"ruleId": null
"ruleId": "invalid-syntax"
}
],
"tool": {


@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.12.4"
version = "0.12.3"
publish = false
authors = { workspace = true }
edition = { workspace = true }


@@ -1,10 +0,0 @@
from collections import Counter
from elsewhere import third_party
from . import first_party
def f(x: first_party.foo): ...
def g(x: third_party.bar): ...
def h(x: Counter): ...


@@ -1,68 +0,0 @@
def f():
from . import first_party
def f(x: first_party.foo): ...
# Type parameter bounds
def g():
from . import foo
class C[T: foo.Ty]: ...
def h():
from . import foo
def f[T: foo.Ty](x: T): ...
def i():
from . import foo
type Alias[T: foo.Ty] = list[T]
# Type parameter defaults
def j():
from . import foo
class C[T = foo.Ty]: ...
def k():
from . import foo
def f[T = foo.Ty](x: T): ...
def l():
from . import foo
type Alias[T = foo.Ty] = list[T]
# non-generic type alias
def m():
from . import foo
type Alias = foo.Ty
# unions
from typing import Union
def n():
from . import foo
def f(x: Union[foo.Ty, int]): ...
def g(x: foo.Ty | int): ...
# runtime and typing usage
def o():
from . import foo
def f(x: foo.Ty):
return foo.Ty()


@@ -1,6 +0,0 @@
from __future__ import annotations
from . import first_party
def f(x: first_party.foo): ...


@@ -54,13 +54,6 @@ windows_path.with_suffix(r"s")
windows_path.with_suffix(u'' "json")
windows_path.with_suffix(suffix="js")
Path().with_suffix(".")
Path().with_suffix("py")
PosixPath().with_suffix("py")
PurePath().with_suffix("py")
PurePosixPath().with_suffix("py")
PureWindowsPath().with_suffix("py")
WindowsPath().with_suffix("py")
### No errors
path.with_suffix()


@@ -104,6 +104,3 @@ os.chmod(x)
os.replace("src", "dst", src_dir_fd=1, dst_dir_fd=2)
os.replace("src", "dst", src_dir_fd=1)
os.replace("src", "dst", dst_dir_fd=2)
os.getcwd()
os.getcwdb()


@@ -1,5 +0,0 @@
# This is a regression test for https://github.com/astral-sh/ruff/issues/19310
# there is a (potentially invisible) unicode formfeed character (000C) between "docstring" and the semicolon
"docstring" ; print(
f"{__doc__=}",
)


@@ -91,16 +91,9 @@ Path("foo.txt").write_text(text, encoding="utf-8")
Path("foo.txt").write_text(text, *args)
Path("foo.txt").write_text(text, **kwargs)
# https://github.com/astral-sh/ruff/issues/19294
# Violation but not detectable
x = Path("foo.txt")
x.open()
# https://github.com/astral-sh/ruff/issues/18107
codecs.open("plw1514.py", "r", "utf-8").close() # this is fine
# function argument annotated as Path
from pathlib import Path
def format_file(file: Path):
with file.open() as f:
contents = f.read()


@@ -27,24 +27,7 @@ _ = Decimal.from_float(float(" -inF\n \t"))
_ = Decimal.from_float(float(" InfinIty\n\t "))
_ = Decimal.from_float(float(" -InfinIty\n \t"))
# Cases with keyword arguments - should produce unsafe fixes
_ = Fraction.from_decimal(dec=Decimal("4.2"))
_ = Decimal.from_float(f=4.2)
# Cases with invalid argument counts - should not get fixes
_ = Fraction.from_decimal(Decimal("4.2"), 1)
_ = Decimal.from_float(4.2, None)
# Cases with wrong keyword arguments - should not get fixes
_ = Fraction.from_decimal(numerator=Decimal("4.2"))
_ = Decimal.from_float(value=4.2)
# Cases with type validation issues - should produce unsafe fixes
_ = Decimal.from_float("4.2") # Invalid type for from_float
_ = Fraction.from_decimal(4.2) # Invalid type for from_decimal
_ = Fraction.from_float("4.2") # Invalid type for from_float
# OK - should not trigger the rule
# OK
_ = Fraction(0.1)
_ = Fraction(-0.5)
_ = Fraction(5.0)


@@ -71,7 +71,7 @@ pub(crate) fn deferred_scopes(checker: &Checker) {
flake8_type_checking::helpers::is_valid_runtime_import(
binding,
&checker.semantic,
checker.settings(),
&checker.settings().flake8_type_checking,
)
})
.collect()


@@ -1044,6 +1044,7 @@ pub(crate) fn expression(expr: &Expr, checker: &Checker) {
Rule::OsMakedirs,
Rule::OsRename,
Rule::OsReplace,
Rule::OsGetcwd,
Rule::OsStat,
Rule::OsPathJoin,
Rule::OsPathSamefile,
@@ -1109,9 +1110,6 @@ pub(crate) fn expression(expr: &Expr, checker: &Checker) {
if checker.is_rule_enabled(Rule::OsReadlink) {
flake8_use_pathlib::rules::os_readlink(checker, call, segments);
}
if checker.is_rule_enabled(Rule::OsGetcwd) {
flake8_use_pathlib::rules::os_getcwd(checker, call, segments);
}
if checker.is_rule_enabled(Rule::PathConstructorCurrentDirectory) {
flake8_use_pathlib::rules::path_constructor_current_directory(
checker, call, segments,


@@ -2770,10 +2770,11 @@ impl<'a> Checker<'a> {
self.semantic.restore(snapshot);
if self.is_rule_enabled(Rule::QuotedAnnotation) {
pyupgrade::rules::quoted_annotation(self, annotation, range);
if self.semantic.in_typing_only_annotation() {
if self.is_rule_enabled(Rule::QuotedAnnotation) {
pyupgrade::rules::quoted_annotation(self, annotation, range);
}
}
if self.source_type.is_stub() {
if self.is_rule_enabled(Rule::QuotedAnnotationInStub) {
flake8_pyi::rules::quoted_annotation_in_stub(


@@ -928,7 +928,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8UsePathlib, "106") => (RuleGroup::Stable, rules::flake8_use_pathlib::rules::OsRmdir),
(Flake8UsePathlib, "107") => (RuleGroup::Stable, rules::flake8_use_pathlib::rules::OsRemove),
(Flake8UsePathlib, "108") => (RuleGroup::Stable, rules::flake8_use_pathlib::rules::OsUnlink),
(Flake8UsePathlib, "109") => (RuleGroup::Stable, rules::flake8_use_pathlib::rules::OsGetcwd),
(Flake8UsePathlib, "109") => (RuleGroup::Stable, rules::flake8_use_pathlib::violations::OsGetcwd),
(Flake8UsePathlib, "110") => (RuleGroup::Stable, rules::flake8_use_pathlib::rules::OsPathExists),
(Flake8UsePathlib, "111") => (RuleGroup::Stable, rules::flake8_use_pathlib::rules::OsPathExpanduser),
(Flake8UsePathlib, "112") => (RuleGroup::Stable, rules::flake8_use_pathlib::rules::OsPathIsdir),


@@ -288,7 +288,7 @@ fn match_docstring_end(body: &[Stmt]) -> Option<TextSize> {
fn match_semicolon(s: &str) -> Option<TextSize> {
for (offset, c) in s.char_indices() {
match c {
_ if is_python_whitespace(c) => continue,
' ' | '\t' => continue,
';' => return Some(TextSize::try_from(offset).unwrap()),
_ => break,
}


@@ -527,17 +527,6 @@ impl<'a> Importer<'a> {
None
}
}
/// Add a `from __future__ import annotations` import.
pub(crate) fn add_future_import(&self) -> Edit {
let import = &NameImport::ImportFrom(MemberNameImport::member(
"__future__".to_string(),
"annotations".to_string(),
));
// Note that `TextSize::default` should ensure that the import is added at the very
// beginning of the file via `Insertion::start_of_file`.
self.add_import(import, TextSize::default())
}
}
/// An edit to the top-level of a module, making it available at runtime.


@@ -2,8 +2,7 @@ use std::collections::HashSet;
use std::io::Write;
use anyhow::Result;
use serde::{Serialize, Serializer};
use serde_json::json;
use serde::Serialize;
use ruff_db::diagnostic::{Diagnostic, SecondaryCode};
use ruff_source_file::OneIndexed;
@@ -27,38 +26,43 @@ impl Emitter for SarifEmitter {
.map(SarifResult::from_message)
.collect::<Result<Vec<_>>>()?;
let unique_rules: HashSet<_> = results.iter().filter_map(|result| result.code).collect();
let unique_rules: HashSet<_> = diagnostics
.iter()
.filter_map(Diagnostic::secondary_code)
.collect();
let mut rules: Vec<SarifRule> = unique_rules.into_iter().map(SarifRule::from).collect();
rules.sort_by(|a, b| a.code.cmp(b.code));
rules.sort_by(|a, b| a.id.cmp(b.id));
let output = json!({
"$schema": "https://json.schemastore.org/sarif-2.1.0.json",
"version": "2.1.0",
"runs": [{
"tool": {
"driver": {
"name": "ruff",
"informationUri": "https://github.com/astral-sh/ruff",
"rules": rules,
"version": VERSION.to_string(),
}
let output = SarifOutput {
schema: "https://json.schemastore.org/sarif-2.1.0.json",
version: "2.1.0",
runs: [SarifRun {
tool: SarifTool {
driver: SarifDriver {
name: "ruff",
information_uri: "https://github.com/astral-sh/ruff",
rules,
version: VERSION,
},
},
"results": results,
results,
}],
});
};
serde_json::to_writer_pretty(writer, &output)?;
Ok(())
}
}
#[derive(Debug, Clone)]
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
struct SarifRule<'a> {
name: &'a str,
code: &'a SecondaryCode,
linter: &'a str,
summary: &'a str,
explanation: Option<&'a str>,
url: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
full_description: Option<MessageString<'a>>,
help: MessageString<'a>,
help_uri: Option<String>,
id: &'a SecondaryCode,
properties: SarifProperties<'a>,
short_description: MessageString<'a>,
}
impl<'a> From<&'a SecondaryCode> for SarifRule<'a> {
@@ -71,54 +75,28 @@ impl<'a> From<&'a SecondaryCode> for SarifRule<'a> {
.find(|rule| rule.noqa_code().suffix() == suffix)
.expect("Expected a valid noqa code corresponding to a rule");
Self {
name: rule.into(),
code,
linter: linter.name(),
summary: rule.message_formats()[0],
explanation: rule.explanation(),
url: rule.url(),
id: code,
help_uri: rule.url(),
short_description: MessageString::from(rule.message_formats()[0]),
full_description: rule.explanation().map(MessageString::from),
help: MessageString::from(rule.message_formats()[0]),
properties: SarifProperties {
id: code,
kind: linter.name(),
name: rule.into(),
problem_severity: "error",
},
}
}
}
impl Serialize for SarifRule<'_> {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
json!({
"id": self.code,
"shortDescription": {
"text": self.summary,
},
"fullDescription": {
"text": self.explanation,
},
"help": {
"text": self.summary,
},
"helpUri": self.url,
"properties": {
"id": self.code,
"kind": self.linter,
"name": self.name,
"problem.severity": "error".to_string(),
},
})
.serialize(serializer)
}
}
#[derive(Debug)]
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
struct SarifResult<'a> {
code: Option<&'a SecondaryCode>,
level: String,
message: String,
uri: String,
start_line: OneIndexed,
start_column: OneIndexed,
end_line: OneIndexed,
end_column: OneIndexed,
level: &'static str,
locations: [SarifLocation; 1],
message: MessageString<'a>,
rule_id: &'a str,
}
impl<'a> SarifResult<'a> {
@@ -128,16 +106,28 @@ impl<'a> SarifResult<'a> {
let end_location = message.expect_ruff_end_location();
let path = normalize_path(&*message.expect_ruff_filename());
Ok(Self {
code: message.secondary_code(),
level: "error".to_string(),
message: message.body().to_string(),
uri: url::Url::from_file_path(&path)
.map_err(|()| anyhow::anyhow!("Failed to convert path to URL: {}", path.display()))?
.to_string(),
start_line: start_location.line,
start_column: start_location.column,
end_line: end_location.line,
end_column: end_location.column,
rule_id: message
.secondary_code()
.map_or_else(|| message.name(), SecondaryCode::as_str),
level: "error",
message: MessageString::from(message.body()),
locations: [SarifLocation {
physical_location: SarifPhysicalLocation {
artifact_location: SarifArtifactLocation {
uri: url::Url::from_file_path(&path)
.map_err(|()| {
anyhow::anyhow!("Failed to convert path to URL: {}", path.display())
})?
.to_string(),
},
region: SarifRegion {
start_line: start_location.line,
start_column: start_location.column,
end_line: end_location.line,
end_column: end_location.column,
},
},
}],
})
}
@@ -148,47 +138,103 @@ impl<'a> SarifResult<'a> {
let end_location = message.expect_ruff_end_location();
let path = normalize_path(&*message.expect_ruff_filename());
Ok(Self {
code: message.secondary_code(),
level: "error".to_string(),
message: message.body().to_string(),
uri: path.display().to_string(),
start_line: start_location.line,
start_column: start_location.column,
end_line: end_location.line,
end_column: end_location.column,
rule_id: message
.secondary_code()
.map_or_else(|| message.name(), SecondaryCode::as_str),
level: "error",
message: MessageString::from(message.body()),
locations: [SarifLocation {
physical_location: SarifPhysicalLocation {
artifact_location: SarifArtifactLocation {
uri: path.display().to_string(),
},
region: SarifRegion {
start_line: start_location.line,
start_column: start_location.column,
end_line: end_location.line,
end_column: end_location.column,
},
},
}],
})
}
}
impl Serialize for SarifResult<'_> {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
json!({
"level": self.level,
"message": {
"text": self.message,
},
"locations": [{
"physicalLocation": {
"artifactLocation": {
"uri": self.uri,
},
"region": {
"startLine": self.start_line,
"startColumn": self.start_column,
"endLine": self.end_line,
"endColumn": self.end_column,
}
}
}],
"ruleId": self.code,
})
.serialize(serializer)
#[derive(Serialize)]
struct SarifOutput<'a> {
#[serde(rename = "$schema")]
schema: &'static str,
runs: [SarifRun<'a>; 1],
version: &'static str,
}
#[derive(Serialize)]
struct SarifRun<'a> {
results: Vec<SarifResult<'a>>,
tool: SarifTool<'a>,
}
#[derive(Serialize)]
struct SarifTool<'a> {
driver: SarifDriver<'a>,
}
#[derive(Serialize)]
struct SarifDriver<'a> {
#[serde(rename = "informationUri")]
information_uri: &'static str,
name: &'static str,
rules: Vec<SarifRule<'a>>,
version: &'static str,
}
#[derive(Debug, Clone, Serialize)]
struct SarifProperties<'a> {
id: &'a SecondaryCode,
kind: &'a str,
name: &'a str,
#[serde(rename = "problem.severity")]
problem_severity: &'static str,
}
#[derive(Debug, Clone, Serialize)]
struct MessageString<'a> {
text: &'a str,
}
impl<'a> From<&'a str> for MessageString<'a> {
fn from(text: &'a str) -> Self {
Self { text }
}
}
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
struct SarifLocation {
physical_location: SarifPhysicalLocation,
}
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
struct SarifPhysicalLocation {
artifact_location: SarifArtifactLocation,
region: SarifRegion,
}
#[derive(Debug, Serialize)]
struct SarifArtifactLocation {
uri: String,
}
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
struct SarifRegion {
end_column: OneIndexed,
end_line: OneIndexed,
start_column: OneIndexed,
start_line: OneIndexed,
}
#[cfg(test)]
mod tests {
use crate::message::SarifEmitter;


@@ -134,11 +134,6 @@ pub(crate) const fn is_fix_os_path_dirname_enabled(settings: &LinterSettings) ->
settings.preview.is_enabled()
}
// https://github.com/astral-sh/ruff/pull/19245
pub(crate) const fn is_fix_os_getcwd_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()
}
// https://github.com/astral-sh/ruff/pull/11436
// https://github.com/astral-sh/ruff/pull/11168
pub(crate) const fn is_dunder_init_fix_unused_import_enabled(settings: &LinterSettings) -> bool {
@@ -200,8 +195,3 @@ pub(crate) const fn is_safe_super_call_with_parameters_fix_enabled(
pub(crate) const fn is_assert_raises_exception_call_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()
}
// https://github.com/astral-sh/ruff/pull/19100
pub(crate) const fn is_add_future_annotations_imports_enabled(settings: &LinterSettings) -> bool {
settings.preview.is_enabled()
}


@@ -2,7 +2,8 @@ use std::fmt;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::Expr;
use ruff_text_size::Ranged;
use ruff_python_semantic::{MemberNameImport, NameImport};
use ruff_text_size::{Ranged, TextSize};
use crate::checkers::ast::Checker;
use crate::{AlwaysFixableViolation, Fix};
@@ -84,7 +85,15 @@ impl AlwaysFixableViolation for FutureRequiredTypeAnnotation {
/// FA102
pub(crate) fn future_required_type_annotation(checker: &Checker, expr: &Expr, reason: Reason) {
checker
.report_diagnostic(FutureRequiredTypeAnnotation { reason }, expr.range())
.set_fix(Fix::unsafe_edit(checker.importer().add_future_import()));
let mut diagnostic =
checker.report_diagnostic(FutureRequiredTypeAnnotation { reason }, expr.range());
let required_import = NameImport::ImportFrom(MemberNameImport::member(
"__future__".to_string(),
"annotations".to_string(),
));
diagnostic.set_fix(Fix::unsafe_edit(
checker
.importer()
.add_import(&required_import, TextSize::default()),
));
}


@@ -1,10 +1,12 @@
use ruff_diagnostics::Fix;
use ruff_python_ast::Expr;
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_semantic::{MemberNameImport, NameImport};
use ruff_text_size::Ranged;
use crate::AlwaysFixableViolation;
use crate::checkers::ast::Checker;
use crate::{AlwaysFixableViolation, Fix};
/// ## What it does
/// Checks for missing `from __future__ import annotations` imports upon
@@ -93,7 +95,15 @@ pub(crate) fn future_rewritable_type_annotation(checker: &Checker, expr: &Expr)
let Some(name) = name else { return };
let import = &NameImport::ImportFrom(MemberNameImport::member(
"__future__".to_string(),
"annotations".to_string(),
));
checker
.report_diagnostic(FutureRewritableTypeAnnotation { name }, expr.range())
.set_fix(Fix::unsafe_edit(checker.importer().add_future_import()));
.set_fix(Fix::unsafe_edit(
checker
.importer()
.add_import(import, ruff_text_size::TextSize::default()),
));
}


@@ -8,110 +8,41 @@ use ruff_python_ast::{self as ast, Decorator, Expr, StringLiteralFlags};
use ruff_python_codegen::{Generator, Stylist};
use ruff_python_parser::typing::parse_type_annotation;
use ruff_python_semantic::{
Binding, BindingKind, Modules, NodeId, ScopeKind, SemanticModel, analyze,
Binding, BindingKind, Modules, NodeId, ResolvedReference, ScopeKind, SemanticModel, analyze,
};
use ruff_text_size::{Ranged, TextRange};
use crate::Edit;
use crate::Locator;
use crate::settings::LinterSettings;
use crate::rules::flake8_type_checking::settings::Settings;
/// Represents the kind of an existing or potential typing-only annotation.
///
/// Note that the order of variants is important here. `Runtime` has the highest precedence when
/// calling [`TypingReference::combine`] on two references, followed by `Future`, `Quote`, and
/// `TypingOnly` with the lowest precedence.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub(crate) enum TypingReference {
/// The reference is in a runtime-evaluated context.
Runtime,
/// The reference is in a runtime-evaluated context, but the
/// `lint.future-annotations` setting is enabled.
///
/// This takes precedence if both quoting and future imports are enabled.
Future,
/// The reference is in a runtime-evaluated context, but the
/// `lint.flake8-type-checking.quote-annotations` setting is enabled.
Quote,
/// The reference is in a typing-only context.
TypingOnly,
}
impl TypingReference {
/// Determine the kind of [`TypingReference`] for all references to a binding.
pub(crate) fn from_references(
binding: &Binding,
semantic: &SemanticModel,
settings: &LinterSettings,
) -> Self {
let references = binding
.references()
.map(|reference_id| semantic.reference(reference_id));
let mut kind = Self::TypingOnly;
for reference in references {
if reference.in_type_checking_block() {
kind = kind.combine(Self::TypingOnly);
continue;
}
// if we're not in a type checking block, we necessarily need to be within a
// type definition to be considered a typing reference
if !reference.in_type_definition() {
return Self::Runtime;
}
if reference.in_typing_only_annotation() || reference.in_string_type_definition() {
kind = kind.combine(Self::TypingOnly);
continue;
}
// prefer `from __future__ import annotations` to quoting
if settings.future_annotations()
&& !reference.in_typing_only_annotation()
&& reference.in_runtime_evaluated_annotation()
{
kind = kind.combine(Self::Future);
continue;
}
if settings.flake8_type_checking.quote_annotations
&& reference.in_runtime_evaluated_annotation()
{
kind = kind.combine(Self::Quote);
continue;
}
return Self::Runtime;
}
kind
}
/// Logically combine two `TypingReference`s into one.
///
/// `TypingReference::Runtime` has the highest precedence, followed by
/// `TypingReference::Future`, `TypingReference::Quote`, and then `TypingReference::TypingOnly`.
fn combine(self, other: TypingReference) -> TypingReference {
self.min(other)
}
fn is_runtime(self) -> bool {
matches!(self, Self::Runtime)
}
/// Returns `true` if the [`ResolvedReference`] is in a typing-only context _or_ a runtime-evaluated
/// context (with quoting enabled).
pub(crate) fn is_typing_reference(reference: &ResolvedReference, settings: &Settings) -> bool {
reference.in_type_checking_block()
// if we're not in a type checking block, we necessarily need to be within a
// type definition to be considered a typing reference
|| (reference.in_type_definition()
&& (reference.in_typing_only_annotation()
|| reference.in_string_type_definition()
|| (settings.quote_annotations && reference.in_runtime_evaluated_annotation())))
}
/// Returns `true` if the [`Binding`] represents a runtime-required import.
pub(crate) fn is_valid_runtime_import(
binding: &Binding,
semantic: &SemanticModel,
settings: &LinterSettings,
settings: &Settings,
) -> bool {
if matches!(
binding.kind,
BindingKind::Import(..) | BindingKind::FromImport(..) | BindingKind::SubmoduleImport(..)
) {
binding.context.is_runtime()
&& TypingReference::from_references(binding, semantic, settings).is_runtime()
&& binding
.references()
.map(|reference_id| semantic.reference(reference_id))
.any(|reference| !is_typing_reference(reference, settings))
} else {
false
}


@@ -13,8 +13,6 @@ pub(crate) struct ImportBinding<'a> {
pub(crate) range: TextRange,
/// The range of the import's parent statement.
pub(crate) parent_range: Option<TextRange>,
/// Whether the binding needs `from __future__ import annotations` to be imported.
pub(crate) needs_future_import: bool,
}
impl Ranged for ImportBinding<'_> {


@@ -9,12 +9,10 @@ mod tests {
use std::path::Path;
use anyhow::Result;
use itertools::Itertools;
use ruff_python_ast::PythonVersion;
use test_case::test_case;
use crate::registry::{Linter, Rule};
use crate::settings::types::PreviewMode;
use crate::test::{test_path, test_snippet};
use crate::{assert_diagnostics, settings};
@@ -66,40 +64,6 @@ mod tests {
Ok(())
}
#[test_case(&[Rule::TypingOnlyFirstPartyImport], Path::new("TC001.py"))]
#[test_case(&[Rule::TypingOnlyThirdPartyImport], Path::new("TC002.py"))]
#[test_case(&[Rule::TypingOnlyStandardLibraryImport], Path::new("TC003.py"))]
#[test_case(
&[
Rule::TypingOnlyFirstPartyImport,
Rule::TypingOnlyThirdPartyImport,
Rule::TypingOnlyStandardLibraryImport,
],
Path::new("TC001-3_future.py")
)]
#[test_case(&[Rule::TypingOnlyFirstPartyImport], Path::new("TC001_future.py"))]
#[test_case(&[Rule::TypingOnlyFirstPartyImport], Path::new("TC001_future_present.py"))]
fn add_future_import(rules: &[Rule], path: &Path) -> Result<()> {
let name = rules.iter().map(Rule::noqa_code).join("-");
let snapshot = format!("add_future_import__{}_{}", name, path.to_string_lossy());
let diagnostics = test_path(
Path::new("flake8_type_checking").join(path).as_path(),
&settings::LinterSettings {
future_annotations: true,
preview: PreviewMode::Enabled,
// also enable quoting annotations to check the interaction. the future import
// should take precedence.
flake8_type_checking: super::settings::Settings {
quote_annotations: true,
..Default::default()
},
..settings::LinterSettings::for_rules(rules.iter().copied())
},
)?;
assert_diagnostics!(snapshot, diagnostics);
Ok(())
}
// we test these rules as a pair, since they're opposites of one another
// so we want to make sure their fixes are not going around in circles.
#[test_case(Rule::UnquotedTypeAlias, Path::new("TC007.py"))]


@@ -139,7 +139,6 @@ pub(crate) fn runtime_import_in_type_checking_block(checker: &Checker, scope: &S
binding,
range: binding.range(),
parent_range: binding.parent_range(checker.semantic()),
needs_future_import: false, // TODO(brent) See #19359.
};
if checker.rule_is_ignored(Rule::RuntimeImportInTypeCheckingBlock, import.start())


@@ -13,7 +13,7 @@ use crate::fix;
use crate::importer::ImportedMembers;
use crate::preview::is_full_path_match_source_strategy_enabled;
use crate::rules::flake8_type_checking::helpers::{
TypingReference, filter_contained, quote_annotation,
filter_contained, is_typing_reference, quote_annotation,
};
use crate::rules::flake8_type_checking::imports::ImportBinding;
use crate::rules::isort::categorize::MatchSourceStrategy;
@@ -71,19 +71,12 @@ use crate::{Fix, FixAvailability, Violation};
/// the criterion for determining whether an import is first-party
/// is stricter, which could affect whether this lint is triggered vs [`TC001`](https://docs.astral.sh/ruff/rules/typing-only-third-party-import/). See [this FAQ section](https://docs.astral.sh/ruff/faq/#how-does-ruff-determine-which-of-my-imports-are-first-party-third-party-etc) for more details.
///
/// If [`lint.future-annotations`] is set to `true`, `from __future__ import
/// annotations` will be added if doing so would enable an import to be moved into an `if
/// TYPE_CHECKING:` block. This takes precedence over the
/// [`lint.flake8-type-checking.quote-annotations`] setting described above if both settings are
/// enabled.
///
/// ## Options
/// - `lint.flake8-type-checking.quote-annotations`
/// - `lint.flake8-type-checking.runtime-evaluated-base-classes`
/// - `lint.flake8-type-checking.runtime-evaluated-decorators`
/// - `lint.flake8-type-checking.strict`
/// - `lint.typing-modules`
/// - `lint.future-annotations`
///
/// ## References
/// - [PEP 563: Runtime annotation resolution and `TYPE_CHECKING`](https://peps.python.org/pep-0563/#runtime-annotation-resolution-and-type-checking)
@@ -158,19 +151,12 @@ impl Violation for TypingOnlyFirstPartyImport {
/// the criterion for determining whether an import is first-party
/// is stricter, which could affect whether this lint is triggered vs [`TC001`](https://docs.astral.sh/ruff/rules/typing-only-first-party-import/). See [this FAQ section](https://docs.astral.sh/ruff/faq/#how-does-ruff-determine-which-of-my-imports-are-first-party-third-party-etc) for more details.
///
/// If [`lint.future-annotations`] is set to `true`, `from __future__ import
/// annotations` will be added if doing so would enable an import to be moved into an `if
/// TYPE_CHECKING:` block. This takes precedence over the
/// [`lint.flake8-type-checking.quote-annotations`] setting described above if both settings are
/// enabled.
///
/// ## Options
/// - `lint.flake8-type-checking.quote-annotations`
/// - `lint.flake8-type-checking.runtime-evaluated-base-classes`
/// - `lint.flake8-type-checking.runtime-evaluated-decorators`
/// - `lint.flake8-type-checking.strict`
/// - `lint.typing-modules`
/// - `lint.future-annotations`
///
/// ## References
/// - [PEP 563: Runtime annotation resolution and `TYPE_CHECKING`](https://peps.python.org/pep-0563/#runtime-annotation-resolution-and-type-checking)
@@ -240,22 +226,12 @@ impl Violation for TypingOnlyThirdPartyImport {
/// return str(path)
/// ```
///
/// ## Preview
///
/// When [preview](https://docs.astral.sh/ruff/preview/) is enabled, if
/// [`lint.future-annotations`] is set to `true`, `from __future__ import
/// annotations` will be added if doing so would enable an import to be moved into an `if
/// TYPE_CHECKING:` block. This takes precedence over the
/// [`lint.flake8-type-checking.quote-annotations`] setting described above if both settings are
/// enabled.
///
/// ## Options
/// - `lint.flake8-type-checking.quote-annotations`
/// - `lint.flake8-type-checking.runtime-evaluated-base-classes`
/// - `lint.flake8-type-checking.runtime-evaluated-decorators`
/// - `lint.flake8-type-checking.strict`
/// - `lint.typing-modules`
/// - `lint.future-annotations`
///
/// ## References
/// - [PEP 563: Runtime annotation resolution and `TYPE_CHECKING`](https://peps.python.org/pep-0563/#runtime-annotation-resolution-and-type-checking)
@@ -295,10 +271,9 @@ pub(crate) fn typing_only_runtime_import(
for binding_id in scope.binding_ids() {
let binding = checker.semantic().binding(binding_id);
// If we can't add a `__future__` import and we're in un-strict mode, don't flag typing-only
// imports that are implicitly loaded by way of a valid runtime import.
if !checker.settings().future_annotations()
&& !checker.settings().flake8_type_checking.strict
// If we're in un-strict mode, don't flag typing-only imports that are
// implicitly loaded by way of a valid runtime import.
if !checker.settings().flake8_type_checking.strict
&& runtime_imports
.iter()
.any(|import| is_implicit_import(binding, import))
@@ -314,102 +289,95 @@ pub(crate) fn typing_only_runtime_import(
continue;
};
if !binding.context.is_runtime() {
continue;
}
if binding.context.is_runtime()
&& binding
.references()
.map(|reference_id| checker.semantic().reference(reference_id))
.all(|reference| {
is_typing_reference(reference, &checker.settings().flake8_type_checking)
})
{
let qualified_name = import.qualified_name();
let typing_reference =
TypingReference::from_references(binding, checker.semantic(), checker.settings());
let needs_future_import = match typing_reference {
TypingReference::Runtime => continue,
// We can only get the `Future` variant if `future_annotations` is
// enabled, so we can unconditionally set this here.
TypingReference::Future => true,
TypingReference::TypingOnly | TypingReference::Quote => false,
};
let qualified_name = import.qualified_name();
if is_exempt(
&qualified_name.to_string(),
&checker
.settings()
.flake8_type_checking
.exempt_modules
.iter()
.map(String::as_str)
.collect::<Vec<_>>(),
) {
continue;
}
let source_name = import.source_name().join(".");
// Categorize the import, using coarse-grained categorization.
let match_source_strategy =
if is_full_path_match_source_strategy_enabled(checker.settings()) {
MatchSourceStrategy::FullPath
} else {
MatchSourceStrategy::Root
};
let import_type = match categorize(
&source_name,
qualified_name.is_unresolved_import(),
&checker.settings().src,
checker.package(),
checker.settings().isort.detect_same_package,
&checker.settings().isort.known_modules,
checker.target_version(),
checker.settings().isort.no_sections,
&checker.settings().isort.section_order,
&checker.settings().isort.default_section,
match_source_strategy,
) {
ImportSection::Known(ImportType::LocalFolder | ImportType::FirstParty) => {
ImportType::FirstParty
}
ImportSection::Known(ImportType::ThirdParty) | ImportSection::UserDefined(_) => {
ImportType::ThirdParty
}
ImportSection::Known(ImportType::StandardLibrary) => ImportType::StandardLibrary,
ImportSection::Known(ImportType::Future) => {
if is_exempt(
&qualified_name.to_string(),
&checker
.settings()
.flake8_type_checking
.exempt_modules
.iter()
.map(String::as_str)
.collect::<Vec<_>>(),
) {
continue;
}
};
if !checker.is_rule_enabled(rule_for(import_type)) {
continue;
}
let source_name = import.source_name().join(".");
let Some(node_id) = binding.source else {
continue;
};
// Categorize the import, using coarse-grained categorization.
let match_source_strategy =
if is_full_path_match_source_strategy_enabled(checker.settings()) {
MatchSourceStrategy::FullPath
} else {
MatchSourceStrategy::Root
};
let import = ImportBinding {
import,
reference_id,
binding,
range: binding.range(),
parent_range: binding.parent_range(checker.semantic()),
needs_future_import,
};
let import_type = match categorize(
&source_name,
qualified_name.is_unresolved_import(),
&checker.settings().src,
checker.package(),
checker.settings().isort.detect_same_package,
&checker.settings().isort.known_modules,
checker.target_version(),
checker.settings().isort.no_sections,
&checker.settings().isort.section_order,
&checker.settings().isort.default_section,
match_source_strategy,
) {
ImportSection::Known(ImportType::LocalFolder | ImportType::FirstParty) => {
ImportType::FirstParty
}
ImportSection::Known(ImportType::ThirdParty) | ImportSection::UserDefined(_) => {
ImportType::ThirdParty
}
ImportSection::Known(ImportType::StandardLibrary) => ImportType::StandardLibrary,
ImportSection::Known(ImportType::Future) => {
continue;
}
};
if checker.rule_is_ignored(rule_for(import_type), import.start())
|| import.parent_range.is_some_and(|parent_range| {
checker.rule_is_ignored(rule_for(import_type), parent_range.start())
})
{
ignores_by_statement
.entry((node_id, import_type))
.or_default()
.push(import);
} else {
errors_by_statement
.entry((node_id, import_type))
.or_default()
.push(import);
if !checker.is_rule_enabled(rule_for(import_type)) {
continue;
}
let Some(node_id) = binding.source else {
continue;
};
let import = ImportBinding {
import,
reference_id,
binding,
range: binding.range(),
parent_range: binding.parent_range(checker.semantic()),
};
if checker.rule_is_ignored(rule_for(import_type), import.start())
|| import.parent_range.is_some_and(|parent_range| {
checker.rule_is_ignored(rule_for(import_type), parent_range.start())
})
{
ignores_by_statement
.entry((node_id, import_type))
.or_default()
.push(import);
} else {
errors_by_statement
.entry((node_id, import_type))
.or_default()
.push(import);
}
}
}
@@ -541,8 +509,6 @@ fn fix_imports(checker: &Checker, node_id: NodeId, imports: &[ImportBinding]) ->
.min()
.expect("Expected at least one import");
let add_future_import = imports.iter().any(|binding| binding.needs_future_import);
// Step 1) Remove the import.
let remove_import_edit = fix::edits::remove_unused_imports(
member_names.iter().map(AsRef::as_ref),
@@ -566,52 +532,37 @@ fn fix_imports(checker: &Checker, node_id: NodeId, imports: &[ImportBinding]) ->
)?
.into_edits();
// Step 3) Either add a `__future__` import or quote any runtime usages of the referenced
// symbol.
let fix = if add_future_import {
let future_import = checker.importer().add_future_import();
// The order here is very important. We first need to add the `__future__` import, if
// needed, since it's a syntax error for it to appear after any other statement. Then
// `type_checking_edit` imports `TYPE_CHECKING`, if available. Then we can add and/or
// remove existing imports.
Fix::unsafe_edits(
future_import,
std::iter::once(type_checking_edit)
.chain(add_import_edit)
.chain(std::iter::once(remove_import_edit)),
)
} else {
let quote_reference_edits = filter_contained(
imports
.iter()
.flat_map(|ImportBinding { binding, .. }| {
binding.references.iter().filter_map(|reference_id| {
let reference = checker.semantic().reference(*reference_id);
if reference.in_runtime_context() {
Some(quote_annotation(
reference.expression_id()?,
checker.semantic(),
checker.stylist(),
checker.locator(),
checker.default_string_flags(),
))
} else {
None
}
})
// Step 3) Quote any runtime usages of the referenced symbol.
let quote_reference_edits = filter_contained(
imports
.iter()
.flat_map(|ImportBinding { binding, .. }| {
binding.references.iter().filter_map(|reference_id| {
let reference = checker.semantic().reference(*reference_id);
if reference.in_runtime_context() {
Some(quote_annotation(
reference.expression_id()?,
checker.semantic(),
checker.stylist(),
checker.locator(),
checker.default_string_flags(),
))
} else {
None
}
})
.collect::<Vec<_>>(),
);
Fix::unsafe_edits(
type_checking_edit,
add_import_edit
.into_iter()
.chain(std::iter::once(remove_import_edit))
.chain(quote_reference_edits),
)
};
})
.collect::<Vec<_>>(),
);
Ok(fix.isolate(Checker::isolation(
Ok(Fix::unsafe_edits(
type_checking_edit,
add_import_edit
.into_iter()
.chain(std::iter::once(remove_import_edit))
.chain(quote_reference_edits),
)
.isolate(Checker::isolation(
checker.semantic().parent_statement_id(node_id),
)))
}
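For reference, the `add_future_import` branch removed here produced fixes whose end state looks like this plain-Python sketch (illustration only; the concrete cases are in the snapshots below):

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Moved here by the fix: only type checkers execute this import.
    from collections import Counter


def f(x: Counter) -> str:
    # Under PEP 563 lazy annotations, `Counter` is stored as a string and
    # never evaluated at runtime, so the guarded import is sufficient.
    return type(x).__name__
```

At runtime `f.__annotations__["x"]` is the string `"Counter"`, which is what makes moving the import behind `TYPE_CHECKING` safe.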


@@ -1,76 +0,0 @@
---
source: crates/ruff_linter/src/rules/flake8_type_checking/mod.rs
---
TC001-3_future.py:1:25: TC003 [*] Move standard library import `collections.Counter` into a type-checking block
|
1 | from collections import Counter
| ^^^^^^^ TC003
2 |
3 | from elsewhere import third_party
|
= help: Move into type-checking block
Unsafe fix
1 |-from collections import Counter
1 |+from __future__ import annotations
2 2 |
3 3 | from elsewhere import third_party
4 4 |
5 5 | from . import first_party
6 |+from typing import TYPE_CHECKING
7 |+
8 |+if TYPE_CHECKING:
9 |+ from collections import Counter
6 10 |
7 11 |
8 12 | def f(x: first_party.foo): ...
TC001-3_future.py:3:23: TC002 [*] Move third-party import `elsewhere.third_party` into a type-checking block
|
1 | from collections import Counter
2 |
3 | from elsewhere import third_party
| ^^^^^^^^^^^ TC002
4 |
5 | from . import first_party
|
= help: Move into type-checking block
Unsafe fix
1 |+from __future__ import annotations
1 2 | from collections import Counter
2 3 |
3 |-from elsewhere import third_party
4 4 |
5 5 | from . import first_party
6 |+from typing import TYPE_CHECKING
7 |+
8 |+if TYPE_CHECKING:
9 |+ from elsewhere import third_party
6 10 |
7 11 |
8 12 | def f(x: first_party.foo): ...
TC001-3_future.py:5:15: TC001 [*] Move application import `.first_party` into a type-checking block
|
3 | from elsewhere import third_party
4 |
5 | from . import first_party
| ^^^^^^^^^^^ TC001
|
= help: Move into type-checking block
Unsafe fix
1 |+from __future__ import annotations
1 2 | from collections import Counter
2 3 |
3 4 | from elsewhere import third_party
4 5 |
5 |-from . import first_party
6 |+from typing import TYPE_CHECKING
7 |+
8 |+if TYPE_CHECKING:
9 |+ from . import first_party
6 10 |
7 11 |
8 12 | def f(x: first_party.foo): ...


@@ -1,32 +0,0 @@
---
source: crates/ruff_linter/src/rules/flake8_type_checking/mod.rs
---
TC001.py:20:19: TC001 [*] Move application import `.TYP001` into a type-checking block
|
19 | def f():
20 | from . import TYP001
| ^^^^^^ TC001
21 |
22 | x: TYP001
|
= help: Move into type-checking block
Unsafe fix
2 2 |
3 3 | For typing-only import detection tests, see `TC002.py`.
4 4 | """
5 |+from typing import TYPE_CHECKING
6 |+
7 |+if TYPE_CHECKING:
8 |+ from . import TYP001
5 9 |
6 10 |
7 11 | def f():
--------------------------------------------------------------------------------
17 21 |
18 22 |
19 23 | def f():
20 |- from . import TYP001
21 24 |
22 25 | x: TYP001
23 26 |


@@ -1,56 +0,0 @@
---
source: crates/ruff_linter/src/rules/flake8_type_checking/mod.rs
---
TC001_future.py:2:19: TC001 [*] Move application import `.first_party` into a type-checking block
|
1 | def f():
2 | from . import first_party
| ^^^^^^^^^^^ TC001
3 |
4 | def f(x: first_party.foo): ...
|
= help: Move into type-checking block
Unsafe fix
1 |-def f():
1 |+from __future__ import annotations
2 |+from typing import TYPE_CHECKING
3 |+
4 |+if TYPE_CHECKING:
2 5 | from . import first_party
6 |+def f():
3 7 |
4 8 | def f(x: first_party.foo): ...
5 9 |
TC001_future.py:57:19: TC001 [*] Move application import `.foo` into a type-checking block
|
56 | def n():
57 | from . import foo
| ^^^ TC001
58 |
59 | def f(x: Union[foo.Ty, int]): ...
|
= help: Move into type-checking block
Unsafe fix
1 |+from __future__ import annotations
1 2 | def f():
2 3 | from . import first_party
3 4 |
--------------------------------------------------------------------------------
50 51 |
51 52 |
52 53 | # unions
53 |-from typing import Union
54 |+from typing import Union, TYPE_CHECKING
54 55 |
56 |+if TYPE_CHECKING:
57 |+ from . import foo
58 |+
55 59 |
56 60 | def n():
57 |- from . import foo
58 61 |
59 62 | def f(x: Union[foo.Ty, int]): ...
60 63 | def g(x: foo.Ty | int): ...


@@ -1,23 +0,0 @@
---
source: crates/ruff_linter/src/rules/flake8_type_checking/mod.rs
---
TC001_future_present.py:3:15: TC001 [*] Move application import `.first_party` into a type-checking block
|
1 | from __future__ import annotations
2 |
3 | from . import first_party
| ^^^^^^^^^^^ TC001
|
= help: Move into type-checking block
Unsafe fix
1 1 | from __future__ import annotations
2 2 |
3 |-from . import first_party
3 |+from typing import TYPE_CHECKING
4 |+
5 |+if TYPE_CHECKING:
6 |+ from . import first_party
4 7 |
5 8 |
6 9 | def f(x: first_party.foo): ...


@@ -1,251 +0,0 @@
---
source: crates/ruff_linter/src/rules/flake8_type_checking/mod.rs
---
TC002.py:5:22: TC002 [*] Move third-party import `pandas` into a type-checking block
|
4 | def f():
5 | import pandas as pd # TC002
| ^^ TC002
6 |
7 | x: pd.DataFrame
|
= help: Move into type-checking block
Unsafe fix
1 1 | """Tests to determine accurate detection of typing-only imports."""
2 |+from typing import TYPE_CHECKING
3 |+
4 |+if TYPE_CHECKING:
5 |+ import pandas as pd
2 6 |
3 7 |
4 8 | def f():
5 |- import pandas as pd # TC002
6 9 |
7 10 | x: pd.DataFrame
8 11 |
TC002.py:11:24: TC002 [*] Move third-party import `pandas.DataFrame` into a type-checking block
|
10 | def f():
11 | from pandas import DataFrame # TC002
| ^^^^^^^^^ TC002
12 |
13 | x: DataFrame
|
= help: Move into type-checking block
Unsafe fix
1 1 | """Tests to determine accurate detection of typing-only imports."""
2 |+from typing import TYPE_CHECKING
3 |+
4 |+if TYPE_CHECKING:
5 |+ from pandas import DataFrame
2 6 |
3 7 |
4 8 | def f():
--------------------------------------------------------------------------------
8 12 |
9 13 |
10 14 | def f():
11 |- from pandas import DataFrame # TC002
12 15 |
13 16 | x: DataFrame
14 17 |
TC002.py:17:37: TC002 [*] Move third-party import `pandas.DataFrame` into a type-checking block
|
16 | def f():
17 | from pandas import DataFrame as df # TC002
| ^^ TC002
18 |
19 | x: df
|
= help: Move into type-checking block
Unsafe fix
1 1 | """Tests to determine accurate detection of typing-only imports."""
2 |+from typing import TYPE_CHECKING
3 |+
4 |+if TYPE_CHECKING:
5 |+ from pandas import DataFrame as df
2 6 |
3 7 |
4 8 | def f():
--------------------------------------------------------------------------------
14 18 |
15 19 |
16 20 | def f():
17 |- from pandas import DataFrame as df # TC002
18 21 |
19 22 | x: df
20 23 |
TC002.py:23:22: TC002 [*] Move third-party import `pandas` into a type-checking block
|
22 | def f():
23 | import pandas as pd # TC002
| ^^ TC002
24 |
25 | x: pd.DataFrame = 1
|
= help: Move into type-checking block
Unsafe fix
1 1 | """Tests to determine accurate detection of typing-only imports."""
2 |+from typing import TYPE_CHECKING
3 |+
4 |+if TYPE_CHECKING:
5 |+ import pandas as pd
2 6 |
3 7 |
4 8 | def f():
--------------------------------------------------------------------------------
20 24 |
21 25 |
22 26 | def f():
23 |- import pandas as pd # TC002
24 27 |
25 28 | x: pd.DataFrame = 1
26 29 |
TC002.py:29:24: TC002 [*] Move third-party import `pandas.DataFrame` into a type-checking block
|
28 | def f():
29 | from pandas import DataFrame # TC002
| ^^^^^^^^^ TC002
30 |
31 | x: DataFrame = 2
|
= help: Move into type-checking block
Unsafe fix
1 1 | """Tests to determine accurate detection of typing-only imports."""
2 |+from typing import TYPE_CHECKING
3 |+
4 |+if TYPE_CHECKING:
5 |+ from pandas import DataFrame
2 6 |
3 7 |
4 8 | def f():
--------------------------------------------------------------------------------
26 30 |
27 31 |
28 32 | def f():
29 |- from pandas import DataFrame # TC002
30 33 |
31 34 | x: DataFrame = 2
32 35 |
TC002.py:35:37: TC002 [*] Move third-party import `pandas.DataFrame` into a type-checking block
|
34 | def f():
35 | from pandas import DataFrame as df # TC002
| ^^ TC002
36 |
37 | x: df = 3
|
= help: Move into type-checking block
Unsafe fix
1 1 | """Tests to determine accurate detection of typing-only imports."""
2 |+from typing import TYPE_CHECKING
3 |+
4 |+if TYPE_CHECKING:
5 |+ from pandas import DataFrame as df
2 6 |
3 7 |
4 8 | def f():
--------------------------------------------------------------------------------
32 36 |
33 37 |
34 38 | def f():
35 |- from pandas import DataFrame as df # TC002
36 39 |
37 40 | x: df = 3
38 41 |
TC002.py:41:22: TC002 [*] Move third-party import `pandas` into a type-checking block
|
40 | def f():
41 | import pandas as pd # TC002
| ^^ TC002
42 |
43 | x: "pd.DataFrame" = 1
|
= help: Move into type-checking block
Unsafe fix
1 1 | """Tests to determine accurate detection of typing-only imports."""
2 |+from typing import TYPE_CHECKING
3 |+
4 |+if TYPE_CHECKING:
5 |+ import pandas as pd
2 6 |
3 7 |
4 8 | def f():
--------------------------------------------------------------------------------
38 42 |
39 43 |
40 44 | def f():
41 |- import pandas as pd # TC002
42 45 |
43 46 | x: "pd.DataFrame" = 1
44 47 |
TC002.py:47:22: TC002 [*] Move third-party import `pandas` into a type-checking block
|
46 | def f():
47 | import pandas as pd # TC002
| ^^ TC002
48 |
49 | x = dict["pd.DataFrame", "pd.DataFrame"]
|
= help: Move into type-checking block
Unsafe fix
1 1 | """Tests to determine accurate detection of typing-only imports."""
2 |+from typing import TYPE_CHECKING
3 |+
4 |+if TYPE_CHECKING:
5 |+ import pandas as pd
2 6 |
3 7 |
4 8 | def f():
--------------------------------------------------------------------------------
44 48 |
45 49 |
46 50 | def f():
47 |- import pandas as pd # TC002
48 51 |
49 52 | x = dict["pd.DataFrame", "pd.DataFrame"]
50 53 |
TC002.py:172:24: TC002 [*] Move third-party import `module.Member` into a type-checking block
|
170 | global Member
171 |
172 | from module import Member
| ^^^^^^ TC002
173 |
174 | x: Member = 1
|
= help: Move into type-checking block
Unsafe fix
1 1 | """Tests to determine accurate detection of typing-only imports."""
2 |+from typing import TYPE_CHECKING
3 |+
4 |+if TYPE_CHECKING:
5 |+ from module import Member
2 6 |
3 7 |
4 8 | def f():
--------------------------------------------------------------------------------
169 173 | def f():
170 174 | global Member
171 175 |
172 |- from module import Member
173 176 |
174 177 | x: Member = 1
175 178 |


@@ -1,28 +0,0 @@
---
source: crates/ruff_linter/src/rules/flake8_type_checking/mod.rs
---
TC003.py:8:12: TC003 [*] Move standard library import `os` into a type-checking block
|
7 | def f():
8 | import os
| ^^ TC003
9 |
10 | x: os
|
= help: Move into type-checking block
Unsafe fix
2 2 |
3 3 | For typing-only import detection tests, see `TC002.py`.
4 4 | """
5 |+from typing import TYPE_CHECKING
6 |+
7 |+if TYPE_CHECKING:
8 |+ import os
5 9 |
6 10 |
7 11 | def f():
8 |- import os
9 12 |
10 13 | x: os
11 14 |


@@ -3,7 +3,7 @@ use crate::{Edit, Fix, FixAvailability, Violation};
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::{self as ast, PythonVersion, StringFlags};
use ruff_python_semantic::SemanticModel;
use ruff_python_semantic::analyze::typing::{self, PathlibPathChecker, TypeChecker};
use ruff_python_semantic::analyze::typing;
use ruff_text_size::Ranged;
/// ## What it does
@@ -141,13 +141,12 @@ fn is_path_with_suffix_call(semantic: &SemanticModel, func: &ast::Expr) -> bool
return false;
}
match &**value {
ast::Expr::Name(name) => {
let Some(binding) = semantic.only_binding(name).map(|id| semantic.binding(id)) else {
return false;
};
typing::is_pathlib_path(binding, semantic)
}
expr => PathlibPathChecker::match_initializer(expr, semantic),
}
let ast::Expr::Name(name) = &**value else {
return false;
};
let Some(binding) = semantic.only_binding(name).map(|id| semantic.binding(id)) else {
return false;
};
typing::is_pathlib_path(binding, semantic)
}
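Context for why the rule targets this call at all: `Path.with_suffix` rejects dotless suffixes at runtime, so PTH210 surfaces the failure statically. A quick standalone sketch (not part of this diff):

```python
from pathlib import Path

p = Path("report.txt")

# A valid suffix starts with a dot.
print(p.with_suffix(".md"))  # report.md

# A dotless suffix raises ValueError at runtime; PTH210 flags it statically
# and offers to add the leading dot.
try:
    p.with_suffix("md")
except ValueError as err:
    print(f"rejected: {err}")
```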


@@ -1,6 +1,5 @@
pub(crate) use glob_rule::*;
pub(crate) use invalid_pathlib_with_suffix::*;
pub(crate) use os_getcwd::*;
pub(crate) use os_path_abspath::*;
pub(crate) use os_path_basename::*;
pub(crate) use os_path_dirname::*;
@@ -24,7 +23,6 @@ pub(crate) use replaceable_by_pathlib::*;
mod glob_rule;
mod invalid_pathlib_with_suffix;
mod os_getcwd;
mod os_path_abspath;
mod os_path_basename;
mod os_path_dirname;


@@ -1,100 +0,0 @@
use crate::checkers::ast::Checker;
use crate::importer::ImportRequest;
use crate::preview::is_fix_os_getcwd_enabled;
use crate::{FixAvailability, Violation};
use ruff_diagnostics::{Applicability, Edit, Fix};
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::ExprCall;
use ruff_text_size::Ranged;
/// ## What it does
/// Checks for uses of `os.getcwd` and `os.getcwdb`.
///
/// ## Why is this bad?
/// `pathlib` offers a high-level API for path manipulation, as compared to
/// the lower-level API offered by `os`. When possible, using `Path` object
/// methods such as `Path.cwd()` can improve readability over the `os`
/// module's counterparts (e.g., `os.getcwd()`).
///
/// ## Examples
/// ```python
/// import os
///
/// cwd = os.getcwd()
/// ```
///
/// Use instead:
/// ```python
/// from pathlib import Path
///
/// cwd = Path.cwd()
/// ```
///
/// ## Known issues
/// While using `pathlib` can improve the readability and type safety of your code,
/// it can be less performant than the lower-level alternatives that work directly with strings,
/// especially on older versions of Python.
///
/// ## Fix Safety
/// This rule's fix is marked as unsafe if the replacement would remove comments attached to the original expression.
///
/// ## References
/// - [Python documentation: `Path.cwd`](https://docs.python.org/3/library/pathlib.html#pathlib.Path.cwd)
/// - [Python documentation: `os.getcwd`](https://docs.python.org/3/library/os.html#os.getcwd)
/// - [Python documentation: `os.getcwdb`](https://docs.python.org/3/library/os.html#os.getcwdb)
/// - [PEP 428 – The pathlib module – object-oriented filesystem paths](https://peps.python.org/pep-0428/)
/// - [Correspondence between `os` and `pathlib`](https://docs.python.org/3/library/pathlib.html#correspondence-to-tools-in-the-os-module)
/// - [Why you should be using pathlib](https://treyhunner.com/2018/12/why-you-should-be-using-pathlib/)
/// - [No really, pathlib is great](https://treyhunner.com/2019/01/no-really-pathlib-is-great/)
#[derive(ViolationMetadata)]
pub(crate) struct OsGetcwd;
impl Violation for OsGetcwd {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
"`os.getcwd()` should be replaced by `Path.cwd()`".to_string()
}
fn fix_title(&self) -> Option<String> {
Some("Replace with `Path.cwd()`".to_string())
}
}
/// PTH109
pub(crate) fn os_getcwd(checker: &Checker, call: &ExprCall, segments: &[&str]) {
if !matches!(segments, ["os", "getcwd" | "getcwdb"]) {
return;
}
let range = call.range();
let mut diagnostic = checker.report_diagnostic(OsGetcwd, call.func.range());
if !call.arguments.is_empty() {
return;
}
if is_fix_os_getcwd_enabled(checker.settings()) {
diagnostic.try_set_fix(|| {
let (import_edit, binding) = checker.importer().get_or_import_symbol(
&ImportRequest::import("pathlib", "Path"),
call.start(),
checker.semantic(),
)?;
let applicability = if checker.comment_ranges().intersects(range) {
Applicability::Unsafe
} else {
Applicability::Safe
};
let replacement = format!("{binding}.cwd()");
Ok(Fix::applicable_edits(
Edit::range_replacement(replacement, range),
[import_edit],
applicability,
))
});
}
}
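The rewrite the fix performs is behavior-preserving in the no-argument case; a minimal sketch (plain Python, illustration only):

```python
import os
from pathlib import Path

# Before: what PTH109 flags.
cwd_str = os.getcwd()

# After: the pathlib replacement the fix produces.
cwd_path = Path.cwd()

# Both name the same directory; only the type differs (str vs Path).
assert cwd_path == Path(cwd_str)
```

Worth noting: `os.getcwdb()` returns `bytes`, so the same replacement there also changes the return type from `bytes` to `Path`.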


@@ -7,8 +7,8 @@ use crate::checkers::ast::Checker;
use crate::rules::flake8_use_pathlib::helpers::is_keyword_only_argument_non_default;
use crate::rules::flake8_use_pathlib::rules::Glob;
use crate::rules::flake8_use_pathlib::violations::{
BuiltinOpen, Joiner, OsChmod, OsListdir, OsMakedirs, OsMkdir, OsPathJoin, OsPathSamefile,
OsPathSplitext, OsRename, OsReplace, OsStat, OsSymlink, PyPath,
BuiltinOpen, Joiner, OsChmod, OsGetcwd, OsListdir, OsMakedirs, OsMkdir, OsPathJoin,
OsPathSamefile, OsPathSplitext, OsRename, OsReplace, OsStat, OsSymlink, PyPath,
};
pub(crate) fn replaceable_by_pathlib(checker: &Checker, call: &ExprCall) {
@@ -83,6 +83,10 @@ pub(crate) fn replaceable_by_pathlib(checker: &Checker, call: &ExprCall) {
}
checker.report_diagnostic_if_enabled(OsReplace, range)
}
// PTH109
["os", "getcwd" | "getcwdb"] => checker.report_diagnostic_if_enabled(OsGetcwd, range),
// PTH116
["os", "stat"] => {
// `dir_fd` is not supported by pathlib, so check if it's set to non-default values.


@@ -536,7 +536,7 @@ PTH210.py:54:1: PTH210 [*] Dotless suffix passed to `.with_suffix()`
54 |+windows_path.with_suffix(u'.' "json")
55 55 | windows_path.with_suffix(suffix="js")
56 56 |
57 57 | Path().with_suffix(".")
57 57 |
PTH210.py:55:1: PTH210 [*] Dotless suffix passed to `.with_suffix()`
|
@@ -544,8 +544,6 @@ PTH210.py:55:1: PTH210 [*] Dotless suffix passed to `.with_suffix()`
54 | windows_path.with_suffix(u'' "json")
55 | windows_path.with_suffix(suffix="js")
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PTH210
56 |
57 | Path().with_suffix(".")
|
= help: Add a leading dot
@@ -556,140 +554,5 @@ PTH210.py:55:1: PTH210 [*] Dotless suffix passed to `.with_suffix()`
55 |-windows_path.with_suffix(suffix="js")
55 |+windows_path.with_suffix(suffix=".js")
56 56 |
57 57 | Path().with_suffix(".")
58 58 | Path().with_suffix("py")
PTH210.py:57:1: PTH210 Invalid suffix passed to `.with_suffix()`
|
55 | windows_path.with_suffix(suffix="js")
56 |
57 | Path().with_suffix(".")
| ^^^^^^^^^^^^^^^^^^^^^^^ PTH210
58 | Path().with_suffix("py")
59 | PosixPath().with_suffix("py")
|
= help: Remove "." or extend to valid suffix
PTH210.py:58:1: PTH210 [*] Dotless suffix passed to `.with_suffix()`
|
57 | Path().with_suffix(".")
58 | Path().with_suffix("py")
| ^^^^^^^^^^^^^^^^^^^^^^^^ PTH210
59 | PosixPath().with_suffix("py")
60 | PurePath().with_suffix("py")
|
= help: Add a leading dot
Unsafe fix
55 55 | windows_path.with_suffix(suffix="js")
56 56 |
57 57 | Path().with_suffix(".")
58 |-Path().with_suffix("py")
58 |+Path().with_suffix(".py")
59 59 | PosixPath().with_suffix("py")
60 60 | PurePath().with_suffix("py")
61 61 | PurePosixPath().with_suffix("py")
PTH210.py:59:1: PTH210 [*] Dotless suffix passed to `.with_suffix()`
|
57 | Path().with_suffix(".")
58 | Path().with_suffix("py")
59 | PosixPath().with_suffix("py")
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PTH210
60 | PurePath().with_suffix("py")
61 | PurePosixPath().with_suffix("py")
|
= help: Add a leading dot
Unsafe fix
56 56 |
57 57 | Path().with_suffix(".")
58 58 | Path().with_suffix("py")
59 |-PosixPath().with_suffix("py")
59 |+PosixPath().with_suffix(".py")
60 60 | PurePath().with_suffix("py")
61 61 | PurePosixPath().with_suffix("py")
62 62 | PureWindowsPath().with_suffix("py")
PTH210.py:60:1: PTH210 [*] Dotless suffix passed to `.with_suffix()`
|
58 | Path().with_suffix("py")
59 | PosixPath().with_suffix("py")
60 | PurePath().with_suffix("py")
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PTH210
61 | PurePosixPath().with_suffix("py")
62 | PureWindowsPath().with_suffix("py")
|
= help: Add a leading dot
Unsafe fix
57 57 | Path().with_suffix(".")
58 58 | Path().with_suffix("py")
59 59 | PosixPath().with_suffix("py")
60 |-PurePath().with_suffix("py")
60 |+PurePath().with_suffix(".py")
61 61 | PurePosixPath().with_suffix("py")
62 62 | PureWindowsPath().with_suffix("py")
63 63 | WindowsPath().with_suffix("py")
PTH210.py:61:1: PTH210 [*] Dotless suffix passed to `.with_suffix()`
|
59 | PosixPath().with_suffix("py")
60 | PurePath().with_suffix("py")
61 | PurePosixPath().with_suffix("py")
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PTH210
62 | PureWindowsPath().with_suffix("py")
63 | WindowsPath().with_suffix("py")
|
= help: Add a leading dot
Unsafe fix
58 58 | Path().with_suffix("py")
59 59 | PosixPath().with_suffix("py")
60 60 | PurePath().with_suffix("py")
61 |-PurePosixPath().with_suffix("py")
61 |+PurePosixPath().with_suffix(".py")
62 62 | PureWindowsPath().with_suffix("py")
63 63 | WindowsPath().with_suffix("py")
64 64 |
PTH210.py:62:1: PTH210 [*] Dotless suffix passed to `.with_suffix()`
|
60 | PurePath().with_suffix("py")
61 | PurePosixPath().with_suffix("py")
62 | PureWindowsPath().with_suffix("py")
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PTH210
63 | WindowsPath().with_suffix("py")
|
= help: Add a leading dot
Unsafe fix
59 59 | PosixPath().with_suffix("py")
60 60 | PurePath().with_suffix("py")
61 61 | PurePosixPath().with_suffix("py")
62 |-PureWindowsPath().with_suffix("py")
62 |+PureWindowsPath().with_suffix(".py")
63 63 | WindowsPath().with_suffix("py")
64 64 |
65 65 | ### No errors
PTH210.py:63:1: PTH210 [*] Dotless suffix passed to `.with_suffix()`
|
61 | PurePosixPath().with_suffix("py")
62 | PureWindowsPath().with_suffix("py")
63 | WindowsPath().with_suffix("py")
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PTH210
64 |
65 | ### No errors
|
= help: Add a leading dot
Unsafe fix
60 60 | PurePath().with_suffix("py")
61 61 | PurePosixPath().with_suffix("py")
62 62 | PureWindowsPath().with_suffix("py")
63 |-WindowsPath().with_suffix("py")
63 |+WindowsPath().with_suffix(".py")
64 64 |
65 65 | ### No errors
66 66 | path.with_suffix()
57 57 |
58 58 | ### No errors


@@ -103,7 +103,6 @@ full_name.py:16:1: PTH109 `os.getcwd()` should be replaced by `Path.cwd()`
17 | b = os.path.exists(p)
18 | bb = os.path.expanduser(p)
|
= help: Replace with `Path.cwd()`
full_name.py:17:5: PTH110 `os.path.exists()` should be replaced by `Path.exists()`
|
@@ -293,7 +292,6 @@ full_name.py:35:1: PTH109 `os.getcwd()` should be replaced by `Path.cwd()`
36 | os.path.join(p, *q)
37 | os.sep.join(p, *q)
|
= help: Replace with `Path.cwd()`
full_name.py:36:1: PTH118 `os.path.join()` should be replaced by `Path.joinpath()`
|
@@ -362,21 +360,3 @@ full_name.py:71:1: PTH123 `open()` should be replaced by `Path.open()`
72 |
73 | # https://github.com/astral-sh/ruff/issues/17693
|
full_name.py:108:1: PTH109 `os.getcwd()` should be replaced by `Path.cwd()`
|
106 | os.replace("src", "dst", dst_dir_fd=2)
107 |
108 | os.getcwd()
| ^^^^^^^^^ PTH109
109 | os.getcwdb()
|
= help: Replace with `Path.cwd()`
full_name.py:109:1: PTH109 `os.getcwd()` should be replaced by `Path.cwd()`
|
108 | os.getcwd()
109 | os.getcwdb()
| ^^^^^^^^^^ PTH109
|
= help: Replace with `Path.cwd()`


@@ -103,7 +103,6 @@ import_as.py:16:1: PTH109 `os.getcwd()` should be replaced by `Path.cwd()`
17 | b = foo_p.exists(p)
18 | bb = foo_p.expanduser(p)
|
= help: Replace with `Path.cwd()`
import_as.py:17:5: PTH110 `os.path.exists()` should be replaced by `Path.exists()`
|


@@ -103,7 +103,6 @@ import_from.py:18:1: PTH109 `os.getcwd()` should be replaced by `Path.cwd()`
19 | b = exists(p)
20 | bb = expanduser(p)
|
= help: Replace with `Path.cwd()`
import_from.py:19:5: PTH110 `os.path.exists()` should be replaced by `Path.exists()`
|


@@ -103,7 +103,6 @@ import_from_as.py:23:1: PTH109 `os.getcwd()` should be replaced by `Path.cwd()`
24 | b = xexists(p)
25 | bb = xexpanduser(p)
|
= help: Replace with `Path.cwd()`
import_from_as.py:24:5: PTH110 `os.path.exists()` should be replaced by `Path.exists()`
|


@@ -168,7 +168,6 @@ full_name.py:16:1: PTH109 `os.getcwd()` should be replaced by `Path.cwd()`
17 | b = os.path.exists(p)
18 | bb = os.path.expanduser(p)
|
= help: Replace with `Path.cwd()`
full_name.py:17:5: PTH110 [*] `os.path.exists()` should be replaced by `Path.exists()`
|
@@ -511,7 +510,6 @@ full_name.py:35:1: PTH109 `os.getcwd()` should be replaced by `Path.cwd()`
36 | os.path.join(p, *q)
37 | os.sep.join(p, *q)
|
= help: Replace with `Path.cwd()`
full_name.py:36:1: PTH118 `os.path.join()` should be replaced by `Path.joinpath()`
|
@@ -580,50 +578,3 @@ full_name.py:71:1: PTH123 `open()` should be replaced by `Path.open()`
72 |
73 | # https://github.com/astral-sh/ruff/issues/17693
|
full_name.py:108:1: PTH109 [*] `os.getcwd()` should be replaced by `Path.cwd()`
|
106 | os.replace("src", "dst", dst_dir_fd=2)
107 |
108 | os.getcwd()
| ^^^^^^^^^ PTH109
109 | os.getcwdb()
|
= help: Replace with `Path.cwd()`
Safe fix
1 1 | import os
2 2 | import os.path
3 |+import pathlib
3 4 |
4 5 | p = "/foo"
5 6 | q = "bar"
--------------------------------------------------------------------------------
105 106 | os.replace("src", "dst", src_dir_fd=1)
106 107 | os.replace("src", "dst", dst_dir_fd=2)
107 108 |
108 |-os.getcwd()
109 |+pathlib.Path.cwd()
109 110 | os.getcwdb()
full_name.py:109:1: PTH109 [*] `os.getcwd()` should be replaced by `Path.cwd()`
|
108 | os.getcwd()
109 | os.getcwdb()
| ^^^^^^^^^^ PTH109
|
= help: Replace with `Path.cwd()`
Safe fix
1 1 | import os
2 2 | import os.path
3 |+import pathlib
3 4 |
4 5 | p = "/foo"
5 6 | q = "bar"
--------------------------------------------------------------------------------
106 107 | os.replace("src", "dst", dst_dir_fd=2)
107 108 |
108 109 | os.getcwd()
109 |-os.getcwdb()
110 |+pathlib.Path.cwd()


@@ -168,7 +168,6 @@ import_as.py:16:1: PTH109 `os.getcwd()` should be replaced by `Path.cwd()`
17 | b = foo_p.exists(p)
18 | bb = foo_p.expanduser(p)
|
= help: Replace with `Path.cwd()`
import_as.py:17:5: PTH110 [*] `os.path.exists()` should be replaced by `Path.exists()`
|


@@ -172,7 +172,6 @@ import_from.py:18:1: PTH109 `os.getcwd()` should be replaced by `Path.cwd()`
19 | b = exists(p)
20 | bb = expanduser(p)
|
= help: Replace with `Path.cwd()`
import_from.py:19:5: PTH110 [*] `os.path.exists()` should be replaced by `Path.exists()`
|


@@ -172,7 +172,6 @@ import_from_as.py:23:1: PTH109 `os.getcwd()` should be replaced by `Path.cwd()`
24 | b = xexists(p)
25 | bb = xexpanduser(p)
|
= help: Replace with `Path.cwd()`
import_from_as.py:24:5: PTH110 [*] `os.path.exists()` should be replaced by `Path.exists()`
|


@@ -230,6 +230,52 @@ impl Violation for OsReplace {
}
}
/// ## What it does
/// Checks for uses of `os.getcwd` and `os.getcwdb`.
///
/// ## Why is this bad?
/// `pathlib` offers a high-level API for path manipulation, as compared to
/// the lower-level API offered by `os`. When possible, using `Path` object
/// methods such as `Path.cwd()` can improve readability over the `os`
/// module's counterparts (e.g., `os.getcwd()`).
///
/// ## Examples
/// ```python
/// import os
///
/// cwd = os.getcwd()
/// ```
///
/// Use instead:
/// ```python
/// from pathlib import Path
///
/// cwd = Path.cwd()
/// ```
///
/// ## Known issues
/// While using `pathlib` can improve the readability and type safety of your code,
/// it can be less performant than the lower-level alternatives that work directly with strings,
/// especially on older versions of Python.
///
/// ## References
/// - [Python documentation: `Path.cwd`](https://docs.python.org/3/library/pathlib.html#pathlib.Path.cwd)
/// - [Python documentation: `os.getcwd`](https://docs.python.org/3/library/os.html#os.getcwd)
/// - [Python documentation: `os.getcwdb`](https://docs.python.org/3/library/os.html#os.getcwdb)
/// - [PEP 428 – The pathlib module – object-oriented filesystem paths](https://peps.python.org/pep-0428/)
/// - [Correspondence between `os` and `pathlib`](https://docs.python.org/3/library/pathlib.html#correspondence-to-tools-in-the-os-module)
/// - [Why you should be using pathlib](https://treyhunner.com/2018/12/why-you-should-be-using-pathlib/)
/// - [No really, pathlib is great](https://treyhunner.com/2019/01/no-really-pathlib-is-great/)
#[derive(ViolationMetadata)]
pub(crate) struct OsGetcwd;
impl Violation for OsGetcwd {
#[derive_message_formats]
fn message(&self) -> String {
"`os.getcwd()` should be replaced by `Path.cwd()`".to_string()
}
}
/// ## What it does
/// Checks for uses of `os.stat`.
///


@@ -801,7 +801,6 @@ mod tests {
#[test_case(Path::new("existing_import.py"))]
#[test_case(Path::new("multiline_docstring.py"))]
#[test_case(Path::new("off.py"))]
#[test_case(Path::new("whitespace.py"))]
fn required_import(path: &Path) -> Result<()> {
let snapshot = format!("required_import_{}", path.to_string_lossy());
let diagnostics = test_path(


@@ -1,11 +0,0 @@
---
source: crates/ruff_linter/src/rules/isort/mod.rs
---
whitespace.py:1:1: I002 [*] Missing required import: `from __future__ import annotations`
Safe fix
1 1 | # This is a regression test for https://github.com/astral-sh/ruff/issues/19310
2 2 | # there is a (potentially invisible) unicode formfeed character (000C) between "docstring" and the semicolon
3 |-"docstring" ; print(
3 |+"docstring" ; from __future__ import annotations; print(
4 4 | f"{__doc__=}",
5 5 | )


@@ -4,7 +4,6 @@ use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::name::QualifiedName;
use ruff_python_ast::{self as ast, Expr};
use ruff_python_semantic::SemanticModel;
use ruff_python_semantic::analyze::typing;
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;
@@ -112,34 +111,20 @@ enum Callee<'a> {
}
impl<'a> Callee<'a> {
fn is_pathlib_path_call(expr: &Expr, semantic: &SemanticModel) -> bool {
if let Expr::Call(ast::ExprCall { func, .. }) = expr {
semantic
.resolve_qualified_name(func)
.is_some_and(|qualified_name| {
matches!(qualified_name.segments(), ["pathlib", "Path"])
})
} else {
false
}
}
fn try_from_call_expression(
call: &'a ast::ExprCall,
semantic: &'a SemanticModel,
) -> Option<Self> {
if let Expr::Attribute(ast::ExprAttribute { attr, value, .. }) = call.func.as_ref() {
// Direct: Path(...).open()
if Self::is_pathlib_path_call(value, semantic) {
return Some(Callee::Pathlib(attr));
}
// Indirect: x.open() where x = Path(...)
else if let Expr::Name(name) = value.as_ref() {
if let Some(binding_id) = semantic.only_binding(name) {
let binding = semantic.binding(binding_id);
if typing::is_pathlib_path(binding, semantic) {
return Some(Callee::Pathlib(attr));
}
// Check for `pathlib.Path(...).open(...)` or equivalent
if let Expr::Call(ast::ExprCall { func, .. }) = value.as_ref() {
if semantic
.resolve_qualified_name(func)
.is_some_and(|qualified_name| {
matches!(qualified_name.segments(), ["pathlib", "Path"])
})
{
return Some(Callee::Pathlib(attr));
}
}
}


@@ -435,41 +435,3 @@ unspecified_encoding.py:80:1: PLW1514 [*] `pathlib.Path(...).write_text` without
81 81 |
82 82 | # Non-errors.
83 83 | Path("foo.txt").open(encoding="utf-8")
unspecified_encoding.py:96:1: PLW1514 [*] `pathlib.Path(...).open` in text mode without explicit `encoding` argument
|
94 | # https://github.com/astral-sh/ruff/issues/19294
95 | x = Path("foo.txt")
96 | x.open()
| ^^^^^^ PLW1514
97 |
98 | # https://github.com/astral-sh/ruff/issues/18107
|
= help: Add explicit `encoding` argument
Unsafe fix
93 93 |
94 94 | # https://github.com/astral-sh/ruff/issues/19294
95 95 | x = Path("foo.txt")
96 |-x.open()
96 |+x.open(encoding="utf-8")
97 97 |
98 98 | # https://github.com/astral-sh/ruff/issues/18107
99 99 | codecs.open("plw1514.py", "r", "utf-8").close() # this is fine
unspecified_encoding.py:105:10: PLW1514 [*] `pathlib.Path(...).open` in text mode without explicit `encoding` argument
|
104 | def format_file(file: Path):
105 | with file.open() as f:
| ^^^^^^^^^ PLW1514
106 | contents = f.read()
|
= help: Add explicit `encoding` argument
Unsafe fix
102 102 | from pathlib import Path
103 103 |
104 104 | def format_file(file: Path):
105 |- with file.open() as f:
105 |+ with file.open(encoding="utf-8") as f:
106 106 | contents = f.read()


@@ -136,23 +136,6 @@ mod tests {
Ok(())
}
#[test_case(Rule::QuotedAnnotation, Path::new("UP037_0.py"))]
#[test_case(Rule::QuotedAnnotation, Path::new("UP037_1.py"))]
#[test_case(Rule::QuotedAnnotation, Path::new("UP037_2.pyi"))]
fn up037_add_future_annotation(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("add_future_annotation_{}", path.to_string_lossy());
let diagnostics = test_path(
Path::new("pyupgrade").join(path).as_path(),
&settings::LinterSettings {
preview: PreviewMode::Enabled,
future_annotations: true,
..settings::LinterSettings::for_rule(rule_code)
},
)?;
assert_diagnostics!(snapshot, diagnostics);
Ok(())
}
#[test]
fn async_timeout_error_alias_not_applied_py310() -> Result<()> {
let diagnostics = test_path(


@@ -57,22 +57,6 @@ use crate::{AlwaysFixableViolation, Edit, Fix};
/// bar: Bar
/// ```
///
/// ## Preview
///
/// When [preview] is enabled, if [`lint.future-annotations`] is set to `true`,
/// `from __future__ import annotations` will be added if doing so would allow an annotation to be
/// unquoted.
///
/// ## Fix safety
///
/// The rule's fix is marked as safe, unless [preview] and
/// [`lint.future-annotations`] are enabled and a `from __future__ import
/// annotations` import is added. Such an import may change the behavior of all annotations in the
/// file.
///
/// ## Options
/// - `lint.future-annotations`
///
/// ## See also
/// - [`quoted-annotation-in-stub`][PYI020]: A rule that
/// removes all quoted annotations from stub files
@@ -85,7 +69,6 @@ use crate::{AlwaysFixableViolation, Edit, Fix};
///
/// [PYI020]: https://docs.astral.sh/ruff/rules/quoted-annotation-in-stub/
/// [TC008]: https://docs.astral.sh/ruff/rules/quoted-type-alias/
/// [preview]: https://docs.astral.sh/ruff/preview/
#[derive(ViolationMetadata)]
pub(crate) struct QuotedAnnotation;
@@ -102,13 +85,6 @@ impl AlwaysFixableViolation for QuotedAnnotation {
/// UP037
pub(crate) fn quoted_annotation(checker: &Checker, annotation: &str, range: TextRange) {
let add_future_import = checker.settings().future_annotations()
&& checker.semantic().in_runtime_evaluated_annotation();
if !(checker.semantic().in_typing_only_annotation() || add_future_import) {
return;
}
let placeholder_range = TextRange::up_to(annotation.text_len());
let spans_multiple_lines = annotation.contains_line_break(placeholder_range);
@@ -127,14 +103,8 @@ pub(crate) fn quoted_annotation(checker: &Checker, annotation: &str, range: Text
(true, false) => format!("({annotation})"),
(_, true) => format!("({annotation}\n)"),
};
let unquote_edit = Edit::range_replacement(new_content, range);
let fix = if add_future_import {
let import_edit = checker.importer().add_future_import();
Fix::unsafe_edits(unquote_edit, [import_edit])
} else {
Fix::safe_edit(unquote_edit)
};
let edit = Edit::range_replacement(new_content, range);
let fix = Fix::safe_edit(edit);
checker
.report_diagnostic(QuotedAnnotation, range)


@@ -1,625 +0,0 @@
---
source: crates/ruff_linter/src/rules/pyupgrade/mod.rs
---
UP037_0.py:18:14: UP037 [*] Remove quotes from type annotation
|
18 | def foo(var: "MyClass") -> "MyClass":
| ^^^^^^^^^ UP037
19 | x: "MyClass"
|
= help: Remove quotes
Safe fix
15 15 | from mypy_extensions import Arg, DefaultArg, DefaultNamedArg, NamedArg, VarArg
16 16 |
17 17 |
18 |-def foo(var: "MyClass") -> "MyClass":
18 |+def foo(var: MyClass) -> "MyClass":
19 19 | x: "MyClass"
20 20 |
21 21 |
UP037_0.py:18:28: UP037 [*] Remove quotes from type annotation
|
18 | def foo(var: "MyClass") -> "MyClass":
| ^^^^^^^^^ UP037
19 | x: "MyClass"
|
= help: Remove quotes
Safe fix
15 15 | from mypy_extensions import Arg, DefaultArg, DefaultNamedArg, NamedArg, VarArg
16 16 |
17 17 |
18 |-def foo(var: "MyClass") -> "MyClass":
18 |+def foo(var: "MyClass") -> MyClass:
19 19 | x: "MyClass"
20 20 |
21 21 |
UP037_0.py:19:8: UP037 [*] Remove quotes from type annotation
|
18 | def foo(var: "MyClass") -> "MyClass":
19 | x: "MyClass"
| ^^^^^^^^^ UP037
|
= help: Remove quotes
Safe fix
16 16 |
17 17 |
18 18 | def foo(var: "MyClass") -> "MyClass":
19 |- x: "MyClass"
19 |+ x: MyClass
20 20 |
21 21 |
22 22 | def foo(*, inplace: "bool"):
UP037_0.py:22:21: UP037 [*] Remove quotes from type annotation
|
22 | def foo(*, inplace: "bool"):
| ^^^^^^ UP037
23 | pass
|
= help: Remove quotes
Safe fix
19 19 | x: "MyClass"
20 20 |
21 21 |
22 |-def foo(*, inplace: "bool"):
22 |+def foo(*, inplace: bool):
23 23 | pass
24 24 |
25 25 |
UP037_0.py:26:16: UP037 [*] Remove quotes from type annotation
|
26 | def foo(*args: "str", **kwargs: "int"):
| ^^^^^ UP037
27 | pass
|
= help: Remove quotes
Safe fix
23 23 | pass
24 24 |
25 25 |
26 |-def foo(*args: "str", **kwargs: "int"):
26 |+def foo(*args: str, **kwargs: "int"):
27 27 | pass
28 28 |
29 29 |
UP037_0.py:26:33: UP037 [*] Remove quotes from type annotation
|
26 | def foo(*args: "str", **kwargs: "int"):
| ^^^^^ UP037
27 | pass
|
= help: Remove quotes
Safe fix
23 23 | pass
24 24 |
25 25 |
26 |-def foo(*args: "str", **kwargs: "int"):
26 |+def foo(*args: "str", **kwargs: int):
27 27 | pass
28 28 |
29 29 |
UP037_0.py:30:10: UP037 [*] Remove quotes from type annotation
|
30 | x: Tuple["MyClass"]
| ^^^^^^^^^ UP037
31 |
32 | x: Callable[["MyClass"], None]
|
= help: Remove quotes
Safe fix
27 27 | pass
28 28 |
29 29 |
30 |-x: Tuple["MyClass"]
30 |+x: Tuple[MyClass]
31 31 |
32 32 | x: Callable[["MyClass"], None]
33 33 |
UP037_0.py:32:14: UP037 [*] Remove quotes from type annotation
|
30 | x: Tuple["MyClass"]
31 |
32 | x: Callable[["MyClass"], None]
| ^^^^^^^^^ UP037
|
= help: Remove quotes
Safe fix
29 29 |
30 30 | x: Tuple["MyClass"]
31 31 |
32 |-x: Callable[["MyClass"], None]
32 |+x: Callable[[MyClass], None]
33 33 |
34 34 |
35 35 | class Foo(NamedTuple):
UP037_0.py:36:8: UP037 [*] Remove quotes from type annotation
|
35 | class Foo(NamedTuple):
36 | x: "MyClass"
| ^^^^^^^^^ UP037
|
= help: Remove quotes
Safe fix
33 33 |
34 34 |
35 35 | class Foo(NamedTuple):
36 |- x: "MyClass"
36 |+ x: MyClass
37 37 |
38 38 |
39 39 | class D(TypedDict):
UP037_0.py:40:27: UP037 [*] Remove quotes from type annotation
|
39 | class D(TypedDict):
40 | E: TypedDict("E", foo="int", total=False)
| ^^^^^ UP037
|
= help: Remove quotes
Safe fix
37 37 |
38 38 |
39 39 | class D(TypedDict):
40 |- E: TypedDict("E", foo="int", total=False)
40 |+ E: TypedDict("E", foo=int, total=False)
41 41 |
42 42 |
43 43 | class D(TypedDict):
UP037_0.py:44:31: UP037 [*] Remove quotes from type annotation
|
43 | class D(TypedDict):
44 | E: TypedDict("E", {"foo": "int"})
| ^^^^^ UP037
|
= help: Remove quotes
Safe fix
41 41 |
42 42 |
43 43 | class D(TypedDict):
44 |- E: TypedDict("E", {"foo": "int"})
44 |+ E: TypedDict("E", {"foo": int})
45 45 |
46 46 |
47 47 | x: Annotated["str", "metadata"]
UP037_0.py:47:14: UP037 [*] Remove quotes from type annotation
|
47 | x: Annotated["str", "metadata"]
| ^^^^^ UP037
48 |
49 | x: Arg("str", "name")
|
= help: Remove quotes
Safe fix
44 44 | E: TypedDict("E", {"foo": "int"})
45 45 |
46 46 |
47 |-x: Annotated["str", "metadata"]
47 |+x: Annotated[str, "metadata"]
48 48 |
49 49 | x: Arg("str", "name")
50 50 |
UP037_0.py:49:8: UP037 [*] Remove quotes from type annotation
|
47 | x: Annotated["str", "metadata"]
48 |
49 | x: Arg("str", "name")
| ^^^^^ UP037
50 |
51 | x: DefaultArg("str", "name")
|
= help: Remove quotes
Safe fix
46 46 |
47 47 | x: Annotated["str", "metadata"]
48 48 |
49 |-x: Arg("str", "name")
49 |+x: Arg(str, "name")
50 50 |
51 51 | x: DefaultArg("str", "name")
52 52 |
UP037_0.py:51:15: UP037 [*] Remove quotes from type annotation
|
49 | x: Arg("str", "name")
50 |
51 | x: DefaultArg("str", "name")
| ^^^^^ UP037
52 |
53 | x: NamedArg("str", "name")
|
= help: Remove quotes
Safe fix
48 48 |
49 49 | x: Arg("str", "name")
50 50 |
51 |-x: DefaultArg("str", "name")
51 |+x: DefaultArg(str, "name")
52 52 |
53 53 | x: NamedArg("str", "name")
54 54 |
UP037_0.py:53:13: UP037 [*] Remove quotes from type annotation
|
51 | x: DefaultArg("str", "name")
52 |
53 | x: NamedArg("str", "name")
| ^^^^^ UP037
54 |
55 | x: DefaultNamedArg("str", "name")
|
= help: Remove quotes
Safe fix
50 50 |
51 51 | x: DefaultArg("str", "name")
52 52 |
53 |-x: NamedArg("str", "name")
53 |+x: NamedArg(str, "name")
54 54 |
55 55 | x: DefaultNamedArg("str", "name")
56 56 |
UP037_0.py:55:20: UP037 [*] Remove quotes from type annotation
|
53 | x: NamedArg("str", "name")
54 |
55 | x: DefaultNamedArg("str", "name")
| ^^^^^ UP037
56 |
57 | x: DefaultNamedArg("str", name="name")
|
= help: Remove quotes
Safe fix
52 52 |
53 53 | x: NamedArg("str", "name")
54 54 |
55 |-x: DefaultNamedArg("str", "name")
55 |+x: DefaultNamedArg(str, "name")
56 56 |
57 57 | x: DefaultNamedArg("str", name="name")
58 58 |
UP037_0.py:57:20: UP037 [*] Remove quotes from type annotation
|
55 | x: DefaultNamedArg("str", "name")
56 |
57 | x: DefaultNamedArg("str", name="name")
| ^^^^^ UP037
58 |
59 | x: VarArg("str")
|
= help: Remove quotes
Safe fix
54 54 |
55 55 | x: DefaultNamedArg("str", "name")
56 56 |
57 |-x: DefaultNamedArg("str", name="name")
57 |+x: DefaultNamedArg(str, name="name")
58 58 |
59 59 | x: VarArg("str")
60 60 |
UP037_0.py:59:11: UP037 [*] Remove quotes from type annotation
|
57 | x: DefaultNamedArg("str", name="name")
58 |
59 | x: VarArg("str")
| ^^^^^ UP037
60 |
61 | x: List[List[List["MyClass"]]]
|
= help: Remove quotes
Safe fix
56 56 |
57 57 | x: DefaultNamedArg("str", name="name")
58 58 |
59 |-x: VarArg("str")
59 |+x: VarArg(str)
60 60 |
61 61 | x: List[List[List["MyClass"]]]
62 62 |
UP037_0.py:61:19: UP037 [*] Remove quotes from type annotation
|
59 | x: VarArg("str")
60 |
61 | x: List[List[List["MyClass"]]]
| ^^^^^^^^^ UP037
62 |
63 | x: NamedTuple("X", [("foo", "int"), ("bar", "str")])
|
= help: Remove quotes
Safe fix
58 58 |
59 59 | x: VarArg("str")
60 60 |
61 |-x: List[List[List["MyClass"]]]
61 |+x: List[List[List[MyClass]]]
62 62 |
63 63 | x: NamedTuple("X", [("foo", "int"), ("bar", "str")])
64 64 |
UP037_0.py:63:29: UP037 [*] Remove quotes from type annotation
|
61 | x: List[List[List["MyClass"]]]
62 |
63 | x: NamedTuple("X", [("foo", "int"), ("bar", "str")])
| ^^^^^ UP037
64 |
65 | x: NamedTuple("X", fields=[("foo", "int"), ("bar", "str")])
|
= help: Remove quotes
Safe fix
60 60 |
61 61 | x: List[List[List["MyClass"]]]
62 62 |
63 |-x: NamedTuple("X", [("foo", "int"), ("bar", "str")])
63 |+x: NamedTuple("X", [("foo", int), ("bar", "str")])
64 64 |
65 65 | x: NamedTuple("X", fields=[("foo", "int"), ("bar", "str")])
66 66 |
UP037_0.py:63:45: UP037 [*] Remove quotes from type annotation
|
61 | x: List[List[List["MyClass"]]]
62 |
63 | x: NamedTuple("X", [("foo", "int"), ("bar", "str")])
| ^^^^^ UP037
64 |
65 | x: NamedTuple("X", fields=[("foo", "int"), ("bar", "str")])
|
= help: Remove quotes
Safe fix
60 60 |
61 61 | x: List[List[List["MyClass"]]]
62 62 |
63 |-x: NamedTuple("X", [("foo", "int"), ("bar", "str")])
63 |+x: NamedTuple("X", [("foo", "int"), ("bar", str)])
64 64 |
65 65 | x: NamedTuple("X", fields=[("foo", "int"), ("bar", "str")])
66 66 |
UP037_0.py:65:29: UP037 [*] Remove quotes from type annotation
|
63 | x: NamedTuple("X", [("foo", "int"), ("bar", "str")])
64 |
65 | x: NamedTuple("X", fields=[("foo", "int"), ("bar", "str")])
| ^^^^^ UP037
66 |
67 | x: NamedTuple(typename="X", fields=[("foo", "int")])
|
= help: Remove quotes
Safe fix
62 62 |
63 63 | x: NamedTuple("X", [("foo", "int"), ("bar", "str")])
64 64 |
65 |-x: NamedTuple("X", fields=[("foo", "int"), ("bar", "str")])
65 |+x: NamedTuple("X", fields=[(foo, "int"), ("bar", "str")])
66 66 |
67 67 | x: NamedTuple(typename="X", fields=[("foo", "int")])
68 68 |
UP037_0.py:65:36: UP037 [*] Remove quotes from type annotation
|
63 | x: NamedTuple("X", [("foo", "int"), ("bar", "str")])
64 |
65 | x: NamedTuple("X", fields=[("foo", "int"), ("bar", "str")])
| ^^^^^ UP037
66 |
67 | x: NamedTuple(typename="X", fields=[("foo", "int")])
|
= help: Remove quotes
Safe fix
62 62 |
63 63 | x: NamedTuple("X", [("foo", "int"), ("bar", "str")])
64 64 |
65 |-x: NamedTuple("X", fields=[("foo", "int"), ("bar", "str")])
65 |+x: NamedTuple("X", fields=[("foo", int), ("bar", "str")])
66 66 |
67 67 | x: NamedTuple(typename="X", fields=[("foo", "int")])
68 68 |
UP037_0.py:65:45: UP037 [*] Remove quotes from type annotation
|
63 | x: NamedTuple("X", [("foo", "int"), ("bar", "str")])
64 |
65 | x: NamedTuple("X", fields=[("foo", "int"), ("bar", "str")])
| ^^^^^ UP037
66 |
67 | x: NamedTuple(typename="X", fields=[("foo", "int")])
|
= help: Remove quotes
Safe fix
62 62 |
63 63 | x: NamedTuple("X", [("foo", "int"), ("bar", "str")])
64 64 |
65 |-x: NamedTuple("X", fields=[("foo", "int"), ("bar", "str")])
65 |+x: NamedTuple("X", fields=[("foo", "int"), (bar, "str")])
66 66 |
67 67 | x: NamedTuple(typename="X", fields=[("foo", "int")])
68 68 |
UP037_0.py:65:52: UP037 [*] Remove quotes from type annotation
|
63 | x: NamedTuple("X", [("foo", "int"), ("bar", "str")])
64 |
65 | x: NamedTuple("X", fields=[("foo", "int"), ("bar", "str")])
| ^^^^^ UP037
66 |
67 | x: NamedTuple(typename="X", fields=[("foo", "int")])
|
= help: Remove quotes
Safe fix
62 62 |
63 63 | x: NamedTuple("X", [("foo", "int"), ("bar", "str")])
64 64 |
65 |-x: NamedTuple("X", fields=[("foo", "int"), ("bar", "str")])
65 |+x: NamedTuple("X", fields=[("foo", "int"), ("bar", str)])
66 66 |
67 67 | x: NamedTuple(typename="X", fields=[("foo", "int")])
68 68 |
UP037_0.py:67:24: UP037 [*] Remove quotes from type annotation
|
65 | x: NamedTuple("X", fields=[("foo", "int"), ("bar", "str")])
66 |
67 | x: NamedTuple(typename="X", fields=[("foo", "int")])
| ^^^ UP037
68 |
69 | X: MyCallable("X")
|
= help: Remove quotes
Safe fix
64 64 |
65 65 | x: NamedTuple("X", fields=[("foo", "int"), ("bar", "str")])
66 66 |
67 |-x: NamedTuple(typename="X", fields=[("foo", "int")])
67 |+x: NamedTuple(typename=X, fields=[("foo", "int")])
68 68 |
69 69 | X: MyCallable("X")
70 70 |
UP037_0.py:67:38: UP037 [*] Remove quotes from type annotation
|
65 | x: NamedTuple("X", fields=[("foo", "int"), ("bar", "str")])
66 |
67 | x: NamedTuple(typename="X", fields=[("foo", "int")])
| ^^^^^ UP037
68 |
69 | X: MyCallable("X")
|
= help: Remove quotes
Safe fix
64 64 |
65 65 | x: NamedTuple("X", fields=[("foo", "int"), ("bar", "str")])
66 66 |
67 |-x: NamedTuple(typename="X", fields=[("foo", "int")])
67 |+x: NamedTuple(typename="X", fields=[(foo, "int")])
68 68 |
69 69 | X: MyCallable("X")
70 70 |
UP037_0.py:67:45: UP037 [*] Remove quotes from type annotation
|
65 | x: NamedTuple("X", fields=[("foo", "int"), ("bar", "str")])
66 |
67 | x: NamedTuple(typename="X", fields=[("foo", "int")])
| ^^^^^ UP037
68 |
69 | X: MyCallable("X")
|
= help: Remove quotes
Safe fix
64 64 |
65 65 | x: NamedTuple("X", fields=[("foo", "int"), ("bar", "str")])
66 66 |
67 |-x: NamedTuple(typename="X", fields=[("foo", "int")])
67 |+x: NamedTuple(typename="X", fields=[("foo", int)])
68 68 |
69 69 | X: MyCallable("X")
70 70 |
UP037_0.py:112:12: UP037 [*] Remove quotes from type annotation
|
110 | # Handle end of line comment in string annotation
111 | # See https://github.com/astral-sh/ruff/issues/15816
112 | def f() -> "Literal[0]#":
| ^^^^^^^^^^^^^ UP037
113 | return 0
|
= help: Remove quotes
Safe fix
109 109 |
110 110 | # Handle end of line comment in string annotation
111 111 | # See https://github.com/astral-sh/ruff/issues/15816
112 |-def f() -> "Literal[0]#":
112 |+def f() -> (Literal[0]#
113 |+):
113 114 | return 0
114 115 |
115 116 | def g(x: "Literal['abc']#") -> None:
UP037_0.py:115:10: UP037 [*] Remove quotes from type annotation
|
113 | return 0
114 |
115 | def g(x: "Literal['abc']#") -> None:
| ^^^^^^^^^^^^^^^^^ UP037
116 | return
|
= help: Remove quotes
Safe fix
112 112 | def f() -> "Literal[0]#":
113 113 | return 0
114 114 |
115 |-def g(x: "Literal['abc']#") -> None:
115 |+def g(x: (Literal['abc']#
116 |+)) -> None:
116 117 | return
117 118 |
118 119 | def f() -> """Literal[0]
UP037_0.py:118:12: UP037 [*] Remove quotes from type annotation
|
116 | return
117 |
118 | def f() -> """Literal[0]
| ____________^
119 | | #
120 | |
121 | | """:
| |_______^ UP037
122 | return 0
|
= help: Remove quotes
Safe fix
115 115 | def g(x: "Literal['abc']#") -> None:
116 116 | return
117 117 |
118 |-def f() -> """Literal[0]
118 |+def f() -> (Literal[0]
119 119 | #
120 120 |
121 |- """:
121 |+ ):
122 122 | return 0


@@ -1,42 +0,0 @@
---
source: crates/ruff_linter/src/rules/pyupgrade/mod.rs
---
UP037_1.py:9:8: UP037 [*] Remove quotes from type annotation
|
7 | def foo():
8 | # UP037
9 | x: "Tuple[int, int]" = (0, 0)
| ^^^^^^^^^^^^^^^^^ UP037
10 | print(x)
|
= help: Remove quotes
Safe fix
6 6 |
7 7 | def foo():
8 8 | # UP037
9 |- x: "Tuple[int, int]" = (0, 0)
9 |+ x: Tuple[int, int] = (0, 0)
10 10 | print(x)
11 11 |
12 12 |
UP037_1.py:14:4: UP037 [*] Remove quotes from type annotation
|
13 | # OK
14 | X: "Tuple[int, int]" = (0, 0)
| ^^^^^^^^^^^^^^^^^ UP037
|
= help: Remove quotes
Unsafe fix
1 |+from __future__ import annotations
1 2 | from typing import TYPE_CHECKING
2 3 |
3 4 | if TYPE_CHECKING:
--------------------------------------------------------------------------------
11 12 |
12 13 |
13 14 | # OK
14 |-X: "Tuple[int, int]" = (0, 0)
15 |+X: Tuple[int, int] = (0, 0)


@@ -1,232 +0,0 @@
---
source: crates/ruff_linter/src/rules/pyupgrade/mod.rs
---
UP037_2.pyi:3:14: UP037 [*] Remove quotes from type annotation
|
1 | # https://github.com/astral-sh/ruff/issues/7102
2 |
3 | def f(a: Foo['SingleLine # Comment']): ...
| ^^^^^^^^^^^^^^^^^^^^^^^ UP037
|
= help: Remove quotes
Safe fix
1 1 | # https://github.com/astral-sh/ruff/issues/7102
2 2 |
3 |-def f(a: Foo['SingleLine # Comment']): ...
3 |+def f(a: Foo[(SingleLine # Comment
4 |+)]): ...
4 5 |
5 6 |
6 7 | def f(a: Foo['''Bar[
UP037_2.pyi:6:14: UP037 [*] Remove quotes from type annotation
|
6 | def f(a: Foo['''Bar[
| ______________^
7 | | Multi |
8 | | Line]''']): ...
| |____________^ UP037
|
= help: Remove quotes
Safe fix
3 3 | def f(a: Foo['SingleLine # Comment']): ...
4 4 |
5 5 |
6 |-def f(a: Foo['''Bar[
6 |+def f(a: Foo[Bar[
7 7 | Multi |
8 |- Line]''']): ...
8 |+ Line]]): ...
9 9 |
10 10 |
11 11 | def f(a: Foo['''Bar[
UP037_2.pyi:11:14: UP037 [*] Remove quotes from type annotation
|
11 | def f(a: Foo['''Bar[
| ______________^
12 | | Multi |
13 | | Line # Comment
14 | | ]''']): ...
| |____^ UP037
|
= help: Remove quotes
Safe fix
8 8 | Line]''']): ...
9 9 |
10 10 |
11 |-def f(a: Foo['''Bar[
11 |+def f(a: Foo[Bar[
12 12 | Multi |
13 13 | Line # Comment
14 |-]''']): ...
14 |+]]): ...
15 15 |
16 16 |
17 17 | def f(a: Foo['''Bar[
UP037_2.pyi:17:14: UP037 [*] Remove quotes from type annotation
|
17 | def f(a: Foo['''Bar[
| ______________^
18 | | Multi |
19 | | Line] # Comment''']): ...
| |_______________________^ UP037
|
= help: Remove quotes
Safe fix
14 14 | ]''']): ...
15 15 |
16 16 |
17 |-def f(a: Foo['''Bar[
17 |+def f(a: Foo[(Bar[
18 18 | Multi |
19 |- Line] # Comment''']): ...
19 |+ Line] # Comment
20 |+)]): ...
20 21 |
21 22 |
22 23 | def f(a: Foo['''
UP037_2.pyi:22:14: UP037 [*] Remove quotes from type annotation
|
22 | def f(a: Foo['''
| ______________^
23 | | Bar[
24 | | Multi |
25 | | Line] # Comment''']): ...
| |_______________________^ UP037
|
= help: Remove quotes
Safe fix
19 19 | Line] # Comment''']): ...
20 20 |
21 21 |
22 |-def f(a: Foo['''
22 |+def f(a: Foo[(
23 23 | Bar[
24 24 | Multi |
25 |- Line] # Comment''']): ...
25 |+ Line] # Comment
26 |+)]): ...
26 27 |
27 28 |
28 29 | def f(a: '''list[int]
UP037_2.pyi:28:10: UP037 [*] Remove quotes from type annotation
|
28 | def f(a: '''list[int]
| __________^
29 | | ''' = []): ...
| |_______^ UP037
|
= help: Remove quotes
Safe fix
25 25 | Line] # Comment''']): ...
26 26 |
27 27 |
28 |-def f(a: '''list[int]
29 |- ''' = []): ...
28 |+def f(a: list[int]
29 |+ = []): ...
30 30 |
31 31 |
32 32 | a: '''\\
UP037_2.pyi:32:4: UP037 [*] Remove quotes from type annotation
|
32 | a: '''\\
| ____^
33 | | list[int]''' = [42]
| |____________^ UP037
|
= help: Remove quotes
Safe fix
29 29 | ''' = []): ...
30 30 |
31 31 |
32 |-a: '''\\
33 |-list[int]''' = [42]
32 |+a: (\
33 |+list[int]) = [42]
34 34 |
35 35 |
36 36 | def f(a: '''
UP037_2.pyi:36:10: UP037 [*] Remove quotes from type annotation
|
36 | def f(a: '''
| __________^
37 | | list[int]
38 | | ''' = []): ...
| |_______^ UP037
|
= help: Remove quotes
Safe fix
33 33 | list[int]''' = [42]
34 34 |
35 35 |
36 |-def f(a: '''
36 |+def f(a:
37 37 | list[int]
38 |- ''' = []): ...
38 |+ = []): ...
39 39 |
40 40 |
41 41 | def f(a: Foo['''
UP037_2.pyi:41:14: UP037 [*] Remove quotes from type annotation
|
41 | def f(a: Foo['''
| ______________^
42 | | Bar
43 | | [
44 | | Multi |
45 | | Line
46 | | ] # Comment''']): ...
| |___________________^ UP037
|
= help: Remove quotes
Safe fix
38 38 | ''' = []): ...
39 39 |
40 40 |
41 |-def f(a: Foo['''
41 |+def f(a: Foo[(
42 42 | Bar
43 43 | [
44 44 | Multi |
45 45 | Line
46 |- ] # Comment''']): ...
46 |+ ] # Comment
47 |+)]): ...
47 48 |
48 49 |
49 50 | a: '''list
UP037_2.pyi:49:4: UP037 [*] Remove quotes from type annotation
|
49 | a: '''list
| ____^
50 | | [int]''' = [42]
| |________^ UP037
|
= help: Remove quotes
Safe fix
46 46 | ] # Comment''']): ...
47 47 |
48 48 |
49 |-a: '''list
50 |-[int]''' = [42]
49 |+a: (list
50 |+[int]) = [42]


@@ -1,12 +1,10 @@
use ruff_macros::{ViolationMetadata, derive_message_formats};
use ruff_python_ast::{self as ast, Expr, ExprCall};
use ruff_python_semantic::analyze::type_inference::{NumberLike, PythonType, ResolvedPythonType};
use ruff_python_semantic::analyze::typing;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
use crate::linter::float::as_non_finite_float_string_literal;
use crate::{Applicability, Edit, Fix, FixAvailability, Violation};
use crate::{Edit, Fix, FixAvailability, Violation};
/// ## What it does
/// Checks for unnecessary `from_float` and `from_decimal` usages to construct
@@ -18,12 +16,6 @@ use crate::{Applicability, Edit, Fix, FixAvailability, Violation};
/// the use of `from_float` and `from_decimal` methods is unnecessary, and
/// should be avoided in favor of the more concise constructor syntax.
///
/// However, there are important behavioral differences between the `from_*` methods
/// and the constructors:
/// - The `from_*` methods validate their argument types and raise `TypeError` for invalid types
/// - The constructors accept broader argument types without validation
/// - The `from_*` methods have different parameter names than the constructors
///
/// ## Example
/// ```python
/// Decimal.from_float(4.2)
@@ -40,16 +32,6 @@ use crate::{Applicability, Edit, Fix, FixAvailability, Violation};
/// Fraction(Decimal(4.2))
/// ```
///
/// ## Fix safety
/// This rule's fix is marked as unsafe by default because:
/// - The `from_*` methods provide type validation that the constructors don't
/// - Removing type validation can change program behavior
/// - The parameter names are different between methods and constructors
///
/// The fix is marked as safe only when:
/// - The argument type is known to be valid for the target constructor
/// - No keyword arguments are used, or they match the constructor's parameters
///
/// ## References
/// - [Python documentation: `decimal`](https://docs.python.org/3/library/decimal.html)
/// - [Python documentation: `fractions`](https://docs.python.org/3/library/fractions.html)
@@ -119,178 +101,62 @@ pub(crate) fn unnecessary_from_float(checker: &Checker, call: &ExprCall) {
call.range(),
);
// Validate that the method call has correct arguments and get the argument value
let Some(arg_value) = has_valid_method_arguments(call, method_name, constructor) else {
// Don't suggest a fix for invalid calls
return;
};
let edit = Edit::range_replacement(
checker.locator().slice(&**value).to_string(),
call.func.range(),
);
let constructor_name = checker.locator().slice(&**value).to_string();
// Short-circuit case for special values, such as: `Decimal.from_float(float("inf"))` to `Decimal("inf")`.
'short_circuit: {
if !matches!(constructor, Constructor::Decimal) {
break 'short_circuit;
}
if !(method_name == MethodName::FromFloat) {
break 'short_circuit;
}
let Some(value) = (match method_name {
MethodName::FromFloat => call.arguments.find_argument_value("f", 0),
MethodName::FromDecimal => call.arguments.find_argument_value("dec", 0),
}) else {
return;
};
let Expr::Call(
call @ ast::ExprCall {
func, arguments, ..
},
) = value
else {
break 'short_circuit;
};
// Must have exactly one argument, which is a string literal.
if !arguments.keywords.is_empty() {
break 'short_circuit;
}
let [float] = arguments.args.as_ref() else {
break 'short_circuit;
};
if as_non_finite_float_string_literal(float).is_none() {
break 'short_circuit;
}
// Must be a call to the `float` builtin.
if !semantic.match_builtin_expr(func, "float") {
break 'short_circuit;
}
let replacement = checker.locator().slice(float).to_string();
diagnostic.set_fix(Fix::safe_edits(
edit,
[Edit::range_replacement(replacement, call.range())],
));
// Special case for non-finite float literals: Decimal.from_float(float("inf")) -> Decimal("inf")
if let Some(replacement) = handle_non_finite_float_special_case(
call,
method_name,
constructor,
arg_value,
&constructor_name,
checker,
) {
diagnostic.set_fix(Fix::safe_edit(replacement));
return;
}
// Check if we should suppress the fix due to type validation concerns
let is_type_safe = is_valid_argument_type(arg_value, method_name, constructor, checker);
let has_keywords = !call.arguments.keywords.is_empty();
// Determine fix safety
let applicability = if is_type_safe && !has_keywords {
Applicability::Safe
} else {
Applicability::Unsafe
};
// Build the replacement
let arg_text = checker.locator().slice(arg_value);
let replacement_text = format!("{constructor_name}({arg_text})");
let edit = Edit::range_replacement(replacement_text, call.range());
diagnostic.set_fix(Fix::applicable_edit(edit, applicability));
}
/// Check if the argument would be valid for the target constructor
fn is_valid_argument_type(
arg_expr: &Expr,
method_name: MethodName,
constructor: Constructor,
checker: &Checker,
) -> bool {
let semantic = checker.semantic();
let resolved_type = ResolvedPythonType::from(arg_expr);
let (is_int, is_float) = if let ResolvedPythonType::Unknown = resolved_type {
arg_expr
.as_name_expr()
.and_then(|name| semantic.only_binding(name).map(|id| semantic.binding(id)))
.map(|binding| {
(
typing::is_int(binding, semantic),
typing::is_float(binding, semantic),
)
})
.unwrap_or_default()
} else {
(false, false)
};
match (method_name, constructor) {
// Decimal.from_float accepts int, bool, float
(MethodName::FromFloat, Constructor::Decimal) => match resolved_type {
ResolvedPythonType::Atom(PythonType::Number(
NumberLike::Integer | NumberLike::Bool | NumberLike::Float,
)) => true,
ResolvedPythonType::Unknown => is_int || is_float,
_ => false,
},
// Fraction.from_float accepts int, bool, float
(MethodName::FromFloat, Constructor::Fraction) => match resolved_type {
ResolvedPythonType::Atom(PythonType::Number(
NumberLike::Integer | NumberLike::Bool | NumberLike::Float,
)) => true,
ResolvedPythonType::Unknown => is_int || is_float,
_ => false,
},
// Fraction.from_decimal accepts int, bool, Decimal
(MethodName::FromDecimal, Constructor::Fraction) => match resolved_type {
ResolvedPythonType::Atom(PythonType::Number(
NumberLike::Integer | NumberLike::Bool,
)) => true,
ResolvedPythonType::Unknown => is_int,
_ => {
// Check if it's a Decimal instance
arg_expr
.as_call_expr()
.and_then(|call| semantic.resolve_qualified_name(&call.func))
.is_some_and(|qualified_name| {
matches!(qualified_name.segments(), ["decimal", "Decimal"])
})
}
},
_ => false,
}
}
/// Check if the call has valid arguments for the from_* method
fn has_valid_method_arguments(
call: &ExprCall,
method_name: MethodName,
constructor: Constructor,
) -> Option<&Expr> {
if call.arguments.len() != 1 {
return None;
}
match method_name {
MethodName::FromFloat => {
// Decimal.from_float is positional-only; Fraction.from_float allows keyword 'f'.
if constructor == Constructor::Decimal {
// Only allow positional argument for Decimal.from_float
call.arguments.find_positional(0)
} else {
// Fraction.from_float allows either positional or 'f' keyword
call.arguments.find_argument_value("f", 0)
}
}
MethodName::FromDecimal => {
// from_decimal(dec) - should have exactly one positional argument or 'dec' keyword
call.arguments.find_argument_value("dec", 0)
}
}
}
/// Handle the special case for non-finite float literals
fn handle_non_finite_float_special_case(
call: &ExprCall,
method_name: MethodName,
constructor: Constructor,
arg_value: &Expr,
constructor_name: &str,
checker: &Checker,
) -> Option<Edit> {
// Only applies to Decimal.from_float
if !matches!(
(method_name, constructor),
(MethodName::FromFloat, Constructor::Decimal)
) {
return None;
}
let Expr::Call(ast::ExprCall {
func, arguments, ..
}) = arg_value
else {
return None;
};
// Must be a call to the `float` builtin.
if !checker.semantic().match_builtin_expr(func, "float") {
return None;
}
// Must have exactly one argument, which is a string literal.
if !arguments.keywords.is_empty() {
return None;
}
let [float_arg] = arguments.args.as_ref() else {
return None;
};
as_non_finite_float_string_literal(float_arg)?;
let replacement_arg = checker.locator().slice(float_arg).to_string();
let replacement_text = format!("{constructor_name}({replacement_arg})");
Some(Edit::range_replacement(replacement_text, call.range()))
diagnostic.set_fix(Fix::safe_edit(edit));
}
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
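For the cases FURB164 targets, the constructor form is behaviorally equivalent to the `from_*` methods, which is why the simplified diff above can emit a safe fix without the extra type-validation machinery. A quick sanity check (illustrative only, not part of the rule's implementation):

```python
from decimal import Decimal
from fractions import Fraction

# FURB164: the constructors accept the same values as the from_* methods
# and produce equal results:
assert Decimal.from_float(4.2) == Decimal(4.2)
assert Fraction.from_float(4.2) == Fraction(4.2)
assert Fraction.from_decimal(Decimal("4.2")) == Fraction(Decimal("4.2"))

# ...including the non-finite special case the rule rewrites from
# `Decimal.from_float(float("inf"))` to `Decimal("inf")`:
assert Decimal.from_float(float("inf")) == Decimal("inf")
```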


@@ -95,7 +95,7 @@ FURB164.py:11:5: FURB164 [*] Verbose method `from_decimal` in `Fraction` constru
|
= help: Replace with `Fraction` constructor
Unsafe fix
Safe fix
8 8 | _ = Fraction.from_float(-0.5)
9 9 | _ = Fraction.from_float(5.0)
10 10 | _ = fractions.Fraction.from_float(4.2)
@@ -116,7 +116,7 @@ FURB164.py:12:5: FURB164 [*] Verbose method `from_decimal` in `Fraction` constru
|
= help: Replace with `Fraction` constructor
Unsafe fix
Safe fix
9 9 | _ = Fraction.from_float(5.0)
10 10 | _ = fractions.Fraction.from_float(4.2)
11 11 | _ = Fraction.from_decimal(Decimal("4.2"))
@@ -137,7 +137,7 @@ FURB164.py:13:5: FURB164 [*] Verbose method `from_decimal` in `Fraction` constru
|
= help: Replace with `Fraction` constructor
Unsafe fix
Safe fix
10 10 | _ = fractions.Fraction.from_float(4.2)
11 11 | _ = Fraction.from_decimal(Decimal("4.2"))
12 12 | _ = Fraction.from_decimal(Decimal("-4.2"))
@@ -459,7 +459,7 @@ FURB164.py:27:5: FURB164 [*] Verbose method `from_float` in `Decimal` constructi
27 |+_ = Decimal(" InfinIty\n\t ")
28 28 | _ = Decimal.from_float(float(" -InfinIty\n \t"))
29 29 |
30 30 | # Cases with keyword arguments - should produce unsafe fixes
30 30 | # OK
FURB164.py:28:5: FURB164 [*] Verbose method `from_float` in `Decimal` construction
|
@@ -468,7 +468,7 @@ FURB164.py:28:5: FURB164 [*] Verbose method `from_float` in `Decimal` constructi
28 | _ = Decimal.from_float(float(" -InfinIty\n \t"))
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FURB164
29 |
30 | # Cases with keyword arguments - should produce unsafe fixes
30 | # OK
|
= help: Replace with `Decimal` constructor
@@ -479,136 +479,5 @@ FURB164.py:28:5: FURB164 [*] Verbose method `from_float` in `Decimal` constructi
28 |-_ = Decimal.from_float(float(" -InfinIty\n \t"))
28 |+_ = Decimal(" -InfinIty\n \t")
29 29 |
30 30 | # Cases with keyword arguments - should produce unsafe fixes
31 31 | _ = Fraction.from_decimal(dec=Decimal("4.2"))
FURB164.py:31:5: FURB164 [*] Verbose method `from_decimal` in `Fraction` construction
|
30 | # Cases with keyword arguments - should produce unsafe fixes
31 | _ = Fraction.from_decimal(dec=Decimal("4.2"))
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FURB164
32 | _ = Decimal.from_float(f=4.2)
|
= help: Replace with `Fraction` constructor
Unsafe fix
28 28 | _ = Decimal.from_float(float(" -InfinIty\n \t"))
29 29 |
30 30 | # Cases with keyword arguments - should produce unsafe fixes
31 |-_ = Fraction.from_decimal(dec=Decimal("4.2"))
31 |+_ = Fraction(Decimal("4.2"))
32 32 | _ = Decimal.from_float(f=4.2)
33 33 |
34 34 | # Cases with invalid argument counts - should not get fixes
FURB164.py:32:5: FURB164 Verbose method `from_float` in `Decimal` construction
|
30 | # Cases with keyword arguments - should produce unsafe fixes
31 | _ = Fraction.from_decimal(dec=Decimal("4.2"))
32 | _ = Decimal.from_float(f=4.2)
| ^^^^^^^^^^^^^^^^^^^^^^^^^ FURB164
33 |
34 | # Cases with invalid argument counts - should not get fixes
|
= help: Replace with `Decimal` constructor
FURB164.py:35:5: FURB164 Verbose method `from_decimal` in `Fraction` construction
|
34 | # Cases with invalid argument counts - should not get fixes
35 | _ = Fraction.from_decimal(Decimal("4.2"), 1)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FURB164
36 | _ = Decimal.from_float(4.2, None)
|
= help: Replace with `Fraction` constructor
FURB164.py:36:5: FURB164 Verbose method `from_float` in `Decimal` construction
|
34 | # Cases with invalid argument counts - should not get fixes
35 | _ = Fraction.from_decimal(Decimal("4.2"), 1)
36 | _ = Decimal.from_float(4.2, None)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FURB164
37 |
38 | # Cases with wrong keyword arguments - should not get fixes
|
= help: Replace with `Decimal` constructor
FURB164.py:39:5: FURB164 Verbose method `from_decimal` in `Fraction` construction
|
38 | # Cases with wrong keyword arguments - should not get fixes
39 | _ = Fraction.from_decimal(numerator=Decimal("4.2"))
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FURB164
40 | _ = Decimal.from_float(value=4.2)
|
= help: Replace with `Fraction` constructor
FURB164.py:40:5: FURB164 Verbose method `from_float` in `Decimal` construction
|
38 | # Cases with wrong keyword arguments - should not get fixes
39 | _ = Fraction.from_decimal(numerator=Decimal("4.2"))
40 | _ = Decimal.from_float(value=4.2)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FURB164
41 |
42 | # Cases with type validation issues - should produce unsafe fixes
|
= help: Replace with `Decimal` constructor
FURB164.py:43:5: FURB164 [*] Verbose method `from_float` in `Decimal` construction
|
42 | # Cases with type validation issues - should produce unsafe fixes
43 | _ = Decimal.from_float("4.2") # Invalid type for from_float
| ^^^^^^^^^^^^^^^^^^^^^^^^^ FURB164
44 | _ = Fraction.from_decimal(4.2) # Invalid type for from_decimal
45 | _ = Fraction.from_float("4.2") # Invalid type for from_float
|
= help: Replace with `Decimal` constructor
Unsafe fix
40 40 | _ = Decimal.from_float(value=4.2)
41 41 |
42 42 | # Cases with type validation issues - should produce unsafe fixes
43 |-_ = Decimal.from_float("4.2") # Invalid type for from_float
43 |+_ = Decimal("4.2") # Invalid type for from_float
44 44 | _ = Fraction.from_decimal(4.2) # Invalid type for from_decimal
45 45 | _ = Fraction.from_float("4.2") # Invalid type for from_float
46 46 |
FURB164.py:44:5: FURB164 [*] Verbose method `from_decimal` in `Fraction` construction
|
42 | # Cases with type validation issues - should produce unsafe fixes
43 | _ = Decimal.from_float("4.2") # Invalid type for from_float
44 | _ = Fraction.from_decimal(4.2) # Invalid type for from_decimal
| ^^^^^^^^^^^^^^^^^^^^^^^^^^ FURB164
45 | _ = Fraction.from_float("4.2") # Invalid type for from_float
|
= help: Replace with `Fraction` constructor
Unsafe fix
41 41 |
42 42 | # Cases with type validation issues - should produce unsafe fixes
43 43 | _ = Decimal.from_float("4.2") # Invalid type for from_float
44 |-_ = Fraction.from_decimal(4.2) # Invalid type for from_decimal
44 |+_ = Fraction(4.2) # Invalid type for from_decimal
45 45 | _ = Fraction.from_float("4.2") # Invalid type for from_float
46 46 |
47 47 | # OK - should not trigger the rule
FURB164.py:45:5: FURB164 [*] Verbose method `from_float` in `Fraction` construction
|
43 | _ = Decimal.from_float("4.2") # Invalid type for from_float
44 | _ = Fraction.from_decimal(4.2) # Invalid type for from_decimal
45 | _ = Fraction.from_float("4.2") # Invalid type for from_float
| ^^^^^^^^^^^^^^^^^^^^^^^^^^ FURB164
46 |
47 | # OK - should not trigger the rule
|
= help: Replace with `Fraction` constructor
Unsafe fix
42 42 | # Cases with type validation issues - should produce unsafe fixes
43 43 | _ = Decimal.from_float("4.2") # Invalid type for from_float
44 44 | _ = Fraction.from_decimal(4.2) # Invalid type for from_decimal
45 |-_ = Fraction.from_float("4.2") # Invalid type for from_float
45 |+_ = Fraction("4.2") # Invalid type for from_float
46 46 |
47 47 | # OK - should not trigger the rule
48 48 | _ = Fraction(0.1)
30 30 | # OK
31 31 | _ = Fraction(0.1)


@@ -599,24 +599,4 @@ mod tests {
assert_diagnostics!(snapshot, diagnostics);
Ok(())
}
#[test_case(Rule::ImplicitOptional, Path::new("RUF013_0.py"))]
#[test_case(Rule::ImplicitOptional, Path::new("RUF013_1.py"))]
#[test_case(Rule::ImplicitOptional, Path::new("RUF013_2.py"))]
#[test_case(Rule::ImplicitOptional, Path::new("RUF013_3.py"))]
#[test_case(Rule::ImplicitOptional, Path::new("RUF013_4.py"))]
fn ruf013_add_future_import(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("add_future_import_{}", path.to_string_lossy());
let diagnostics = test_path(
Path::new("ruff").join(path).as_path(),
&settings::LinterSettings {
preview: PreviewMode::Enabled,
future_annotations: true,
unresolved_target_version: PythonVersion::PY39.into(),
..settings::LinterSettings::for_rule(rule_code)
},
)?;
assert_diagnostics!(snapshot, diagnostics);
Ok(())
}
}


@@ -71,13 +71,6 @@ use crate::rules::ruff::typing::type_hint_explicitly_allows_none;
///
/// ## Options
/// - `target-version`
/// - `lint.future-annotations`
///
/// ## Preview
///
/// When [preview] is enabled, if [`lint.future-annotations`] is set to `true`,
/// `from __future__ import annotations` will be added if doing so would allow using the `|`
/// operator on a Python version before 3.10.
///
/// ## Fix safety
///
@@ -143,15 +136,10 @@ fn generate_fix(checker: &Checker, conversion_type: ConversionType, expr: &Expr)
node_index: ruff_python_ast::AtomicNodeIndex::dummy(),
});
let content = checker.generator().expr(&new_expr);
let edit = Edit::range_replacement(content, expr.range());
if checker.target_version() < PythonVersion::PY310 {
Ok(Fix::unsafe_edits(
edit,
[checker.importer().add_future_import()],
))
} else {
Ok(Fix::unsafe_edit(edit))
}
Ok(Fix::unsafe_edit(Edit::range_replacement(
content,
expr.range(),
)))
}
ConversionType::Optional => {
let importer = checker
@@ -199,7 +187,6 @@ pub(crate) fn implicit_optional(checker: &Checker, parameters: &Parameters) {
) else {
continue;
};
let conversion_type = checker.target_version().into();
let mut diagnostic =
@@ -215,14 +202,7 @@ pub(crate) fn implicit_optional(checker: &Checker, parameters: &Parameters) {
else {
continue;
};
let conversion_type = if checker.target_version() >= PythonVersion::PY310
|| checker.settings().future_annotations()
{
ConversionType::BinOpOr
} else {
ConversionType::Optional
};
let conversion_type = checker.target_version().into();
let mut diagnostic =
checker.report_diagnostic(ImplicitOptional { conversion_type }, expr.range());


@@ -1,445 +0,0 @@
---
source: crates/ruff_linter/src/rules/ruff/mod.rs
---
RUF013_0.py:20:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
20 | def f(arg: int = None): # RUF013
| ^^^ RUF013
21 | pass
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | from typing import Annotated, Any, Literal, Optional, Tuple, Union, Hashable
2 3 |
3 4 |
--------------------------------------------------------------------------------
17 18 | pass
18 19 |
19 20 |
20 |-def f(arg: int = None): # RUF013
21 |+def f(arg: int | None = None): # RUF013
21 22 | pass
22 23 |
23 24 |
RUF013_0.py:24:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
24 | def f(arg: str = None): # RUF013
| ^^^ RUF013
25 | pass
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | from typing import Annotated, Any, Literal, Optional, Tuple, Union, Hashable
2 3 |
3 4 |
--------------------------------------------------------------------------------
21 22 | pass
22 23 |
23 24 |
24 |-def f(arg: str = None): # RUF013
25 |+def f(arg: str | None = None): # RUF013
25 26 | pass
26 27 |
27 28 |
RUF013_0.py:28:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
28 | def f(arg: Tuple[str] = None): # RUF013
| ^^^^^^^^^^ RUF013
29 | pass
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | from typing import Annotated, Any, Literal, Optional, Tuple, Union, Hashable
2 3 |
3 4 |
--------------------------------------------------------------------------------
25 26 | pass
26 27 |
27 28 |
28 |-def f(arg: Tuple[str] = None): # RUF013
29 |+def f(arg: Tuple[str] | None = None): # RUF013
29 30 | pass
30 31 |
31 32 |
RUF013_0.py:58:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
58 | def f(arg: Union = None): # RUF013
| ^^^^^ RUF013
59 | pass
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | from typing import Annotated, Any, Literal, Optional, Tuple, Union, Hashable
2 3 |
3 4 |
--------------------------------------------------------------------------------
55 56 | pass
56 57 |
57 58 |
58 |-def f(arg: Union = None): # RUF013
59 |+def f(arg: Union | None = None): # RUF013
59 60 | pass
60 61 |
61 62 |
RUF013_0.py:62:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
62 | def f(arg: Union[int] = None): # RUF013
| ^^^^^^^^^^ RUF013
63 | pass
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | from typing import Annotated, Any, Literal, Optional, Tuple, Union, Hashable
2 3 |
3 4 |
--------------------------------------------------------------------------------
59 60 | pass
60 61 |
61 62 |
62 |-def f(arg: Union[int] = None): # RUF013
63 |+def f(arg: Union[int] | None = None): # RUF013
63 64 | pass
64 65 |
65 66 |
RUF013_0.py:66:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
66 | def f(arg: Union[int, str] = None): # RUF013
| ^^^^^^^^^^^^^^^ RUF013
67 | pass
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | from typing import Annotated, Any, Literal, Optional, Tuple, Union, Hashable
2 3 |
3 4 |
--------------------------------------------------------------------------------
63 64 | pass
64 65 |
65 66 |
66 |-def f(arg: Union[int, str] = None): # RUF013
67 |+def f(arg: Union[int, str] | None = None): # RUF013
67 68 | pass
68 69 |
69 70 |
RUF013_0.py:85:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
85 | def f(arg: int | float = None): # RUF013
| ^^^^^^^^^^^ RUF013
86 | pass
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | from typing import Annotated, Any, Literal, Optional, Tuple, Union, Hashable
2 3 |
3 4 |
--------------------------------------------------------------------------------
82 83 | pass
83 84 |
84 85 |
85 |-def f(arg: int | float = None): # RUF013
86 |+def f(arg: int | float | None = None): # RUF013
86 87 | pass
87 88 |
88 89 |
RUF013_0.py:89:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
89 | def f(arg: int | float | str | bytes = None): # RUF013
| ^^^^^^^^^^^^^^^^^^^^^^^^^ RUF013
90 | pass
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | from typing import Annotated, Any, Literal, Optional, Tuple, Union, Hashable
2 3 |
3 4 |
--------------------------------------------------------------------------------
86 87 | pass
87 88 |
88 89 |
89 |-def f(arg: int | float | str | bytes = None): # RUF013
90 |+def f(arg: int | float | str | bytes | None = None): # RUF013
90 91 | pass
91 92 |
92 93 |
RUF013_0.py:108:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
108 | def f(arg: Literal[1] = None): # RUF013
| ^^^^^^^^^^ RUF013
109 | pass
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | from typing import Annotated, Any, Literal, Optional, Tuple, Union, Hashable
2 3 |
3 4 |
--------------------------------------------------------------------------------
105 106 | pass
106 107 |
107 108 |
108 |-def f(arg: Literal[1] = None): # RUF013
109 |+def f(arg: Literal[1] | None = None): # RUF013
109 110 | pass
110 111 |
111 112 |
RUF013_0.py:112:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
112 | def f(arg: Literal[1, "foo"] = None): # RUF013
| ^^^^^^^^^^^^^^^^^ RUF013
113 | pass
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | from typing import Annotated, Any, Literal, Optional, Tuple, Union, Hashable
2 3 |
3 4 |
--------------------------------------------------------------------------------
109 110 | pass
110 111 |
111 112 |
112 |-def f(arg: Literal[1, "foo"] = None): # RUF013
113 |+def f(arg: Literal[1, "foo"] | None = None): # RUF013
113 114 | pass
114 115 |
115 116 |
RUF013_0.py:131:22: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
131 | def f(arg: Annotated[int, ...] = None): # RUF013
| ^^^ RUF013
132 | pass
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | from typing import Annotated, Any, Literal, Optional, Tuple, Union, Hashable
2 3 |
3 4 |
--------------------------------------------------------------------------------
128 129 | pass
129 130 |
130 131 |
131 |-def f(arg: Annotated[int, ...] = None): # RUF013
132 |+def f(arg: Annotated[int | None, ...] = None): # RUF013
132 133 | pass
133 134 |
134 135 |
RUF013_0.py:135:32: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
135 | def f(arg: Annotated[Annotated[int | str, ...], ...] = None): # RUF013
| ^^^^^^^^^ RUF013
136 | pass
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | from typing import Annotated, Any, Literal, Optional, Tuple, Union, Hashable
2 3 |
3 4 |
--------------------------------------------------------------------------------
132 133 | pass
133 134 |
134 135 |
135 |-def f(arg: Annotated[Annotated[int | str, ...], ...] = None): # RUF013
136 |+def f(arg: Annotated[Annotated[int | str | None, ...], ...] = None): # RUF013
136 137 | pass
137 138 |
138 139 |
RUF013_0.py:151:11: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
150 | def f(
151 | arg1: int = None, # RUF013
| ^^^ RUF013
152 | arg2: Union[int, float] = None, # RUF013
153 | arg3: Literal[1, 2, 3] = None, # RUF013
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | from typing import Annotated, Any, Literal, Optional, Tuple, Union, Hashable
2 3 |
3 4 |
--------------------------------------------------------------------------------
148 149 |
149 150 |
150 151 | def f(
151 |- arg1: int = None, # RUF013
152 |+ arg1: int | None = None, # RUF013
152 153 | arg2: Union[int, float] = None, # RUF013
153 154 | arg3: Literal[1, 2, 3] = None, # RUF013
154 155 | ):
RUF013_0.py:152:11: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
150 | def f(
151 | arg1: int = None, # RUF013
152 | arg2: Union[int, float] = None, # RUF013
| ^^^^^^^^^^^^^^^^^ RUF013
153 | arg3: Literal[1, 2, 3] = None, # RUF013
154 | ):
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | from typing import Annotated, Any, Literal, Optional, Tuple, Union, Hashable
2 3 |
3 4 |
--------------------------------------------------------------------------------
149 150 |
150 151 | def f(
151 152 | arg1: int = None, # RUF013
152 |- arg2: Union[int, float] = None, # RUF013
153 |+ arg2: Union[int, float] | None = None, # RUF013
153 154 | arg3: Literal[1, 2, 3] = None, # RUF013
154 155 | ):
155 156 | pass
RUF013_0.py:153:11: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
151 | arg1: int = None, # RUF013
152 | arg2: Union[int, float] = None, # RUF013
153 | arg3: Literal[1, 2, 3] = None, # RUF013
| ^^^^^^^^^^^^^^^^ RUF013
154 | ):
155 | pass
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | from typing import Annotated, Any, Literal, Optional, Tuple, Union, Hashable
2 3 |
3 4 |
--------------------------------------------------------------------------------
150 151 | def f(
151 152 | arg1: int = None, # RUF013
152 153 | arg2: Union[int, float] = None, # RUF013
153 |- arg3: Literal[1, 2, 3] = None, # RUF013
154 |+ arg3: Literal[1, 2, 3] | None = None, # RUF013
154 155 | ):
155 156 | pass
156 157 |
RUF013_0.py:181:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
181 | def f(arg: Union[Annotated[int, ...], Union[str, bytes]] = None): # RUF013
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RUF013
182 | pass
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | from typing import Annotated, Any, Literal, Optional, Tuple, Union, Hashable
2 3 |
3 4 |
--------------------------------------------------------------------------------
178 179 | pass
179 180 |
180 181 |
181 |-def f(arg: Union[Annotated[int, ...], Union[str, bytes]] = None): # RUF013
182 |+def f(arg: Union[Annotated[int, ...], Union[str, bytes]] | None = None): # RUF013
182 183 | pass
183 184 |
184 185 |
RUF013_0.py:188:13: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
188 | def f(arg: "int" = None): # RUF013
| ^^^ RUF013
189 | pass
|
= help: Convert to `Optional[T]`
Unsafe fix
185 185 | # Quoted
186 186 |
187 187 |
188 |-def f(arg: "int" = None): # RUF013
188 |+def f(arg: "Optional[int]" = None): # RUF013
189 189 | pass
190 190 |
191 191 |
RUF013_0.py:192:13: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
192 | def f(arg: "str" = None): # RUF013
| ^^^ RUF013
193 | pass
|
= help: Convert to `Optional[T]`
Unsafe fix
189 189 | pass
190 190 |
191 191 |
192 |-def f(arg: "str" = None): # RUF013
192 |+def f(arg: "Optional[str]" = None): # RUF013
193 193 | pass
194 194 |
195 195 |
RUF013_0.py:196:12: RUF013 PEP 484 prohibits implicit `Optional`
|
196 | def f(arg: "st" "r" = None): # RUF013
| ^^^^^^^^ RUF013
197 | pass
|
= help: Convert to `Optional[T]`
RUF013_0.py:204:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
204 | def f(arg: Union["int", "str"] = None): # RUF013
| ^^^^^^^^^^^^^^^^^^^ RUF013
205 | pass
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | from typing import Annotated, Any, Literal, Optional, Tuple, Union, Hashable
2 3 |
3 4 |
--------------------------------------------------------------------------------
201 202 | pass
202 203 |
203 204 |
204 |-def f(arg: Union["int", "str"] = None): # RUF013
205 |+def f(arg: Union["int", "str"] | None = None): # RUF013
205 206 | pass
206 207 |
207 208 |
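The `from __future__ import annotations` line added by these RUF013 fixes makes all annotations lazy (PEP 563): they are stored as source strings and never evaluated at function-definition time, so the `|` syntax is accepted even on interpreters older than 3.10. A small sketch of the mechanism:

```python
from __future__ import annotations


def f(arg: int | None = None):  # valid even before Python 3.10
    return arg


# With the future import, annotations are stored as their source text:
print(f.__annotations__["arg"])
```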


@@ -1,19 +0,0 @@
---
source: crates/ruff_linter/src/rules/ruff/mod.rs
---
RUF013_1.py:4:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
4 | def f(arg: int = None): # RUF013
| ^^^ RUF013
5 | pass
|
= help: Convert to `T | None`
Unsafe fix
1 1 | # No `typing.Optional` import
2 |+from __future__ import annotations
2 3 |
3 4 |
4 |-def f(arg: int = None): # RUF013
5 |+def f(arg: int | None = None): # RUF013
5 6 | pass


@@ -1,4 +0,0 @@
---
source: crates/ruff_linter/src/rules/ruff/mod.rs
---


@@ -1,65 +0,0 @@
---
source: crates/ruff_linter/src/rules/ruff/mod.rs
---
RUF013_3.py:4:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
4 | def f(arg: typing.List[str] = None): # RUF013
| ^^^^^^^^^^^^^^^^ RUF013
5 | pass
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | import typing
2 3 |
3 4 |
4 |-def f(arg: typing.List[str] = None): # RUF013
5 |+def f(arg: typing.List[str] | None = None): # RUF013
5 6 | pass
6 7 |
7 8 |
RUF013_3.py:22:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
22 | def f(arg: typing.Union[int, str] = None): # RUF013
| ^^^^^^^^^^^^^^^^^^^^^^ RUF013
23 | pass
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | import typing
2 3 |
3 4 |
--------------------------------------------------------------------------------
19 20 | pass
20 21 |
21 22 |
22 |-def f(arg: typing.Union[int, str] = None): # RUF013
23 |+def f(arg: typing.Union[int, str] | None = None): # RUF013
23 24 | pass
24 25 |
25 26 |
RUF013_3.py:29:12: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
29 | def f(arg: typing.Literal[1, "foo", True] = None): # RUF013
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RUF013
30 | pass
|
= help: Convert to `T | None`
Unsafe fix
1 |+from __future__ import annotations
1 2 | import typing
2 3 |
3 4 |
--------------------------------------------------------------------------------
26 27 | # Literal
27 28 |
28 29 |
29 |-def f(arg: typing.Literal[1, "foo", True] = None): # RUF013
30 |+def f(arg: typing.Literal[1, "foo", True] | None = None): # RUF013
30 31 | pass


@@ -1,25 +0,0 @@
---
source: crates/ruff_linter/src/rules/ruff/mod.rs
---
RUF013_4.py:15:61: RUF013 [*] PEP 484 prohibits implicit `Optional`
|
15 | def multiple_2(arg1: Optional, arg2: Optional = None, arg3: int = None): ...
| ^^^ RUF013
|
= help: Convert to `T | None`
Unsafe fix
1 1 | # https://github.com/astral-sh/ruff/issues/13833
2 |+from __future__ import annotations
2 3 |
3 4 | from typing import Optional
4 5 |
--------------------------------------------------------------------------------
12 13 | def multiple_1(arg1: Optional, arg2: Optional = None): ...
13 14 |
14 15 |
15 |-def multiple_2(arg1: Optional, arg2: Optional = None, arg3: int = None): ...
16 |+def multiple_2(arg1: Optional, arg2: Optional = None, arg3: int | None = None): ...
16 17 |
17 18 |
18 19 | def return_type(arg: Optional = None) -> Optional: ...
View File
@@ -210,7 +210,6 @@ macro_rules! display_settings {
}
#[derive(Debug, Clone, CacheKey)]
#[expect(clippy::struct_excessive_bools)]
pub struct LinterSettings {
pub exclude: FilePatternSet,
pub extension: ExtensionMapping,
@@ -252,7 +251,6 @@ pub struct LinterSettings {
pub task_tags: Vec<String>,
pub typing_modules: Vec<String>,
pub typing_extensions: bool,
pub future_annotations: bool,
// Plugins
pub flake8_annotations: flake8_annotations::settings::Settings,
@@ -455,7 +453,6 @@ impl LinterSettings {
explicit_preview_rules: false,
extension: ExtensionMapping::default(),
typing_extensions: true,
future_annotations: false,
}
}
@@ -475,11 +472,6 @@ impl LinterSettings {
.is_match(path)
.map_or(self.unresolved_target_version, TargetVersion::from)
}
pub fn future_annotations(&self) -> bool {
// TODO(brent) we can just access the field directly once this is stabilized.
self.future_annotations && crate::preview::is_add_future_annotations_imports_enabled(self)
}
}
impl Default for LinterSettings {
View File
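The `future_annotations` accessor in the hunk above gates the raw setting behind preview mode. A minimal, self-contained sketch of that gating pattern (struct and field names simplified; not the actual `LinterSettings`):

```rust
/// Simplified sketch: the user-facing setting only takes effect while the
/// matching preview flag is also enabled.
#[derive(Debug, Default)]
pub struct LinterSettings {
    pub preview: bool,
    pub future_annotations: bool,
}

impl LinterSettings {
    /// Effective value of the setting; the raw field is ignored unless
    /// preview mode is on.
    pub fn future_annotations(&self) -> bool {
        self.future_annotations && self.preview
    }
}

fn main() {
    let stable = LinterSettings { preview: false, future_annotations: true };
    let preview = LinterSettings { preview: true, future_annotations: true };
    // The raw field is set in both, but only the preview configuration honors it.
    println!("{} {}", stable.future_annotations(), preview.future_annotations());
}
```

Once the setting stabilizes, the accessor collapses to a plain field read, which is what the TODO in the hunk anticipates.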
@@ -1,6 +1,6 @@
[package]
name = "ruff_wasm"
version = "0.12.4"
version = "0.12.3"
publish = false
authors = { workspace = true }
edition = { workspace = true }
View File
@@ -250,14 +250,6 @@ impl Configuration {
conflicting_import_settings(&isort, &flake8_import_conventions)?;
let future_annotations = lint.future_annotations.unwrap_or_default();
if lint_preview.is_disabled() && future_annotations {
warn_user_once!(
"The `lint.future-annotations` setting will have no effect \
because `preview` is disabled"
);
}
Ok(Settings {
cache_dir: self
.cache_dir
@@ -440,7 +432,6 @@ impl Configuration {
.map(RuffOptions::into_settings)
.unwrap_or_default(),
typing_extensions: lint.typing_extensions.unwrap_or(true),
future_annotations,
},
formatter,
@@ -645,7 +636,6 @@ pub struct LintConfiguration {
pub task_tags: Option<Vec<String>>,
pub typing_modules: Option<Vec<String>>,
pub typing_extensions: Option<bool>,
pub future_annotations: Option<bool>,
// Plugins
pub flake8_annotations: Option<Flake8AnnotationsOptions>,
@@ -762,7 +752,6 @@ impl LintConfiguration {
logger_objects: options.common.logger_objects,
typing_modules: options.common.typing_modules,
typing_extensions: options.typing_extensions,
future_annotations: options.future_annotations,
// Plugins
flake8_annotations: options.common.flake8_annotations,
@@ -1190,7 +1179,6 @@ impl LintConfiguration {
pyupgrade: self.pyupgrade.combine(config.pyupgrade),
ruff: self.ruff.combine(config.ruff),
typing_extensions: self.typing_extensions.or(config.typing_extensions),
future_annotations: self.future_annotations.or(config.future_annotations),
}
}
}
View File
@@ -529,24 +529,6 @@ pub struct LintOptions {
"#
)]
pub typing_extensions: Option<bool>,
/// Whether to allow rules to add `from __future__ import annotations` in cases where this would
/// simplify a fix or enable a new diagnostic.
///
/// For example, `TC001`, `TC002`, and `TC003` can move more imports into `TYPE_CHECKING` blocks
/// if `__future__` annotations are enabled.
///
/// This setting is currently in [preview](https://docs.astral.sh/ruff/preview/) and requires
/// preview mode to be enabled to have any effect.
#[option(
default = "false",
value_type = "bool",
example = r#"
# Enable `from __future__ import annotations` imports
future-annotations = true
"#
)]
pub future_annotations: Option<bool>,
}
/// Newtype wrapper for [`LintCommonOptions`] that allows customizing the JSON schema and omitting the fields from the [`OptionsMetadata`].
@@ -3914,7 +3896,6 @@ pub struct LintOptionsWire {
ruff: Option<RuffOptions>,
preview: Option<bool>,
typing_extensions: Option<bool>,
future_annotations: Option<bool>,
}
impl From<LintOptionsWire> for LintOptions {
@@ -3970,7 +3951,6 @@ impl From<LintOptionsWire> for LintOptions {
ruff,
preview,
typing_extensions,
future_annotations,
} = value;
LintOptions {
@@ -4027,7 +4007,6 @@ impl From<LintOptionsWire> for LintOptions {
ruff,
preview,
typing_extensions,
future_annotations,
}
}
}
View File
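For reference, the option documented in the hunk above only takes effect together with preview mode; in a `pyproject.toml` that pairing looks like:

```toml
[tool.ruff.lint]
# `future-annotations` is a preview-only setting: without `preview = true`
# it is ignored (with a one-time warning, per the Configuration diff above).
preview = true
future-annotations = true
```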
@@ -1,16 +1,34 @@
pub use crate::goto_declaration::goto_declaration;
pub use crate::goto_definition::goto_definition;
pub use crate::goto_type_definition::goto_type_definition;
use crate::find_node::covering_node;
use crate::stub_mapping::StubMapper;
use ruff_db::parsed::ParsedModuleRef;
use crate::{Db, HasNavigationTargets, NavigationTargets, RangedValue};
use ruff_db::files::{File, FileRange};
use ruff_db::parsed::{ParsedModuleRef, parsed_module};
use ruff_python_ast::{self as ast, AnyNodeRef};
use ruff_python_parser::TokenKind;
use ruff_text_size::{Ranged, TextRange, TextSize};
use ty_python_semantic::types::Type;
use ty_python_semantic::{HasType, SemanticModel};
pub fn goto_type_definition(
db: &dyn Db,
file: File,
offset: TextSize,
) -> Option<RangedValue<NavigationTargets>> {
let module = parsed_module(db, file).load(db);
let goto_target = find_goto_target(&module, offset)?;
let model = SemanticModel::new(db, file);
let ty = goto_target.inferred_type(&model)?;
tracing::debug!("Inferred type of covering node is {}", ty.display(db));
let navigation_targets = ty.navigation_targets(db);
Some(RangedValue {
range: FileRange::new(file, goto_target.range()),
value: navigation_targets,
})
}
#[derive(Clone, Copy, Debug)]
pub(crate) enum GotoTarget<'a> {
Expression(ast::ExprRef<'a>),
@@ -136,100 +154,6 @@ impl GotoTarget<'_> {
Some(ty)
}
/// Gets the navigation ranges for this goto target.
/// If a stub mapper is provided, definitions from stub files will be mapped to
/// their corresponding source file implementations.
pub(crate) fn get_definition_targets(
self,
file: ruff_db::files::File,
db: &dyn crate::Db,
stub_mapper: Option<&StubMapper>,
) -> Option<crate::NavigationTargets> {
use crate::NavigationTarget;
use ruff_python_ast as ast;
match self {
// For names, find the definitions of the symbol
GotoTarget::Expression(expression) => {
if let ast::ExprRef::Name(name) = expression {
Self::get_name_definition_targets(name, file, db, stub_mapper)
} else {
// For other expressions, we can't find definitions
None
}
}
// For already-defined symbols, they are their own definitions
GotoTarget::FunctionDef(function) => {
let range = function.name.range;
Some(crate::NavigationTargets::single(NavigationTarget {
file,
focus_range: range,
full_range: function.range(),
}))
}
GotoTarget::ClassDef(class) => {
let range = class.name.range;
Some(crate::NavigationTargets::single(NavigationTarget {
file,
focus_range: range,
full_range: class.range(),
}))
}
GotoTarget::Parameter(parameter) => {
let range = parameter.name.range;
Some(crate::NavigationTargets::single(NavigationTarget {
file,
focus_range: range,
full_range: parameter.range(),
}))
}
// For imports, find the symbol being imported
GotoTarget::Alias(_alias) => {
// For aliases, we don't have the ExprName node, so we can't get the scope
// For now, return None. In the future, we could look up the imported symbol
None
}
// TODO: Handle attribute and method accesses (y in `x.y` expressions)
// TODO: Handle keyword arguments in call expression
// TODO: Handle multi-part module names in import statements
// TODO: Handle imported symbol in y in `from x import y as z` statement
// TODO: Handle string literals that map to TypedDict fields
_ => None,
}
}
/// Get navigation targets for definitions associated with a name expression
fn get_name_definition_targets(
name: &ruff_python_ast::ExprName,
file: ruff_db::files::File,
db: &dyn crate::Db,
stub_mapper: Option<&StubMapper>,
) -> Option<crate::NavigationTargets> {
use ty_python_semantic::definitions_for_name;
// Get all definitions for this name
let mut definitions = definitions_for_name(db, file, name);
// Apply stub mapping if a mapper is provided
if let Some(mapper) = stub_mapper {
definitions = mapper.map_definitions(definitions);
}
if definitions.is_empty() {
return None;
}
// Convert definitions to navigation targets
let targets = convert_resolved_definitions_to_targets(db, definitions);
Some(crate::NavigationTargets::unique(targets))
}
}
impl Ranged for GotoTarget<'_> {
@@ -256,41 +180,6 @@ impl Ranged for GotoTarget<'_> {
}
}
/// Converts a collection of `ResolvedDefinition` items into `NavigationTarget` items.
fn convert_resolved_definitions_to_targets(
db: &dyn crate::Db,
definitions: Vec<ty_python_semantic::ResolvedDefinition<'_>>,
) -> Vec<crate::NavigationTarget> {
definitions
.into_iter()
.map(|resolved_definition| match resolved_definition {
ty_python_semantic::ResolvedDefinition::Definition(definition) => {
// Get the parsed module for range calculation
let definition_file = definition.file(db);
let module = ruff_db::parsed::parsed_module(db, definition_file).load(db);
// Get the ranges for this definition
let focus_range = definition.focus_range(db, &module);
let full_range = definition.full_range(db, &module);
crate::NavigationTarget {
file: focus_range.file(),
focus_range: focus_range.range(),
full_range: full_range.range(),
}
}
ty_python_semantic::ResolvedDefinition::ModuleFile(module_file) => {
// For module files, navigate to the beginning of the file
crate::NavigationTarget {
file: module_file,
focus_range: ruff_text_size::TextRange::default(), // Start of file
full_range: ruff_text_size::TextRange::default(), // Start of file
}
}
})
.collect()
}
pub(crate) fn find_goto_target(
parsed: &ParsedModuleRef,
offset: TextSize,
@@ -361,3 +250,637 @@ pub(crate) fn find_goto_target(
node => node.as_expr_ref().map(GotoTarget::Expression),
}
}
#[cfg(test)]
mod tests {
use crate::tests::{CursorTest, IntoDiagnostic, cursor_test};
use crate::{NavigationTarget, goto_type_definition};
use insta::assert_snapshot;
use ruff_db::diagnostic::{
Annotation, Diagnostic, DiagnosticId, LintName, Severity, Span, SubDiagnostic,
};
use ruff_db::files::FileRange;
use ruff_text_size::Ranged;
#[test]
fn goto_type_of_expression_with_class_type() {
let test = cursor_test(
r#"
class Test: ...
a<CURSOR>b = Test()
"#,
);
assert_snapshot!(test.goto_type_definition(), @r"
info[goto-type-definition]: Type definition
--> main.py:2:19
|
2 | class Test: ...
| ^^^^
3 |
4 | ab = Test()
|
info: Source
--> main.py:4:13
|
2 | class Test: ...
3 |
4 | ab = Test()
| ^^
|
");
}
#[test]
fn goto_type_of_expression_with_function_type() {
let test = cursor_test(
r#"
def foo(a, b): ...
ab = foo
a<CURSOR>b
"#,
);
assert_snapshot!(test.goto_type_definition(), @r"
info[goto-type-definition]: Type definition
--> main.py:2:17
|
2 | def foo(a, b): ...
| ^^^
3 |
4 | ab = foo
|
info: Source
--> main.py:6:13
|
4 | ab = foo
5 |
6 | ab
| ^^
|
");
}
#[test]
fn goto_type_of_expression_with_union_type() {
let test = cursor_test(
r#"
def foo(a, b): ...
def bar(a, b): ...
if random.choice():
a = foo
else:
a = bar
a<CURSOR>
"#,
);
assert_snapshot!(test.goto_type_definition(), @r"
info[goto-type-definition]: Type definition
--> main.py:3:17
|
3 | def foo(a, b): ...
| ^^^
4 |
5 | def bar(a, b): ...
|
info: Source
--> main.py:12:13
|
10 | a = bar
11 |
12 | a
| ^
|
info[goto-type-definition]: Type definition
--> main.py:5:17
|
3 | def foo(a, b): ...
4 |
5 | def bar(a, b): ...
| ^^^
6 |
7 | if random.choice():
|
info: Source
--> main.py:12:13
|
10 | a = bar
11 |
12 | a
| ^
|
");
}
#[test]
fn goto_type_of_expression_with_module() {
let mut test = cursor_test(
r#"
import lib
lib<CURSOR>
"#,
);
test.write_file("lib.py", "a = 10").unwrap();
assert_snapshot!(test.goto_type_definition(), @r"
info[goto-type-definition]: Type definition
--> lib.py:1:1
|
1 | a = 10
| ^^^^^^
|
info: Source
--> main.py:4:13
|
2 | import lib
3 |
4 | lib
| ^^^
|
");
}
#[test]
fn goto_type_of_expression_with_literal_type() {
let test = cursor_test(
r#"
a: str = "test"
a<CURSOR>
"#,
);
assert_snapshot!(test.goto_type_definition(), @r#"
info[goto-type-definition]: Type definition
--> stdlib/builtins.pyi:892:7
|
890 | def __getitem__(self, key: int, /) -> str | int | None: ...
891 |
892 | class str(Sequence[str]):
| ^^^
893 | """str(object='') -> str
894 | str(bytes_or_buffer[, encoding[, errors]]) -> str
|
info: Source
--> main.py:4:13
|
2 | a: str = "test"
3 |
4 | a
| ^
|
"#);
}
#[test]
fn goto_type_of_expression_with_literal_node() {
let test = cursor_test(
r#"
a: str = "te<CURSOR>st"
"#,
);
assert_snapshot!(test.goto_type_definition(), @r#"
info[goto-type-definition]: Type definition
--> stdlib/builtins.pyi:892:7
|
890 | def __getitem__(self, key: int, /) -> str | int | None: ...
891 |
892 | class str(Sequence[str]):
| ^^^
893 | """str(object='') -> str
894 | str(bytes_or_buffer[, encoding[, errors]]) -> str
|
info: Source
--> main.py:2:22
|
2 | a: str = "test"
| ^^^^^^
|
"#);
}
#[test]
fn goto_type_of_expression_with_type_var_type() {
let test = cursor_test(
r#"
type Alias[T: int = bool] = list[T<CURSOR>]
"#,
);
assert_snapshot!(test.goto_type_definition(), @r"
info[goto-type-definition]: Type definition
--> main.py:2:24
|
2 | type Alias[T: int = bool] = list[T]
| ^
|
info: Source
--> main.py:2:46
|
2 | type Alias[T: int = bool] = list[T]
| ^
|
");
}
#[test]
fn goto_type_of_expression_with_type_param_spec() {
let test = cursor_test(
r#"
type Alias[**P = [int, str]] = Callable[P<CURSOR>, int]
"#,
);
// TODO: Goto type definition currently doesn't work for type param specs
// because the inference doesn't support them yet.
        // This snapshot should show a single target pointing to `P`
assert_snapshot!(test.goto_type_definition(), @"No type definitions found");
}
#[test]
fn goto_type_of_expression_with_type_var_tuple() {
let test = cursor_test(
r#"
type Alias[*Ts = ()] = tuple[*Ts<CURSOR>]
"#,
);
// TODO: Goto type definition currently doesn't work for type var tuples
// because the inference doesn't support them yet.
        // This snapshot should show a single target pointing to `Ts`
assert_snapshot!(test.goto_type_definition(), @"No type definitions found");
}
#[test]
fn goto_type_of_bare_type_alias_type() {
let test = cursor_test(
r#"
from typing_extensions import TypeAliasType
Alias = TypeAliasType("Alias", tuple[int, int])
Alias<CURSOR>
"#,
);
assert_snapshot!(test.goto_type_definition(), @r#"
info[goto-type-definition]: Type definition
--> main.py:4:13
|
2 | from typing_extensions import TypeAliasType
3 |
4 | Alias = TypeAliasType("Alias", tuple[int, int])
| ^^^^^
5 |
6 | Alias
|
info: Source
--> main.py:6:13
|
4 | Alias = TypeAliasType("Alias", tuple[int, int])
5 |
6 | Alias
| ^^^^^
|
"#);
}
#[test]
fn goto_type_on_keyword_argument() {
let test = cursor_test(
r#"
def test(a: str): ...
test(a<CURSOR>= "123")
"#,
);
assert_snapshot!(test.goto_type_definition(), @r#"
info[goto-type-definition]: Type definition
--> stdlib/builtins.pyi:892:7
|
890 | def __getitem__(self, key: int, /) -> str | int | None: ...
891 |
892 | class str(Sequence[str]):
| ^^^
893 | """str(object='') -> str
894 | str(bytes_or_buffer[, encoding[, errors]]) -> str
|
info: Source
--> main.py:4:18
|
2 | def test(a: str): ...
3 |
4 | test(a= "123")
| ^
|
"#);
}
#[test]
fn goto_type_on_incorrectly_typed_keyword_argument() {
let test = cursor_test(
r#"
def test(a: str): ...
test(a<CURSOR>= 123)
"#,
);
// TODO: This should jump to `str` and not `int` because
// the keyword is typed as a string. It's only the passed argument that
// is an int. Navigating to `str` would match pyright's behavior.
assert_snapshot!(test.goto_type_definition(), @r#"
info[goto-type-definition]: Type definition
--> stdlib/builtins.pyi:338:7
|
336 | _LiteralInteger = _PositiveInteger | _NegativeInteger | Literal[0] # noqa: Y026 # TODO: Use TypeAlias once mypy bugs are fixed
337 |
338 | class int:
| ^^^
339 | """int([x]) -> integer
340 | int(x, base=10) -> integer
|
info: Source
--> main.py:4:18
|
2 | def test(a: str): ...
3 |
4 | test(a= 123)
| ^
|
"#);
}
#[test]
fn goto_type_on_kwargs() {
let test = cursor_test(
r#"
def f(name: str): ...
kwargs = { "name": "test"}
f(**kwargs<CURSOR>)
"#,
);
assert_snapshot!(test.goto_type_definition(), @r#"
info[goto-type-definition]: Type definition
--> stdlib/builtins.pyi:2892:7
|
2890 | """See PEP 585"""
2891 |
2892 | class dict(MutableMapping[_KT, _VT]):
| ^^^^
2893 | """dict() -> new empty dictionary
2894 | dict(mapping) -> new dictionary initialized from a mapping object's
|
info: Source
--> main.py:6:5
|
4 | kwargs = { "name": "test"}
5 |
6 | f(**kwargs)
| ^^^^^^
|
"#);
}
#[test]
fn goto_type_of_expression_with_builtin() {
let test = cursor_test(
r#"
def foo(a: str):
a<CURSOR>
"#,
);
assert_snapshot!(test.goto_type_definition(), @r#"
info[goto-type-definition]: Type definition
--> stdlib/builtins.pyi:892:7
|
890 | def __getitem__(self, key: int, /) -> str | int | None: ...
891 |
892 | class str(Sequence[str]):
| ^^^
893 | """str(object='') -> str
894 | str(bytes_or_buffer[, encoding[, errors]]) -> str
|
info: Source
--> main.py:3:17
|
2 | def foo(a: str):
3 | a
| ^
|
"#);
}
#[test]
fn goto_type_definition_cursor_between_object_and_attribute() {
let test = cursor_test(
r#"
class X:
def foo(a, b): ...
x = X()
x<CURSOR>.foo()
"#,
);
assert_snapshot!(test.goto_type_definition(), @r"
info[goto-type-definition]: Type definition
--> main.py:2:19
|
2 | class X:
| ^
3 | def foo(a, b): ...
|
info: Source
--> main.py:7:13
|
5 | x = X()
6 |
7 | x.foo()
| ^
|
");
}
#[test]
fn goto_between_call_arguments() {
let test = cursor_test(
r#"
def foo(a, b): ...
foo<CURSOR>()
"#,
);
assert_snapshot!(test.goto_type_definition(), @r"
info[goto-type-definition]: Type definition
--> main.py:2:17
|
2 | def foo(a, b): ...
| ^^^
3 |
4 | foo()
|
info: Source
--> main.py:4:13
|
2 | def foo(a, b): ...
3 |
4 | foo()
| ^^^
|
");
}
#[test]
fn goto_type_narrowing() {
let test = cursor_test(
r#"
def foo(a: str | None, b):
if a is not None:
print(a<CURSOR>)
"#,
);
assert_snapshot!(test.goto_type_definition(), @r#"
info[goto-type-definition]: Type definition
--> stdlib/builtins.pyi:892:7
|
890 | def __getitem__(self, key: int, /) -> str | int | None: ...
891 |
892 | class str(Sequence[str]):
| ^^^
893 | """str(object='') -> str
894 | str(bytes_or_buffer[, encoding[, errors]]) -> str
|
info: Source
--> main.py:4:27
|
2 | def foo(a: str | None, b):
3 | if a is not None:
4 | print(a)
| ^
|
"#);
}
#[test]
fn goto_type_none() {
let test = cursor_test(
r#"
def foo(a: str | None, b):
a<CURSOR>
"#,
);
assert_snapshot!(test.goto_type_definition(), @r#"
info[goto-type-definition]: Type definition
--> stdlib/types.pyi:922:11
|
920 | if sys.version_info >= (3, 10):
921 | @final
922 | class NoneType:
| ^^^^^^^^
923 | """The type of the None singleton."""
|
info: Source
--> main.py:3:17
|
2 | def foo(a: str | None, b):
3 | a
| ^
|
info[goto-type-definition]: Type definition
--> stdlib/builtins.pyi:892:7
|
890 | def __getitem__(self, key: int, /) -> str | int | None: ...
891 |
892 | class str(Sequence[str]):
| ^^^
893 | """str(object='') -> str
894 | str(bytes_or_buffer[, encoding[, errors]]) -> str
|
info: Source
--> main.py:3:17
|
2 | def foo(a: str | None, b):
3 | a
| ^
|
"#);
}
impl CursorTest {
fn goto_type_definition(&self) -> String {
let Some(targets) =
goto_type_definition(&self.db, self.cursor.file, self.cursor.offset)
else {
return "No goto target found".to_string();
};
if targets.is_empty() {
return "No type definitions found".to_string();
}
let source = targets.range;
self.render_diagnostics(
targets
.into_iter()
.map(|target| GotoTypeDefinitionDiagnostic::new(source, &target)),
)
}
}
struct GotoTypeDefinitionDiagnostic {
source: FileRange,
target: FileRange,
}
impl GotoTypeDefinitionDiagnostic {
fn new(source: FileRange, target: &NavigationTarget) -> Self {
Self {
source,
target: FileRange::new(target.file(), target.focus_range()),
}
}
}
impl IntoDiagnostic for GotoTypeDefinitionDiagnostic {
fn into_diagnostic(self) -> Diagnostic {
let mut source = SubDiagnostic::new(Severity::Info, "Source");
source.annotate(Annotation::primary(
Span::from(self.source.file()).with_range(self.source.range()),
));
let mut main = Diagnostic::new(
DiagnosticId::Lint(LintName::of("goto-type-definition")),
Severity::Info,
"Type definition".to_string(),
);
main.annotate(Annotation::primary(
Span::from(self.target.file()).with_range(self.target.range()),
));
main.sub(source);
main
}
}
}
View File
@@ -1,813 +0,0 @@
use crate::goto::find_goto_target;
use crate::{Db, NavigationTargets, RangedValue};
use ruff_db::files::{File, FileRange};
use ruff_db::parsed::parsed_module;
use ruff_text_size::{Ranged, TextSize};
/// Navigate to the declaration of a symbol.
///
/// A "declaration" includes not only formal declarations (class statements, def statements,
/// and variable annotations) but also variable assignments. This expansive definition
/// is needed because Python doesn't require formal declarations of variables like most languages do.
pub fn goto_declaration(
db: &dyn Db,
file: File,
offset: TextSize,
) -> Option<RangedValue<NavigationTargets>> {
let module = parsed_module(db, file).load(db);
let goto_target = find_goto_target(&module, offset)?;
let declaration_targets = goto_target.get_definition_targets(file, db, None)?;
Some(RangedValue {
range: FileRange::new(file, goto_target.range()),
value: declaration_targets,
})
}
#[cfg(test)]
mod tests {
use crate::tests::{CursorTest, IntoDiagnostic, cursor_test};
use crate::{NavigationTarget, goto_declaration};
use insta::assert_snapshot;
use ruff_db::diagnostic::{
Annotation, Diagnostic, DiagnosticId, LintName, Severity, Span, SubDiagnostic,
};
use ruff_db::files::FileRange;
use ruff_text_size::Ranged;
#[test]
fn goto_declaration_function_call_to_definition() {
let test = cursor_test(
"
def my_function(x, y):
return x + y
result = my_func<CURSOR>tion(1, 2)
",
);
assert_snapshot!(test.goto_declaration(), @r"
info[goto-declaration]: Declaration
--> main.py:2:17
|
2 | def my_function(x, y):
| ^^^^^^^^^^^
3 | return x + y
|
info: Source
--> main.py:5:22
|
3 | return x + y
4 |
5 | result = my_function(1, 2)
| ^^^^^^^^^^^
|
");
}
#[test]
fn goto_declaration_variable_assignment() {
let test = cursor_test(
"
x = 42
y = x<CURSOR>
",
);
assert_snapshot!(test.goto_declaration(), @r"
info[goto-declaration]: Declaration
--> main.py:2:13
|
2 | x = 42
| ^
3 | y = x
|
info: Source
--> main.py:3:17
|
2 | x = 42
3 | y = x
| ^
|
");
}
#[test]
fn goto_declaration_class_instantiation() {
let test = cursor_test(
"
class MyClass:
def __init__(self):
pass
instance = My<CURSOR>Class()
",
);
assert_snapshot!(test.goto_declaration(), @r"
info[goto-declaration]: Declaration
--> main.py:2:19
|
2 | class MyClass:
| ^^^^^^^
3 | def __init__(self):
4 | pass
|
info: Source
--> main.py:6:24
|
4 | pass
5 |
6 | instance = MyClass()
| ^^^^^^^
|
");
}
#[test]
fn goto_declaration_parameter_usage() {
let test = cursor_test(
"
def foo(param):
return pa<CURSOR>ram * 2
",
);
assert_snapshot!(test.goto_declaration(), @r"
info[goto-declaration]: Declaration
--> main.py:2:21
|
2 | def foo(param):
| ^^^^^
3 | return param * 2
|
info: Source
--> main.py:3:24
|
2 | def foo(param):
3 | return param * 2
| ^^^^^
|
");
}
#[test]
fn goto_declaration_type_parameter() {
let test = cursor_test(
"
def generic_func[T](value: T) -> T:
v: T<CURSOR> = value
return v
",
);
assert_snapshot!(test.goto_declaration(), @r"
info[goto-declaration]: Declaration
--> main.py:2:30
|
2 | def generic_func[T](value: T) -> T:
| ^
3 | v: T = value
4 | return v
|
info: Source
--> main.py:3:20
|
2 | def generic_func[T](value: T) -> T:
3 | v: T = value
| ^
4 | return v
|
");
}
#[test]
fn goto_declaration_type_parameter_class() {
let test = cursor_test(
"
class GenericClass[T]:
def __init__(self, value: T<CURSOR>):
self.value = value
",
);
assert_snapshot!(test.goto_declaration(), @r"
info[goto-declaration]: Declaration
--> main.py:2:32
|
2 | class GenericClass[T]:
| ^
3 | def __init__(self, value: T):
4 | self.value = value
|
info: Source
--> main.py:3:43
|
2 | class GenericClass[T]:
3 | def __init__(self, value: T):
| ^
4 | self.value = value
|
");
}
#[test]
fn goto_declaration_nested_scope_variable() {
let test = cursor_test(
"
x = \"outer\"
def outer_func():
def inner_func():
return x<CURSOR> # Should find outer x
return inner_func
",
);
assert_snapshot!(test.goto_declaration(), @r#"
info[goto-declaration]: Declaration
--> main.py:2:13
|
2 | x = "outer"
| ^
3 | def outer_func():
4 | def inner_func():
|
info: Source
--> main.py:5:28
|
3 | def outer_func():
4 | def inner_func():
5 | return x # Should find outer x
| ^
6 | return inner_func
|
"#);
}
#[test]
fn goto_declaration_class_scope_skipped() {
let test = cursor_test(
r#"
class A:
x = 1
def method(self):
def inner():
return <CURSOR>x # Should NOT find class variable x
return inner
"#,
);
// Should not find the class variable 'x' due to Python's scoping rules
assert_snapshot!(test.goto_declaration(), @"No goto target found");
}
#[test]
fn goto_declaration_import_simple() {
let test = CursorTest::builder()
.source(
"main.py",
"
import mymodule
print(mymod<CURSOR>ule.function())
",
)
.source(
"mymodule.py",
r#"
def function():
return "hello from mymodule"
variable = 42
"#,
)
.build();
assert_snapshot!(test.goto_declaration(), @r#"
info[goto-declaration]: Declaration
--> mymodule.py:1:1
|
1 |
| ^
2 | def function():
3 | return "hello from mymodule"
|
info: Source
--> main.py:3:7
|
2 | import mymodule
3 | print(mymodule.function())
| ^^^^^^^^
|
"#);
}
#[test]
fn goto_declaration_import_from() {
let test = CursorTest::builder()
.source(
"main.py",
"
from mymodule import my_function
print(my_func<CURSOR>tion())
",
)
.source(
"mymodule.py",
r#"
def my_function():
return "hello"
def other_function():
return "other"
"#,
)
.build();
assert_snapshot!(test.goto_declaration(), @r#"
info[goto-declaration]: Declaration
--> mymodule.py:2:5
|
2 | def my_function():
| ^^^^^^^^^^^
3 | return "hello"
|
info: Source
--> main.py:3:7
|
2 | from mymodule import my_function
3 | print(my_function())
| ^^^^^^^^^^^
|
"#);
}
#[test]
fn goto_declaration_import_as() {
let test = CursorTest::builder()
.source(
"main.py",
"
import mymodule.submodule as sub
print(<CURSOR>sub.helper())
",
)
.source(
"mymodule/__init__.py",
"
# Main module init
",
)
.source(
"mymodule/submodule.py",
r#"
FOO = 0
"#,
)
.build();
// Should find the submodule file itself
assert_snapshot!(test.goto_declaration(), @r#"
info[goto-declaration]: Declaration
--> mymodule/submodule.py:1:1
|
1 |
| ^
2 | FOO = 0
|
info: Source
--> main.py:3:7
|
2 | import mymodule.submodule as sub
3 | print(sub.helper())
| ^^^
|
"#);
}
#[test]
fn goto_declaration_from_import_as() {
let test = CursorTest::builder()
.source(
"main.py",
r#"
from utils import func as h
print(<CURSOR>h("test"))
"#,
)
.source(
"utils.py",
r#"
def func(arg):
return f"Processed: {arg}"
"#,
)
.build();
// Should resolve to the actual function definition, not the import statement
assert_snapshot!(test.goto_declaration(), @r#"
info[goto-declaration]: Declaration
--> utils.py:2:5
|
2 | def func(arg):
| ^^^^
3 | return f"Processed: {arg}"
|
info: Source
--> main.py:3:7
|
2 | from utils import func as h
3 | print(h("test"))
| ^
|
"#);
}
#[test]
fn goto_declaration_from_import_chain() {
let test = CursorTest::builder()
.source(
"main.py",
r#"
from intermediate import shared_function
print(shared_func<CURSOR>tion())
"#,
)
.source(
"intermediate.py",
r#"
# Re-export the function from the original module
from original import shared_function
"#,
)
.source(
"original.py",
r#"
def shared_function():
return "from original"
"#,
)
.build();
assert_snapshot!(test.goto_declaration(), @r#"
info[goto-declaration]: Declaration
--> original.py:2:5
|
2 | def shared_function():
| ^^^^^^^^^^^^^^^
3 | return "from original"
|
info: Source
--> main.py:3:7
|
2 | from intermediate import shared_function
3 | print(shared_function())
| ^^^^^^^^^^^^^^^
|
"#);
}
#[test]
fn goto_declaration_from_star_import() {
let test = CursorTest::builder()
.source(
"main.py",
r#"
from math_utils import *
result = add_n<CURSOR>umbers(5, 3)
"#,
)
.source(
"math_utils.py",
r#"
def add_numbers(a, b):
"""Add two numbers together."""
return a + b
def multiply_numbers(a, b):
"""Multiply two numbers together."""
return a * b
"#,
)
.build();
assert_snapshot!(test.goto_declaration(), @r#"
info[goto-declaration]: Declaration
--> math_utils.py:2:5
|
2 | def add_numbers(a, b):
| ^^^^^^^^^^^
3 | """Add two numbers together."""
4 | return a + b
|
info: Source
--> main.py:3:10
|
2 | from math_utils import *
3 | result = add_numbers(5, 3)
| ^^^^^^^^^^^
|
"#);
}
#[test]
fn goto_declaration_relative_import() {
let test = CursorTest::builder()
.source(
"package/main.py",
r#"
from .utils import helper_function
result = helper_func<CURSOR>tion("test")
"#,
)
.source(
"package/__init__.py",
r#"
# Package init file
"#,
)
.source(
"package/utils.py",
r#"
def helper_function(arg):
"""A helper function in utils module."""
return f"Processed: {arg}"
def another_helper():
"""Another helper function."""
pass
"#,
)
.build();
// Should resolve the relative import to find the actual function definition
assert_snapshot!(test.goto_declaration(), @r#"
info[goto-declaration]: Declaration
--> package/utils.py:2:5
|
2 | def helper_function(arg):
| ^^^^^^^^^^^^^^^
3 | """A helper function in utils module."""
4 | return f"Processed: {arg}"
|
info: Source
--> package/main.py:3:10
|
2 | from .utils import helper_function
3 | result = helper_function("test")
| ^^^^^^^^^^^^^^^
|
"#);
}
#[test]
fn goto_declaration_relative_star_import() {
let test = CursorTest::builder()
.source(
"package/main.py",
r#"
from .utils import *
result = helper_func<CURSOR>tion("test")
"#,
)
.source(
"package/__init__.py",
r#"
# Package init file
"#,
)
.source(
"package/utils.py",
r#"
def helper_function(arg):
"""A helper function in utils module."""
return f"Processed: {arg}"
def another_helper():
"""Another helper function."""
pass
"#,
)
.build();
assert_snapshot!(test.goto_declaration(), @r#"
info[goto-declaration]: Declaration
--> package/utils.py:2:5
|
2 | def helper_function(arg):
| ^^^^^^^^^^^^^^^
3 | """A helper function in utils module."""
4 | return f"Processed: {arg}"
|
info: Source
--> package/main.py:3:10
|
2 | from .utils import *
3 | result = helper_function("test")
| ^^^^^^^^^^^^^^^
|
"#);
}
#[test]
fn goto_declaration_builtin_type() {
let test = cursor_test(
r#"
x: i<CURSOR>nt = 42
"#,
);
// Test that we can navigate to builtin types, but don't snapshot the exact content
// since typeshed stubs can change frequently
let result = test.goto_declaration();
// Should not be "No goto target found" - we should find the builtin int type
assert!(
!result.contains("No goto target found"),
"Should find builtin int type"
);
assert!(
!result.contains("No declarations found"),
"Should find builtin int declarations"
);
// Should navigate to a stdlib file containing the int class
assert!(
result.contains("builtins.pyi"),
"Should navigate to builtins.pyi"
);
assert!(
result.contains("class int:"),
"Should find the int class definition"
);
assert!(
result.contains("info[goto-declaration]: Declaration"),
"Should be a goto-declaration result"
);
}
#[test]
fn goto_declaration_nonlocal_binding() {
let test = cursor_test(
r#"
def outer():
x = "outer_value"
def inner():
nonlocal x
x = "modified"
return x<CURSOR> # Should find the nonlocal x declaration in outer scope
return inner
"#,
);
// Should find the variable declaration in the outer scope, not the nonlocal statement
assert_snapshot!(test.goto_declaration(), @r#"
info[goto-declaration]: Declaration
--> main.py:3:5
|
2 | def outer():
3 | x = "outer_value"
| ^
4 |
5 | def inner():
|
info: Source
--> main.py:8:16
|
6 | nonlocal x
7 | x = "modified"
8 | return x # Should find the nonlocal x declaration in outer scope
| ^
9 |
10 | return inner
|
"#);
}
#[test]
fn goto_declaration_global_binding() {
let test = cursor_test(
r#"
global_var = "global_value"
def function():
global global_var
global_var = "modified"
return global_<CURSOR>var # Should find the global variable declaration
"#,
);
// Should find the global variable declaration, not the global statement
assert_snapshot!(test.goto_declaration(), @r#"
info[goto-declaration]: Declaration
--> main.py:2:1
|
2 | global_var = "global_value"
| ^^^^^^^^^^
3 |
4 | def function():
|
info: Source
--> main.py:7:12
|
5 | global global_var
6 | global_var = "modified"
7 | return global_var # Should find the global variable declaration
| ^^^^^^^^^^
|
"#);
}
#[test]
fn goto_declaration_generic_method_class_type() {
let test = cursor_test(
r#"
class MyClass:
ClassType = int
def generic_method[T](self, value: Class<CURSOR>Type) -> T:
return value
"#,
);
// Should find the ClassType defined in the class body, not fail to resolve
assert_snapshot!(test.goto_declaration(), @r"
info[goto-declaration]: Declaration
--> main.py:3:5
|
2 | class MyClass:
3 | ClassType = int
| ^^^^^^^^^
4 |
5 | def generic_method[T](self, value: ClassType) -> T:
|
info: Source
--> main.py:5:40
|
3 | ClassType = int
4 |
5 | def generic_method[T](self, value: ClassType) -> T:
| ^^^^^^^^^
6 | return value
|
");
}
impl CursorTest {
fn goto_declaration(&self) -> String {
let Some(targets) = goto_declaration(&self.db, self.cursor.file, self.cursor.offset)
else {
return "No goto target found".to_string();
};
if targets.is_empty() {
return "No declarations found".to_string();
}
let source = targets.range;
self.render_diagnostics(
targets
.into_iter()
.map(|target| GotoDeclarationDiagnostic::new(source, &target)),
)
}
}
struct GotoDeclarationDiagnostic {
source: FileRange,
target: FileRange,
}
impl GotoDeclarationDiagnostic {
fn new(source: FileRange, target: &NavigationTarget) -> Self {
Self {
source,
target: FileRange::new(target.file(), target.focus_range()),
}
}
}
impl IntoDiagnostic for GotoDeclarationDiagnostic {
fn into_diagnostic(self) -> Diagnostic {
let mut source = SubDiagnostic::new(Severity::Info, "Source");
source.annotate(Annotation::primary(
Span::from(self.source.file()).with_range(self.source.range()),
));
let mut main = Diagnostic::new(
DiagnosticId::Lint(LintName::of("goto-declaration")),
Severity::Info,
"Declaration".to_string(),
);
main.annotate(Annotation::primary(
Span::from(self.target.file()).with_range(self.target.range()),
));
main.sub(source);
main
}
}
}


@@ -1,31 +0,0 @@
use crate::goto::find_goto_target;
use crate::stub_mapping::StubMapper;
use crate::{Db, NavigationTargets, RangedValue};
use ruff_db::files::{File, FileRange};
use ruff_db::parsed::parsed_module;
use ruff_text_size::{Ranged, TextSize};
/// Navigate to the definition of a symbol.
///
/// A "definition" is the actual implementation of a symbol, potentially in a source file
/// rather than a stub file. This differs from "declaration" which may navigate to stub files.
/// When possible, this function will map from stub file declarations to their corresponding
/// source file implementations using the `StubMapper`.
pub fn goto_definition(
db: &dyn Db,
file: File,
offset: TextSize,
) -> Option<RangedValue<NavigationTargets>> {
let module = parsed_module(db, file).load(db);
let goto_target = find_goto_target(&module, offset)?;
// Create a StubMapper to map from stub files to source files
let stub_mapper = StubMapper::new(db);
let definition_targets = goto_target.get_definition_targets(file, db, Some(&stub_mapper))?;
Some(RangedValue {
range: FileRange::new(file, goto_target.range()),
value: definition_targets,
})
}


@@ -1,661 +0,0 @@
use crate::goto::find_goto_target;
use crate::{Db, HasNavigationTargets, NavigationTargets, RangedValue};
use ruff_db::files::{File, FileRange};
use ruff_db::parsed::parsed_module;
use ruff_text_size::{Ranged, TextSize};
use ty_python_semantic::SemanticModel;
pub fn goto_type_definition(
db: &dyn Db,
file: File,
offset: TextSize,
) -> Option<RangedValue<NavigationTargets>> {
let module = parsed_module(db, file).load(db);
let goto_target = find_goto_target(&module, offset)?;
let model = SemanticModel::new(db, file);
let ty = goto_target.inferred_type(&model)?;
tracing::debug!("Inferred type of covering node is {}", ty.display(db));
let navigation_targets = ty.navigation_targets(db);
Some(RangedValue {
range: FileRange::new(file, goto_target.range()),
value: navigation_targets,
})
}
#[cfg(test)]
mod tests {
use crate::tests::{CursorTest, IntoDiagnostic, cursor_test};
use crate::{NavigationTarget, goto_type_definition};
use insta::assert_snapshot;
use ruff_db::diagnostic::{
Annotation, Diagnostic, DiagnosticId, LintName, Severity, Span, SubDiagnostic,
};
use ruff_db::files::FileRange;
use ruff_text_size::Ranged;
#[test]
fn goto_type_of_expression_with_class_type() {
let test = cursor_test(
r#"
class Test: ...
a<CURSOR>b = Test()
"#,
);
assert_snapshot!(test.goto_type_definition(), @r"
info[goto-type-definition]: Type definition
--> main.py:2:19
|
2 | class Test: ...
| ^^^^
3 |
4 | ab = Test()
|
info: Source
--> main.py:4:13
|
2 | class Test: ...
3 |
4 | ab = Test()
| ^^
|
");
}
#[test]
fn goto_type_of_expression_with_function_type() {
let test = cursor_test(
r#"
def foo(a, b): ...
ab = foo
a<CURSOR>b
"#,
);
assert_snapshot!(test.goto_type_definition(), @r"
info[goto-type-definition]: Type definition
--> main.py:2:17
|
2 | def foo(a, b): ...
| ^^^
3 |
4 | ab = foo
|
info: Source
--> main.py:6:13
|
4 | ab = foo
5 |
6 | ab
| ^^
|
");
}
#[test]
fn goto_type_of_expression_with_union_type() {
let test = cursor_test(
r#"
def foo(a, b): ...
def bar(a, b): ...
if random.choice():
a = foo
else:
a = bar
a<CURSOR>
"#,
);
assert_snapshot!(test.goto_type_definition(), @r"
info[goto-type-definition]: Type definition
--> main.py:3:17
|
3 | def foo(a, b): ...
| ^^^
4 |
5 | def bar(a, b): ...
|
info: Source
--> main.py:12:13
|
10 | a = bar
11 |
12 | a
| ^
|
info[goto-type-definition]: Type definition
--> main.py:5:17
|
3 | def foo(a, b): ...
4 |
5 | def bar(a, b): ...
| ^^^
6 |
7 | if random.choice():
|
info: Source
--> main.py:12:13
|
10 | a = bar
11 |
12 | a
| ^
|
");
}
#[test]
fn goto_type_of_expression_with_module() {
let mut test = cursor_test(
r#"
import lib
lib<CURSOR>
"#,
);
test.write_file("lib.py", "a = 10").unwrap();
assert_snapshot!(test.goto_type_definition(), @r"
info[goto-type-definition]: Type definition
--> lib.py:1:1
|
1 | a = 10
| ^^^^^^
|
info: Source
--> main.py:4:13
|
2 | import lib
3 |
4 | lib
| ^^^
|
");
}
#[test]
fn goto_type_of_expression_with_literal_type() {
let test = cursor_test(
r#"
a: str = "test"
a<CURSOR>
"#,
);
assert_snapshot!(test.goto_type_definition(), @r#"
info[goto-type-definition]: Type definition
--> stdlib/builtins.pyi:892:7
|
890 | def __getitem__(self, key: int, /) -> str | int | None: ...
891 |
892 | class str(Sequence[str]):
| ^^^
893 | """str(object='') -> str
894 | str(bytes_or_buffer[, encoding[, errors]]) -> str
|
info: Source
--> main.py:4:13
|
2 | a: str = "test"
3 |
4 | a
| ^
|
"#);
}
#[test]
fn goto_type_of_expression_with_literal_node() {
let test = cursor_test(
r#"
a: str = "te<CURSOR>st"
"#,
);
assert_snapshot!(test.goto_type_definition(), @r#"
info[goto-type-definition]: Type definition
--> stdlib/builtins.pyi:892:7
|
890 | def __getitem__(self, key: int, /) -> str | int | None: ...
891 |
892 | class str(Sequence[str]):
| ^^^
893 | """str(object='') -> str
894 | str(bytes_or_buffer[, encoding[, errors]]) -> str
|
info: Source
--> main.py:2:22
|
2 | a: str = "test"
| ^^^^^^
|
"#);
}
#[test]
fn goto_type_of_expression_with_type_var_type() {
let test = cursor_test(
r#"
type Alias[T: int = bool] = list[T<CURSOR>]
"#,
);
assert_snapshot!(test.goto_type_definition(), @r"
info[goto-type-definition]: Type definition
--> main.py:2:24
|
2 | type Alias[T: int = bool] = list[T]
| ^
|
info: Source
--> main.py:2:46
|
2 | type Alias[T: int = bool] = list[T]
| ^
|
");
}
#[test]
fn goto_type_of_expression_with_type_param_spec() {
let test = cursor_test(
r#"
type Alias[**P = [int, str]] = Callable[P<CURSOR>, int]
"#,
);
// TODO: Goto type definition currently doesn't work for type param specs
// because the inference doesn't support them yet.
// This snapshot should show a single target pointing to `P`
assert_snapshot!(test.goto_type_definition(), @"No type definitions found");
}
#[test]
fn goto_type_of_expression_with_type_var_tuple() {
let test = cursor_test(
r#"
type Alias[*Ts = ()] = tuple[*Ts<CURSOR>]
"#,
);
// TODO: Goto type definition currently doesn't work for type var tuples
// because the inference doesn't support them yet.
// This snapshot should show a single target pointing to `Ts`
assert_snapshot!(test.goto_type_definition(), @"No type definitions found");
}
#[test]
fn goto_type_of_bare_type_alias_type() {
let test = cursor_test(
r#"
from typing_extensions import TypeAliasType
Alias = TypeAliasType("Alias", tuple[int, int])
Alias<CURSOR>
"#,
);
assert_snapshot!(test.goto_type_definition(), @r#"
info[goto-type-definition]: Type definition
--> main.py:4:13
|
2 | from typing_extensions import TypeAliasType
3 |
4 | Alias = TypeAliasType("Alias", tuple[int, int])
| ^^^^^
5 |
6 | Alias
|
info: Source
--> main.py:6:13
|
4 | Alias = TypeAliasType("Alias", tuple[int, int])
5 |
6 | Alias
| ^^^^^
|
"#);
}
#[test]
fn goto_type_on_keyword_argument() {
let test = cursor_test(
r#"
def test(a: str): ...
test(a<CURSOR>= "123")
"#,
);
assert_snapshot!(test.goto_type_definition(), @r#"
info[goto-type-definition]: Type definition
--> stdlib/builtins.pyi:892:7
|
890 | def __getitem__(self, key: int, /) -> str | int | None: ...
891 |
892 | class str(Sequence[str]):
| ^^^
893 | """str(object='') -> str
894 | str(bytes_or_buffer[, encoding[, errors]]) -> str
|
info: Source
--> main.py:4:18
|
2 | def test(a: str): ...
3 |
4 | test(a= "123")
| ^
|
"#);
}
#[test]
fn goto_type_on_incorrectly_typed_keyword_argument() {
let test = cursor_test(
r#"
def test(a: str): ...
test(a<CURSOR>= 123)
"#,
);
// TODO: This should jump to `str` and not `int` because
// the keyword is typed as a string. It's only the passed argument that
// is an int. Navigating to `str` would match pyright's behavior.
assert_snapshot!(test.goto_type_definition(), @r#"
info[goto-type-definition]: Type definition
--> stdlib/builtins.pyi:338:7
|
336 | _LiteralInteger = _PositiveInteger | _NegativeInteger | Literal[0] # noqa: Y026 # TODO: Use TypeAlias once mypy bugs are fixed
337 |
338 | class int:
| ^^^
339 | """int([x]) -> integer
340 | int(x, base=10) -> integer
|
info: Source
--> main.py:4:18
|
2 | def test(a: str): ...
3 |
4 | test(a= 123)
| ^
|
"#);
}
#[test]
fn goto_type_on_kwargs() {
let test = cursor_test(
r#"
def f(name: str): ...
kwargs = { "name": "test"}
f(**kwargs<CURSOR>)
"#,
);
assert_snapshot!(test.goto_type_definition(), @r#"
info[goto-type-definition]: Type definition
--> stdlib/builtins.pyi:2892:7
|
2890 | """See PEP 585"""
2891 |
2892 | class dict(MutableMapping[_KT, _VT]):
| ^^^^
2893 | """dict() -> new empty dictionary
2894 | dict(mapping) -> new dictionary initialized from a mapping object's
|
info: Source
--> main.py:6:5
|
4 | kwargs = { "name": "test"}
5 |
6 | f(**kwargs)
| ^^^^^^
|
"#);
}
#[test]
fn goto_type_of_expression_with_builtin() {
let test = cursor_test(
r#"
def foo(a: str):
a<CURSOR>
"#,
);
assert_snapshot!(test.goto_type_definition(), @r#"
info[goto-type-definition]: Type definition
--> stdlib/builtins.pyi:892:7
|
890 | def __getitem__(self, key: int, /) -> str | int | None: ...
891 |
892 | class str(Sequence[str]):
| ^^^
893 | """str(object='') -> str
894 | str(bytes_or_buffer[, encoding[, errors]]) -> str
|
info: Source
--> main.py:3:17
|
2 | def foo(a: str):
3 | a
| ^
|
"#);
}
#[test]
fn goto_type_definition_cursor_between_object_and_attribute() {
let test = cursor_test(
r#"
class X:
def foo(a, b): ...
x = X()
x<CURSOR>.foo()
"#,
);
assert_snapshot!(test.goto_type_definition(), @r"
info[goto-type-definition]: Type definition
--> main.py:2:19
|
2 | class X:
| ^
3 | def foo(a, b): ...
|
info: Source
--> main.py:7:13
|
5 | x = X()
6 |
7 | x.foo()
| ^
|
");
}
#[test]
fn goto_between_call_arguments() {
let test = cursor_test(
r#"
def foo(a, b): ...
foo<CURSOR>()
"#,
);
assert_snapshot!(test.goto_type_definition(), @r"
info[goto-type-definition]: Type definition
--> main.py:2:17
|
2 | def foo(a, b): ...
| ^^^
3 |
4 | foo()
|
info: Source
--> main.py:4:13
|
2 | def foo(a, b): ...
3 |
4 | foo()
| ^^^
|
");
}
#[test]
fn goto_type_narrowing() {
let test = cursor_test(
r#"
def foo(a: str | None, b):
if a is not None:
print(a<CURSOR>)
"#,
);
assert_snapshot!(test.goto_type_definition(), @r#"
info[goto-type-definition]: Type definition
--> stdlib/builtins.pyi:892:7
|
890 | def __getitem__(self, key: int, /) -> str | int | None: ...
891 |
892 | class str(Sequence[str]):
| ^^^
893 | """str(object='') -> str
894 | str(bytes_or_buffer[, encoding[, errors]]) -> str
|
info: Source
--> main.py:4:27
|
2 | def foo(a: str | None, b):
3 | if a is not None:
4 | print(a)
| ^
|
"#);
}
#[test]
fn goto_type_none() {
let test = cursor_test(
r#"
def foo(a: str | None, b):
a<CURSOR>
"#,
);
assert_snapshot!(test.goto_type_definition(), @r#"
info[goto-type-definition]: Type definition
--> stdlib/types.pyi:922:11
|
920 | if sys.version_info >= (3, 10):
921 | @final
922 | class NoneType:
| ^^^^^^^^
923 | """The type of the None singleton."""
|
info: Source
--> main.py:3:17
|
2 | def foo(a: str | None, b):
3 | a
| ^
|
info[goto-type-definition]: Type definition
--> stdlib/builtins.pyi:892:7
|
890 | def __getitem__(self, key: int, /) -> str | int | None: ...
891 |
892 | class str(Sequence[str]):
| ^^^
893 | """str(object='') -> str
894 | str(bytes_or_buffer[, encoding[, errors]]) -> str
|
info: Source
--> main.py:3:17
|
2 | def foo(a: str | None, b):
3 | a
| ^
|
"#);
}
impl CursorTest {
fn goto_type_definition(&self) -> String {
let Some(targets) =
goto_type_definition(&self.db, self.cursor.file, self.cursor.offset)
else {
return "No goto target found".to_string();
};
if targets.is_empty() {
return "No type definitions found".to_string();
}
let source = targets.range;
self.render_diagnostics(
targets
.into_iter()
.map(|target| GotoTypeDefinitionDiagnostic::new(source, &target)),
)
}
}
struct GotoTypeDefinitionDiagnostic {
source: FileRange,
target: FileRange,
}
impl GotoTypeDefinitionDiagnostic {
fn new(source: FileRange, target: &NavigationTarget) -> Self {
Self {
source,
target: FileRange::new(target.file(), target.focus_range()),
}
}
}
impl IntoDiagnostic for GotoTypeDefinitionDiagnostic {
fn into_diagnostic(self) -> Diagnostic {
let mut source = SubDiagnostic::new(Severity::Info, "Source");
source.annotate(Annotation::primary(
Span::from(self.source.file()).with_range(self.source.range()),
));
let mut main = Diagnostic::new(
DiagnosticId::Lint(LintName::of("goto-type-definition")),
Severity::Info,
"Type definition".to_string(),
);
main.annotate(Annotation::primary(
Span::from(self.target.file()).with_range(self.target.range()),
));
main.sub(source);
main
}
}
}


@@ -3,20 +3,16 @@ mod db;
mod docstring;
mod find_node;
mod goto;
mod goto_declaration;
mod goto_definition;
mod goto_type_definition;
mod hover;
mod inlay_hints;
mod markup;
mod semantic_tokens;
mod signature_help;
mod stub_mapping;
pub use completion::completion;
pub use db::Db;
pub use docstring::get_parameter_documentation;
pub use goto::{goto_declaration, goto_definition, goto_type_definition};
pub use goto::goto_type_definition;
pub use hover::hover;
pub use inlay_hints::inlay_hints;
pub use markup::MarkupKind;


@@ -1,47 +0,0 @@
use ty_python_semantic::ResolvedDefinition;
/// Maps `ResolvedDefinitions` from stub files to corresponding definitions in source files.
///
/// This mapper is used to implement "Go To Definition" functionality that navigates from
/// stub file declarations to their actual implementations in source files. It also allows
/// other language server providers (like hover, completion, and signature help) to find
/// docstrings for functions that resolve to stubs.
pub(crate) struct StubMapper<'db> {
#[allow(dead_code)] // Will be used when implementation is added
db: &'db dyn crate::Db,
}
impl<'db> StubMapper<'db> {
#[allow(dead_code)] // Will be used in the future
pub(crate) fn new(db: &'db dyn crate::Db) -> Self {
Self { db }
}
/// Map a `ResolvedDefinition` from a stub file to corresponding definitions in source files.
///
/// If the definition is in a stub file and a corresponding source file definition exists,
/// returns the source file definition(s). Otherwise, returns the original definition.
#[allow(dead_code)] // Will be used when implementation is added
#[allow(clippy::unused_self)] // Will use self when implementation is added
pub(crate) fn map_definition(
&self,
def: ResolvedDefinition<'db>,
) -> Vec<ResolvedDefinition<'db>> {
// TODO: Implement stub-to-source mapping logic
// For now, just return the original definition
vec![def]
}
/// Map multiple `ResolvedDefinitions`, applying stub-to-source mapping to each.
///
/// This is a convenience method that applies `map_definition` to each element
/// in the input vector and flattens the results.
pub(crate) fn map_definitions(
&self,
defs: Vec<ResolvedDefinition<'db>>,
) -> Vec<ResolvedDefinition<'db>> {
defs.into_iter()
.flat_map(|def| self.map_definition(def))
.collect()
}
}


@@ -469,14 +469,6 @@ impl Project {
self.set_file_set(db).to(IndexedFiles::lazy());
}
}
/// Check if the project's settings have any issues
pub fn check_settings(&self, db: &dyn Db) -> Vec<Diagnostic> {
self.settings_diagnostics(db)
.iter()
.map(OptionDiagnostic::to_diagnostic)
.collect()
}
}
#[salsa::tracked(returns(deref), heap_size=get_size2::GetSize::get_heap_size)]


@@ -35,7 +35,6 @@ indexmap = { workspace = true }
itertools = { workspace = true }
ordermap = { workspace = true }
salsa = { workspace = true, features = ["compact_str"] }
thin-vec = { workspace = true }
thiserror = { workspace = true }
tracing = { workspace = true }
rustc-hash = { workspace = true }


@@ -127,80 +127,6 @@ def f(x: int | str):
return x
```
### In `if TYPE_CHECKING` block
Inside an `if TYPE_CHECKING` block, we allow "stub" style function definitions with empty bodies,
since these functions will never actually be called.
```py
from typing import TYPE_CHECKING
if TYPE_CHECKING:
def f() -> int: ...
else:
def f() -> str:
return "hello"
reveal_type(f) # revealed: def f() -> int
if not TYPE_CHECKING:
...
elif True:
def g() -> str: ...
else:
def h() -> str: ...
if not TYPE_CHECKING:
def i() -> int:
return 1
else:
def i() -> str: ...
reveal_type(i) # revealed: def i() -> str
if False:
...
elif TYPE_CHECKING:
def j() -> str: ...
else:
def j_() -> str: ... # error: [invalid-return-type]
if False:
...
elif not TYPE_CHECKING:
def k_() -> str: ... # error: [invalid-return-type]
else:
def k() -> str: ...
class Foo:
if TYPE_CHECKING:
def f(self) -> int: ...
if TYPE_CHECKING:
class Bar:
def f(self) -> int: ...
def get_bool() -> bool:
return True
if TYPE_CHECKING:
if get_bool():
def l() -> str: ...
if get_bool():
if TYPE_CHECKING:
def m() -> str: ...
if TYPE_CHECKING:
if not TYPE_CHECKING:
def n() -> str: ...
```
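At runtime `TYPE_CHECKING` is `False`, so only the `else` branches of the blocks above actually execute, while type checkers analyze the `if` branches; a quick standalone check of that split:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only type checkers take this branch; the stub body never runs
    def f() -> int: ...
else:
    def f() -> str:
        return "hello"

# At runtime TYPE_CHECKING is False, so the else-branch definition wins
assert f() == "hello"
```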
## Conditional return type
```py


@@ -31,35 +31,17 @@ def f():
reveal_type(x) # revealed: Unknown | Literal[1]
```
## Reads respect annotation-only declarations
## Skips annotation-only assignment
```py
def f():
x: int = 1
x = 1
def g():
# TODO: This example should actually be an unbound variable error. However, to avoid false
# positives, we'd need to analyze `nonlocal x` statements in other inner functions.
x: str
# it's pretty weird to have an annotated assignment in a function where the
# name is otherwise not defined; maybe this should be an error?
x: int
def h():
reveal_type(x) # revealed: str
```
## Reads terminate at the `global` keyword in an enclosing scope, even if there's no binding in that scope
_Unlike_ variables that are explicitly declared `nonlocal` (below), implicitly nonlocal ("free")
reads can come from a variable that's declared `global` in an enclosing scope. It doesn't matter
whether the variable is bound in that scope:
```py
x: int = 1
def f():
x: str = "hello"
def g():
global x
def h():
# allowed: this loads the global `x` variable due to the `global` declaration in the immediate enclosing scope
y: int = x
reveal_type(x) # revealed: Unknown | Literal[1]
```
## The `nonlocal` keyword
@@ -247,7 +229,7 @@ def f():
nonlocal x # error: [invalid-syntax] "no binding for nonlocal `x` found"
```
## Assigning to a `nonlocal` respects the declared type from its defining scope, even without a binding in that scope
## `nonlocal` bindings respect declared types from the defining scope, even without a binding
```py
def f():
@@ -282,8 +264,8 @@ def f1():
@staticmethod
def f3():
# This scope declares `x` nonlocal, shadows `y` without a type declaration, and
# declares `z` global.
# This scope declares `x` nonlocal and `y` as global, and it shadows `z` without
# giving it a type declaration.
nonlocal x
x = 4
y = 5


@@ -17,8 +17,6 @@ pub use program::{
pub use python_platform::PythonPlatform;
pub use semantic_model::{Completion, CompletionKind, HasType, NameKind, SemanticModel};
pub use site_packages::{PythonEnvironment, SitePackagesPaths, SysPrefixPathOrigin};
pub use types::definitions_for_name;
pub use types::ide_support::ResolvedDefinition;
pub use util::diagnostics::add_inferred_python_version_hint_to_diagnostic;
pub mod ast_node_ref;


@@ -26,7 +26,6 @@ use crate::semantic_index::place::{
};
use crate::semantic_index::use_def::{EagerSnapshotKey, ScopedEagerSnapshotId, UseDefMap};
use crate::semantic_model::HasTrackedScope;
use crate::util::get_size::ThinVecSized;
use crate::util::get_size::untracked_arc_size;
pub mod ast_ids;
@@ -239,7 +238,7 @@ pub(crate) struct SemanticIndex<'db> {
eager_snapshots: FxHashMap<EagerSnapshotKey, ScopedEagerSnapshotId>,
/// List of all semantic syntax errors in this file.
semantic_syntax_errors: ThinVecSized<SemanticSyntaxError>,
semantic_syntax_errors: Vec<SemanticSyntaxError>,
/// Set of all generator functions in this file.
generator_functions: FxHashSet<FileScopeId>,
@@ -389,24 +388,6 @@ impl<'db> SemanticIndex<'db> {
AncestorsIter::new(self, scope)
}
/// Returns an iterator over ancestors of `scope` that are visible for name resolution,
/// starting with `scope` itself. This follows Python's lexical scoping rules where
/// class scopes are skipped during name resolution (except for the starting scope
/// if it happens to be a class scope).
///
/// For example, in this code:
/// ```python
/// x = 1
/// class A:
/// x = 2
/// def method(self):
/// print(x) # Refers to global x=1, not class x=2
/// ```
/// The `method` function can see the global scope but not the class scope.
pub(crate) fn visible_ancestor_scopes(&self, scope: FileScopeId) -> VisibleAncestorsIter {
VisibleAncestorsIter::new(self, scope)
}
/// Returns the [`definition::Definition`] salsa ingredient(s) for `definition_key`.
///
/// There will only ever be >1 `Definition` associated with a `definition_key`
@@ -572,53 +553,6 @@ impl<'a> Iterator for AncestorsIter<'a> {
impl FusedIterator for AncestorsIter<'_> {}
pub struct VisibleAncestorsIter<'a> {
inner: AncestorsIter<'a>,
starting_scope_kind: ScopeKind,
yielded_count: usize,
}
impl<'a> VisibleAncestorsIter<'a> {
fn new(module_table: &'a SemanticIndex, start: FileScopeId) -> Self {
let starting_scope = &module_table.scopes[start];
Self {
inner: AncestorsIter::new(module_table, start),
starting_scope_kind: starting_scope.kind(),
yielded_count: 0,
}
}
}
impl<'a> Iterator for VisibleAncestorsIter<'a> {
type Item = (FileScopeId, &'a Scope);
fn next(&mut self) -> Option<Self::Item> {
loop {
let (scope_id, scope) = self.inner.next()?;
self.yielded_count += 1;
// Always return the first scope (the starting scope)
if self.yielded_count == 1 {
return Some((scope_id, scope));
}
// Skip class scopes for subsequent scopes (following Python's lexical scoping rules)
// Exception: type parameter scopes can see names defined in an immediately-enclosing class scope
if scope.kind() == ScopeKind::Class {
// Allow type parameter scopes to see their immediately-enclosing class scope exactly once
if self.starting_scope_kind.is_type_parameter() && self.yielded_count == 2 {
return Some((scope_id, scope));
}
continue;
}
return Some((scope_id, scope));
}
}
}
impl FusedIterator for VisibleAncestorsIter<'_> {}
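The class-scope-skipping rule this iterator encoded matches Python's runtime name resolution; a minimal Python check of the behavior described in the removed doc comment:

```python
x = 1

class A:
    x = 2

    def method(self):
        # Name resolution in a function skips the enclosing class scope,
        # so this reads the module-level `x`, not `A.x`
        return x

assert A().method() == 1
```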
pub struct DescendantsIter<'a> {
next_id: FileScopeId,
descendants: std::slice::Iter<'a, Scope>,


@@ -90,8 +90,6 @@ pub(super) struct SemanticIndexBuilder<'db, 'ast> {
/// Flags about the file's global scope
has_future_annotations: bool,
/// Whether we are currently visiting an `if TYPE_CHECKING` block.
in_type_checking_block: bool,
// Used for checking semantic syntax errors
python_version: PythonVersion,
@@ -115,7 +113,7 @@ pub(super) struct SemanticIndexBuilder<'db, 'ast> {
generator_functions: FxHashSet<FileScopeId>,
eager_snapshots: FxHashMap<EagerSnapshotKey, ScopedEagerSnapshotId>,
/// Errors collected by the `semantic_checker`.
semantic_syntax_errors: RefCell<thin_vec::ThinVec<SemanticSyntaxError>>,
semantic_syntax_errors: RefCell<Vec<SemanticSyntaxError>>,
}
impl<'db, 'ast> SemanticIndexBuilder<'db, 'ast> {
@@ -132,7 +130,6 @@ impl<'db, 'ast> SemanticIndexBuilder<'db, 'ast> {
try_node_context_stack_manager: TryNodeContextStackManager::default(),
has_future_annotations: false,
in_type_checking_block: false,
scopes: IndexVec::new(),
place_tables: IndexVec::new(),
@@ -251,7 +248,6 @@ impl<'db, 'ast> SemanticIndexBuilder<'db, 'ast> {
node_with_kind,
children_start..children_start,
reachability,
self.in_type_checking_block,
);
let is_class_scope = scope.kind().is_class();
self.try_node_context_stack_manager.enter_nested_scope();
@@ -723,7 +719,7 @@ impl<'db, 'ast> SemanticIndexBuilder<'db, 'ast> {
// since it's the pattern that introduces any constraints, not the body.) Ideally, that
// standalone expression would wrap the match arm's pattern as a whole. But a standalone
// expression can currently only wrap an ast::Expr, which patterns are not. So, we need to
// choose an Expr that can "stand in" for the pattern, which we can wrap in a standalone
// choose an Expr that can stand in for the pattern, which we can wrap in a standalone
// expression.
//
// See the comment in TypeInferenceBuilder::infer_match_pattern for more details.
@@ -1063,7 +1059,7 @@ impl<'db, 'ast> SemanticIndexBuilder<'db, 'ast> {
imported_modules: Arc::new(self.imported_modules),
has_future_annotations: self.has_future_annotations,
eager_snapshots: self.eager_snapshots,
semantic_syntax_errors: self.semantic_syntax_errors.into_inner().into(),
semantic_syntax_errors: self.semantic_syntax_errors.into_inner(),
generator_functions: self.generator_functions,
}
}
@@ -1502,17 +1498,6 @@ impl<'ast> Visitor<'ast> for SemanticIndexBuilder<'_, 'ast> {
let mut last_predicate = self.record_expression_narrowing_constraint(&node.test);
let mut last_reachability_constraint =
self.record_reachability_constraint(last_predicate);
let is_outer_block_in_type_checking = self.in_type_checking_block;
let if_block_in_type_checking = is_if_type_checking(&node.test);
// Track if we're in a chain that started with "not TYPE_CHECKING"
let mut is_in_not_type_checking_chain = is_if_not_type_checking(&node.test);
self.in_type_checking_block =
if_block_in_type_checking || is_outer_block_in_type_checking;
self.visit_body(&node.body);
let mut post_clauses: Vec<FlowSnapshot> = vec![];
@@ -1531,7 +1516,6 @@ impl<'ast> Visitor<'ast> for SemanticIndexBuilder<'_, 'ast> {
// if there's no `else` branch, we should add a no-op `else` branch
Some((None, Default::default()))
});
for (clause_test, clause_body) in elif_else_clauses {
// snapshot after every block except the last; the last one will just become
// the state that we merge the other snapshots into
@@ -1554,34 +1538,12 @@ impl<'ast> Visitor<'ast> for SemanticIndexBuilder<'_, 'ast> {
self.record_reachability_constraint(last_predicate);
}
// Determine if this clause is in type checking context
let clause_in_type_checking = if let Some(elif_test) = clause_test {
if is_if_type_checking(elif_test) {
// This block has "TYPE_CHECKING" condition
true
} else if is_if_not_type_checking(elif_test) {
// This block has "not TYPE_CHECKING" condition so we update the chain state for future blocks
is_in_not_type_checking_chain = true;
false
} else {
// This block has some other condition
// It's in type checking only if we're in a "not TYPE_CHECKING" chain
is_in_not_type_checking_chain
}
} else {
is_in_not_type_checking_chain
};
self.in_type_checking_block = clause_in_type_checking;
self.visit_body(clause_body);
}
for post_clause_state in post_clauses {
self.flow_merge(post_clause_state);
}
self.in_type_checking_block = is_outer_block_in_type_checking;
}
ast::Stmt::While(ast::StmtWhile {
test,
@@ -2749,18 +2711,3 @@ impl ExpressionsScopeMapBuilder {
ExpressionsScopeMap(interval_map.into_boxed_slice())
}
}
/// Returns whether the expression is a `TYPE_CHECKING` expression.
fn is_if_type_checking(expr: &ast::Expr) -> bool {
matches!(expr, ast::Expr::Name(ast::ExprName { id, .. }) if id == "TYPE_CHECKING")
}
/// Returns whether the expression is a `not TYPE_CHECKING` expression.
fn is_if_not_type_checking(expr: &ast::Expr) -> bool {
matches!(expr, ast::Expr::UnaryOp(ast::ExprUnaryOp { op, operand, .. }) if *op == ruff_python_ast::UnaryOp::Not
&& matches!(
&**operand,
ast::Expr::Name(ast::ExprName { id, .. }) if id == "TYPE_CHECKING"
)
)
}
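For comparison, the matching logic of the removed Rust helpers can be sketched with Python's own `ast` module (illustrative helper names mirroring the Rust ones, not part of any crate or library API):

```python
import ast

def is_if_type_checking(expr: ast.expr) -> bool:
    # Matches a bare `TYPE_CHECKING` name, like the Rust matcher above
    return isinstance(expr, ast.Name) and expr.id == "TYPE_CHECKING"

def is_if_not_type_checking(expr: ast.expr) -> bool:
    # Matches `not TYPE_CHECKING`
    return (
        isinstance(expr, ast.UnaryOp)
        and isinstance(expr.op, ast.Not)
        and is_if_type_checking(expr.operand)
    )

cond = ast.parse("if not TYPE_CHECKING: pass").body[0].test
assert is_if_not_type_checking(cond)
```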


@@ -23,7 +23,7 @@ use crate::unpack::{Unpack, UnpackPosition};
#[salsa::tracked(debug)]
pub struct Definition<'db> {
/// The file in which the definition occurs.
pub file: File,
pub(crate) file: File,
/// The scope in which the definition occurs.
pub(crate) file_scope: FileScopeId,


@@ -10,13 +10,13 @@ use ruff_index::{IndexVec, newtype_index};
use ruff_python_ast as ast;
use ruff_python_ast::name::Name;
use rustc_hash::FxHasher;
use smallvec::{SmallVec, smallvec};
use crate::Db;
use crate::ast_node_ref::AstNodeRef;
use crate::node_key::NodeKey;
use crate::semantic_index::reachability_constraints::ScopedReachabilityConstraintId;
use crate::semantic_index::{PlaceSet, SemanticIndex, semantic_index};
use crate::util::get_size::ThinVecSized;
#[derive(Debug, Clone, PartialEq, Eq, Hash, get_size2::GetSize)]
pub(crate) enum PlaceExprSubSegment {
@@ -41,7 +41,7 @@ impl PlaceExprSubSegment {
#[derive(Eq, PartialEq, Debug, get_size2::GetSize)]
pub struct PlaceExpr {
root_name: Name,
sub_segments: ThinVecSized<PlaceExprSubSegment>,
sub_segments: SmallVec<[PlaceExprSubSegment; 1]>,
}
impl std::fmt::Display for PlaceExpr {
@@ -165,7 +165,7 @@ impl PlaceExpr {
pub(crate) fn name(name: Name) -> Self {
Self {
root_name: name,
sub_segments: ThinVecSized::new(),
sub_segments: smallvec![],
}
}
@@ -230,9 +230,7 @@ impl std::fmt::Display for PlaceExprWithFlags {
}
impl PlaceExprWithFlags {
pub(crate) fn new(mut expr: PlaceExpr) -> Self {
expr.sub_segments.shrink_to_fit();
pub(crate) fn new(expr: PlaceExpr) -> Self {
PlaceExprWithFlags {
expr,
flags: PlaceFlags::empty(),
@@ -527,20 +525,10 @@ impl FileScopeId {
#[derive(Debug, salsa::Update, get_size2::GetSize)]
pub struct Scope {
/// The parent scope, if any.
parent: Option<FileScopeId>,
/// The node that introduces this scope.
node: NodeWithScopeKind,
/// The range of [`FileScopeId`]s that are descendants of this scope.
descendants: Range<FileScopeId>,
/// The constraint that determines the reachability of this scope.
reachability: ScopedReachabilityConstraintId,
/// Whether this scope is defined inside an `if TYPE_CHECKING:` block.
in_type_checking_block: bool,
}
impl Scope {
@@ -549,14 +537,12 @@ impl Scope {
node: NodeWithScopeKind,
descendants: Range<FileScopeId>,
reachability: ScopedReachabilityConstraintId,
in_type_checking_block: bool,
) -> Self {
Scope {
parent,
node,
descendants,
reachability,
in_type_checking_block,
}
}
@@ -587,10 +573,6 @@ impl Scope {
pub(crate) fn reachability(&self) -> ScopedReachabilityConstraintId {
self.reachability
}
pub(crate) fn in_type_checking_block(&self) -> bool {
self.in_type_checking_block
}
}
#[derive(Copy, Clone, Debug, PartialEq, Eq)]


@@ -71,12 +71,16 @@ impl ScopedDefinitionId {
}
}
/// We can keep this many live bindings or declarations inline per place at a given time; any
/// more will spill to the heap.
const INLINE_DEFINITIONS_PER_PLACE: usize = 4;
/// Live declarations for a single place at some point in control flow, with their
/// corresponding reachability constraints.
#[derive(Clone, Debug, Default, PartialEq, Eq, salsa::Update, get_size2::GetSize)]
pub(super) struct Declarations {
/// A list of live declarations for this place, sorted by their `ScopedDefinitionId`
live_declarations: SmallVec<[LiveDeclaration; 2]>,
live_declarations: SmallVec<[LiveDeclaration; INLINE_DEFINITIONS_PER_PLACE]>,
}
/// One of the live declarations for a single place at some point in control flow.
@@ -195,7 +199,7 @@ pub(super) struct Bindings {
/// "unbound" binding.
unbound_narrowing_constraint: Option<ScopedNarrowingConstraint>,
/// A list of live bindings for this place, sorted by their `ScopedDefinitionId`
live_bindings: SmallVec<[LiveBinding; 2]>,
live_bindings: SmallVec<[LiveBinding; INLINE_DEFINITIONS_PER_PLACE]>,
}
impl Bindings {

View File

@@ -49,7 +49,6 @@ use crate::types::generics::{
};
pub use crate::types::ide_support::{
CallSignatureDetails, Member, all_members, call_signature_details, definition_kind_for_name,
definitions_for_name,
};
use crate::types::infer::infer_unpack_types;
use crate::types::mro::{Mro, MroError, MroIterator};
@@ -93,7 +92,7 @@ mod definition;
#[cfg(test)]
mod property_tests;
pub fn check_types(db: &dyn Db, file: File) -> thin_vec::ThinVec<Diagnostic> {
pub fn check_types(db: &dyn Db, file: File) -> Vec<Diagnostic> {
let _span = tracing::trace_span!("check_types", ?file).entered();
tracing::debug!("Checking file '{path}'", path = file.path(db));

View File

@@ -18,7 +18,6 @@ use crate::types::string_annotation::{
use crate::types::tuple::TupleType;
use crate::types::{SpecialFormType, Type, protocol_class::ProtocolClassLiteral};
use crate::util::diagnostics::format_enumeration;
use crate::util::get_size::ThinVecSized;
use crate::{Db, FxIndexMap, Module, ModuleName, Program, declare_lint};
use itertools::Itertools;
use ruff_db::diagnostic::{Annotation, Diagnostic, Severity, SubDiagnostic};
@@ -1615,7 +1614,7 @@ declare_lint! {
/// A collection of type check diagnostics.
#[derive(Default, Eq, PartialEq, get_size2::GetSize)]
pub struct TypeCheckDiagnostics {
diagnostics: ThinVecSized<Diagnostic>,
diagnostics: Vec<Diagnostic>,
used_suppressions: FxHashSet<FileSuppressionId>,
}
@@ -1650,8 +1649,8 @@ impl TypeCheckDiagnostics {
self.diagnostics.shrink_to_fit();
}
pub(crate) fn into_vec(self) -> thin_vec::ThinVec<Diagnostic> {
self.diagnostics.into_inner()
pub(crate) fn into_vec(self) -> Vec<Diagnostic> {
self.diagnostics
}
pub fn iter(&self) -> std::slice::Iter<'_, Diagnostic> {
@@ -1667,7 +1666,7 @@ impl std::fmt::Debug for TypeCheckDiagnostics {
impl IntoIterator for TypeCheckDiagnostics {
type Item = Diagnostic;
type IntoIter = thin_vec::IntoIter<Diagnostic>;
type IntoIter = std::vec::IntoIter<Diagnostic>;
fn into_iter(self) -> Self::IntoIter {
self.diagnostics.into_iter()

View File

@@ -1,8 +1,6 @@
use std::cmp::Ordering;
use crate::place::{
Place, builtins_module_scope, imported_symbol, place_from_bindings, place_from_declarations,
};
use crate::place::{Place, imported_symbol, place_from_bindings, place_from_declarations};
use crate::semantic_index::definition::Definition;
use crate::semantic_index::definition::DefinitionKind;
use crate::semantic_index::place::ScopeId;
@@ -388,121 +386,6 @@ pub fn definition_kind_for_name<'db>(
None
}
/// Returns all definitions for a name. If any definitions are imports, they
/// are resolved (recursively) to the original definitions or module files.
pub fn definitions_for_name<'db>(
db: &'db dyn Db,
file: File,
name: &ast::ExprName,
) -> Vec<ResolvedDefinition<'db>> {
let index = semantic_index(db, file);
let name_str = name.id.as_str();
// Get the scope for this name expression
let Some(file_scope) = index.try_expression_scope_id(&ast::Expr::Name(name.clone())) else {
return Vec::new();
};
let mut all_definitions = Vec::new();
// Search through the scope hierarchy: start from the current scope and
// traverse up through parent scopes to find definitions
for (scope_id, _scope) in index.visible_ancestor_scopes(file_scope) {
let place_table = index.place_table(scope_id);
let Some(place_id) = place_table.place_id_by_name(name_str) else {
continue; // Name not found in this scope, try parent scope
};
// Check if this place is marked as global or nonlocal
let place_expr = place_table.place_expr(place_id);
let is_global = place_expr.is_marked_global();
let is_nonlocal = place_expr.is_marked_nonlocal();
// TODO: The current algorithm doesn't return definitions or bindings
// for other scopes that are outside of this scope hierarchy that target
// this name using a nonlocal or global binding. The semantic analyzer
// doesn't appear to track these in a way that we can easily access
// them from here without walking all scopes in the module.
// If marked as global, skip to global scope
if is_global {
let global_scope_id = global_scope(db, file);
let global_place_table = crate::semantic_index::place_table(db, global_scope_id);
if let Some(global_place_id) = global_place_table.place_id_by_name(name_str) {
let global_use_def_map = crate::semantic_index::use_def_map(db, global_scope_id);
let global_bindings = global_use_def_map.all_reachable_bindings(global_place_id);
let global_declarations =
global_use_def_map.all_reachable_declarations(global_place_id);
for binding in global_bindings {
if let Some(def) = binding.binding.definition() {
all_definitions.push(def);
}
}
for declaration in global_declarations {
if let Some(def) = declaration.declaration.definition() {
all_definitions.push(def);
}
}
}
break;
}
// If marked as nonlocal, skip current scope and search in ancestor scopes
if is_nonlocal {
// Continue searching in parent scopes, but skip the current scope
continue;
}
let use_def_map = index.use_def_map(scope_id);
// Get all definitions (both bindings and declarations) for this place
let bindings = use_def_map.all_reachable_bindings(place_id);
let declarations = use_def_map.all_reachable_declarations(place_id);
for binding in bindings {
if let Some(def) = binding.binding.definition() {
all_definitions.push(def);
}
}
for declaration in declarations {
if let Some(def) = declaration.declaration.definition() {
all_definitions.push(def);
}
}
// If we found definitions in this scope, we can stop searching
if !all_definitions.is_empty() {
break;
}
}
// Resolve import definitions to their targets
let mut resolved_definitions = Vec::new();
for definition in &all_definitions {
let resolved = resolve_definition(db, *definition, Some(name_str));
resolved_definitions.extend(resolved);
}
// If we didn't find any definitions in scopes, fallback to builtins
if resolved_definitions.is_empty() {
let Some(builtins_scope) = builtins_module_scope(db) else {
return Vec::new();
};
find_symbol_in_scope(db, builtins_scope, name_str)
.into_iter()
.flat_map(|def| resolve_definition(db, def, Some(name_str)))
.collect()
} else {
resolved_definitions
}
}
/// Details about a callable signature for IDE support.
#[derive(Debug, Clone)]
pub struct CallSignatureDetails<'db> {
@@ -572,201 +455,3 @@ pub fn call_signature_details<'db>(
vec![]
}
}
mod resolve_definition {
//! Resolves an Import, `ImportFrom` or `StarImport` definition to one or more
//! "resolved definitions". This is done recursively to find the original
//! definition targeted by the import.
use ruff_db::files::File;
use ruff_db::parsed::parsed_module;
use ruff_python_ast as ast;
use rustc_hash::FxHashSet;
use crate::semantic_index::definition::{Definition, DefinitionKind};
use crate::semantic_index::place::ScopeId;
use crate::semantic_index::{global_scope, place_table, use_def_map};
use crate::{Db, ModuleName, resolve_module};
/// Represents the result of resolving an import to either a specific definition or a module file.
/// This enum helps distinguish between cases where an import resolves to:
/// - A specific definition within a module (e.g., `from os import path` -> definition of `path`)
/// - An entire module file (e.g., `import os` -> the `os` module file itself)
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum ResolvedDefinition<'db> {
/// The import resolved to a specific definition within a module
Definition(Definition<'db>),
/// The import resolved to an entire module file
ModuleFile(File),
}
/// Resolve import definitions to their targets.
/// Returns resolved definitions which can be either specific definitions or module files.
/// For non-import definitions, returns the definition wrapped in `ResolvedDefinition::Definition`.
/// Always returns at least the original definition as a fallback if resolution fails.
pub(crate) fn resolve_definition<'db>(
db: &'db dyn Db,
definition: Definition<'db>,
symbol_name: Option<&str>,
) -> Vec<ResolvedDefinition<'db>> {
let mut visited = FxHashSet::default();
let resolved = resolve_definition_recursive(db, definition, &mut visited, symbol_name);
// If resolution failed, return the original definition as fallback
if resolved.is_empty() {
vec![ResolvedDefinition::Definition(definition)]
} else {
resolved
}
}
/// Helper function to resolve import definitions recursively.
fn resolve_definition_recursive<'db>(
db: &'db dyn Db,
definition: Definition<'db>,
visited: &mut FxHashSet<Definition<'db>>,
symbol_name: Option<&str>,
) -> Vec<ResolvedDefinition<'db>> {
// Prevent infinite recursion if there are circular imports
if visited.contains(&definition) {
return Vec::new(); // Return empty list for circular imports
}
visited.insert(definition);
let kind = definition.kind(db);
match kind {
DefinitionKind::Import(import_def) => {
let file = definition.file(db);
let module = parsed_module(db, file).load(db);
let alias = import_def.alias(&module);
// Get the full module name being imported
let Some(module_name) = ModuleName::new(&alias.name) else {
return Vec::new(); // Invalid module name, return empty list
};
// Resolve the module to its file
let Some(resolved_module) = resolve_module(db, &module_name) else {
return Vec::new(); // Module not found, return empty list
};
let Some(module_file) = resolved_module.file() else {
return Vec::new(); // No file for module, return empty list
};
// For simple imports like "import os", we want to navigate to the module itself.
// Return the module file directly instead of trying to find definitions within it.
vec![ResolvedDefinition::ModuleFile(module_file)]
}
DefinitionKind::ImportFrom(import_from_def) => {
let file = definition.file(db);
let module = parsed_module(db, file).load(db);
let import_node = import_from_def.import(&module);
let alias = import_from_def.alias(&module);
// For `ImportFrom`, we need to resolve the original imported symbol name
// (alias.name), not the local alias (symbol_name)
resolve_from_import_definitions(db, file, import_node, &alias.name, visited)
}
// For star imports, try to resolve to the specific symbol being accessed
DefinitionKind::StarImport(star_import_def) => {
let file = definition.file(db);
let module = parsed_module(db, file).load(db);
let import_node = star_import_def.import(&module);
// If we have a symbol name, use the helper to resolve it in the target module
if let Some(symbol_name) = symbol_name {
resolve_from_import_definitions(db, file, import_node, symbol_name, visited)
} else {
// No symbol context provided, can't resolve star import
Vec::new()
}
}
// For non-import definitions, return the definition as is
_ => vec![ResolvedDefinition::Definition(definition)],
}
}
/// Helper function to resolve import definitions for `ImportFrom` and `StarImport` cases.
fn resolve_from_import_definitions<'db>(
db: &'db dyn Db,
file: File,
import_node: &ast::StmtImportFrom,
symbol_name: &str,
visited: &mut FxHashSet<Definition<'db>>,
) -> Vec<ResolvedDefinition<'db>> {
// Resolve the target module file
let module_file = {
// Resolve the module being imported from (handles both relative and absolute imports)
let Some(module_name) = ModuleName::from_import_statement(db, file, import_node).ok()
else {
return Vec::new();
};
let Some(resolved_module) = resolve_module(db, &module_name) else {
return Vec::new();
};
resolved_module.file()
};
let Some(module_file) = module_file else {
return Vec::new(); // Module resolution failed
};
// Find the definition of this symbol in the imported module's global scope
let global_scope = global_scope(db, module_file);
let definitions_in_module = find_symbol_in_scope(db, global_scope, symbol_name);
// Recursively resolve any import definitions found in the target module
if definitions_in_module.is_empty() {
// If we can't find the specific symbol, return empty list
Vec::new()
} else {
let mut resolved_definitions = Vec::new();
for def in definitions_in_module {
let resolved = resolve_definition_recursive(db, def, visited, Some(symbol_name));
resolved_definitions.extend(resolved);
}
resolved_definitions
}
}
/// Find definitions for a symbol name in a specific scope.
pub(crate) fn find_symbol_in_scope<'db>(
db: &'db dyn Db,
scope: ScopeId<'db>,
symbol_name: &str,
) -> Vec<Definition<'db>> {
let place_table = place_table(db, scope);
let Some(place_id) = place_table.place_id_by_name(symbol_name) else {
return Vec::new();
};
let use_def_map = use_def_map(db, scope);
let mut definitions = Vec::new();
// Get all definitions (both bindings and declarations) for this place
let bindings = use_def_map.all_reachable_bindings(place_id);
let declarations = use_def_map.all_reachable_declarations(place_id);
for binding in bindings {
if let Some(def) = binding.binding.definition() {
definitions.push(def);
}
}
for declaration in declarations {
if let Some(def) = declaration.declaration.definition() {
definitions.push(def);
}
}
definitions
}
}
pub use resolve_definition::ResolvedDefinition;
use resolve_definition::{find_symbol_in_scope, resolve_definition};

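`resolve_definition_recursive` above guards against circular imports by threading a `visited` set through the recursion and bailing out on a repeat. The same pattern in isolation, over a toy alias graph (the graph and names here are illustrative, not ty's actual data structures):

```rust
use std::collections::{HashMap, HashSet};

/// Follow `alias -> target` edges until reaching a name that is not an alias,
/// returning None if the chain is circular.
fn resolve<'a>(
    graph: &HashMap<&'a str, &'a str>,
    name: &'a str,
    visited: &mut HashSet<&'a str>,
) -> Option<&'a str> {
    // Prevent infinite recursion on circular imports: `insert` returns false
    // if the name was already visited.
    if !visited.insert(name) {
        return None;
    }
    match graph.get(name) {
        Some(target) => resolve(graph, target, visited),
        None => Some(name), // Not an alias: this is the original definition.
    }
}

fn main() {
    let mut graph = HashMap::new();
    graph.insert("a", "b"); // `a` re-exports `b`
    graph.insert("b", "c"); // `b` re-exports `c`
    assert_eq!(resolve(&graph, "a", &mut HashSet::new()), Some("c"));

    graph.insert("c", "a"); // introduce a cycle: a -> b -> c -> a
    assert_eq!(resolve(&graph, "a", &mut HashSet::new()), None);
}
```

As in the real code, an empty result from a cycle is handled by the caller, which falls back to the original definition rather than failing outright.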
View File

@@ -1618,8 +1618,8 @@ impl<'db, 'ast> TypeInferenceBuilder<'db, 'ast> {
// here and just bail out of this loop.
break;
}
// We found the closest definition. Note that (as in `infer_place_load`) this does
// *not* need to be a binding. It could be just a declaration, e.g. `x: int`.
// We found the closest definition. Note that (unlike in `infer_place_load`) this
// does *not* need to be a binding. It could be just `x: int`.
nonlocal_use_def_map = self.index.use_def_map(enclosing_scope_file_id);
declarations = nonlocal_use_def_map.end_of_scope_declarations(enclosing_place_id);
is_local = false;
@@ -2135,9 +2135,6 @@ impl<'db, 'ast> TypeInferenceBuilder<'db, 'ast> {
if self.in_function_overload_or_abstractmethod() {
return;
}
if self.scope().scope(self.db()).in_type_checking_block() {
return;
}
if let Some(class) = self.class_context_of_current_method() {
enclosing_class_context = Some(class);
if class.is_protocol(self.db()) {
@@ -4675,7 +4672,7 @@ impl<'db, 'ast> TypeInferenceBuilder<'db, 'ast> {
// in the global scope.
let ast::StmtGlobal {
node_index: _,
range: _,
range,
names,
} = global;
let global_place_table = self.index.place_table(FileScopeId::global());
@@ -4697,7 +4694,7 @@ impl<'db, 'ast> TypeInferenceBuilder<'db, 'ast> {
}
// This variable isn't explicitly defined in the global scope, nor is it an
// implicit global from `types.ModuleType`, so we consider this `global` statement invalid.
let Some(builder) = self.context.report_lint(&UNRESOLVED_GLOBAL, name) else {
let Some(builder) = self.context.report_lint(&UNRESOLVED_GLOBAL, range) else {
return;
};
let mut diag =
@@ -6082,15 +6079,7 @@ impl<'db, 'ast> TypeInferenceBuilder<'db, 'ast> {
let Some(enclosing_place) = enclosing_place_table.place_by_expr(expr) else {
continue;
};
if enclosing_place.is_marked_global() {
// Reads of "free" variables can terminate at an enclosing scope that marks the
// variable `global` but doesn't actually bind it. In that case, stop walking
// scopes and proceed to the global handling below. (But note that it's a
// semantic syntax error for the `nonlocal` keyword to do this. See
// `infer_nonlocal_statement`.)
break;
}
if enclosing_place.is_bound() || enclosing_place.is_declared() {
if enclosing_place.is_bound() {
// We can return early here, because the nearest function-like scope that
// defines a name must be the only source for the nonlocal reference (at
// runtime, it is the scope that creates the cell for our closure.) If the name

View File

@@ -1,4 +1,3 @@
use std::ops::{Deref, DerefMut};
use std::sync::Arc;
use get_size2::GetSize;
@@ -14,105 +13,3 @@ where
{
T::get_heap_size(&**arc)
}
#[derive(Clone, Hash, PartialEq, Eq)]
pub(crate) struct ThinVecSized<T>(thin_vec::ThinVec<T>);
impl<T> ThinVecSized<T> {
pub(crate) fn into_inner(self) -> thin_vec::ThinVec<T> {
self.0
}
pub(crate) fn new() -> Self {
Self(thin_vec::ThinVec::new())
}
}
impl<T> Default for ThinVecSized<T> {
fn default() -> Self {
Self::new()
}
}
#[allow(unsafe_code)]
unsafe impl<T> salsa::Update for ThinVecSized<T>
where
T: salsa::Update,
{
unsafe fn maybe_update(old_pointer: *mut Self, new_value: Self) -> bool {
let old: &mut Self = unsafe { &mut *old_pointer };
unsafe { salsa::Update::maybe_update(&raw mut old.0, new_value.0) }
}
}
impl<T> std::fmt::Debug for ThinVecSized<T>
where
T: std::fmt::Debug,
{
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
self.0.fmt(f)
}
}
impl<T> From<thin_vec::ThinVec<T>> for ThinVecSized<T> {
fn from(vec: thin_vec::ThinVec<T>) -> Self {
Self(vec)
}
}
impl<T> Deref for ThinVecSized<T> {
type Target = thin_vec::ThinVec<T>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl<T> DerefMut for ThinVecSized<T> {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
impl<T> GetSize for ThinVecSized<T>
where
T: GetSize,
{
fn get_heap_size(&self) -> usize {
let mut total = 0;
for v in self {
total += GetSize::get_size(v);
}
let additional: usize = self.capacity() - self.len();
total += additional * T::get_stack_size();
total
}
}
impl<'a, T> IntoIterator for &'a ThinVecSized<T> {
type Item = &'a T;
type IntoIter = std::slice::Iter<'a, T>;
fn into_iter(self) -> Self::IntoIter {
self.0.iter()
}
}
impl<'a, T> IntoIterator for &'a mut ThinVecSized<T> {
type Item = &'a mut T;
type IntoIter = std::slice::IterMut<'a, T>;
fn into_iter(self) -> Self::IntoIter {
self.0.iter_mut()
}
}
impl<T> IntoIterator for ThinVecSized<T> {
type Item = T;
type IntoIter = thin_vec::IntoIter<T>;
fn into_iter(self) -> Self::IntoIter {
self.0.into_iter()
}
}

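The `get_heap_size` implementation shown above sums the full size of each element plus the spare (allocated but unused) capacity times the element's stack size; that is algebraically the same as counting every element's heap data plus the whole backing buffer. A std-only sketch of that accounting for a plain `Vec<String>` (hedged: `get_size2` normally derives this automatically):

```rust
use std::mem::size_of;

/// Heap bytes owned by a `Vec<String>`, counted the way the impl above does:
/// each element's own heap allocation, plus the vec's backing buffer
/// (used slots and spare capacity alike).
fn heap_size(v: &Vec<String>) -> usize {
    let elements: usize = v.iter().map(|s| s.capacity()).sum();
    let buffer = v.capacity() * size_of::<String>();
    elements + buffer
}

fn main() {
    let v = vec![String::from("ab"), String::from("cde")];
    // Two String headers in the backing buffer, plus 2 + 3 bytes of string data.
    assert_eq!(heap_size(&v), 2 * size_of::<String>() + 5);
}
```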
View File

@@ -23,7 +23,6 @@ ty_python_semantic = { workspace = true }
ty_vendored = { workspace = true }
anyhow = { workspace = true }
bitflags = { workspace = true }
crossbeam = { workspace = true }
jod-thread = { workspace = true }
lsp-server = { workspace = true }

View File

@@ -5,9 +5,9 @@ use crate::PositionEncoding;
use crate::session::{AllOptions, ClientOptions, DiagnosticMode, Session};
use lsp_server::Connection;
use lsp_types::{
ClientCapabilities, DeclarationCapability, DiagnosticOptions, DiagnosticServerCapabilities,
HoverProviderCapability, InitializeParams, InlayHintOptions, InlayHintServerCapabilities,
MessageType, SemanticTokensLegend, SemanticTokensOptions, SemanticTokensServerCapabilities,
ClientCapabilities, DiagnosticOptions, DiagnosticServerCapabilities, HoverProviderCapability,
InitializeParams, InlayHintOptions, InlayHintServerCapabilities, MessageType,
SemanticTokensLegend, SemanticTokensOptions, SemanticTokensServerCapabilities,
ServerCapabilities, SignatureHelpOptions, TextDocumentSyncCapability, TextDocumentSyncKind,
TextDocumentSyncOptions, TypeDefinitionProviderCapability, Url, WorkDoneProgressOptions,
};
@@ -21,8 +21,8 @@ mod schedule;
use crate::session::client::Client;
pub(crate) use api::Error;
pub(crate) use api::publish_settings_diagnostics;
pub(crate) use main_loop::{Action, ConnectionSender, Event, MainLoopReceiver, MainLoopSender};
pub(crate) type Result<T> = std::result::Result<T, api::Error>;
pub(crate) struct Server {
@@ -194,8 +194,6 @@ impl Server {
},
)),
type_definition_provider: Some(TypeDefinitionProviderCapability::Simple(true)),
definition_provider: Some(lsp_types::OneOf::Left(true)),
declaration_provider: Some(DeclarationCapability::Simple(true)),
hover_provider: Some(HoverProviderCapability::Simple(true)),
signature_help_provider: Some(SignatureHelpOptions {
trigger_characters: Some(vec!["(".to_string(), ",".to_string()]),

View File

@@ -17,7 +17,6 @@ mod traits;
use self::traits::{NotificationHandler, RequestHandler};
use super::{Result, schedule::BackgroundSchedule};
use crate::session::client::Client;
pub(crate) use diagnostics::publish_settings_diagnostics;
use ruff_db::panic::PanicError;
/// Processes a request from the client to the server.
@@ -45,14 +44,6 @@ pub(super) fn request(req: server::Request) -> Task {
>(
req, BackgroundSchedule::Worker
),
requests::GotoDeclarationRequestHandler::METHOD => background_document_request_task::<
requests::GotoDeclarationRequestHandler,
>(
req, BackgroundSchedule::Worker
),
requests::GotoDefinitionRequestHandler::METHOD => background_document_request_task::<
requests::GotoDefinitionRequestHandler,
>(req, BackgroundSchedule::Worker),
requests::HoverRequestHandler::METHOD => background_document_request_task::<
requests::HoverRequestHandler,
>(req, BackgroundSchedule::Worker),

View File

@@ -8,13 +8,11 @@ use rustc_hash::FxHashMap;
use ruff_db::diagnostic::{Annotation, Severity, SubDiagnostic};
use ruff_db::files::FileRange;
use ruff_db::source::{line_index, source_text};
use ruff_db::system::SystemPathBuf;
use ty_project::{Db, ProjectDatabase};
use crate::document::{DocumentKey, FileRangeExt, ToRangeExt};
use crate::session::DocumentSnapshot;
use crate::session::client::Client;
use crate::system::{AnySystemPath, file_to_url};
use crate::{PositionEncoding, Session};
/// Represents the diagnostics for a text document or a notebook document.
@@ -66,7 +64,7 @@ pub(super) fn clear_diagnostics(key: &DocumentKey, client: &Client) {
///
/// [publish diagnostics notification]: https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#textDocument_publishDiagnostics
pub(super) fn publish_diagnostics(session: &Session, key: &DocumentKey, client: &Client) {
if session.client_capabilities().supports_pull_diagnostics() {
if session.client_capabilities().pull_diagnostics {
return;
}
@@ -111,82 +109,6 @@ pub(super) fn publish_diagnostics(session: &Session, key: &DocumentKey, client:
}
}
/// Publishes settings diagnostics for the project at the given path
/// using the [publish diagnostics notification].
///
/// [publish diagnostics notification]: https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#textDocument_publishDiagnostics
pub(crate) fn publish_settings_diagnostics(
session: &mut Session,
client: &Client,
path: SystemPathBuf,
) {
// Don't publish settings diagnostics for workspaces that are already doing full diagnostics.
//
// Note we DO NOT respect the fact that clients support pulls because these are
// files they *specifically* won't pull diagnostics from us for, because we don't
// claim to be an LSP for them.
let has_workspace_diagnostics = session
.workspaces()
.for_path(&path)
.map(|workspace| workspace.settings().diagnostic_mode().is_workspace())
.unwrap_or(false);
if has_workspace_diagnostics {
return;
}
let session_encoding = session.position_encoding();
let state = session.project_state_mut(&AnySystemPath::System(path));
let db = &state.db;
let project = db.project();
let settings_diagnostics = project.check_settings(db);
// We need to send diagnostics if we have non-empty ones, or we have ones to clear.
// These will both almost always be empty so this function will almost always be a no-op.
if settings_diagnostics.is_empty() && state.untracked_files_with_pushed_diagnostics.is_empty() {
return;
}
// Group diagnostics by URL
let mut diagnostics_by_url: FxHashMap<Url, Vec<_>> = FxHashMap::default();
for diagnostic in settings_diagnostics {
if let Some(span) = diagnostic.primary_span() {
let file = span.expect_ty_file();
let Some(url) = file_to_url(db, file) else {
tracing::debug!("Failed to convert file to URL at {}", file.path(db));
continue;
};
diagnostics_by_url.entry(url).or_default().push(diagnostic);
}
}
// Record the URLs we're sending non-empty diagnostics for, so we know to clear them
// the next time we publish settings diagnostics!
let old_untracked = std::mem::replace(
&mut state.untracked_files_with_pushed_diagnostics,
diagnostics_by_url.keys().cloned().collect(),
);
// Add empty diagnostics for any files that had diagnostics before but don't now.
// This will clear them (either the file is no longer relevant to us or fixed!)
for url in old_untracked {
diagnostics_by_url.entry(url).or_default();
}
// Send the settings diagnostics!
for (url, file_diagnostics) in diagnostics_by_url {
// Convert diagnostics to LSP format
let lsp_diagnostics = file_diagnostics
.into_iter()
.map(|diagnostic| to_lsp_diagnostic(db, &diagnostic, session_encoding))
.collect::<Vec<_>>();
client.send_notification::<PublishDiagnostics>(PublishDiagnosticsParams {
uri: url,
diagnostics: lsp_diagnostics,
version: None,
});
}
}
pub(super) fn compute_diagnostics(
db: &ProjectDatabase,
snapshot: &DocumentSnapshot,

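The removed publish path records which URLs it pushed non-empty diagnostics for, so the next publish can send empty lists to clear anything that became stale. That swap-and-clear bookkeeping in isolation (toy string keys here, not the LSP client API):

```rust
use std::collections::{HashMap, HashSet};

/// Given the diagnostics to publish now and the set of keys published last
/// time, return the full map to send: the new entries, plus empty entries
/// that clear anything published before but absent now.
fn with_clears(
    mut now: HashMap<String, Vec<String>>,
    previously_published: &mut HashSet<String>,
) -> HashMap<String, Vec<String>> {
    // Remember what we are about to publish, and take the old set
    // (mirroring the std::mem::replace in the code above).
    let old = std::mem::replace(previously_published, now.keys().cloned().collect());
    // Any key published before but not now gets an empty list to clear it.
    for key in old {
        now.entry(key).or_default();
    }
    now
}

fn main() {
    let mut published = HashSet::from([String::from("a.py"), String::from("b.py")]);
    let now = HashMap::from([(String::from("a.py"), vec![String::from("warning")])]);
    let out = with_clears(now, &mut published);

    assert_eq!(out["a.py"], vec![String::from("warning")]);
    assert_eq!(out["b.py"], Vec::<String>::new()); // b.py is cleared
    assert_eq!(published, HashSet::from([String::from("a.py")]));
}
```

Sending an empty `diagnostics` array for a URI is how an LSP server retracts previously published diagnostics, which is why stale keys must be tracked at all.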
View File

@@ -1,5 +1,5 @@
use crate::server::Result;
use crate::server::api::diagnostics::{publish_diagnostics, publish_settings_diagnostics};
use crate::server::api::diagnostics::publish_diagnostics;
use crate::server::api::traits::{NotificationHandler, SyncNotificationHandler};
use crate::session::Session;
use crate::session::client::Client;
@@ -88,8 +88,7 @@ impl SyncNotificationHandler for DidChangeWatchedFiles {
for (root, changes) in events_by_db {
tracing::debug!("Applying changes to `{root}`");
let result = session.apply_changes(&AnySystemPath::System(root.clone()), changes);
publish_settings_diagnostics(session, client, root);
let result = session.apply_changes(&AnySystemPath::System(root), changes);
project_changed |= result.project_changed();
}
@@ -97,7 +96,7 @@ impl SyncNotificationHandler for DidChangeWatchedFiles {
let client_capabilities = session.client_capabilities();
if project_changed {
if client_capabilities.supports_workspace_diagnostic_refresh() {
if client_capabilities.diagnostics_refresh {
client.send_request::<types::request::WorkspaceDiagnosticRefresh>(
session,
(),
@@ -108,10 +107,11 @@ impl SyncNotificationHandler for DidChangeWatchedFiles {
publish_diagnostics(session, &key, client);
}
}
// TODO: always publish diagnostics for notebook files (since they don't use pull diagnostics)
}
if client_capabilities.supports_inlay_hint_refresh() {
if client_capabilities.inlay_refresh {
client.send_request::<types::request::InlayHintRefreshRequest>(session, (), |_, ()| {});
}

Some files were not shown because too many files have changed in this diff.