Compare commits

41 Commits

| SHA1 |
|---|
| 49a5a9ccc2 |
| 69d9212817 |
| 4a305588e9 |
| 16acd4913f |
| 3989cb8b56 |
| a38c05bf13 |
| ab107ef1f3 |
| b36c713279 |
| 34a5063aa2 |
| adc0a5d126 |
| e28e737296 |
| 37ad994318 |
| 246a3388ee |
| 6be00d5775 |
| 9200dfc79f |
| 5dcde88099 |
| 7794eb2bde |
| 40bfae4f99 |
| 7b064b25b2 |
| 9993115f63 |
| f0a21c9161 |
| f26c155de5 |
| c3fa826b0a |
| 8b69794f1d |
| 4e7c84df1d |
| 99c400000a |
| b5d147d219 |
| 77da4615c1 |
| 627d230688 |
| 0eef834e89 |
| 650c578e07 |
| 9567fddf69 |
| ab6d9d4658 |
| 677893226a |
| 33fd50027c |
| 3e30962077 |
| 81275a6c3d |
| 52c946a4c5 |
| ebdaf5765a |
| 9a93409e1c |
| 102b9d930f |
.vscode/extensions.json (vendored, new file, +5)

```json
{
  "recommendations": [
    "rust-lang.rust-analyzer"
  ]
}
```
.vscode/settings.json (vendored, new file, +6)

```jsonc
{
  "rust-analyzer.check.extraArgs": [
    "--all-features"
  ],
  "rust-analyzer.check.command": "clippy",
}
```
CHANGELOG.md (+37)

```diff
@@ -1,5 +1,42 @@
 # Changelog
 
+## 0.4.6
+
+### Breaking changes
+
+- Use project-relative paths when calculating GitLab fingerprints ([#11532](https://github.com/astral-sh/ruff/pull/11532))
+
+### Preview features
+
+- \[`flake8-async`\] Sleep with >24 hour interval should usually sleep forever (`ASYNC116`) ([#11498](https://github.com/astral-sh/ruff/pull/11498))
+
+### Rule changes
+
+- \[`numpy`\] Add missing functions to NumPy 2.0 migration rule ([#11528](https://github.com/astral-sh/ruff/pull/11528))
+- \[`mccabe`\] Consider irrefutable pattern similar to `if .. else` for `C901` ([#11565](https://github.com/astral-sh/ruff/pull/11565))
+- Consider `match`-`case` statements for `C901`, `PLR0912`, and `PLR0915` ([#11521](https://github.com/astral-sh/ruff/pull/11521))
+- Remove empty strings when converting to f-string (`UP032`) ([#11524](https://github.com/astral-sh/ruff/pull/11524))
+- \[`flake8-bandit`\] `request-without-timeout` should warn for `requests.request` ([#11548](https://github.com/astral-sh/ruff/pull/11548))
+- \[`flake8-self`\] Ignore sunder accesses in `flake8-self` rules ([#11546](https://github.com/astral-sh/ruff/pull/11546))
+- \[`pyupgrade`\] Lint for `TypeAliasType` usages (`UP040`) ([#11530](https://github.com/astral-sh/ruff/pull/11530))
+
+### Server
+
+- Respect excludes in `ruff server` configuration discovery ([#11551](https://github.com/astral-sh/ruff/pull/11551))
+- Use default settings if initialization options is empty or not provided ([#11566](https://github.com/astral-sh/ruff/pull/11566))
+- `ruff server` correctly treats `.pyi` files as stub files ([#11535](https://github.com/astral-sh/ruff/pull/11535))
+- `ruff server` searches for configuration in parent directories ([#11537](https://github.com/astral-sh/ruff/pull/11537))
+- `ruff server`: An empty code action filter no longer returns notebook source actions ([#11526](https://github.com/astral-sh/ruff/pull/11526))
+
+### Bug fixes
+
+- \[`flake8-logging-format`\] Fix autofix title in `logging-warn` (`G010`) ([#11514](https://github.com/astral-sh/ruff/pull/11514))
+- \[`refurb`\] Avoid recommending `operator.itemgetter` with dependence on lambda arguments ([#11574](https://github.com/astral-sh/ruff/pull/11574))
+- \[`flake8-simplify`\] Avoid recommending context manager in `__enter__` implementations ([#11575](https://github.com/astral-sh/ruff/pull/11575))
+- Create intermediary directories for `--output-file` ([#11550](https://github.com/astral-sh/ruff/pull/11550))
+- Propagate reads on global variables ([#11584](https://github.com/astral-sh/ruff/pull/11584))
+- Treat all `singledispatch` arguments as runtime-required ([#11523](https://github.com/astral-sh/ruff/pull/11523))
+
 ## 0.4.5
 
 ### Ruff's language server is now in Beta
```
```diff
@@ -101,6 +101,8 @@ pre-commit run --all-files --show-diff-on-failure # Rust and Python formatting,
 These checks will run on GitHub Actions when you open your pull request, but running them locally
 will save you time and expedite the merge process.
 
+If you're using VS Code, you can also install the recommended [rust-analyzer](https://marketplace.visualstudio.com/items?itemName=rust-lang.rust-analyzer) extension to get these checks while editing.
+
 Note that many code changes also require updating the snapshot tests, which is done interactively
 after running `cargo test` like so:
```
Cargo.lock (generated, 99 changes)

```diff
@@ -129,9 +129,9 @@ dependencies = [
 
 [[package]]
 name = "anyhow"
-version = "1.0.83"
+version = "1.0.86"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "25bdb32cbbdce2b519a9cd7df3a678443100e265d5e25ca763b7572a5104f5f3"
+checksum = "b3d1d046238990b9cf5bcde22a3fb3584ee5cf65fb2765f454ed428c7a0063da"
 
 [[package]]
 name = "argfile"
@@ -886,12 +886,6 @@ version = "0.3.9"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "d231dfb89cfffdbc30e7fc41579ed6066ad03abda9e567ccafae602b97ec5024"
 
-[[package]]
-name = "hexf-parse"
-version = "0.2.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "dfa686283ad6dd069f105e5ab091b04c62850d3e4cf5d67debad1933f55023df"
-
 [[package]]
 name = "home"
 version = "0.5.9"
@@ -1176,41 +1170,11 @@ version = "1.4.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"
 
-[[package]]
-name = "lexical-parse-float"
-version = "0.8.5"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "683b3a5ebd0130b8fb52ba0bdc718cc56815b6a097e28ae5a6997d0ad17dc05f"
-dependencies = [
- "lexical-parse-integer",
- "lexical-util",
- "static_assertions",
-]
-
-[[package]]
-name = "lexical-parse-integer"
-version = "0.8.6"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6d0994485ed0c312f6d965766754ea177d07f9c00c9b82a5ee62ed5b47945ee9"
-dependencies = [
- "lexical-util",
- "static_assertions",
-]
-
-[[package]]
-name = "lexical-util"
-version = "0.8.5"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5255b9ff16ff898710eb9eb63cb39248ea8a5bb036bea8085b1a767ff6c4e3fc"
-dependencies = [
- "static_assertions",
-]
-
 [[package]]
 name = "libc"
-version = "0.2.154"
+version = "0.2.155"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "ae743338b92ff9146ce83992f766a31066a91a8c84a45e0e9f21e7cf6de6d346"
+checksum = "97b3888a4aecf77e811145cadf6eef5901f4782c53886191b2f693f24761847c"
 
 [[package]]
 name = "libcst"
@@ -1239,9 +1203,9 @@ dependencies = [
 
 [[package]]
 name = "libmimalloc-sys"
-version = "0.1.37"
+version = "0.1.38"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "81eb4061c0582dedea1cbc7aff2240300dd6982e0239d1c99e65c1dbf4a30ba7"
+checksum = "0e7bb23d733dfcc8af652a78b7bf232f0e967710d044732185e561e47c0336b6"
 dependencies = [
  "cc",
  "libc",
@@ -1338,9 +1302,9 @@ checksum = "6c8640c5d730cb13ebd907d8d04b52f55ac9a2eec55b440c8892f40d56c76c1d"
 
 [[package]]
 name = "mimalloc"
-version = "0.1.41"
+version = "0.1.42"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9f41a2280ded0da56c8cf898babb86e8f10651a34adcfff190ae9a1159c6908d"
+checksum = "e9186d86b79b52f4a77af65604b51225e8db1d6ee7e3f41aec1e40829c71a176"
 dependencies = [
  "libmimalloc-sys",
 ]
@@ -1497,9 +1461,9 @@ checksum = "b15813163c1d831bf4a13c3610c05c0d03b39feb07f7e09fa234dac9b15aaf39"
 
 [[package]]
 name = "parking_lot"
-version = "0.12.2"
+version = "0.12.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7e4af0ca4f6caed20e900d564c242b8e5d4903fdacf31d3daf527b66fe6f42fb"
+checksum = "f1bf18183cf54e8d6059647fc3063646a1801cf30896933ec2311622cc4b9a27"
 dependencies = [
  "lock_api",
  "parking_lot_core",
@@ -1706,9 +1670,9 @@ dependencies = [
 
 [[package]]
 name = "proc-macro2"
-version = "1.0.82"
+version = "1.0.84"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8ad3d49ab951a01fbaafe34f2ec74122942fe18a3f9814c3268f1bb72042131b"
+checksum = "ec96c6a92621310b51366f1e28d05ef11489516e93be030060e5fc12024a49d6"
 dependencies = [
  "unicode-ident",
 ]
@@ -1939,7 +1903,7 @@ dependencies = [
 
 [[package]]
 name = "ruff"
-version = "0.4.5"
+version = "0.4.6"
 dependencies = [
  "anyhow",
  "argfile",
@@ -2100,7 +2064,7 @@ dependencies = [
 
 [[package]]
 name = "ruff_linter"
-version = "0.4.5"
+version = "0.4.6"
 dependencies = [
  "aho-corasick",
  "annotate-snippets 0.9.2",
@@ -2277,9 +2241,7 @@ name = "ruff_python_literal"
 version = "0.0.0"
 dependencies = [
  "bitflags 2.5.0",
- "hexf-parse",
  "itertools 0.12.1",
- "lexical-parse-float",
  "ruff_python_ast",
  "unic-ucd-category",
 ]
@@ -2367,6 +2329,7 @@ version = "0.2.2"
 dependencies = [
  "anyhow",
  "crossbeam",
+ "globset",
  "insta",
  "jod-thread",
  "libc",
@@ -2555,9 +2518,9 @@ dependencies = [
 
 [[package]]
 name = "schemars"
-version = "0.8.19"
+version = "0.8.21"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "fc6e7ed6919cb46507fb01ff1654309219f62b4d603822501b0b80d42f6f21ef"
+checksum = "09c024468a378b7e36765cd36702b7a90cc3cba11654f6685c8f233408e89e92"
 dependencies = [
  "dyn-clone",
  "schemars_derive",
@@ -2567,9 +2530,9 @@ dependencies = [
 
 [[package]]
 name = "schemars_derive"
-version = "0.8.19"
+version = "0.8.21"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "185f2b7aa7e02d418e453790dde16890256bbd2bcd04b7dc5348811052b53f49"
+checksum = "b1eee588578aff73f856ab961cd2f79e36bc45d7ded33a7562adba4667aecc0e"
 dependencies = [
  "proc-macro2",
  "quote",
@@ -2597,9 +2560,9 @@ checksum = "1c107b6f4780854c8b126e228ea8869f4d7b71260f962fefb57b996b8959ba6b"
 
 [[package]]
 name = "serde"
-version = "1.0.201"
+version = "1.0.203"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "780f1cebed1629e4753a1a38a3c72d30b97ec044f0aef68cb26650a3c5cf363c"
+checksum = "7253ab4de971e72fb7be983802300c30b5a7f0c2e56fab8abfc6a214307c0094"
 dependencies = [
  "serde_derive",
 ]
@@ -2617,9 +2580,9 @@ dependencies = [
 
 [[package]]
 name = "serde_derive"
-version = "1.0.201"
+version = "1.0.203"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c5e405930b9796f1c00bee880d03fc7e0bb4b9a11afc776885ffe84320da2865"
+checksum = "500cbc0ebeb6f46627f50f3f5811ccf6bf00643be300b4c3eabc0ef55dc5b5ba"
 dependencies = [
  "proc-macro2",
  "quote",
@@ -2744,9 +2707,9 @@ checksum = "b7c388c1b5e93756d0c740965c41e8822f866621d41acbdf6336a6a168f8840c"
 
 [[package]]
 name = "smol_str"
-version = "0.2.1"
+version = "0.2.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e6845563ada680337a52d43bb0b29f396f2d911616f6573012645b9e3d048a49"
+checksum = "dd538fb6910ac1099850255cf94a94df6551fbdd602454387d0adb2d1ca6dead"
 dependencies = [
  "serde",
 ]
@@ -2814,9 +2777,9 @@ checksum = "81cdd64d312baedb58e21336b31bc043b77e01cc99033ce76ef539f78e965ebc"
 
 [[package]]
 name = "syn"
-version = "2.0.63"
+version = "2.0.66"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "bf5be731623ca1a1fb7d8be6f261a3be6d3e2337b8a1f97be944d020c8fcb704"
+checksum = "c42f3f41a2de00b01c0aaad383c5a45241efc8b2d1eda5661812fda5f3cdcff5"
 dependencies = [
  "proc-macro2",
  "quote",
@@ -2904,18 +2867,18 @@ dependencies = [
 
 [[package]]
 name = "thiserror"
-version = "1.0.60"
+version = "1.0.61"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "579e9083ca58dd9dcf91a9923bb9054071b9ebbd800b342194c9feb0ee89fc18"
+checksum = "c546c80d6be4bc6a00c0f01730c08df82eaa7a7a61f11d656526506112cc1709"
 dependencies = [
  "thiserror-impl",
 ]
 
 [[package]]
 name = "thiserror-impl"
-version = "1.0.60"
+version = "1.0.61"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e2470041c06ec3ac1ab38d0356a6119054dedaea53e12fbefc0de730a1c08524"
+checksum = "46c3384250002a6d5af4d114f2845d37b57521033f30d5c3f46c4d70e1197533"
 dependencies = [
  "proc-macro2",
  "quote",
```
```diff
@@ -62,7 +62,6 @@ filetime = { version = "0.2.23" }
 glob = { version = "0.3.1" }
 globset = { version = "0.4.14" }
 hashbrown = "0.14.3"
-hexf-parse = { version = "0.2.1" }
 ignore = { version = "0.4.22" }
 imara-diff = { version = "0.1.5" }
 imperative = { version = "1.0.4" }
@@ -76,12 +75,11 @@ is-wsl = { version = "0.4.0" }
 itertools = { version = "0.12.1" }
 js-sys = { version = "0.3.69" }
 jod-thread = { version = "0.1.2" }
-lexical-parse-float = { version = "0.8.0", features = ["format"] }
 libc = { version = "0.2.153" }
 libcst = { version = "1.1.0", default-features = false }
 log = { version = "0.4.17" }
 lsp-server = { version = "0.7.6" }
-lsp-types = { git="https://github.com/astral-sh/lsp-types.git", rev = "3512a9f", features = ["proposed"] }
+lsp-types = { git = "https://github.com/astral-sh/lsp-types.git", rev = "3512a9f", features = ["proposed"] }
 matchit = { version = "0.8.1" }
 memchr = { version = "2.7.1" }
 mimalloc = { version = "0.1.39" }
```
````diff
@@ -152,7 +152,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
 ```yaml
 - repo: https://github.com/astral-sh/ruff-pre-commit
   # Ruff version.
-  rev: v0.4.5
+  rev: v0.4.6
   hooks:
     # Run the linter.
     - id: ruff
@@ -433,6 +433,7 @@ Ruff is used by a number of major open-source projects and companies, including:
 - Modern Treasury ([Python SDK](https://github.com/Modern-Treasury/modern-treasury-python))
 - Mozilla ([Firefox](https://github.com/mozilla/gecko-dev))
+- [Mypy](https://github.com/python/mypy)
 - [Nautobot](https://github.com/nautobot/nautobot)
 - Netflix ([Dispatch](https://github.com/Netflix/dispatch))
 - [Neon](https://github.com/neondatabase/neon)
 - [Nokia](https://nokia.com/)
````
```diff
@@ -1,6 +1,6 @@
 [package]
 name = "ruff"
-version = "0.4.5"
+version = "0.4.6"
 publish = false
 authors = { workspace = true }
 edition = { workspace = true }
```
```diff
@@ -237,6 +237,9 @@ pub fn check(args: CheckCommand, global_options: GlobalConfigArgs) -> Result<Exi
     let mut writer: Box<dyn Write> = match cli.output_file {
         Some(path) if !cli.watch => {
            colored::control::set_override(false);
+            if let Some(parent) = path.parent() {
+                std::fs::create_dir_all(parent)?;
+            }
             let file = File::create(path)?;
             Box::new(BufWriter::new(file))
         }
```
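The hunk above implements the `--output-file` fix from #11550: missing parent directories are created before the output file is opened. The same idea in a Python sketch (the helper name and paths are ours, for illustration only):

```python
import os
import tempfile


def write_output(path: str, contents: str) -> None:
    # Create intermediary directories first, mirroring Rust's
    # std::fs::create_dir_all, which is a no-op if they already exist.
    parent = os.path.dirname(path)
    if parent:
        os.makedirs(parent, exist_ok=True)
    with open(path, "w") as f:
        f.write(contents)


with tempfile.TemporaryDirectory() as tmp:
    # "reports/nested" does not exist yet; write_output creates it.
    target = os.path.join(tmp, "reports", "nested", "ruff.txt")
    write_output(target, "ok")
    print(os.path.exists(target))  # True
```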
```diff
@@ -553,11 +553,6 @@ impl PrintedRange {
     pub fn source_range(&self) -> TextRange {
         self.source_range
     }
-
-    #[must_use]
-    pub fn with_code(self, code: String) -> Self {
-        Self { code, ..self }
-    }
 }
 
 /// Public return type of the formatter
@@ -780,10 +775,6 @@ where
         self.item = item;
         self
     }
-
-    pub fn into_item(self) -> T {
-        self.item
-    }
 }
 
 impl<T, R, C> Format<C> for FormatOwnedWithRule<T, R, C>
```
```diff
@@ -1,6 +1,6 @@
 [package]
 name = "ruff_linter"
-version = "0.4.5"
+version = "0.4.6"
 publish = false
 authors = { workspace = true }
 edition = { workspace = true }
```
crates/ruff_linter/resources/test/fixtures/flake8_async/ASYNC116.py (vendored, new file, +57)

```python
# type: ignore
# ASYNCIO_NO_ERROR - no asyncio.sleep_forever, so check intentionally doesn't trigger.
import math
from math import inf


async def import_trio():
    import trio

    # These examples are probably not meant to ever wake up:
    await trio.sleep(100000)  # error: 116, "async"

    # 'inf literal' overflow trick
    await trio.sleep(1e999)  # error: 116, "async"

    await trio.sleep(86399)
    await trio.sleep(86400)
    await trio.sleep(86400.01)  # error: 116, "async"
    await trio.sleep(86401)  # error: 116, "async"

    await trio.sleep(-1)  # will raise a runtime error
    await trio.sleep(0)  # handled by different check

    # these ones _definitely_ never wake up (TODO)
    await trio.sleep(float("inf"))
    await trio.sleep(math.inf)
    await trio.sleep(inf)

    # don't require inf to be in math (TODO)
    await trio.sleep(np.inf)

    # don't evaluate expressions (TODO)
    one_day = 86401
    await trio.sleep(86400 + 1)
    await trio.sleep(60 * 60 * 24 + 1)
    await trio.sleep(foo())
    await trio.sleep(one_day)
    await trio.sleep(86400 + foo())
    await trio.sleep(86400 + ...)
    await trio.sleep("hello")
    await trio.sleep(...)


def not_async_fun():
    import trio

    # does not require the call to be awaited, nor in an async fun
    trio.sleep(86401)  # error: 116, "async"
    # also checks that we don't break visit_Call
    trio.run(trio.sleep(86401))  # error: 116, "async"


async def import_from_trio():
    from trio import sleep

    # catch from import
    await sleep(86401)  # error: 116, "async"
```
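The fixture above exercises ASYNC116's 24-hour cutoff on literal intervals. The decision the rule makes can be sketched in a few lines of Python (the helper name is ours, not part of Ruff):

```python
ONE_DAY_SECONDS = 86400  # the rule's >24 hour threshold


def should_flag_sleep(seconds: float) -> bool:
    # Flag only literals strictly greater than 24 hours; negative and
    # zero intervals are handled by other checks, per the fixture.
    return seconds > ONE_DAY_SECONDS


print(should_flag_sleep(86399))     # False
print(should_flag_sleep(86400))     # False: exactly 24 hours is allowed
print(should_flag_sleep(86400.01))  # True
print(should_flag_sleep(1e999))     # True: the 'inf literal' overflow trick
```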
```diff
@@ -77,3 +77,8 @@ print(foo._asdict())
 import os
 
 os._exit()
+
+
+from enum import Enum
+
+Enum._missing_(1)  # OK
```
```diff
@@ -46,3 +46,15 @@ with contextlib.ExitStack() as exit_stack:
 # OK (quick one-liner to clear file contents)
 open("filename", "w").close()
 pathlib.Path("filename").open("w").close()
+
+
+# OK (custom context manager)
+class MyFile:
+    def __init__(self, filename: str):
+        self.filename = filename
+
+    def __enter__(self):
+        self.file = open(self.filename)
+
+    def __exit__(self, exc_type, exc_val, exc_tb):
+        self.file.close()
```
```diff
@@ -2,6 +2,7 @@
 from __future__ import annotations
 
 from functools import singledispatch
+from pathlib import Path
 from typing import TYPE_CHECKING
 
 from numpy import asarray
@@ -32,3 +33,24 @@ def _(a: spmatrix) -> spmatrix:
 
 def _(a: DataFrame) -> DataFrame:
     return a
+
+
+@singledispatch
+def process_path(a: int | str, p: Path) -> int:
+    """Convert arg to array or leaves it as sparse matrix."""
+    msg = f"Unhandled type {type(a)}"
+    raise NotImplementedError(msg)
+
+
+@process_path.register
+def _(a: int, p: Path) -> int:
+    return asarray(a)
+
+
+@process_path.register
+def _(a: str, p: Path) -> int:
+    return a
+
+
+def _(a: DataFrame, p: Path) -> DataFrame:
+    return a
```
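The fixture above matters because `functools.singledispatch` inspects parameter annotations at runtime to pick an implementation, which is why #11523 treats all dispatch arguments as runtime-required. A minimal, self-contained demonstration:

```python
from functools import singledispatch


@singledispatch
def describe(value) -> str:
    # Fallback when no registered implementation matches.
    return "unknown"


@describe.register
def _(value: int) -> str:  # the annotation is read at runtime to dispatch
    return "int"


@describe.register
def _(value: str) -> str:
    return "str"


print(describe(3))     # int
print(describe("hi"))  # str
print(describe(1.5))   # unknown
```

Because the annotations drive dispatch at runtime, they cannot be moved into an `if TYPE_CHECKING:` block the way purely static annotations can.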
```diff
@@ -106,3 +106,11 @@ def func():
     np.who()
 
     np.row_stack(([1,2], [3,4]))
+
+    np.alltrue([True, True])
+
+    np.anytrue([True, False])
+
+    np.cumproduct([1, 2, 3])
+
+    np.product([1, 2, 3])
```
```diff
@@ -51,3 +51,37 @@ x: int = 1
 # type alias.
 T = typing.TypeVar["T"]
 Decorator: TypeAlias = typing.Callable[[T], T]
+
+
+from typing import TypeVar, Annotated, TypeAliasType
+
+from annotated_types import Gt, SupportGt
+
+
+# https://github.com/astral-sh/ruff/issues/11422
+T = TypeVar("T")
+PositiveList = TypeAliasType(
+    "PositiveList", list[Annotated[T, Gt(0)]], type_params=(T,)
+)
+
+# Bound
+T = TypeVar("T", bound=SupportGt)
+PositiveList = TypeAliasType(
+    "PositiveList", list[Annotated[T, Gt(0)]], type_params=(T,)
+)
+
+# Multiple bounds
+T1 = TypeVar("T1", bound=SupportGt)
+T2 = TypeVar("T2")
+T3 = TypeVar("T3")
+Tuple3 = TypeAliasType("Tuple3", tuple[T1, T2, T3], type_params=(T1, T2, T3))
+
+# No type_params
+PositiveInt = TypeAliasType("PositiveInt", Annotated[int, Gt(0)])
+PositiveInt = TypeAliasType("PositiveInt", Annotated[int, Gt(0)], type_params=())
+
+# OK: Other name
+T = TypeVar("T", bound=SupportGt)
+PositiveList = TypeAliasType(
+    "PositiveList2", list[Annotated[T, Gt(0)]], type_params=(T,)
+)
```
```diff
@@ -60,6 +60,7 @@ op_itemgetter = lambda x, y: (x[0], y[0])
 op_itemgetter = lambda x: ()
 op_itemgetter = lambda x: (*x[0], x[1])
 op_itemgetter = lambda x: (x[0],)
+op_itemgetter = lambda x: x[x]
 
 
 def op_neg3(x, y):
```
```diff
@@ -508,6 +508,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
             if checker.enabled(Rule::BlockingOsCallInAsyncFunction) {
                 flake8_async::rules::blocking_os_call(checker, call);
             }
+            if checker.enabled(Rule::SleepForeverCall) {
+                flake8_async::rules::sleep_forever_call(checker, call);
+            }
             if checker.any_enabled(&[Rule::Print, Rule::PPrint]) {
                 flake8_print::rules::print_call(checker, call);
             }
```
```diff
@@ -1558,6 +1558,9 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
             if checker.enabled(Rule::ListReverseCopy) {
                 refurb::rules::list_assign_reversed(checker, assign);
             }
+            if checker.enabled(Rule::NonPEP695TypeAlias) {
+                pyupgrade::rules::non_pep695_type_alias_type(checker, assign);
+            }
         }
         Stmt::AnnAssign(
             assign_stmt @ ast::StmtAnnAssign {
```
```diff
@@ -588,8 +588,10 @@ impl<'a> Visitor<'a> for Checker<'a> {
             Stmt::Global(ast::StmtGlobal { names, range: _ }) => {
                 if !self.semantic.scope_id.is_global() {
                     for name in names {
-                        if let Some(binding_id) = self.semantic.global_scope().get(name) {
-                            // Mark the binding in the global scope as "rebound" in the current scope.
+                        let binding_id = self.semantic.global_scope().get(name);
+
+                        // Mark the binding in the global scope as "rebound" in the current scope.
+                        if let Some(binding_id) = binding_id {
                             self.semantic
                                 .add_rebinding_scope(binding_id, self.semantic.scope_id);
                         }
@@ -597,7 +599,7 @@ impl<'a> Visitor<'a> for Checker<'a> {
                         // Add a binding to the current scope.
                         let binding_id = self.semantic.push_binding(
                             name.range(),
-                            BindingKind::Global,
+                            BindingKind::Global(binding_id),
                             BindingFlags::GLOBAL,
                         );
                         let scope = self.semantic.current_scope_mut();
@@ -609,7 +611,8 @@ impl<'a> Visitor<'a> for Checker<'a> {
                 if !self.semantic.scope_id.is_global() {
                     for name in names {
                         if let Some((scope_id, binding_id)) = self.semantic.nonlocal(name) {
-                            // Mark the binding as "used".
+                            // Mark the binding as "used", since the `nonlocal` requires an existing
+                            // binding.
                             self.semantic.add_local_reference(
                                 binding_id,
                                 ExprContext::Load,
@@ -624,7 +627,7 @@ impl<'a> Visitor<'a> for Checker<'a> {
                         // Add a binding to the current scope.
                         let binding_id = self.semantic.push_binding(
                             name.range(),
-                            BindingKind::Nonlocal(scope_id),
+                            BindingKind::Nonlocal(binding_id, scope_id),
                             BindingFlags::NONLOCAL,
                         );
                         let scope = self.semantic.current_scope_mut();
@@ -661,7 +664,7 @@ impl<'a> Visitor<'a> for Checker<'a> {
                 AnnotationContext::from_function(function_def, &self.semantic, self.settings);
 
             // The first parameter may be a single dispatch.
-            let mut singledispatch =
+            let singledispatch =
                 flake8_type_checking::helpers::is_singledispatch_implementation(
                     function_def,
                     self.semantic(),
@@ -677,7 +680,6 @@ impl<'a> Visitor<'a> for Checker<'a> {
                     if let Some(expr) = parameter.annotation() {
                         if singledispatch && !parameter.is_variadic() {
                             self.visit_runtime_required_annotation(expr);
-                            singledispatch = false;
                         } else {
                             match annotation {
                                 AnnotationContext::RuntimeRequired => {
```
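The binding changes above make `BindingKind::Global` and `BindingKind::Nonlocal` carry the id of the shadowed binding, so that reads through `global`/`nonlocal` declarations propagate to the original binding (the #11584 fix). The Python scoping semantics being modeled:

```python
counter = 0


def increment():
    global counter  # rebinds the module-level `counter` in this scope
    counter += 1    # both a read and a write of the global binding


def outer():
    total = 0

    def inner():
        nonlocal total  # requires an existing binding in an enclosing scope
        total += 10

    inner()
    return total


increment()
print(counter)  # 1
print(outer())  # 10
```

Without linking the `global`/`nonlocal` binding back to the original, the read of `counter` inside `increment` would not count as a use of the module-level variable, producing false "unused variable" results.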
```diff
@@ -334,6 +334,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
         (Flake8Async, "100") => (RuleGroup::Stable, rules::flake8_async::rules::BlockingHttpCallInAsyncFunction),
         (Flake8Async, "101") => (RuleGroup::Stable, rules::flake8_async::rules::OpenSleepOrSubprocessInAsyncFunction),
         (Flake8Async, "102") => (RuleGroup::Stable, rules::flake8_async::rules::BlockingOsCallInAsyncFunction),
+        (Flake8Async, "116") => (RuleGroup::Preview, rules::flake8_async::rules::SleepForeverCall),
 
         // flake8-trio
         (Flake8Trio, "100") => (RuleGroup::Stable, rules::flake8_trio::rules::TrioTimeoutWithoutAwait),
```
```diff
@@ -4,6 +4,7 @@ use std::iter::Peekable;
 use std::str::FromStr;
 
 use bitflags::bitflags;
+use ruff_python_ast::StringFlags;
 use ruff_python_parser::lexer::LexResult;
 use ruff_python_parser::Tok;
 use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
@@ -45,22 +46,6 @@ pub struct IsortDirectives {
     pub skip_file: bool,
 }
 
-impl IsortDirectives {
-    pub fn is_excluded(&self, offset: TextSize) -> bool {
-        for range in &self.exclusions {
-            if range.contains(offset) {
-                return true;
-            }
-
-            if range.start() > offset {
-                break;
-            }
-        }
-
-        false
-    }
-}
-
 pub struct Directives {
     pub noqa_line_for: NoqaMapping,
     pub isort: IsortDirectives,
```
```diff
@@ -82,12 +82,12 @@ impl Serialize for SerializedMessages<'_> {
             |project_dir| relativize_path_to(message.filename(), project_dir),
         );
 
-        let mut message_fingerprint = fingerprint(message, 0);
+        let mut message_fingerprint = fingerprint(message, &path, 0);
 
         // Make sure that we do not get a fingerprint that is already in use
         // by adding in the previously generated one.
         while fingerprints.contains(&message_fingerprint) {
-            message_fingerprint = fingerprint(message, message_fingerprint);
+            message_fingerprint = fingerprint(message, &path, message_fingerprint);
         }
         fingerprints.insert(message_fingerprint);
@@ -109,12 +109,12 @@ impl Serialize for SerializedMessages<'_> {
 }
 
 /// Generate a unique fingerprint to identify a violation.
-fn fingerprint(message: &Message, salt: u64) -> u64 {
+fn fingerprint(message: &Message, project_path: &str, salt: u64) -> u64 {
     let Message {
         kind,
         range: _,
         fix: _fix,
-        file,
+        file: _,
         noqa_offset: _,
     } = message;
@@ -122,7 +122,7 @@ fn fingerprint(message: &Message, salt: u64) -> u64 {
 
     salt.hash(&mut hasher);
     kind.name.hash(&mut hasher);
-    file.name().hash(&mut hasher);
+    project_path.hash(&mut hasher);
 
     hasher.finish()
 }
```
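The hunks above are the breaking change from #11532: the GitLab fingerprint now hashes the project-relative path instead of the absolute file name, so fingerprints stay stable across machines, and a salting loop re-hashes on collision. A rough Python sketch of the scheme (function names and the hash choice are ours, for illustration; Ruff's actual hasher differs):

```python
import hashlib


def fingerprint(rule: str, project_path: str, salt: int) -> int:
    # Hash the salt, the rule name, and the project-relative path.
    h = hashlib.blake2b(digest_size=8)
    h.update(salt.to_bytes(8, "little"))
    h.update(rule.encode())
    h.update(project_path.encode())  # project-relative, not absolute
    return int.from_bytes(h.digest(), "little")


seen = set()
for rule, path in [("E501", "src/app.py"), ("E501", "src/app.py")]:
    fp = fingerprint(rule, path, 0)
    # On collision, re-salt with the previous fingerprint until unique.
    while fp in seen:
        fp = fingerprint(rule, path, fp)
    seen.add(fp)

print(len(seen))  # 2: the duplicate violation still gets a distinct fingerprint
```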
```diff
@@ -125,8 +125,8 @@ impl Renamer {
         let scope_id = scope.get_all(name).find_map(|binding_id| {
             let binding = semantic.binding(binding_id);
             match binding.kind {
-                BindingKind::Global => Some(ScopeId::global()),
-                BindingKind::Nonlocal(symbol_id) => Some(symbol_id),
+                BindingKind::Global(_) => Some(ScopeId::global()),
+                BindingKind::Nonlocal(_, scope_id) => Some(scope_id),
                 _ => None,
             }
         });
@@ -266,8 +266,8 @@ impl Renamer {
             | BindingKind::LoopVar
             | BindingKind::ComprehensionVar
             | BindingKind::WithItemVar
-            | BindingKind::Global
-            | BindingKind::Nonlocal(_)
+            | BindingKind::Global(_)
+            | BindingKind::Nonlocal(_, _)
             | BindingKind::ClassDefinition(_)
             | BindingKind::FunctionDefinition(_)
             | BindingKind::Deletion
```
```diff
@@ -93,7 +93,7 @@ impl Violation for SysVersion2 {
 /// ## Why is this bad?
 /// If the current major or minor version consists of multiple digits,
 /// `sys.version[0]` will select the first digit of the major version number
-/// only (e.g., `"3.10"` would evaluate to `"1"`). This is likely unintended,
+/// only (e.g., `"10.2"` would evaluate to `"1"`). This is likely unintended,
 /// and can lead to subtle bugs if the version string is used to test against a
 /// major version number.
 ///
```
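The corrected doc example above can be checked directly: indexing `sys.version` only happens to work while the major version has a single digit (the version strings below are illustrative stand-ins for `sys.version`):

```python
# For a single-digit major version, version[0] happens to be right:
print("3.10.4 (main)"[0])   # "3"

# With a multi-digit major version it silently returns the wrong digit:
print("10.2.0 (main)"[0])   # "1"

# Parsing the component numerically avoids the bug:
major = int("10.2.0 (main)".split(".")[0])
print(major)  # 10
```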
```diff
@@ -16,6 +16,7 @@ mod tests {
     #[test_case(Rule::BlockingHttpCallInAsyncFunction, Path::new("ASYNC100.py"))]
     #[test_case(Rule::OpenSleepOrSubprocessInAsyncFunction, Path::new("ASYNC101.py"))]
     #[test_case(Rule::BlockingOsCallInAsyncFunction, Path::new("ASYNC102.py"))]
+    #[test_case(Rule::SleepForeverCall, Path::new("ASYNC116.py"))]
     fn rules(rule_code: Rule, path: &Path) -> Result<()> {
         let snapshot = format!("{}_{}", rule_code.noqa_code(), path.to_string_lossy());
         let diagnostics = test_path(
```
```diff
@@ -1,7 +1,9 @@
 pub(crate) use blocking_http_call::*;
 pub(crate) use blocking_os_call::*;
 pub(crate) use open_sleep_or_subprocess_call::*;
+pub(crate) use sleep_forever_call::*;
 
 mod blocking_http_call;
 mod blocking_os_call;
 mod open_sleep_or_subprocess_call;
+mod sleep_forever_call;
```
@@ -0,0 +1,110 @@
use ruff_diagnostics::{Diagnostic, Edit, Fix, FixAvailability, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{Expr, ExprCall, ExprNumberLiteral, Number};
use ruff_python_semantic::Modules;
use ruff_text_size::Ranged;

use crate::{checkers::ast::Checker, importer::ImportRequest};

/// ## What it does
/// Checks for uses of `trio.sleep()` with an interval greater than 24 hours.
///
/// ## Why is this bad?
/// `trio.sleep()` with an interval greater than 24 hours is usually intended
/// to sleep indefinitely. Instead of using a large interval,
/// `trio.sleep_forever()` better conveys the intent.
///
/// ## Example
/// ```python
/// import trio
///
///
/// async def func():
///     await trio.sleep(86401)
/// ```
///
/// Use instead:
/// ```python
/// import trio
///
///
/// async def func():
///     await trio.sleep_forever()
/// ```
#[violation]
pub struct SleepForeverCall;

impl Violation for SleepForeverCall {
    const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;

    #[derive_message_formats]
    fn message(&self) -> String {
        format!("`trio.sleep()` with >24 hour interval should usually be `trio.sleep_forever()`")
    }

    fn fix_title(&self) -> Option<String> {
        Some(format!("Replace with `trio.sleep_forever()`"))
    }
}

/// ASYNC116
pub(crate) fn sleep_forever_call(checker: &mut Checker, call: &ExprCall) {
    if !checker.semantic().seen_module(Modules::TRIO) {
        return;
    }

    if call.arguments.len() != 1 {
        return;
    }

    let Some(arg) = call.arguments.find_argument("seconds", 0) else {
        return;
    };

    if !checker
        .semantic()
        .resolve_qualified_name(call.func.as_ref())
        .is_some_and(|qualified_name| matches!(qualified_name.segments(), ["trio", "sleep"]))
    {
        return;
    }

    let Expr::NumberLiteral(ExprNumberLiteral { value, .. }) = arg else {
        return;
    };

    // TODO(ekohilas): Replace with Duration::from_days(1).as_secs(); when available.
    let one_day_in_secs = 60 * 60 * 24;
    match value {
        Number::Int(int_value) => {
            let Some(int_value) = int_value.as_u64() else {
                return;
            };
            if int_value <= one_day_in_secs {
                return;
            }
        }
        Number::Float(float_value) => {
            #[allow(clippy::cast_precision_loss)]
            if *float_value <= one_day_in_secs as f64 {
                return;
            }
        }
        Number::Complex { .. } => return,
    }

    let mut diagnostic = Diagnostic::new(SleepForeverCall, call.range());
    let replacement_function = "sleep_forever";
    diagnostic.try_set_fix(|| {
        let (import_edit, binding) = checker.importer().get_or_import_symbol(
            &ImportRequest::import_from("trio", replacement_function),
            call.func.start(),
            checker.semantic(),
        )?;
        let reference_edit = Edit::range_replacement(binding, call.func.range());
        let arg_edit = Edit::range_replacement("()".to_string(), call.arguments.range());
        Ok(Fix::unsafe_edits(import_edit, [reference_edit, arg_edit]))
    });
    checker.diagnostics.push(diagnostic);
}
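The numeric threshold logic in `sleep_forever_call` above can be sketched in Python (an illustrative mirror of the Rust match on `Number`, not ruff's actual implementation):

```python
# Hypothetical mirror of the ASYNC116 threshold check: flag trio.sleep(interval)
# when the literal interval exceeds 24 hours (86400 seconds).
ONE_DAY_IN_SECS = 60 * 60 * 24  # 86400

def exceeds_one_day(value) -> bool:
    """Return True if a numeric sleep interval should trigger ASYNC116."""
    if isinstance(value, complex):  # complex literals are ignored, like Number::Complex
        return False
    return value > ONE_DAY_IN_SECS

assert not exceeds_one_day(86400)   # exactly one day: allowed
assert exceeds_one_day(86401)       # one second over: flagged
assert exceeds_one_day(1e999)       # the 'inf literal' overflow trick: flagged
```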
@@ -0,0 +1,145 @@
---
source: crates/ruff_linter/src/rules/flake8_async/mod.rs
---
ASYNC116.py:11:11: ASYNC116 [*] `trio.sleep()` with >24 hour interval should usually be `trio.sleep_forever()`
   |
10 | # These examples are probably not meant to ever wake up:
11 |     await trio.sleep(100000)  # error: 116, "async"
   |     ^^^^^^^^^^^^^^^^^^ ASYNC116
12 |
13 | # 'inf literal' overflow trick
   |
   = help: Replace with `trio.sleep_forever()`

ℹ Unsafe fix
8  8 | import trio
9  9 |
10 10 | # These examples are probably not meant to ever wake up:
11    |-    await trio.sleep(100000)  # error: 116, "async"
   11 |+    await trio.sleep_forever()  # error: 116, "async"
12 12 |
13 13 | # 'inf literal' overflow trick
14 14 |     await trio.sleep(1e999)  # error: 116, "async"

ASYNC116.py:14:11: ASYNC116 [*] `trio.sleep()` with >24 hour interval should usually be `trio.sleep_forever()`
   |
13 | # 'inf literal' overflow trick
14 |     await trio.sleep(1e999)  # error: 116, "async"
   |     ^^^^^^^^^^^^^^^^^ ASYNC116
15 |
16 |     await trio.sleep(86399)
   |
   = help: Replace with `trio.sleep_forever()`

ℹ Unsafe fix
11 11 |     await trio.sleep(100000)  # error: 116, "async"
12 12 |
13 13 | # 'inf literal' overflow trick
14    |-    await trio.sleep(1e999)  # error: 116, "async"
   14 |+    await trio.sleep_forever()  # error: 116, "async"
15 15 |
16 16 |     await trio.sleep(86399)
17 17 |     await trio.sleep(86400)

ASYNC116.py:18:11: ASYNC116 [*] `trio.sleep()` with >24 hour interval should usually be `trio.sleep_forever()`
   |
16 |     await trio.sleep(86399)
17 |     await trio.sleep(86400)
18 |     await trio.sleep(86400.01)  # error: 116, "async"
   |     ^^^^^^^^^^^^^^^^^^^^ ASYNC116
19 |     await trio.sleep(86401)  # error: 116, "async"
   |
   = help: Replace with `trio.sleep_forever()`

ℹ Unsafe fix
15 15 |
16 16 |     await trio.sleep(86399)
17 17 |     await trio.sleep(86400)
18    |-    await trio.sleep(86400.01)  # error: 116, "async"
   18 |+    await trio.sleep_forever()  # error: 116, "async"
19 19 |     await trio.sleep(86401)  # error: 116, "async"
20 20 |
21 21 |     await trio.sleep(-1)  # will raise a runtime error

ASYNC116.py:19:11: ASYNC116 [*] `trio.sleep()` with >24 hour interval should usually be `trio.sleep_forever()`
   |
17 |     await trio.sleep(86400)
18 |     await trio.sleep(86400.01)  # error: 116, "async"
19 |     await trio.sleep(86401)  # error: 116, "async"
   |     ^^^^^^^^^^^^^^^^^ ASYNC116
20 |
21 |     await trio.sleep(-1)  # will raise a runtime error
   |
   = help: Replace with `trio.sleep_forever()`

ℹ Unsafe fix
16 16 |     await trio.sleep(86399)
17 17 |     await trio.sleep(86400)
18 18 |     await trio.sleep(86400.01)  # error: 116, "async"
19    |-    await trio.sleep(86401)  # error: 116, "async"
   19 |+    await trio.sleep_forever()  # error: 116, "async"
20 20 |
21 21 |     await trio.sleep(-1)  # will raise a runtime error
22 22 |     await trio.sleep(0)  # handled by different check

ASYNC116.py:48:5: ASYNC116 [*] `trio.sleep()` with >24 hour interval should usually be `trio.sleep_forever()`
   |
47 | # does not require the call to be awaited, nor in an async fun
48 |     trio.sleep(86401)  # error: 116, "async"
   |     ^^^^^^^^^^^^^^^^^ ASYNC116
49 | # also checks that we don't break visit_Call
50 |     trio.run(trio.sleep(86401))  # error: 116, "async"
   |
   = help: Replace with `trio.sleep_forever()`

ℹ Unsafe fix
45 45 | import trio
46 46 |
47 47 | # does not require the call to be awaited, nor in an async fun
48    |-    trio.sleep(86401)  # error: 116, "async"
   48 |+    trio.sleep_forever()  # error: 116, "async"
49 49 | # also checks that we don't break visit_Call
50 50 |     trio.run(trio.sleep(86401))  # error: 116, "async"
51 51 |

ASYNC116.py:50:14: ASYNC116 [*] `trio.sleep()` with >24 hour interval should usually be `trio.sleep_forever()`
   |
48 |     trio.sleep(86401)  # error: 116, "async"
49 | # also checks that we don't break visit_Call
50 |     trio.run(trio.sleep(86401))  # error: 116, "async"
   |              ^^^^^^^^^^^^^^^^^ ASYNC116
   |
   = help: Replace with `trio.sleep_forever()`

ℹ Unsafe fix
47 47 | # does not require the call to be awaited, nor in an async fun
48 48 |     trio.sleep(86401)  # error: 116, "async"
49 49 | # also checks that we don't break visit_Call
50    |-    trio.run(trio.sleep(86401))  # error: 116, "async"
   50 |+    trio.run(trio.sleep_forever())  # error: 116, "async"
51 51 |
52 52 |
53 53 | async def import_from_trio():

ASYNC116.py:57:11: ASYNC116 [*] `trio.sleep()` with >24 hour interval should usually be `trio.sleep_forever()`
   |
56 |     # catch from import
57 |     await sleep(86401)  # error: 116, "async"
   |     ^^^^^^^^^^^^ ASYNC116
   |
   = help: Replace with `trio.sleep_forever()`

ℹ Unsafe fix
2 2 | # ASYNCIO_NO_ERROR - no asyncio.sleep_forever, so check intentionally doesn't trigger.
3 3 | import math
4 4 | from math import inf
  5 |+from trio import sleep_forever
5 6 |
6 7 |
7 8 | async def import_trio():
--------------------------------------------------------------------------------
54 55 |     from trio import sleep
55 56 |
56 57 |     # catch from import
57    |-    await sleep(86401)  # error: 116, "async"
   58 |+    await sleep_forever()  # error: 116, "async"
@@ -58,7 +58,7 @@ pub(crate) fn request_without_timeout(checker: &mut Checker, call: &ast::ExprCal
            qualified_name.segments(),
            [
                "requests",
                "get" | "options" | "head" | "post" | "put" | "patch" | "delete"
                "get" | "options" | "head" | "post" | "put" | "patch" | "delete" | "request"
            ]
        )
    })
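The widened qualified-name match above (adding `"request"` to the list of flagged `requests` helpers) can be sketched in Python; the set and function names here are illustrative, not ruff's API:

```python
# Hypothetical mirror of the updated match arms: `requests.request` is now
# flagged alongside the verb-specific helpers.
FLAGGED_METHODS = {"get", "options", "head", "post", "put", "patch", "delete", "request"}

def is_flagged_requests_call(qualified_name: tuple) -> bool:
    """True if the call resolves to a `requests` helper that needs a timeout."""
    return (
        len(qualified_name) == 2
        and qualified_name[0] == "requests"
        and qualified_name[1] in FLAGGED_METHODS
    )

assert is_flagged_requests_call(("requests", "request"))   # newly covered
assert is_flagged_requests_call(("requests", "get"))
assert not is_flagged_requests_call(("requests", "Session"))
```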
@@ -7,7 +7,6 @@ use ruff_python_ast::Expr;
use ruff_text_size::{Ranged, TextSize};

use crate::checkers::ast::Checker;
use crate::importer::Importer;

/// ## What it does
/// Checks for uses of PEP 585- and PEP 604-style type annotations in Python
@@ -87,13 +86,11 @@ impl AlwaysFixableViolation for FutureRequiredTypeAnnotation {
/// FA102
pub(crate) fn future_required_type_annotation(checker: &mut Checker, expr: &Expr, reason: Reason) {
    let mut diagnostic = Diagnostic::new(FutureRequiredTypeAnnotation { reason }, expr.range());
    if let Some(python_ast) = checker.semantic().definitions.python_ast() {
        let required_import =
            AnyImport::ImportFrom(ImportFrom::member("__future__", "annotations"));
        diagnostic.set_fix(Fix::unsafe_edit(
            Importer::new(python_ast, checker.locator(), checker.stylist())
                .add_import(&required_import, TextSize::default()),
        ));
    }
    let required_import = AnyImport::ImportFrom(ImportFrom::member("__future__", "annotations"));
    diagnostic.set_fix(Fix::unsafe_edit(
        checker
            .importer()
            .add_import(&required_import, TextSize::default()),
    ));
    checker.diagnostics.push(diagnostic);
}
@@ -10,7 +10,7 @@ G010.py:6:9: G010 [*] Logging statement uses `warn` instead of `warning`
7 | log.warn("Hello world!")  # This shouldn't be considered as a logger candidate
8 | logger.warn("Hello world!")
  |
= help: Convert to `warn`
= help: Convert to `warning`

ℹ Safe fix
3 3 |
@@ -31,7 +31,7 @@ G010.py:8:8: G010 [*] Logging statement uses `warn` instead of `warning`
9 |
10 | logging . warn("Hello World!")
  |
= help: Convert to `warn`
= help: Convert to `warning`

ℹ Safe fix
5 5 |
@@ -52,7 +52,7 @@ G010.py:10:11: G010 [*] Logging statement uses `warn` instead of `warning`
11 |
12 | from logging import warn, warning, exception
  |
= help: Convert to `warn`
= help: Convert to `warning`

ℹ Safe fix
7 7 | log.warn("Hello world!")  # This shouldn't be considered as a logger candidate
@@ -72,7 +72,7 @@ G010.py:13:1: G010 [*] Logging statement uses `warn` instead of `warning`
14 | warning("foo")
15 | exception("foo")
  |
= help: Convert to `warn`
= help: Convert to `warning`

ℹ Safe fix
10 10 | logging . warn("Hello World!")

@@ -383,7 +383,7 @@ impl AlwaysFixableViolation for LoggingWarn {
    }

    fn fix_title(&self) -> String {
        "Convert to `warn`".to_string()
        "Convert to `warning`".to_string()
    }
}
@@ -1,4 +1,4 @@
use ruff_python_ast::AnyStringFlags;
use ruff_python_ast::{AnyStringFlags, StringFlags};
use ruff_text_size::TextLen;

/// Returns the raw contents of the string given the string's contents and flags.

@@ -1,7 +1,7 @@
use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::visitor::{walk_f_string, Visitor};
use ruff_python_ast::{self as ast, AnyStringFlags, StringLike};
use ruff_python_ast::{self as ast, AnyStringFlags, StringFlags, StringLike};
use ruff_source_file::Locator;
use ruff_text_size::{Ranged, TextRange, TextSize};

@@ -1,6 +1,6 @@
use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast, AnyStringFlags, StringLike};
use ruff_python_ast::{self as ast, AnyStringFlags, StringFlags, StringLike};
use ruff_source_file::Locator;
use ruff_text_size::{Ranged, TextRange};
@@ -20,6 +20,9 @@ use crate::checkers::ast::Checker;
/// versions, that it will have the same type, or that it will have the same
/// behavior. Instead, use the class's public interface.
///
/// This rule ignores accesses on dunder methods (e.g., `__init__`) and sunder
/// methods (e.g., `_missing_`).
///
/// ## Example
/// ```python
/// class Class:
@@ -70,128 +73,143 @@ pub(crate) fn private_member_access(checker: &mut Checker, expr: &Expr) {
        return;
    }

    if (attr.starts_with("__") && !attr.ends_with("__"))
        || (attr.starts_with('_') && !attr.starts_with("__"))
    // Ignore non-private accesses.
    if !attr.starts_with('_') {
        return;
    }

    // Ignore dunder accesses.
    let is_dunder = attr.starts_with("__") && attr.ends_with("__");
    if is_dunder {
        return;
    }

    // Ignore sunder accesses.
    let is_sunder = attr.starts_with('_')
        && attr.ends_with('_')
        && !attr.starts_with("__")
        && !attr.ends_with("__");
    if is_sunder {
        return;
    }

    if checker
        .settings
        .flake8_self
        .ignore_names
        .contains(attr.as_ref())
    {
        return;
    }

    // Ignore accesses on instances within special methods (e.g., `__eq__`).
    if let ScopeKind::Function(ast::StmtFunctionDef { name, .. }) =
        checker.semantic().current_scope().kind
    {
        if matches!(
            name.as_str(),
            "__lt__"
                | "__le__"
                | "__eq__"
                | "__ne__"
                | "__gt__"
                | "__ge__"
                | "__add__"
                | "__sub__"
                | "__mul__"
                | "__matmul__"
                | "__truediv__"
                | "__floordiv__"
                | "__mod__"
                | "__divmod__"
                | "__pow__"
                | "__lshift__"
                | "__rshift__"
                | "__and__"
                | "__xor__"
                | "__or__"
                | "__radd__"
                | "__rsub__"
                | "__rmul__"
                | "__rmatmul__"
                | "__rtruediv__"
                | "__rfloordiv__"
                | "__rmod__"
                | "__rdivmod__"
                | "__rpow__"
                | "__rlshift__"
                | "__rrshift__"
                | "__rand__"
                | "__rxor__"
                | "__ror__"
                | "__iadd__"
                | "__isub__"
                | "__imul__"
                | "__imatmul__"
                | "__itruediv__"
                | "__ifloordiv__"
                | "__imod__"
                | "__ipow__"
                | "__ilshift__"
                | "__irshift__"
                | "__iand__"
                | "__ixor__"
                | "__ior__"
        ) {
            return;
        }
    }

    // Allow some documented private methods, like `os._exit()`.
    if let Some(qualified_name) = checker.semantic().resolve_qualified_name(expr) {
        if matches!(qualified_name.segments(), ["os", "_exit"]) {
            return;
        }
    }

    if let Expr::Call(ast::ExprCall { func, .. }) = value.as_ref() {
        // Ignore `super()` calls.
        if let Some(name) = UnqualifiedName::from_expr(func) {
            if matches!(name.segments(), ["super"]) {
                return;
            }
        }
    }

    if let Some(name) = UnqualifiedName::from_expr(value) {
        // Ignore `self` and `cls` accesses.
        if matches!(name.segments(), ["self" | "cls" | "mcs"]) {
            return;
        }
    }

    if let Expr::Name(name) = value.as_ref() {
        // Ignore accesses on class members from _within_ the class.
        if checker
            .settings
            .flake8_self
            .ignore_names
            .contains(attr.as_ref())
            .semantic()
            .resolve_name(name)
            .and_then(|id| {
                if let BindingKind::ClassDefinition(scope) = checker.semantic().binding(id).kind {
                    Some(scope)
                } else {
                    None
                }
            })
            .is_some_and(|scope| {
                checker
                    .semantic()
                    .current_scope_ids()
                    .any(|parent| scope == parent)
            })
        {
            return;
        }

        // Ignore accesses on instances within special methods (e.g., `__eq__`).
        if let ScopeKind::Function(ast::StmtFunctionDef { name, .. }) =
            checker.semantic().current_scope().kind
        {
            if matches!(
                name.as_str(),
                "__lt__"
                    | "__le__"
                    | "__eq__"
                    | "__ne__"
                    | "__gt__"
                    | "__ge__"
                    | "__add__"
                    | "__sub__"
                    | "__mul__"
                    | "__matmul__"
                    | "__truediv__"
                    | "__floordiv__"
                    | "__mod__"
                    | "__divmod__"
                    | "__pow__"
                    | "__lshift__"
                    | "__rshift__"
                    | "__and__"
                    | "__xor__"
                    | "__or__"
                    | "__radd__"
                    | "__rsub__"
                    | "__rmul__"
                    | "__rmatmul__"
                    | "__rtruediv__"
                    | "__rfloordiv__"
                    | "__rmod__"
                    | "__rdivmod__"
                    | "__rpow__"
                    | "__rlshift__"
                    | "__rrshift__"
                    | "__rand__"
                    | "__rxor__"
                    | "__ror__"
                    | "__iadd__"
                    | "__isub__"
                    | "__imul__"
                    | "__imatmul__"
                    | "__itruediv__"
                    | "__ifloordiv__"
                    | "__imod__"
                    | "__ipow__"
                    | "__ilshift__"
                    | "__irshift__"
                    | "__iand__"
                    | "__ixor__"
                    | "__ior__"
            ) {
                return;
            }
        }

        // Allow some documented private methods, like `os._exit()`.
        if let Some(qualified_name) = checker.semantic().resolve_qualified_name(expr) {
            if matches!(qualified_name.segments(), ["os", "_exit"]) {
                return;
            }
        }

        if let Expr::Call(ast::ExprCall { func, .. }) = value.as_ref() {
            // Ignore `super()` calls.
            if let Some(name) = UnqualifiedName::from_expr(func) {
                if matches!(name.segments(), ["super"]) {
                    return;
                }
            }
        }

        if let Some(name) = UnqualifiedName::from_expr(value) {
            // Ignore `self` and `cls` accesses.
            if matches!(name.segments(), ["self" | "cls" | "mcs"]) {
                return;
            }
        }

        if let Expr::Name(name) = value.as_ref() {
            // Ignore accesses on class members from _within_ the class.
            if checker
                .semantic()
                .resolve_name(name)
                .and_then(|id| {
                    if let BindingKind::ClassDefinition(scope) = checker.semantic().binding(id).kind
                    {
                        Some(scope)
                    } else {
                        None
                    }
                })
                .is_some_and(|scope| {
                    checker
                        .semantic()
                        .current_scope_ids()
                        .any(|parent| scope == parent)
                })
            {
                return;
            }
        }

    checker.diagnostics.push(Diagnostic::new(
        PrivateMemberAccess {
            access: attr.to_string(),
        },
        expr.range(),
    ));
}

    checker.diagnostics.push(Diagnostic::new(
        PrivateMemberAccess {
            access: attr.to_string(),
        },
        expr.range(),
    ));
}
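The new dunder/sunder exemptions above can be sketched in Python (a hypothetical classifier for illustration; ruff performs these checks on `&str` in Rust):

```python
# Classify an attribute name the way the updated SLF001 checks above do:
# public names and dunder/sunder names are ignored; other underscore-prefixed
# names are treated as private accesses.
def classify(attr: str) -> str:
    if not attr.startswith("_"):
        return "public"
    if attr.startswith("__") and attr.endswith("__"):
        return "dunder"
    if attr.endswith("_") and not attr.startswith("__") and not attr.endswith("__"):
        return "sunder"
    return "private"

assert classify("open") == "public"
assert classify("__init__") == "dunder"
assert classify("_missing_") == "sunder"
assert classify("_secret") == "private"
```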
@@ -2,7 +2,7 @@ use ruff_python_ast::{self as ast, Expr, Stmt};

use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_semantic::SemanticModel;
use ruff_python_semantic::{ScopeKind, SemanticModel};
use ruff_text_size::Ranged;

use crate::checkers::ast::Checker;
@@ -114,24 +114,27 @@ fn match_exit_stack(semantic: &SemanticModel) -> bool {

/// Return `true` if `func` is the builtin `open` or `pathlib.Path(...).open`.
fn is_open(semantic: &SemanticModel, func: &Expr) -> bool {
    // open(...)
    // Ex) `open(...)`
    if semantic.match_builtin_expr(func, "open") {
        return true;
    }

    // pathlib.Path(...).open()
    // Ex) `pathlib.Path(...).open()`
    let Expr::Attribute(ast::ExprAttribute { attr, value, .. }) = func else {
        return false;
    };

    if attr != "open" {
        return false;
    }

    let Expr::Call(ast::ExprCall {
        func: value_func, ..
    }) = &**value
    else {
        return false;
    };

    semantic
        .resolve_qualified_name(value_func)
        .is_some_and(|qualified_name| matches!(qualified_name.segments(), ["pathlib", "Path"]))
@@ -189,6 +192,15 @@ pub(crate) fn open_file_with_context_handler(checker: &mut Checker, func: &Expr)
        return;
    }

    // Ex) `def __enter__(self): ...`
    if let ScopeKind::Function(ast::StmtFunctionDef { name, .. }) =
        &checker.semantic().current_scope().kind
    {
        if name == "__enter__" {
            return;
        }
    }

    checker
        .diagnostics
        .push(Diagnostic::new(OpenFileWithContextHandler, func.range()));
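The shape of the `is_open` check above (builtin `open(...)` or `pathlib.Path(...).open(...)`) can be approximated with Python's `ast` module; this is a textual sketch of the syntactic part only, since ruff's version also resolves `pathlib.Path` semantically:

```python
import ast

def is_open_call(source: str) -> bool:
    """Rough syntactic analogue of `is_open`: open(...) or <call>.open(...)."""
    node = ast.parse(source, mode="eval").body
    if not isinstance(node, ast.Call):
        return False
    func = node.func
    # Ex) `open(...)`
    if isinstance(func, ast.Name) and func.id == "open":
        return True
    # Ex) `pathlib.Path(...).open()` -- here we only check that `.open` is
    # called on the result of another call, without resolving `pathlib.Path`.
    return isinstance(func, ast.Attribute) and func.attr == "open" and isinstance(func.value, ast.Call)

assert is_open_call("open('f.txt')")
assert is_open_call("pathlib.Path('f.txt').open()")
assert not is_open_call("f.read()")
```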
@@ -1,27 +1,25 @@
---
source: crates/ruff_linter/src/rules/flake8_type_checking/mod.rs
---
singledispatch.py:10:20: TCH002 [*] Move third-party import `pandas.DataFrame` into a type-checking block
singledispatch.py:11:20: TCH002 [*] Move third-party import `pandas.DataFrame` into a type-checking block
   |
 8 | from numpy.typing import ArrayLike
 9 | from scipy.sparse import spmatrix
10 | from pandas import DataFrame
 9 | from numpy.typing import ArrayLike
10 | from scipy.sparse import spmatrix
11 | from pandas import DataFrame
   | ^^^^^^^^^ TCH002
11 |
12 | if TYPE_CHECKING:
12 |
13 | if TYPE_CHECKING:
   |
   = help: Move into type-checking block

ℹ Unsafe fix
7  7  | from numpy import asarray
8  8  | from numpy.typing import ArrayLike
9  9  | from scipy.sparse import spmatrix
10    |-from pandas import DataFrame
11 10 |
12 11 | if TYPE_CHECKING:
   12 |+    from pandas import DataFrame
13 13 |     from numpy import ndarray
14 14 |
8  8  | from numpy import asarray
9  9  | from numpy.typing import ArrayLike
10 10 | from scipy.sparse import spmatrix
11    |-from pandas import DataFrame
12 11 |
13 12 | if TYPE_CHECKING:
   13 |+    from pandas import DataFrame
14 14 |     from numpy import ndarray
15 15 |
16 16 |
@@ -383,26 +383,6 @@ impl KnownModules {
        Some((section, reason))
    }

    /// Return the list of modules that are known to be of a given type.
    pub fn modules_for_known_type(
        &self,
        import_type: ImportType,
    ) -> impl Iterator<Item = &glob::Pattern> {
        self.known
            .iter()
            .filter_map(move |(module, known_section)| {
                if let ImportSection::Known(section) = known_section {
                    if *section == import_type {
                        Some(module)
                    } else {
                        None
                    }
                } else {
                    None
                }
            })
    }

    /// Return the list of user-defined modules, indexed by section.
    pub fn user_defined(&self) -> FxHashMap<&str, Vec<&glob::Pattern>> {
        let mut user_defined: FxHashMap<&str, Vec<&glob::Pattern>> = FxHashMap::default();
@@ -96,10 +96,27 @@ fn get_complexity_number(stmts: &[Stmt]) -> usize {
                complexity += get_complexity_number(orelse);
            }
            Stmt::Match(ast::StmtMatch { cases, .. }) => {
                complexity += 1;
                for case in cases {
                    complexity += 1;
                    complexity += get_complexity_number(&case.body);
                }
                if let Some(last_case) = cases.last() {
                    // The complexity of an irrefutable pattern is similar to an `else` block of an `if` statement.
                    //
                    // For example:
                    // ```python
                    // match subject:
                    //     case 1: ...
                    //     case _: ...
                    //
                    // match subject:
                    //     case 1: ...
                    //     case foo: ...
                    // ```
                    if last_case.guard.is_none() && last_case.pattern.is_irrefutable() {
                        complexity -= 1;
                    }
                }
            }
            Stmt::Try(ast::StmtTry {
                body,
@@ -424,6 +441,68 @@ def with_lock():
    with lock:
        if foo:
            print('bar')
";
        let stmts = parse_suite(source)?;
        assert_eq!(get_complexity_number(&stmts), 2);
        Ok(())
    }

    #[test]
    fn simple_match_case() -> Result<()> {
        let source = r"
def f():
    match subject:
        case 2:
            print('foo')
        case _:
            print('bar')
";
        let stmts = parse_suite(source)?;
        assert_eq!(get_complexity_number(&stmts), 2);
        Ok(())
    }

    #[test]
    fn multiple_match_case() -> Result<()> {
        let source = r"
def f():
    match subject:
        case 2:
            print('foo')
        case 2:
            print('bar')
        case _:
            print('baz')
";
        let stmts = parse_suite(source)?;
        assert_eq!(get_complexity_number(&stmts), 3);
        Ok(())
    }

    #[test]
    fn named_catch_all_match_case() -> Result<()> {
        let source = r"
def f():
    match subject:
        case 2:
            print('hello')
        case x:
            print(x)
";
        let stmts = parse_suite(source)?;
        assert_eq!(get_complexity_number(&stmts), 2);
        Ok(())
    }

    #[test]
    fn match_case_catch_all_with_sequence() -> Result<()> {
        let source = r"
def f():
    match subject:
        case 2:
            print('hello')
        case 5 | _:
            print(x)
";
        let stmts = parse_suite(source)?;
        assert_eq!(get_complexity_number(&stmts), 2);
@@ -184,6 +184,12 @@ pub(crate) fn numpy_2_0_deprecation(checker: &mut Checker, expr: &Expr) {
                guideline: Some("`add_newdoc_ufunc` is an internal function."),
            },
        }),
        ["numpy", "alltrue"] => Some(Replacement {
            existing: "alltrue",
            details: Details::AutoPurePython {
                python_expr: "all",
            },
        }),
        ["numpy", "asfarray"] => Some(Replacement {
            existing: "asfarray",
            details: Details::Manual {
@@ -234,6 +240,14 @@ pub(crate) fn numpy_2_0_deprecation(checker: &mut Checker, expr: &Expr) {
                compatibility: Compatibility::BackwardsCompatible,
            },
        }),
        ["numpy", "cumproduct"] => Some(Replacement {
            existing: "cumproduct",
            details: Details::AutoImport {
                path: "numpy",
                name: "cumprod",
                compatibility: Compatibility::BackwardsCompatible,
            },
        }),
        ["numpy", "DataSource"] => Some(Replacement {
            existing: "DataSource",
            details: Details::AutoImport {
@@ -420,6 +434,14 @@ pub(crate) fn numpy_2_0_deprecation(checker: &mut Checker, expr: &Expr) {
                compatibility: Compatibility::BackwardsCompatible,
            },
        }),
        ["numpy", "product"] => Some(Replacement {
            existing: "product",
            details: Details::AutoImport {
                path: "numpy",
                name: "prod",
                compatibility: Compatibility::BackwardsCompatible,
            },
        }),
        ["numpy", "PZERO"] => Some(Replacement {
            existing: "PZERO",
            details: Details::AutoPurePython { python_expr: "0.0" },
@@ -492,6 +514,12 @@ pub(crate) fn numpy_2_0_deprecation(checker: &mut Checker, expr: &Expr) {
                compatibility: Compatibility::BackwardsCompatible,
            },
        }),
        ["numpy", "sometrue"] => Some(Replacement {
            existing: "sometrue",
            details: Details::AutoPurePython {
                python_expr: "any",
            },
        }),
        ["numpy", "source"] => Some(Replacement {
            existing: "source",
            details: Details::AutoImport {
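The pure-Python replacements registered above map `np.alltrue`/`np.sometrue` to the builtins `all`/`any`, which behave the same way on plain sequences:

```python
# The NPY201 fixes rewrite np.alltrue(x) -> all(x) and np.sometrue(x) -> any(x);
# on ordinary Python sequences the builtins give the same truth values.
values = [True, True, False]

assert all(values) is False   # replaces np.alltrue(values)
assert any(values) is True    # replaces np.sometrue(values)
assert all([]) is True        # vacuous truth, matching np.alltrue([])
assert any([]) is False       # matching np.sometrue([])
```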
@@ -854,6 +854,8 @@ NPY201.py:108:5: NPY201 [*] `np.row_stack` will be removed in NumPy 2.0. Use `nu
107 |
108 |     np.row_stack(([1,2], [3,4]))
    |     ^^^^^^^^^^^^ NPY201
109 |
110 |     np.alltrue([True, True])
    |
    = help: Replace with `numpy.vstack`

@@ -863,5 +865,65 @@ NPY201.py:108:5: NPY201 [*] `np.row_stack` will be removed in NumPy 2.0. Use `nu
107 107 |
108     |-    np.row_stack(([1,2], [3,4]))
    108 |+    np.vstack(([1,2], [3,4]))
109 109 |
110 110 |     np.alltrue([True, True])
111 111 |

NPY201.py:110:5: NPY201 [*] `np.alltrue` will be removed in NumPy 2.0. Use `all` instead.
    |
108 |     np.row_stack(([1,2], [3,4]))
109 |
110 |     np.alltrue([True, True])
    |     ^^^^^^^^^^ NPY201
111 |
112 |     np.anytrue([True, False])
    |
    = help: Replace with `all`

ℹ Safe fix
107 107 |
108 108 |     np.row_stack(([1,2], [3,4]))
109 109 |
110     |-    np.alltrue([True, True])
    110 |+    all([True, True])
111 111 |
112 112 |     np.anytrue([True, False])
113 113 |

NPY201.py:114:5: NPY201 [*] `np.cumproduct` will be removed in NumPy 2.0. Use `numpy.cumprod` instead.
    |
112 |     np.anytrue([True, False])
113 |
114 |     np.cumproduct([1, 2, 3])
    |     ^^^^^^^^^^^^^ NPY201
115 |
116 |     np.product([1, 2, 3])
    |
    = help: Replace with `numpy.cumprod`

ℹ Safe fix
111 111 |
112 112 |     np.anytrue([True, False])
113 113 |
114     |-    np.cumproduct([1, 2, 3])
    114 |+    np.cumprod([1, 2, 3])
115 115 |
116 116 |     np.product([1, 2, 3])

NPY201.py:116:5: NPY201 [*] `np.product` will be removed in NumPy 2.0. Use `numpy.prod` instead.
    |
114 |     np.cumproduct([1, 2, 3])
115 |
116 |     np.product([1, 2, 3])
    |     ^^^^^^^^^^ NPY201
    |
    = help: Replace with `numpy.prod`

ℹ Safe fix
113 113 |
114 114 |     np.cumproduct([1, 2, 3])
115 115 |
116     |-    np.product([1, 2, 3])
    116 |+    np.prod([1, 2, 3])
@@ -48,8 +48,8 @@ pub(super) fn test_expression(expr: &Expr, semantic: &SemanticModel) -> Resoluti
        | BindingKind::NamedExprAssignment
        | BindingKind::LoopVar
        | BindingKind::ComprehensionVar
        | BindingKind::Global
        | BindingKind::Nonlocal(_) => Resolution::RelevantLocal,
        | BindingKind::Global(_)
        | BindingKind::Nonlocal(_, _) => Resolution::RelevantLocal,
        BindingKind::Import(import)
            if matches!(import.qualified_name().segments(), ["pandas"]) =>
        {
@@ -2,7 +2,7 @@ use std::str::FromStr;

 use ruff_diagnostics::{Diagnostic, Violation};
 use ruff_macros::{derive_message_formats, violation};
-use ruff_python_ast::{AnyStringFlags, Expr, ExprStringLiteral};
+use ruff_python_ast::{Expr, ExprStringLiteral, StringFlags, StringLiteral};
 use ruff_python_literal::{
     cformat::{CFormatErrorType, CFormatString},
     format::FormatPart,

@@ -90,9 +90,13 @@ pub(crate) fn call(checker: &mut Checker, string: &str, range: TextRange) {
 /// PLE1300
 /// Ex) `"%z" % "1"`
 pub(crate) fn percent(checker: &mut Checker, expr: &Expr, format_string: &ExprStringLiteral) {
-    for string_literal in &format_string.value {
-        let string = checker.locator().slice(string_literal);
-        let flags = AnyStringFlags::from(string_literal.flags);
+    for StringLiteral {
+        value: _,
+        range,
+        flags,
+    } in &format_string.value
+    {
+        let string = checker.locator().slice(range);
+        let string = &string
+            [usize::from(flags.opener_len())..(string.len() - usize::from(flags.closer_len()))];

@@ -1,6 +1,6 @@
 use std::str::FromStr;

-use ruff_python_ast::{self as ast, AnyStringFlags, Expr};
+use ruff_python_ast::{self as ast, Expr, StringFlags, StringLiteral};
 use ruff_python_literal::cformat::{CFormatPart, CFormatSpec, CFormatStrOrBytes, CFormatString};
 use ruff_text_size::Ranged;
 use rustc_hash::FxHashMap;

@@ -217,12 +217,15 @@ pub(crate) fn bad_string_format_type(
 ) {
     // Parse each string segment.
     let mut format_strings = vec![];
-    for string_literal in &format_string.value {
-        let string = checker.locator().slice(string_literal);
-        let flags = AnyStringFlags::from(string_literal.flags);
-        let quote_len = usize::from(flags.quote_len());
-        let string =
-            &string[(usize::from(flags.prefix_len()) + quote_len)..(string.len() - quote_len)];
+    for StringLiteral {
+        value: _,
+        range,
+        flags,
+    } in &format_string.value
+    {
+        let string = checker.locator().slice(range);
+        let string = &string
+            [usize::from(flags.opener_len())..(string.len() - usize::from(flags.closer_len()))];

         // Parse the format string (e.g. `"%s"`) into a list of `PercentFormat`.
         if let Ok(format_string) = CFormatString::from_str(string) {

@@ -54,8 +54,8 @@ pub(crate) fn non_ascii_name(binding: &Binding, locator: &Locator) -> Option<Dia
         BindingKind::LoopVar => Kind::LoopVar,
         BindingKind::ComprehensionVar => Kind::ComprenhensionVar,
         BindingKind::WithItemVar => Kind::WithItemVar,
-        BindingKind::Global => Kind::Global,
-        BindingKind::Nonlocal(_) => Kind::Nonlocal,
+        BindingKind::Global(_) => Kind::Global,
+        BindingKind::Nonlocal(_, _) => Kind::Nonlocal,
         BindingKind::ClassDefinition(_) => Kind::ClassDefinition,
         BindingKind::FunctionDefinition(_) => Kind::FunctionDefinition,
         BindingKind::BoundException => Kind::BoundException,

@@ -176,10 +176,11 @@ fn num_branches(stmts: &[Stmt]) -> usize {
                 .sum::<usize>()
         }
         Stmt::Match(ast::StmtMatch { cases, .. }) => {
-            1 + cases
-                .iter()
-                .map(|case| num_branches(&case.body))
-                .sum::<usize>()
+            cases.len()
+                + cases
+                    .iter()
+                    .map(|case| num_branches(&case.body))
+                    .sum::<usize>()
         }
         // The `with` statement is not considered a branch but the statements inside the `with` should be counted.
         Stmt::With(ast::StmtWith { body, .. }) => num_branches(body),

@@ -278,6 +279,19 @@ else:
         Ok(())
     }

+    #[test]
+    fn match_case() -> Result<()> {
+        let source: &str = r"
+match x:  # 2
+    case 0:
+        pass
+    case 1:
+        pass
+";
+        test_helper(source, 2)?;
+        Ok(())
+    }
+
     #[test]
     fn for_else() -> Result<()> {
         let source: &str = r"

@@ -90,6 +90,7 @@ fn num_statements(stmts: &[Stmt]) -> usize {
             Stmt::Match(ast::StmtMatch { cases, .. }) => {
                 count += 1;
                 for case in cases {
+                    count += 1;
                     count += num_statements(&case.body);
                 }
             }
@@ -233,6 +234,21 @@ def f():
         Ok(())
    }

+    #[test]
+    fn match_case() -> Result<()> {
+        let source: &str = r"
+def f():
+    match x:
+        case 3:
+            pass
+        case _:
+            pass
+";
+        let stmts = parse_suite(source)?;
+        assert_eq!(num_statements(&stmts), 6);
+        Ok(())
+    }
+
     #[test]
     fn many_statements() -> Result<()> {
         let source: &str = r"

@@ -216,8 +216,10 @@ fn formatted_expr<'a>(expr: &Expr, context: FormatContext, locator: &Locator<'a>

 #[derive(Debug, Clone)]
 enum FStringConversion {
-    /// The format string only contains literal parts.
-    Literal,
+    /// The format string only contains literal parts and is empty.
+    EmptyLiteral,
+    /// The format string only contains literal parts and is non-empty.
+    NonEmptyLiteral,
     /// The format call uses arguments with side effects which are repeated within the
     /// format string. For example: `"{x} {x}".format(x=foo())`.
     SideEffects,

@@ -263,7 +265,7 @@ impl FStringConversion {

         // If the format string is empty, it doesn't need to be converted.
         if contents.is_empty() {
-            return Ok(Self::Literal);
+            return Ok(Self::EmptyLiteral);
         }

         // Parse the format string.

@@ -275,7 +277,7 @@ impl FStringConversion {
             .iter()
             .all(|part| matches!(part, FormatPart::Literal(..)))
         {
-            return Ok(Self::Literal);
+            return Ok(Self::NonEmptyLiteral);
         }

         let mut converted = String::with_capacity(contents.len());

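The `SideEffects` variant above exists because a `.format()` argument that is repeated in the template would be evaluated more than once after a naive f-string rewrite. A short Python illustration of the hazard (the side-effecting `next_id` helper is hypothetical, made up for this sketch):

```python
calls = []

def next_id():
    # Hypothetical side-effecting argument.
    calls.append(1)
    return len(calls)

# `.format()` evaluates `next_id()` once, even though `{x}` appears twice.
s = "{x} {x}".format(x=next_id())
assert s == "1 1"
assert len(calls) == 1

# A naive f-string rewrite calls it twice and changes behavior,
# which is why the rule bails out with `SideEffects`.
t = f"{next_id()} {next_id()}"
assert t == "2 3"
assert len(calls) == 3
```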
@@ -406,7 +408,7 @@ pub(crate) fn f_strings(checker: &mut Checker, call: &ast::ExprCall, summary: &F
         return;
     };

-    let mut patches: Vec<(TextRange, String)> = vec![];
+    let mut patches: Vec<(TextRange, FStringConversion)> = vec![];
     let mut lex = lexer::lex_starts_at(
         checker.locator().slice(call.func.range()),
         Mode::Expression,

@@ -431,18 +433,14 @@ pub(crate) fn f_strings(checker: &mut Checker, call: &ast::ExprCall, summary: &F
             }
             Some((Tok::String { .. }, range)) => {
                 match FStringConversion::try_convert(range, &mut summary, checker.locator()) {
-                    Ok(FStringConversion::Convert(fstring)) => patches.push((range, fstring)),
-                    // Convert escaped curly brackets e.g. `{{` to `{` in literal string parts
-                    Ok(FStringConversion::Literal) => patches.push((
-                        range,
-                        curly_unescape(checker.locator().slice(range)).to_string(),
-                    )),
                     // If the format string contains side effects that would need to be repeated,
                     // we can't convert it to an f-string.
                     Ok(FStringConversion::SideEffects) => return,
                     // If any of the segments fail to convert, then we can't convert the entire
                     // expression.
                     Err(_) => return,
+                    // Otherwise, push the conversion to be processed later.
+                    Ok(conversion) => patches.push((range, conversion)),
                 }
             }
             Some(_) => continue,

@@ -455,30 +453,28 @@ pub(crate) fn f_strings(checker: &mut Checker, call: &ast::ExprCall, summary: &F

     let mut contents = String::with_capacity(checker.locator().slice(call).len());
     let mut prev_end = call.start();
-    for (range, fstring) in patches {
-        contents.push_str(
-            checker
-                .locator()
-                .slice(TextRange::new(prev_end, range.start())),
-        );
-        contents.push_str(&fstring);
+    for (range, conversion) in patches {
+        let fstring = match conversion {
+            FStringConversion::Convert(fstring) => Some(fstring),
+            FStringConversion::EmptyLiteral => None,
+            FStringConversion::NonEmptyLiteral => {
+                // Convert escaped curly brackets e.g. `{{` to `{` in literal string parts
+                Some(curly_unescape(checker.locator().slice(range)).to_string())
+            }
+            // We handled this in the previous loop.
+            FStringConversion::SideEffects => unreachable!(),
+        };
+        if let Some(fstring) = fstring {
+            contents.push_str(
+                checker
+                    .locator()
+                    .slice(TextRange::new(prev_end, range.start())),
+            );
+            contents.push_str(&fstring);
+        }
         prev_end = range.end();
     }

-    // If the remainder is non-empty, add it to the contents.
-    let rest = checker.locator().slice(TextRange::new(prev_end, end));
-    if !lexer::lex_starts_at(rest, Mode::Expression, prev_end)
-        .flatten()
-        .all(|(token, _)| match token {
-            Tok::Comment(_) | Tok::Newline | Tok::NonLogicalNewline | Tok::Indent | Tok::Dedent => {
-                true
-            }
-            Tok::String { value, .. } => value.is_empty(),
-            _ => false,
-        })
-    {
-        contents.push_str(rest);
-    }
+    contents.push_str(checker.locator().slice(TextRange::new(prev_end, end)));

     // If necessary, add a space between any leading keyword (`return`, `yield`, `assert`, etc.)
     // and the string. For example, `return"foo"` is valid, but `returnf"foo"` is not.

@@ -3,8 +3,7 @@ use std::str::FromStr;

 use ruff_diagnostics::{Diagnostic, Edit, Fix, FixAvailability, Violation};
 use ruff_macros::{derive_message_formats, violation};
-use ruff_python_ast::whitespace::indentation;
-use ruff_python_ast::{self as ast, AnyStringFlags, Expr};
+use ruff_python_ast::{self as ast, whitespace::indentation, AnyStringFlags, Expr, StringFlags};
 use ruff_python_codegen::Stylist;
 use ruff_python_literal::cformat::{
     CConversionFlags, CFormatPart, CFormatPrecision, CFormatQuantity, CFormatString,

@@ -1,13 +1,14 @@
 use itertools::Itertools;

-use ruff_diagnostics::{Diagnostic, Edit, Fix, FixAvailability, Violation};
+use ruff_diagnostics::{Applicability, Diagnostic, Edit, Fix, FixAvailability, Violation};
 use ruff_macros::{derive_message_formats, violation};
 use ruff_python_ast::{
     self as ast,
     visitor::{self, Visitor},
-    Expr, ExprCall, ExprName, ExprSubscript, Identifier, Stmt, StmtAnnAssign, StmtAssign,
+    Expr, ExprCall, ExprName, ExprSubscript, Identifier, Keyword, Stmt, StmtAnnAssign, StmtAssign,
     StmtTypeAlias, TypeParam, TypeParamTypeVar,
 };
 use ruff_python_codegen::Generator;
 use ruff_python_semantic::SemanticModel;
 use ruff_text_size::{Ranged, TextRange};

@@ -15,7 +16,8 @@ use crate::checkers::ast::Checker;
 use crate::settings::types::PythonVersion;

 /// ## What it does
-/// Checks for use of `TypeAlias` annotation for declaring type aliases.
+/// Checks for use of `TypeAlias` annotations and `TypeAliasType` assignments
+/// for declaring type aliases.
 ///
 /// ## Why is this bad?
 /// The `type` keyword was introduced in Python 3.12 by [PEP 695] for defining

@@ -36,17 +38,26 @@ use crate::settings::types::PythonVersion;
 /// ## Example
 /// ```python
 /// ListOfInt: TypeAlias = list[int]
+/// PositiveInt = TypeAliasType("PositiveInt", Annotated[int, Gt(0)])
 /// ```
 ///
 /// Use instead:
 /// ```python
 /// type ListOfInt = list[int]
+/// type PositiveInt = Annotated[int, Gt(0)]
 /// ```
 ///
 /// [PEP 695]: https://peps.python.org/pep-0695/
 #[violation]
 pub struct NonPEP695TypeAlias {
     name: String,
+    type_alias_kind: TypeAliasKind,
+}
+
+#[derive(Debug, Copy, Clone, PartialEq, Eq)]
+enum TypeAliasKind {
+    TypeAlias,
+    TypeAliasType,
 }

 impl Violation for NonPEP695TypeAlias {

@@ -54,8 +65,15 @@ impl Violation for NonPEP695TypeAlias {

     #[derive_message_formats]
     fn message(&self) -> String {
-        let NonPEP695TypeAlias { name } = self;
-        format!("Type alias `{name}` uses `TypeAlias` annotation instead of the `type` keyword")
+        let NonPEP695TypeAlias {
+            name,
+            type_alias_kind,
+        } = self;
+        let type_alias_method = match type_alias_kind {
+            TypeAliasKind::TypeAlias => "`TypeAlias` annotation",
+            TypeAliasKind::TypeAliasType => "`TypeAliasType` assignment",
+        };
+        format!("Type alias `{name}` uses {type_alias_method} instead of the `type` keyword")
     }

     fn fix_title(&self) -> Option<String> {

@@ -63,8 +81,82 @@ impl Violation for NonPEP695TypeAlias {
     }
 }

+/// UP040
+pub(crate) fn non_pep695_type_alias_type(checker: &mut Checker, stmt: &StmtAssign) {
+    if checker.settings.target_version < PythonVersion::Py312 {
+        return;
+    }
+
+    let StmtAssign { targets, value, .. } = stmt;
+
+    let Expr::Call(ExprCall {
+        func, arguments, ..
+    }) = value.as_ref()
+    else {
+        return;
+    };
+
+    let [Expr::Name(target_name)] = targets.as_slice() else {
+        return;
+    };
+
+    let [Expr::StringLiteral(name), value] = arguments.args.as_ref() else {
+        return;
+    };
+
+    if name.value.to_str() != target_name.id {
+        return;
+    }
+
+    let type_params = match arguments.keywords.as_ref() {
+        [] => &[],
+        [Keyword {
+            arg: Some(name),
+            value: Expr::Tuple(type_params),
+            ..
+        }] if name.as_str() == "type_params" => type_params.elts.as_slice(),
+        _ => return,
+    };
+
+    if !checker
+        .semantic()
+        .match_typing_expr(func.as_ref(), "TypeAliasType")
+    {
+        return;
+    }
+
+    let Some(vars) = type_params
+        .iter()
+        .map(|expr| {
+            expr.as_name_expr().map(|name| {
+                expr_name_to_type_var(checker.semantic(), name).unwrap_or(TypeVar {
+                    name,
+                    restriction: None,
+                })
+            })
+        })
+        .collect::<Option<Vec<_>>>()
+    else {
+        return;
+    };
+
+    checker.diagnostics.push(create_diagnostic(
+        checker.generator(),
+        stmt.range(),
+        &target_name.id,
+        value,
+        &vars,
+        Applicability::Safe,
+        TypeAliasKind::TypeAliasType,
+    ));
+}
+
 /// UP040
 pub(crate) fn non_pep695_type_alias(checker: &mut Checker, stmt: &StmtAnnAssign) {
+    if checker.settings.target_version < PythonVersion::Py312 {
+        return;
+    }
+
     let StmtAnnAssign {
         target,
         annotation,

@@ -72,11 +164,6 @@ pub(crate) fn non_pep695_type_alias(checker: &mut Checker, stmt: &StmtAnnAssign)
         ..
     } = stmt;

-    // Syntax only available in 3.12+
-    if checker.settings.target_version < PythonVersion::Py312 {
-        return;
-    }
-
     if !checker
         .semantic()
         .match_typing_expr(annotation, "TypeAlias")

@@ -109,23 +196,52 @@ pub(crate) fn non_pep695_type_alias(checker: &mut Checker, stmt: &StmtAnnAssign)
         .unique_by(|TypeVar { name, .. }| name.id.as_str())
         .collect::<Vec<_>>();

+    checker.diagnostics.push(create_diagnostic(
+        checker.generator(),
+        stmt.range(),
+        name,
+        value,
+        &vars,
+        // The fix is only safe in a type stub because new-style aliases have different runtime behavior
+        // See https://github.com/astral-sh/ruff/issues/6434
+        if checker.source_type.is_stub() {
+            Applicability::Safe
+        } else {
+            Applicability::Unsafe
+        },
+        TypeAliasKind::TypeAlias,
+    ));
+}
+
+/// Generate a [`Diagnostic`] for a non-PEP 695 type alias or type alias type.
+fn create_diagnostic(
+    generator: Generator,
+    stmt_range: TextRange,
+    name: &str,
+    value: &Expr,
+    vars: &[TypeVar],
+    applicability: Applicability,
+    type_alias_kind: TypeAliasKind,
+) -> Diagnostic {
     let type_params = if vars.is_empty() {
         None
     } else {
         Some(ast::TypeParams {
             range: TextRange::default(),
             type_params: vars
-                .into_iter()
+                .iter()
                 .map(|TypeVar { name, restriction }| {
                     TypeParam::TypeVar(TypeParamTypeVar {
                         range: TextRange::default(),
                         name: Identifier::new(name.id.clone(), TextRange::default()),
                         bound: match restriction {
-                            Some(TypeVarRestriction::Bound(bound)) => Some(Box::new(bound.clone())),
+                            Some(TypeVarRestriction::Bound(bound)) => {
+                                Some(Box::new((*bound).clone()))
+                            }
                             Some(TypeVarRestriction::Constraint(constraints)) => {
                                 Some(Box::new(Expr::Tuple(ast::ExprTuple {
                                     range: TextRange::default(),
-                                    elts: constraints.into_iter().cloned().collect(),
+                                    elts: constraints.iter().map(|expr| (*expr).clone()).collect(),
                                     ctx: ast::ExprContext::Load,
                                     parenthesized: true,
                                 })))

@@ -141,27 +257,29 @@ pub(crate) fn non_pep695_type_alias(checker: &mut Checker, stmt: &StmtAnnAssign)
             })
     };

-    let mut diagnostic = Diagnostic::new(NonPEP695TypeAlias { name: name.clone() }, stmt.range());
-
-    let edit = Edit::range_replacement(
-        checker.generator().stmt(&Stmt::from(StmtTypeAlias {
-            range: TextRange::default(),
-            name: target.clone(),
-            type_params,
-            value: value.clone(),
-        })),
-        stmt.range(),
-    );
-    // The fix is only safe in a type stub because new-style aliases have different runtime behavior
-    // See https://github.com/astral-sh/ruff/issues/6434
-    let fix = if checker.source_type.is_stub() {
-        Fix::safe_edit(edit)
-    } else {
-        Fix::unsafe_edit(edit)
-    };
-    diagnostic.set_fix(fix);
-
-    checker.diagnostics.push(diagnostic);
+    Diagnostic::new(
+        NonPEP695TypeAlias {
+            name: name.to_string(),
+            type_alias_kind,
+        },
+        stmt_range,
+    )
+    .with_fix(Fix::applicable_edit(
+        Edit::range_replacement(
+            generator.stmt(&Stmt::from(StmtTypeAlias {
+                range: TextRange::default(),
+                name: Box::new(Expr::Name(ExprName {
+                    range: TextRange::default(),
+                    id: name.to_string(),
+                    ctx: ast::ExprContext::Load,
+                })),
+                type_params,
+                value: Box::new(value.clone()),
+            })),
+            stmt_range,
+        ),
+        applicability,
+    ))
 }

 #[derive(Debug)]

@@ -188,57 +306,64 @@ impl<'a> Visitor<'a> for TypeVarReferenceVisitor<'a> {
     fn visit_expr(&mut self, expr: &'a Expr) {
         match expr {
             Expr::Name(name) if name.ctx.is_load() => {
-                let Some(Stmt::Assign(StmtAssign { value, .. })) = self
-                    .semantic
-                    .lookup_symbol(name.id.as_str())
-                    .and_then(|binding_id| {
-                        self.semantic
-                            .binding(binding_id)
-                            .source
-                            .map(|node_id| self.semantic.statement(node_id))
-                    })
-                else {
-                    return;
-                };
-
-                match value.as_ref() {
-                    Expr::Subscript(ExprSubscript {
-                        value: ref subscript_value,
-                        ..
-                    }) => {
-                        if self.semantic.match_typing_expr(subscript_value, "TypeVar") {
-                            self.vars.push(TypeVar {
-                                name,
-                                restriction: None,
-                            });
-                        }
-                    }
-                    Expr::Call(ExprCall {
-                        func, arguments, ..
-                    }) => {
-                        if self.semantic.match_typing_expr(func, "TypeVar")
-                            && arguments
-                                .args
-                                .first()
-                                .is_some_and(Expr::is_string_literal_expr)
-                        {
-                            let restriction = if let Some(bound) = arguments.find_keyword("bound") {
-                                Some(TypeVarRestriction::Bound(&bound.value))
-                            } else if arguments.args.len() > 1 {
-                                Some(TypeVarRestriction::Constraint(
-                                    arguments.args.iter().skip(1).collect(),
-                                ))
-                            } else {
-                                None
-                            };
-
-                            self.vars.push(TypeVar { name, restriction });
-                        }
-                    }
-                    _ => {}
-                }
+                self.vars.extend(expr_name_to_type_var(self.semantic, name));
             }
             _ => visitor::walk_expr(self, expr),
         }
     }
 }

+fn expr_name_to_type_var<'a>(
+    semantic: &'a SemanticModel,
+    name: &'a ExprName,
+) -> Option<TypeVar<'a>> {
+    let Some(Stmt::Assign(StmtAssign { value, .. })) = semantic
+        .lookup_symbol(name.id.as_str())
+        .and_then(|binding_id| {
+            semantic
+                .binding(binding_id)
+                .source
+                .map(|node_id| semantic.statement(node_id))
+        })
+    else {
+        return None;
+    };
+
+    match value.as_ref() {
+        Expr::Subscript(ExprSubscript {
+            value: ref subscript_value,
+            ..
+        }) => {
+            if semantic.match_typing_expr(subscript_value, "TypeVar") {
+                return Some(TypeVar {
+                    name,
+                    restriction: None,
+                });
+            }
+        }
+        Expr::Call(ExprCall {
+            func, arguments, ..
+        }) => {
+            if semantic.match_typing_expr(func, "TypeVar")
+                && arguments
+                    .args
+                    .first()
+                    .is_some_and(Expr::is_string_literal_expr)
+            {
+                let restriction = if let Some(bound) = arguments.find_keyword("bound") {
+                    Some(TypeVarRestriction::Bound(&bound.value))
+                } else if arguments.args.len() > 1 {
+                    Some(TypeVarRestriction::Constraint(
+                        arguments.args.iter().skip(1).collect(),
+                    ))
+                } else {
+                    None
+                };
+
+                return Some(TypeVar { name, restriction });
+            }
+        }
+        _ => {}
+    }
+    None
+}

@@ -767,16 +767,16 @@ UP032_0.py:86:1: UP032 [*] Use f-string instead of `format` call
 85 85 |
 86 86 | (
 87     |-    "{a}"
-    87 |+    f"{1}"
-88 88 |     ""
+88     |-    ""
 89     |-    "{b}"
-    89 |+    f"{1}"
-90 90 |     ""
+90     |-    ""
 91     |-).format(a=1, b=1)
-    91 |+)
-92 92 |
-93 93 | (
-94 94 | (
+    87 |+    f"{1}"
+    88 |+    f"{1}"
+    89 |+)
+92 90 |
+93 91 | (
+94 92 | (

UP032_0.py:94:5: UP032 [*] Use f-string instead of `format` call

@@ -1089,11 +1089,10 @@ UP032_0.py:212:18: UP032 [*] Use f-string instead of `format` call
 211 211 | # When fixing, trim the trailing empty string.
 212     |-raise ValueError("Conflicting configuration dicts: {!r} {!r}"
 213     |-                 "".format(new_dict, d))
-    212 |+raise ValueError(f"Conflicting configuration dicts: {new_dict!r} {d!r}"
-    213 |+                 "")
-214 214 |
-215 215 | # When fixing, trim the trailing empty string.
-216 216 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"
+    212 |+raise ValueError(f"Conflicting configuration dicts: {new_dict!r} {d!r}")
+214 213 |
+215 214 | # When fixing, trim the trailing empty string.
+216 215 | raise ValueError("Conflicting configuration dicts: {!r} {!r}"

UP032_0.py:216:18: UP032 [*] Use f-string instead of `format` call

@@ -1113,10 +1112,11 @@ UP032_0.py:216:18: UP032 [*] Use f-string instead of `format` call
 215 215 | # When fixing, trim the trailing empty string.
 216     |-raise ValueError("Conflicting configuration dicts: {!r} {!r}"
 217     |-                 .format(new_dict, d))
-    216 |+raise ValueError(f"Conflicting configuration dicts: {new_dict!r} {d!r}")
-218 217 |
-219 218 | raise ValueError(
-220 219 |     "Conflicting configuration dicts: {!r} {!r}"
+    216 |+raise ValueError(f"Conflicting configuration dicts: {new_dict!r} {d!r}"
+    217 |+                 )
+218 218 |
+219 219 | raise ValueError(
+220 220 |     "Conflicting configuration dicts: {!r} {!r}"

UP032_0.py:220:5: UP032 [*] Use f-string instead of `format` call

@@ -1136,10 +1136,9 @@ UP032_0.py:220:5: UP032 [*] Use f-string instead of `format` call
 220     |-    "Conflicting configuration dicts: {!r} {!r}"
 221     |-    "".format(new_dict, d)
     220 |+    f"Conflicting configuration dicts: {new_dict!r} {d!r}"
-    221 |+    ""
-222 222 | )
-223 223 |
-224 224 | raise ValueError(
+222 221 | )
+223 222 |
+224 223 | raise ValueError(

UP032_0.py:225:5: UP032 [*] Use f-string instead of `format` call

@@ -1160,10 +1159,9 @@ UP032_0.py:225:5: UP032 [*] Use f-string instead of `format` call
 225     |-    "Conflicting configuration dicts: {!r} {!r}"
 226     |-    "".format(new_dict, d)
     225 |+    f"Conflicting configuration dicts: {new_dict!r} {d!r}"
-    226 |+    ""
-227 227 |
-228 228 | )
-229 229 |
+227 226 |
+228 227 | )
+229 228 |

UP032_0.py:231:1: UP032 [*] Use f-string instead of `format` call

@@ -245,5 +245,117 @@ UP040.py:53:1: UP040 [*] Type alias `Decorator` uses `TypeAlias` annotation inst
 52 52 | T = typing.TypeVar["T"]
 53     |-Decorator: TypeAlias = typing.Callable[[T], T]
     53 |+type Decorator[T] = typing.Callable[[T], T]
 54 54 |
 55 55 |
+56 56 | from typing import TypeVar, Annotated, TypeAliasType
+
+UP040.py:63:1: UP040 [*] Type alias `PositiveList` uses `TypeAliasType` assignment instead of the `type` keyword
+   |
+61 |   # https://github.com/astral-sh/ruff/issues/11422
+62 |   T = TypeVar("T")
+63 | / PositiveList = TypeAliasType(
+64 | |     "PositiveList", list[Annotated[T, Gt(0)]], type_params=(T,)
+65 | | )
+   | |_^ UP040
+66 |
+67 |   # Bound
+   |
+   = help: Use the `type` keyword
+
+ℹ Safe fix
+60 60 |
+61 61 | # https://github.com/astral-sh/ruff/issues/11422
+62 62 | T = TypeVar("T")
+63    |-PositiveList = TypeAliasType(
+64    |-    "PositiveList", list[Annotated[T, Gt(0)]], type_params=(T,)
+65    |-)
+   63 |+type PositiveList[T] = list[Annotated[T, Gt(0)]]
+66 64 |
+67 65 | # Bound
+68 66 | T = TypeVar("T", bound=SupportGt)
+
+UP040.py:69:1: UP040 [*] Type alias `PositiveList` uses `TypeAliasType` assignment instead of the `type` keyword
+   |
+67 |   # Bound
+68 |   T = TypeVar("T", bound=SupportGt)
+69 | / PositiveList = TypeAliasType(
+70 | |     "PositiveList", list[Annotated[T, Gt(0)]], type_params=(T,)
+71 | | )
+   | |_^ UP040
+72 |
+73 |   # Multiple bounds
+   |
+   = help: Use the `type` keyword
+
+ℹ Safe fix
+66 66 |
+67 67 | # Bound
+68 68 | T = TypeVar("T", bound=SupportGt)
+69    |-PositiveList = TypeAliasType(
+70    |-    "PositiveList", list[Annotated[T, Gt(0)]], type_params=(T,)
+71    |-)
+   69 |+type PositiveList[T: SupportGt] = list[Annotated[T, Gt(0)]]
+72 70 |
+73 71 | # Multiple bounds
+74 72 | T1 = TypeVar("T1", bound=SupportGt)
+
+UP040.py:77:1: UP040 [*] Type alias `Tuple3` uses `TypeAliasType` assignment instead of the `type` keyword
+   |
+75 | T2 = TypeVar("T2")
+76 | T3 = TypeVar("T3")
+77 | Tuple3 = TypeAliasType("Tuple3", tuple[T1, T2, T3], type_params=(T1, T2, T3))
+   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ UP040
+78 |
+79 | # No type_params
+   |
+   = help: Use the `type` keyword
+
+ℹ Safe fix
+74 74 | T1 = TypeVar("T1", bound=SupportGt)
+75 75 | T2 = TypeVar("T2")
+76 76 | T3 = TypeVar("T3")
+77    |-Tuple3 = TypeAliasType("Tuple3", tuple[T1, T2, T3], type_params=(T1, T2, T3))
+   77 |+type Tuple3[T1: SupportGt, T2, T3] = tuple[T1, T2, T3]
+78 78 |
+79 79 | # No type_params
+80 80 | PositiveInt = TypeAliasType("PositiveInt", Annotated[int, Gt(0)])
+
+UP040.py:80:1: UP040 [*] Type alias `PositiveInt` uses `TypeAliasType` assignment instead of the `type` keyword
+   |
+79 | # No type_params
+80 | PositiveInt = TypeAliasType("PositiveInt", Annotated[int, Gt(0)])
+   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ UP040
+81 | PositiveInt = TypeAliasType("PositiveInt", Annotated[int, Gt(0)], type_params=())
+   |
+   = help: Use the `type` keyword
+
+ℹ Safe fix
+77 77 | Tuple3 = TypeAliasType("Tuple3", tuple[T1, T2, T3], type_params=(T1, T2, T3))
+78 78 |
+79 79 | # No type_params
+80    |-PositiveInt = TypeAliasType("PositiveInt", Annotated[int, Gt(0)])
+   80 |+type PositiveInt = Annotated[int, Gt(0)]
+81 81 | PositiveInt = TypeAliasType("PositiveInt", Annotated[int, Gt(0)], type_params=())
+82 82 |
+83 83 | # OK: Other name
+
+UP040.py:81:1: UP040 [*] Type alias `PositiveInt` uses `TypeAliasType` assignment instead of the `type` keyword
+   |
+79 | # No type_params
+80 | PositiveInt = TypeAliasType("PositiveInt", Annotated[int, Gt(0)])
+81 | PositiveInt = TypeAliasType("PositiveInt", Annotated[int, Gt(0)], type_params=())
+   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ UP040
+82 |
+83 | # OK: Other name
+   |
+   = help: Use the `type` keyword
+
+ℹ Safe fix
+78 78 |
+79 79 | # No type_params
+80 80 | PositiveInt = TypeAliasType("PositiveInt", Annotated[int, Gt(0)])
+81    |-PositiveInt = TypeAliasType("PositiveInt", Annotated[int, Gt(0)], type_params=())
+   81 |+type PositiveInt = Annotated[int, Gt(0)]
+82 82 |
+83 83 | # OK: Other name
+84 84 | T = TypeVar("T", bound=SupportGt)

@@ -5,6 +5,7 @@ use anyhow::Result;

 use ruff_diagnostics::{Diagnostic, Edit, Fix, FixAvailability, Violation};
 use ruff_macros::{derive_message_formats, violation};
+use ruff_python_ast::helpers::any_over_expr;
 use ruff_python_ast::identifier::Identifier;
 use ruff_python_ast::{self as ast, Expr, ExprSlice, ExprSubscript, ExprTuple, Parameters, Stmt};
 use ruff_python_semantic::SemanticModel;

@@ -221,9 +222,17 @@ fn itemgetter_op(expr: &ExprSubscript, params: &Parameters, locator: &Locator) -
     let [arg] = params.args.as_slice() else {
         return None;
     };
+
+    // The argument to the lambda must match the subscripted value, as in: `lambda x: x[1]`.
     if !is_same_expression(arg, &expr.value) {
         return None;
     };
+
+    // The subscripted expression can't contain references to the argument, as in: `lambda x: x[x]`.
+    if any_over_expr(expr.slice.as_ref(), &|expr| is_same_expression(arg, expr)) {
+        return None;
+    }
+
     Some(Operator {
         name: "itemgetter",
         args: vec![subscript_slice_to_string(expr.slice.as_ref(), locator).to_string()],

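The checks above guard the rewrite of `lambda x: x[i]` into `operator.itemgetter(i)`. A quick Python sketch of the equivalence, and of the `lambda x: x[x]` shape that the new `any_over_expr` check must skip:

```python
from operator import itemgetter

data = ("a", "b", "c")

# `lambda x: x[1]` and `itemgetter(1)` agree on any subscriptable value.
assert (lambda x: x[1])(data) == "b"
assert itemgetter(1)(data) == "b"

# `lambda x: x[x]` subscripts with the argument itself, so there is no
# fixed index to hand to `itemgetter` up front -- the rule must bail out.
```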
@@ -318,13 +318,13 @@ impl<'a> SortClassification<'a> {
 /// An instance of this struct encapsulates an analysis
 /// of a multiline Python tuple/list that represents an
 /// `__all__`/`__slots__`/etc. definition or augmentation.
-pub(super) struct MultilineStringSequenceValue {
-    items: Vec<StringSequenceItem>,
+pub(super) struct MultilineStringSequenceValue<'a> {
+    items: Vec<StringSequenceItem<'a>>,
     range: TextRange,
     ends_with_trailing_comma: bool,
 }

-impl MultilineStringSequenceValue {
+impl<'a> MultilineStringSequenceValue<'a> {
     pub(super) fn len(&self) -> usize {
         self.items.len()
     }

@@ -336,14 +336,15 @@ impl MultilineStringSequenceValue {
         range: TextRange,
         kind: SequenceKind,
         locator: &Locator,
-    ) -> Option<MultilineStringSequenceValue> {
+        string_items: &[&'a str],
+    ) -> Option<MultilineStringSequenceValue<'a>> {
         // Parse the multiline string sequence using the raw tokens.
         // See the docs for `collect_string_sequence_lines()` for why we have to
         // use the raw tokens, rather than just the AST, to do this parsing.
         //
         // Step (1). Start by collecting information on each line individually:
         let (lines, ends_with_trailing_comma) =
-            collect_string_sequence_lines(range, kind, locator)?;
+            collect_string_sequence_lines(range, kind, locator, string_items)?;

         // Step (2). Group lines together into sortable "items":
         // - Any "item" contains a single element of the list/tuple

@@ -447,7 +448,7 @@ impl MultilineStringSequenceValue {
|
||||
.map_or(true, |tok| tok.kind() != SimpleTokenKind::Comma);
|
||||
|
||||
self.items
|
||||
.sort_by(|a, b| sorting_style.compare(&a.value, &b.value));
|
||||
.sort_by(|a, b| sorting_style.compare(a.value, b.value));
|
||||
let joined_items = join_multiline_string_sequence_items(
|
||||
&self.items,
|
||||
locator,
|
||||
@@ -460,7 +461,7 @@ impl MultilineStringSequenceValue {
|
||||
}
|
||||
}
|
||||
|
||||
impl Ranged for MultilineStringSequenceValue {
|
||||
impl Ranged for MultilineStringSequenceValue<'_> {
|
||||
fn range(&self) -> TextRange {
|
||||
self.range
|
||||
}
|
||||
@@ -484,11 +485,12 @@ impl Ranged for MultilineStringSequenceValue {
/// stage if we're to sort items without doing unnecessary
/// brutality to the comments and pre-existing style choices
/// in the original source code.
fn collect_string_sequence_lines(
fn collect_string_sequence_lines<'a>(
    range: TextRange,
    kind: SequenceKind,
    locator: &Locator,
) -> Option<(Vec<StringSequenceLine>, bool)> {
    string_items: &[&'a str],
) -> Option<(Vec<StringSequenceLine<'a>>, bool)> {
    // These first two variables are used for keeping track of state
    // regarding the entirety of the string sequence...
    let mut ends_with_trailing_comma = false;
@@ -496,6 +498,8 @@ fn collect_string_sequence_lines(
    // ... all state regarding a single line of a string sequence
    // is encapsulated in this variable
    let mut line_state = LineState::default();
    // An iterator over the string values in the sequence.
    let mut string_items_iter = string_items.iter();

    // `lex_starts_at()` gives us absolute ranges rather than relative ranges,
    // but (surprisingly) we still need to pass in the slice of code we want it to lex,
@@ -518,8 +522,11 @@ fn collect_string_sequence_lines(
            Tok::Comment(_) => {
                line_state.visit_comment_token(subrange);
            }
            Tok::String { value, .. } => {
                line_state.visit_string_token(value, subrange);
            Tok::String { .. } => {
                let Some(string_value) = string_items_iter.next() else {
                    unreachable!("Expected the number of string tokens to be equal to the number of string items in the sequence");
                };
                line_state.visit_string_token(string_value, subrange);
                ends_with_trailing_comma = false;
            }
            Tok::Comma => {
@@ -558,15 +565,15 @@ fn collect_string_sequence_lines(
/// `into_string_sequence_line()` is called, which consumes
/// `self` and produces the classification for the line.
#[derive(Debug, Default)]
struct LineState {
    first_item_in_line: Option<(Box<str>, TextRange)>,
    following_items_in_line: Vec<(Box<str>, TextRange)>,
struct LineState<'a> {
    first_item_in_line: Option<(&'a str, TextRange)>,
    following_items_in_line: Vec<(&'a str, TextRange)>,
    comment_range_start: Option<TextSize>,
    comment_in_line: Option<TextRange>,
}

impl LineState {
    fn visit_string_token(&mut self, token_value: Box<str>, token_range: TextRange) {
impl<'a> LineState<'a> {
    fn visit_string_token(&mut self, token_value: &'a str, token_range: TextRange) {
        if self.first_item_in_line.is_none() {
            self.first_item_in_line = Some((token_value, token_range));
        } else {
@@ -600,7 +607,7 @@ impl LineState {
        }
    }

    fn into_string_sequence_line(self) -> StringSequenceLine {
    fn into_string_sequence_line(self) -> StringSequenceLine<'a> {
        if let Some(first_item) = self.first_item_in_line {
            StringSequenceLine::OneOrMoreItems(LineWithItems {
                first_item,
@@ -627,17 +634,17 @@ struct LineWithJustAComment(TextRange);
/// 1 element of the sequence. The line may contain > 1 element of the
/// sequence, and may also have a trailing comment after the element(s).
#[derive(Debug)]
struct LineWithItems {
struct LineWithItems<'a> {
    // For elements in the list, we keep track of the value of the
    // element as well as the source-code range of the element.
    // (We need to know the actual value so that we can sort the items.)
    first_item: (Box<str>, TextRange),
    following_items: Vec<(Box<str>, TextRange)>,
    first_item: (&'a str, TextRange),
    following_items: Vec<(&'a str, TextRange)>,
    // For comments, we only need to keep track of the source-code range.
    trailing_comment_range: Option<TextRange>,
}

impl LineWithItems {
impl LineWithItems<'_> {
    fn num_items(&self) -> usize {
        self.following_items.len() + 1
    }
@@ -651,9 +658,9 @@ impl LineWithItems {
/// and may also have a trailing comment.
/// - An entirely empty line.
#[derive(Debug)]
enum StringSequenceLine {
enum StringSequenceLine<'a> {
    JustAComment(LineWithJustAComment),
    OneOrMoreItems(LineWithItems),
    OneOrMoreItems(LineWithItems<'a>),
    Empty,
}

@@ -667,11 +674,11 @@ enum StringSequenceLine {
/// Note that any comments following the last item are discarded here,
/// but that doesn't matter: we add them back in `into_sorted_source_code()`
/// as part of the `postlude` (see comments in that function)
fn collect_string_sequence_items(
    lines: Vec<StringSequenceLine>,
fn collect_string_sequence_items<'a>(
    lines: Vec<StringSequenceLine<'a>>,
    dunder_all_range: TextRange,
    locator: &Locator,
) -> Vec<StringSequenceItem> {
) -> Vec<StringSequenceItem<'a>> {
    let mut all_items = Vec::with_capacity(match lines.as_slice() {
        [StringSequenceLine::OneOrMoreItems(single)] => single.num_items(),
        _ => lines.len(),
@@ -752,8 +759,8 @@ fn collect_string_sequence_items(
/// of `# comment1` does not form a contiguous range with the
/// source-code range of `"a"`.
#[derive(Debug)]
struct StringSequenceItem {
    value: Box<str>,
struct StringSequenceItem<'a> {
    value: &'a str,
    preceding_comment_ranges: Vec<TextRange>,
    element_range: TextRange,
    // total_range incorporates the ranges of preceding comments
@@ -764,9 +771,9 @@ struct StringSequenceItem {
    end_of_line_comments: Option<TextRange>,
}

impl StringSequenceItem {
impl<'a> StringSequenceItem<'a> {
    fn new(
        value: Box<str>,
        value: &'a str,
        preceding_comment_ranges: Vec<TextRange>,
        element_range: TextRange,
        end_of_line_comments: Option<TextRange>,
@@ -787,12 +794,12 @@ impl StringSequenceItem {
        }
    }

    fn with_no_comments(value: Box<str>, element_range: TextRange) -> Self {
    fn with_no_comments(value: &'a str, element_range: TextRange) -> Self {
        Self::new(value, vec![], element_range, None)
    }
}

impl Ranged for StringSequenceItem {
impl Ranged for StringSequenceItem<'_> {
    fn range(&self) -> TextRange {
        self.total_range
    }

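The hunks above thread a lifetime through these types so each item borrows its string (`&'a str`) from the source instead of owning a `Box<str>`. A minimal standalone sketch of that pattern (the `Item` and `sorted_values` names are invented for illustration, not the actual Ruff types):

```rust
// A sequence item that borrows its string value from the source
// instead of owning a heap allocation, mirroring the `Box<str>` -> `&'a str` change.
struct Item<'a> {
    value: &'a str,
}

// Sort items by their borrowed values; no per-element string copies are made.
fn sorted_values<'a>(source: &'a [&'a str]) -> Vec<Item<'a>> {
    let mut items: Vec<Item<'a>> = source.iter().copied().map(|s| Item { value: s }).collect();
    items.sort_by(|a, b| a.value.cmp(b.value));
    items
}

fn main() {
    let source = ["b", "a", "c"];
    let sorted: Vec<&str> = sorted_values(&source).into_iter().map(|i| i.value).collect();
    assert_eq!(sorted, ["a", "b", "c"]);
    println!("{sorted:?}");
}
```

Sorting compares the borrowed `&str` values directly, which is why the diff also drops the `&a.value` borrows in `sort_by`.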
@@ -212,7 +212,12 @@ fn create_fix(
    // bare minimum of token-processing for single-line `__all__`
    // definitions:
    if is_multiline {
        let value = MultilineStringSequenceValue::from_source_range(range, kind, locator)?;
        let value = MultilineStringSequenceValue::from_source_range(
            range,
            kind,
            locator,
            string_items,
        )?;
        assert_eq!(value.len(), elts.len());
        value.into_sorted_source_code(SORTING_STYLE, locator, checker.stylist())
    } else {

@@ -210,6 +210,7 @@ impl<'a> StringLiteralDisplay<'a> {
            self.range(),
            *sequence_kind,
            locator,
            elements,
        )?;
        assert_eq!(analyzed_sequence.len(), self.elts.len());
        analyzed_sequence.into_sorted_source_code(SORTING_STYLE, locator, checker.stylist())

@@ -59,10 +59,6 @@ impl<'a> QualifiedName<'a> {
        matches!(self.segments(), ["", ..])
    }

    pub fn is_user_defined(&self) -> bool {
        !self.is_builtin()
    }

    /// If the call path is dot-prefixed, it's an unresolved relative import.
    /// Ex) `[".foo", "bar"]` -> `".foo.bar"`
    pub fn is_unresolved_import(&self) -> bool {

@@ -1351,6 +1351,64 @@ impl Ranged for FStringPart {
    }
}

pub trait StringFlags: Copy {
    /// Does the string use single or double quotes in its opener and closer?
    fn quote_style(self) -> Quote;

    /// Is the string triple-quoted, i.e.,
    /// does it begin and end with three consecutive quote characters?
    fn is_triple_quoted(self) -> bool;

    fn prefix(self) -> AnyStringPrefix;

    /// A `str` representation of the quotes used to start and close.
    /// This does not include any prefixes the string has in its opener.
    fn quote_str(self) -> &'static str {
        if self.is_triple_quoted() {
            match self.quote_style() {
                Quote::Single => "'''",
                Quote::Double => r#"""""#,
            }
        } else {
            match self.quote_style() {
                Quote::Single => "'",
                Quote::Double => "\"",
            }
        }
    }

    /// The length of the quotes used to start and close the string.
    /// This does not include the length of any prefixes the string has
    /// in its opener.
    fn quote_len(self) -> TextSize {
        if self.is_triple_quoted() {
            TextSize::new(3)
        } else {
            TextSize::new(1)
        }
    }

    /// The total length of the string's opener,
    /// i.e., the length of the prefixes plus the length
    /// of the quotes used to open the string.
    fn opener_len(self) -> TextSize {
        self.prefix().as_str().text_len() + self.quote_len()
    }

    /// The total length of the string's closer.
    /// This is always equal to `self.quote_len()`,
    /// but is provided here for symmetry with the `opener_len()` method.
    fn closer_len(self) -> TextSize {
        self.quote_len()
    }

    fn format_string_contents(self, contents: &str) -> String {
        let prefix = self.prefix();
        let quote_str = self.quote_str();
        format!("{prefix}{quote_str}{contents}{quote_str}")
    }
}

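The new `StringFlags` trait moves `quote_str`, `quote_len`, `opener_len`, and `closer_len` into provided methods so that each flags type only has to supply `quote_style`, `is_triple_quoted`, and `prefix`. A minimal sketch of the same trait-with-default-methods pattern (the `DemoFlags` type is invented for illustration):

```rust
#[derive(Copy, Clone)]
enum Quote {
    Single,
    Double,
}

// Required methods are implemented per type; provided methods are written once.
trait StringFlags: Copy {
    fn quote_style(self) -> Quote;
    fn is_triple_quoted(self) -> bool;

    // Provided method shared by every implementor.
    fn quote_str(self) -> &'static str {
        match (self.is_triple_quoted(), self.quote_style()) {
            (true, Quote::Single) => "'''",
            (true, Quote::Double) => r#"""""#,
            (false, Quote::Single) => "'",
            (false, Quote::Double) => "\"",
        }
    }
}

#[derive(Copy, Clone)]
struct DemoFlags {
    double: bool,
    triple: bool,
}

impl StringFlags for DemoFlags {
    fn quote_style(self) -> Quote {
        if self.double { Quote::Double } else { Quote::Single }
    }
    fn is_triple_quoted(self) -> bool {
        self.triple
    }
}

fn main() {
    let f = DemoFlags { double: true, triple: true };
    assert_eq!(f.quote_str(), r#"""""#);
    println!("{}", f.quote_str());
}
```

This is what lets the diff below replace three near-identical inherent `quote_style`/`is_triple_quoted` methods with one `impl StringFlags for …` per type.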
bitflags! {
    #[derive(Default, Copy, Clone, PartialEq, Eq, Hash)]
    struct FStringFlagsInner: u8 {
@@ -1420,11 +1478,13 @@ impl FStringFlags {
            FStringPrefix::Regular
        }
    }
}

impl StringFlags for FStringFlags {
    /// Return `true` if the f-string is triple-quoted, i.e.,
    /// it begins and ends with three consecutive quote characters.
    /// For example: `f"""{bar}"""`
    pub const fn is_triple_quoted(self) -> bool {
    fn is_triple_quoted(self) -> bool {
        self.0.contains(FStringFlagsInner::TRIPLE_QUOTED)
    }

@@ -1432,13 +1492,17 @@ impl FStringFlags {
    /// used by the f-string's opener and closer:
    /// - `f"{"a"}"` -> `QuoteStyle::Double`
    /// - `f'{"a"}'` -> `QuoteStyle::Single`
    pub const fn quote_style(self) -> Quote {
    fn quote_style(self) -> Quote {
        if self.0.contains(FStringFlagsInner::DOUBLE) {
            Quote::Double
        } else {
            Quote::Single
        }
    }

    fn prefix(self) -> AnyStringPrefix {
        AnyStringPrefix::Format(self.prefix())
    }
}

impl fmt::Debug for FStringFlags {
@@ -1516,7 +1580,7 @@ impl<'a> IntoIterator for &'a mut FStringElements {
}

impl Deref for FStringElements {
    type Target = Vec<FStringElement>;
    type Target = [FStringElement];

    fn deref(&self) -> &Self::Target {
        &self.0
@@ -1830,12 +1894,14 @@ impl StringLiteralFlags {
            StringLiteralPrefix::Empty
        }
    }
}

impl StringFlags for StringLiteralFlags {
    /// Return the quoting style (single or double quotes)
    /// used by the string's opener and closer:
    /// - `"a"` -> `QuoteStyle::Double`
    /// - `'a'` -> `QuoteStyle::Single`
    pub const fn quote_style(self) -> Quote {
    fn quote_style(self) -> Quote {
        if self.0.contains(StringLiteralFlagsInner::DOUBLE) {
            Quote::Double
        } else {
@@ -1846,9 +1912,13 @@ impl StringLiteralFlags {
    /// Return `true` if the string is triple-quoted, i.e.,
    /// it begins and ends with three consecutive quote characters.
    /// For example: `"""bar"""`
    pub const fn is_triple_quoted(self) -> bool {
    fn is_triple_quoted(self) -> bool {
        self.0.contains(StringLiteralFlagsInner::TRIPLE_QUOTED)
    }

    fn prefix(self) -> AnyStringPrefix {
        AnyStringPrefix::Regular(self.prefix())
    }
}

impl fmt::Debug for StringLiteralFlags {
@@ -2171,11 +2241,13 @@ impl BytesLiteralFlags {
            ByteStringPrefix::Regular
        }
    }
}

impl StringFlags for BytesLiteralFlags {
    /// Return `true` if the bytestring is triple-quoted, i.e.,
    /// it begins and ends with three consecutive quote characters.
    /// For example: `b"""{bar}"""`
    pub const fn is_triple_quoted(self) -> bool {
    fn is_triple_quoted(self) -> bool {
        self.0.contains(BytesLiteralFlagsInner::TRIPLE_QUOTED)
    }

@@ -2183,13 +2255,17 @@ impl BytesLiteralFlags {
    /// used by the bytestring's opener and closer:
    /// - `b"a"` -> `QuoteStyle::Double`
    /// - `b'a'` -> `QuoteStyle::Single`
    pub const fn quote_style(self) -> Quote {
    fn quote_style(self) -> Quote {
        if self.0.contains(BytesLiteralFlagsInner::DOUBLE) {
            Quote::Double
        } else {
            Quote::Single
        }
    }

    fn prefix(self) -> AnyStringPrefix {
        AnyStringPrefix::Bytes(self.prefix())
    }
}

impl fmt::Debug for BytesLiteralFlags {
@@ -2340,7 +2416,70 @@ impl AnyStringFlags {
        self
    }

    pub const fn prefix(self) -> AnyStringPrefix {
    pub fn new(prefix: AnyStringPrefix, quotes: Quote, triple_quoted: bool) -> Self {
        let new = Self::default().with_prefix(prefix).with_quote_style(quotes);
        if triple_quoted {
            new.with_triple_quotes()
        } else {
            new
        }
    }

    /// Does the string have a `u` or `U` prefix?
    pub const fn is_u_string(self) -> bool {
        self.0.contains(AnyStringFlagsInner::U_PREFIX)
    }

    /// Does the string have an `r` or `R` prefix?
    pub const fn is_raw_string(self) -> bool {
        self.0.intersects(
            AnyStringFlagsInner::R_PREFIX_LOWER.union(AnyStringFlagsInner::R_PREFIX_UPPER),
        )
    }

    /// Does the string have an `f` or `F` prefix?
    pub const fn is_f_string(self) -> bool {
        self.0.contains(AnyStringFlagsInner::F_PREFIX)
    }

    /// Does the string have a `b` or `B` prefix?
    pub const fn is_byte_string(self) -> bool {
        self.0.contains(AnyStringFlagsInner::B_PREFIX)
    }

    #[must_use]
    pub fn with_quote_style(mut self, quotes: Quote) -> Self {
        match quotes {
            Quote::Double => self.0 |= AnyStringFlagsInner::DOUBLE,
            Quote::Single => self.0 -= AnyStringFlagsInner::DOUBLE,
        };
        self
    }

    #[must_use]
    pub fn with_triple_quotes(mut self) -> Self {
        self.0 |= AnyStringFlagsInner::TRIPLE_QUOTED;
        self
    }
}

impl StringFlags for AnyStringFlags {
    /// Does the string use single or double quotes in its opener and closer?
    fn quote_style(self) -> Quote {
        if self.0.contains(AnyStringFlagsInner::DOUBLE) {
            Quote::Double
        } else {
            Quote::Single
        }
    }

    /// Is the string triple-quoted, i.e.,
    /// does it begin and end with three consecutive quote characters?
    fn is_triple_quoted(self) -> bool {
        self.0.contains(AnyStringFlagsInner::TRIPLE_QUOTED)
    }

    fn prefix(self) -> AnyStringPrefix {
        let AnyStringFlags(flags) = self;

        // f-strings
@@ -2377,123 +2516,6 @@ impl AnyStringFlags {
        }
        AnyStringPrefix::Regular(StringLiteralPrefix::Empty)
    }

    pub fn new(prefix: AnyStringPrefix, quotes: Quote, triple_quoted: bool) -> Self {
        let new = Self::default().with_prefix(prefix).with_quote_style(quotes);
        if triple_quoted {
            new.with_triple_quotes()
        } else {
            new
        }
    }

    /// Does the string have a `u` or `U` prefix?
    pub const fn is_u_string(self) -> bool {
        self.0.contains(AnyStringFlagsInner::U_PREFIX)
    }

    /// Does the string have an `r` or `R` prefix?
    pub const fn is_raw_string(self) -> bool {
        self.0.intersects(
            AnyStringFlagsInner::R_PREFIX_LOWER.union(AnyStringFlagsInner::R_PREFIX_UPPER),
        )
    }

    /// Does the string have an `f` or `F` prefix?
    pub const fn is_f_string(self) -> bool {
        self.0.contains(AnyStringFlagsInner::F_PREFIX)
    }

    /// Does the string have a `b` or `B` prefix?
    pub const fn is_byte_string(self) -> bool {
        self.0.contains(AnyStringFlagsInner::B_PREFIX)
    }

    /// Does the string use single or double quotes in its opener and closer?
    pub const fn quote_style(self) -> Quote {
        if self.0.contains(AnyStringFlagsInner::DOUBLE) {
            Quote::Double
        } else {
            Quote::Single
        }
    }

    /// Is the string triple-quoted, i.e.,
    /// does it begin and end with three consecutive quote characters?
    pub const fn is_triple_quoted(self) -> bool {
        self.0.contains(AnyStringFlagsInner::TRIPLE_QUOTED)
    }

    /// A `str` representation of the quotes used to start and close.
    /// This does not include any prefixes the string has in its opener.
    pub const fn quote_str(self) -> &'static str {
        if self.is_triple_quoted() {
            match self.quote_style() {
                Quote::Single => "'''",
                Quote::Double => r#"""""#,
            }
        } else {
            match self.quote_style() {
                Quote::Single => "'",
                Quote::Double => "\"",
            }
        }
    }

    /// The length of the prefixes used (if any) in the string's opener.
    pub fn prefix_len(self) -> TextSize {
        self.prefix().as_str().text_len()
    }

    /// The length of the quotes used to start and close the string.
    /// This does not include the length of any prefixes the string has
    /// in its opener.
    pub const fn quote_len(self) -> TextSize {
        if self.is_triple_quoted() {
            TextSize::new(3)
        } else {
            TextSize::new(1)
        }
    }

    /// The total length of the string's opener,
    /// i.e., the length of the prefixes plus the length
    /// of the quotes used to open the string.
    pub fn opener_len(self) -> TextSize {
        self.prefix_len() + self.quote_len()
    }

    /// The total length of the string's closer.
    /// This is always equal to `self.quote_len()`,
    /// but is provided here for symmetry with the `opener_len()` method.
    pub const fn closer_len(self) -> TextSize {
        self.quote_len()
    }

    pub fn format_string_contents(self, contents: &str) -> String {
        format!(
            "{}{}{}{}",
            self.prefix(),
            self.quote_str(),
            contents,
            self.quote_str()
        )
    }

    #[must_use]
    pub fn with_quote_style(mut self, quotes: Quote) -> Self {
        match quotes {
            Quote::Double => self.0 |= AnyStringFlagsInner::DOUBLE,
            Quote::Single => self.0 -= AnyStringFlagsInner::DOUBLE,
        };
        self
    }

    #[must_use]
    pub fn with_triple_quotes(mut self) -> Self {
        self.0 |= AnyStringFlagsInner::TRIPLE_QUOTED;
        self
    }
}

impl fmt::Debug for AnyStringFlags {
@@ -2998,6 +3020,21 @@ pub enum Pattern {
    MatchOr(PatternMatchOr),
}

impl Pattern {
    /// Checks if the [`Pattern`] is an [irrefutable pattern].
    ///
    /// [irrefutable pattern]: https://peps.python.org/pep-0634/#irrefutable-case-blocks
    pub fn is_irrefutable(&self) -> bool {
        match self {
            Pattern::MatchAs(PatternMatchAs { pattern: None, .. }) => true,
            Pattern::MatchOr(PatternMatchOr { patterns, .. }) => {
                patterns.iter().any(Pattern::is_irrefutable)
            }
            _ => false,
        }
    }
}

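The new `Pattern::is_irrefutable` above (which backs the `C901` changelog entry about irrefutable patterns) marks a bare capture as irrefutable and an or-pattern as irrefutable when any alternative is. The same recursion in isolation, over a simplified stand-in enum rather than the real AST:

```rust
enum Pat {
    // `case x:` or `case _:` with no sub-pattern: always matches.
    Capture,
    // `case a | b | c:`: irrefutable if any alternative is.
    Or(Vec<Pat>),
    // e.g. a literal or class pattern, which can fail to match.
    Refutable,
}

fn is_irrefutable(p: &Pat) -> bool {
    match p {
        Pat::Capture => true,
        Pat::Or(alts) => alts.iter().any(is_irrefutable),
        Pat::Refutable => false,
    }
}

fn main() {
    assert!(is_irrefutable(&Pat::Capture));
    assert!(is_irrefutable(&Pat::Or(vec![Pat::Refutable, Pat::Capture])));
    assert!(!is_irrefutable(&Pat::Refutable));
    println!("ok");
}
```

An irrefutable `case` behaves like the `else` of an `if`/`else` chain, which is why complexity counting treats it the same way.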
/// See also [MatchValue](https://docs.python.org/3/library/ast.html#ast.MatchValue)
#[derive(Clone, Debug, PartialEq)]
pub struct PatternMatchValue {
@@ -3692,20 +3729,6 @@ impl fmt::Display for IpyEscapeKind {
}

impl IpyEscapeKind {
    /// Returns the length of the escape kind token.
    pub fn prefix_len(self) -> TextSize {
        let len = match self {
            IpyEscapeKind::Shell
            | IpyEscapeKind::Magic
            | IpyEscapeKind::Help
            | IpyEscapeKind::Quote
            | IpyEscapeKind::Quote2
            | IpyEscapeKind::Paren => 1,
            IpyEscapeKind::ShCap | IpyEscapeKind::Magic2 | IpyEscapeKind::Help2 => 2,
        };
        len.into()
    }

    /// Returns `true` if the escape kind is help i.e., `?` or `??`.
    pub const fn is_help(self) -> bool {
        matches!(self, IpyEscapeKind::Help | IpyEscapeKind::Help2)

@@ -79,15 +79,3 @@ pub fn next_sibling<'a>(stmt: &'a Stmt, suite: &'a Suite) -> Option<&'a Stmt> {
    }
    None
}

/// Given a [`Stmt`] and its containing [`Suite`], return the previous [`Stmt`] in the [`Suite`].
pub fn prev_sibling<'a>(stmt: &'a Stmt, suite: &'a Suite) -> Option<&'a Stmt> {
    let mut prev = None;
    for sibling in suite {
        if sibling == stmt {
            return prev;
        }
        prev = Some(sibling);
    }
    None
}

@@ -4,7 +4,7 @@ use std::ops::Deref;

use once_cell::unsync::OnceCell;

use ruff_python_ast::str::Quote;
use ruff_python_ast::{str::Quote, StringFlags};
use ruff_python_parser::lexer::LexResult;
use ruff_python_parser::Tok;
use ruff_source_file::{find_newline, LineEnding, Locator};

@@ -1,5 +1,5 @@
use ruff_formatter::write;
use ruff_python_ast::{AnyStringFlags, FString};
use ruff_python_ast::{AnyStringFlags, FString, StringFlags};
use ruff_source_file::Locator;

use crate::prelude::*;

@@ -3,6 +3,7 @@ use std::borrow::Cow;
use ruff_formatter::{format_args, write, Buffer, RemoveSoftLinesBuffer};
use ruff_python_ast::{
    ConversionFlag, Expr, FStringElement, FStringExpressionElement, FStringLiteralElement,
    StringFlags,
};
use ruff_text_size::Ranged;

@@ -4,7 +4,7 @@ use memchr::memchr2;

use ruff_python_ast::{
    self as ast, AnyNodeRef, AnyStringFlags, Expr, ExprBytesLiteral, ExprFString,
    ExprStringLiteral, ExpressionRef, StringLiteral,
    ExprStringLiteral, ExpressionRef, StringFlags, StringLiteral,
};
use ruff_source_file::Locator;
use ruff_text_size::{Ranged, TextRange};

@@ -8,7 +8,7 @@ use std::{borrow::Cow, collections::VecDeque};
use itertools::Itertools;

use ruff_formatter::printer::SourceMapGeneration;
use ruff_python_ast::str::Quote;
use ruff_python_ast::{str::Quote, StringFlags};
use ruff_python_parser::ParseError;
use {once_cell::sync::Lazy, regex::Regex};
use {

@@ -5,7 +5,7 @@ use ruff_python_ast::str::Quote;
use ruff_python_ast::{
    self as ast,
    str_prefix::{AnyStringPrefix, StringLiteralPrefix},
    AnyStringFlags,
    AnyStringFlags, StringFlags,
};
use ruff_text_size::{Ranged, TextRange};

@@ -2,7 +2,7 @@ use std::borrow::Cow;
use std::iter::FusedIterator;

use ruff_formatter::FormatContext;
use ruff_python_ast::{str::Quote, AnyStringFlags};
use ruff_python_ast::{str::Quote, AnyStringFlags, StringFlags};
use ruff_source_file::Locator;
use ruff_text_size::{Ranged, TextRange};

@@ -1,3 +1,4 @@
use ruff_python_ast::StringFlags;
use ruff_python_parser::Tok;
use ruff_text_size::TextRange;

@@ -18,9 +18,7 @@ doctest = false
ruff_python_ast = { workspace = true }

bitflags = { workspace = true }
hexf-parse = { workspace = true }
itertools = { workspace = true }
lexical-parse-float = { workspace = true, features = ["format"] }
unic-ucd-category = { workspace = true }

[dev-dependencies]

@@ -1,12 +1,13 @@
//! Implementation of Printf-Style string formatting
//! as per the [Python Docs](https://docs.python.org/3/library/stdtypes.html#printf-style-string-formatting).
use bitflags::bitflags;
use std::{
    fmt,
    iter::{Enumerate, Peekable},
    str::FromStr,
};

use bitflags::bitflags;

use crate::Case;

#[derive(Debug, PartialEq)]
@@ -96,19 +97,6 @@ bitflags! {
    }
}

impl CConversionFlags {
    #[inline]
    pub fn sign_string(&self) -> &'static str {
        if self.contains(CConversionFlags::SIGN_CHAR) {
            "+"
        } else if self.contains(CConversionFlags::BLANK_SIGN) {
            " "
        } else {
            ""
        }
    }
}

#[derive(Debug, PartialEq)]
pub enum CFormatQuantity {
    Amount(usize),
@@ -337,44 +325,12 @@ pub enum CFormatPart<T> {
    Spec(CFormatSpec),
}

impl<T> CFormatPart<T> {
    #[inline]
    pub fn is_specifier(&self) -> bool {
        matches!(self, CFormatPart::Spec(_))
    }

    #[inline]
    pub fn has_key(&self) -> bool {
        match self {
            CFormatPart::Spec(s) => s.mapping_key.is_some(),
            CFormatPart::Literal(_) => false,
        }
    }
}

#[derive(Debug, PartialEq)]
pub struct CFormatStrOrBytes<S> {
    parts: Vec<(usize, CFormatPart<S>)>,
}

impl<S> CFormatStrOrBytes<S> {
    pub fn check_specifiers(&self) -> Option<(usize, bool)> {
        let mut count = 0;
        let mut mapping_required = false;
        for (_, part) in &self.parts {
            if part.is_specifier() {
                let has_key = part.has_key();
                if count == 0 {
                    mapping_required = has_key;
                } else if mapping_required != has_key {
                    return None;
                }
                count += 1;
            }
        }
        Some((count, mapping_required))
    }

    #[inline]
    pub fn iter(&self) -> impl Iterator<Item = &(usize, CFormatPart<S>)> {
        self.parts.iter()
@@ -430,11 +386,6 @@ impl CFormatBytes {
        }
        Ok(Self { parts })
    }

    pub fn parse_from_bytes(bytes: &[u8]) -> Result<Self, CFormatError> {
        let mut iter = bytes.iter().copied().enumerate().peekable();
        Self::parse(&mut iter)
    }
}

pub type CFormatString = CFormatStrOrBytes<String>;

@@ -50,11 +50,6 @@ pub struct UnicodeEscape<'a> {
}

impl<'a> UnicodeEscape<'a> {
    #[inline]
    pub fn with_forced_quote(source: &'a str, quote: Quote) -> Self {
        let layout = EscapeLayout { quote, len: None };
        Self { source, layout }
    }
    #[inline]
    pub fn with_preferred_quote(source: &'a str, quote: Quote) -> Self {
        let layout = Self::repr_layout(source, quote);
@@ -240,11 +235,6 @@ impl<'a> AsciiEscape<'a> {
        Self { source, layout }
    }
    #[inline]
    pub fn with_forced_quote(source: &'a [u8], quote: Quote) -> Self {
        let layout = EscapeLayout { quote, len: None };
        Self { source, layout }
    }
    #[inline]
    pub fn with_preferred_quote(source: &'a [u8], quote: Quote) -> Self {
        let layout = Self::repr_layout(source, quote);
        Self { source, layout }
@@ -271,17 +261,6 @@ impl AsciiEscape<'_> {
        })
    }

    #[allow(
        clippy::cast_possible_wrap,
        clippy::cast_possible_truncation,
        clippy::cast_sign_loss
    )]
    pub fn named_repr_layout(source: &[u8], name: &str) -> EscapeLayout {
        Self::output_layout_with_checker(source, Quote::Single, name.len() + 2 + 3, |a, b| {
            Some((a as isize).checked_add(b as isize)? as usize)
        })
    }

    fn output_layout_with_checker(
        source: &[u8],
        preferred_quote: Quote,

@@ -1,178 +1,9 @@
use std::f64;

use crate::Case;

pub fn parse_str(literal: &str) -> Option<f64> {
    parse_inner(literal.trim().as_bytes())
}

pub fn parse_bytes(literal: &[u8]) -> Option<f64> {
    parse_inner(trim_slice(literal, u8::is_ascii_whitespace))
}

fn trim_slice<T>(v: &[T], mut trim: impl FnMut(&T) -> bool) -> &[T] {
    let mut it = v.iter();
    // it.take_while_ref(&mut trim).for_each(drop);
    // hmm.. `&mut slice::Iter<_>` is not `Clone`
    // it.by_ref().rev().take_while_ref(&mut trim).for_each(drop);
    while it.clone().next().is_some_and(&mut trim) {
        it.next();
    }
    while it.clone().next_back().is_some_and(&mut trim) {
        it.next_back();
    }
    it.as_slice()
}

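`trim_slice` above trims matching elements from both ends without allocating, by cloning the cheap slice iterator to peek before actually advancing it (the commented-out lines record why `take_while_ref` wasn't usable). A self-contained copy demonstrating the behaviour:

```rust
// Trim elements matching `trim` from both ends of a slice, returning a sub-slice.
fn trim_slice<T>(v: &[T], mut trim: impl FnMut(&T) -> bool) -> &[T] {
    let mut it = v.iter();
    // `it.clone()` is cheap (two pointers), so we can peek at either end
    // and only advance the real iterator when the element should be dropped.
    while it.clone().next().is_some_and(&mut trim) {
        it.next();
    }
    while it.clone().next_back().is_some_and(&mut trim) {
        it.next_back();
    }
    it.as_slice()
}

fn main() {
    let trimmed = trim_slice(b"  1.5  ", u8::is_ascii_whitespace);
    assert_eq!(trimmed, &b"1.5"[..]);
    println!("{:?}", std::str::from_utf8(trimmed));
}
```

This is what lets `parse_bytes` hand the float parser a borrowed, whitespace-trimmed view of the input instead of a trimmed copy.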
fn parse_inner(literal: &[u8]) -> Option<f64> {
|
||||
use lexical_parse_float::{
|
||||
format::PYTHON3_LITERAL, FromLexicalWithOptions, NumberFormatBuilder, Options,
|
||||
};
|
||||
// lexical-core's format::PYTHON_STRING is inaccurate
|
||||
const PYTHON_STRING: u128 = NumberFormatBuilder::rebuild(PYTHON3_LITERAL)
|
||||
.no_special(false)
|
||||
.build();
|
||||
f64::from_lexical_with_options::<PYTHON_STRING>(literal, &Options::new()).ok()
|
||||
}
|
||||
|
||||
pub fn is_integer(v: f64) -> bool {
|
||||
fn is_integer(v: f64) -> bool {
|
||||
(v - v.round()).abs() < f64::EPSILON
|
||||
}
|
||||
|
||||
fn format_nan(case: Case) -> String {
|
||||
let nan = match case {
|
||||
Case::Lower => "nan",
|
||||
Case::Upper => "NAN",
|
||||
};
|
||||
|
||||
nan.to_string()
|
||||
}
|
||||
|
||||
fn format_inf(case: Case) -> String {
|
||||
let inf = match case {
|
||||
Case::Lower => "inf",
|
||||
Case::Upper => "INF",
|
||||
};
|
||||
|
||||
inf.to_string()
|
||||
}
|
||||
|
||||
pub fn decimal_point_or_empty(precision: usize, alternate_form: bool) -> &'static str {
|
||||
match (precision, alternate_form) {
|
||||
(0, true) => ".",
|
||||
_ => "",
|
||||
}
|
||||
}
pub fn format_fixed(precision: usize, magnitude: f64, case: Case, alternate_form: bool) -> String {
    match magnitude {
        magnitude if magnitude.is_finite() => {
            let point = decimal_point_or_empty(precision, alternate_form);
            format!("{magnitude:.precision$}{point}")
        }
        magnitude if magnitude.is_nan() => format_nan(case),
        magnitude if magnitude.is_infinite() => format_inf(case),
        _ => String::new(),
    }
}
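A minimal standalone sketch of the fixed-notation path above, with `Case`/NaN handling elided: `{:.precision$}` performs the rounding, and the alternate form keeps a trailing point when precision is 0.

```rust
fn format_fixed_sketch(precision: usize, magnitude: f64, alternate_form: bool) -> String {
    // Mirrors `decimal_point_or_empty`: alternate form keeps the point at precision 0.
    let point = if precision == 0 && alternate_form { "." } else { "" };
    format!("{magnitude:.precision$}{point}")
}

fn main() {
    assert_eq!(format_fixed_sketch(2, 3.14159, false), "3.14");
    assert_eq!(format_fixed_sketch(0, 3.7, true), "4.");
}
```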
// Formats floats into Python style exponent notation, by first formatting in Rust style
// exponent notation (`1.0000e0`), then convert to Python style (`1.0000e+00`).
pub fn format_exponent(
    precision: usize,
    magnitude: f64,
    case: Case,
    alternate_form: bool,
) -> String {
    match magnitude {
        magnitude if magnitude.is_finite() => {
            let r_exp = format!("{magnitude:.precision$e}");
            let mut parts = r_exp.splitn(2, 'e');
            let base = parts.next().unwrap();
            let exponent = parts.next().unwrap().parse::<i64>().unwrap();
            let e = match case {
                Case::Lower => 'e',
                Case::Upper => 'E',
            };
            let point = decimal_point_or_empty(precision, alternate_form);
            format!("{base}{point}{e}{exponent:+#03}")
        }
        magnitude if magnitude.is_nan() => format_nan(case),
        magnitude if magnitude.is_infinite() => format_inf(case),
        _ => String::new(),
    }
}
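The conversion above relies on Rust's `{:e}` output (`1.2345e3`) differing from Python's (`1.2345e+03`): the exponent must gain an explicit sign and at least two digits, which zero-padded sign-aware formatting provides. A standalone sketch (helper name invented for illustration):

```rust
fn pythonize_exponent(rust_exp: &str) -> String {
    let (base, exp) = rust_exp.split_once('e').expect("exponent notation");
    let exponent: i64 = exp.parse().expect("integer exponent");
    // `+` forces the sign; `03` zero-pads to width 3 including the sign.
    format!("{base}e{exponent:+03}")
}

fn main() {
    assert_eq!(pythonize_exponent(&format!("{:.4e}", 1234.5_f64)), "1.2345e+03");
    assert_eq!(pythonize_exponent("1.5e-7"), "1.5e-07");
}
```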
/// If `s` represents a floating point value, trailing zeros and a possibly trailing
/// decimal point will be removed.
/// This function does NOT work with decimal commas.
fn maybe_remove_trailing_redundant_chars(s: String, alternate_form: bool) -> String {
    if !alternate_form && s.contains('.') {
        // only truncate floating point values when not in alternate form
        let s = remove_trailing_zeros(s);
        remove_trailing_decimal_point(s)
    } else {
        s
    }
}
fn remove_trailing_zeros(s: String) -> String {
    let mut s = s;
    while s.ends_with('0') {
        s.pop();
    }
    s
}

fn remove_trailing_decimal_point(s: String) -> String {
    let mut s = s;
    if s.ends_with('.') {
        s.pop();
    }
    s
}
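The two pop-loops above can also be expressed with std's `trim_end_matches`; a sketch of the combined behaviour, including the `contains('.')` guard from `maybe_remove_trailing_redundant_chars` (behaviour differs slightly in that `trim_end_matches('.')` would strip several points, which a well-formed float never has):

```rust
fn strip_redundant(s: &str) -> &str {
    if s.contains('.') {
        // Drop trailing zeros first, then a now-trailing decimal point.
        s.trim_end_matches('0').trim_end_matches('.')
    } else {
        // Integers are left untouched.
        s
    }
}

fn main() {
    assert_eq!(strip_redundant("10.0"), "10");
    assert_eq!(strip_redundant("100.00"), "100");
    assert_eq!(strip_redundant("1000"), "1000");
}
```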
#[allow(
    clippy::cast_sign_loss,
    clippy::cast_possible_truncation,
    clippy::cast_possible_wrap
)]
pub fn format_general(
    precision: usize,
    magnitude: f64,
    case: Case,
    alternate_form: bool,
    always_shows_fract: bool,
) -> String {
    match magnitude {
        magnitude if magnitude.is_finite() => {
            let r_exp = format!("{:.*e}", precision.saturating_sub(1), magnitude);
            let mut parts = r_exp.splitn(2, 'e');
            let base = parts.next().unwrap();
            let exponent = parts.next().unwrap().parse::<i64>().unwrap();
            if exponent < -4 || exponent + i64::from(always_shows_fract) >= (precision as i64) {
                let e = match case {
                    Case::Lower => 'e',
                    Case::Upper => 'E',
                };
                let magnitude = format!("{:.*}", precision + 1, base);
                let base = maybe_remove_trailing_redundant_chars(magnitude, alternate_form);
                let point = decimal_point_or_empty(precision.saturating_sub(1), alternate_form);
                format!("{base}{point}{e}{exponent:+#03}")
            } else {
                let precision = ((precision as i64) - 1 - exponent) as usize;
                let magnitude = format!("{magnitude:.precision$}");
                let base = maybe_remove_trailing_redundant_chars(magnitude, alternate_form);
                let point = decimal_point_or_empty(precision, alternate_form);
                format!("{base}{point}")
            }
        }
        magnitude if magnitude.is_nan() => format_nan(case),
        magnitude if magnitude.is_infinite() => format_inf(case),
        _ => String::new(),
    }
}
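The branch condition above mirrors Python's `%g`: scientific notation kicks in when the decimal exponent drops below -4 or reaches the precision. A standalone sketch of just that decision rule (assumed simplification: finite inputs only):

```rust
fn uses_scientific(precision: usize, magnitude: f64, always_shows_fract: bool) -> bool {
    // Render once in exponent form to learn the decimal exponent.
    let r_exp = format!("{:.*e}", precision.saturating_sub(1), magnitude);
    let exponent: i64 = r_exp.split('e').nth(1).unwrap().parse().unwrap();
    exponent < -4 || exponent + i64::from(always_shows_fract) >= (precision as i64)
}

fn main() {
    assert!(!uses_scientific(6, 0.0001, false)); // `%g` prints 0.0001 positionally
    assert!(uses_scientific(6, 0.00001, false)); // 1e-05
    assert!(uses_scientific(6, 1234567.0, false)); // 1.23457e+06
}
```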
// TODO: rewrite using format_general
pub fn to_string(value: f64) -> String {
    let lit = format!("{value:e}");
@@ -195,83 +26,3 @@ pub fn to_string(value: f64) -> String {
        s
    }
}
pub fn from_hex(s: &str) -> Option<f64> {
    if let Ok(f) = hexf_parse::parse_hexf64(s, false) {
        return Some(f);
    }
    match s.to_ascii_lowercase().as_str() {
        "nan" | "+nan" | "-nan" => Some(f64::NAN),
        "inf" | "infinity" | "+inf" | "+infinity" => Some(f64::INFINITY),
        "-inf" | "-infinity" => Some(f64::NEG_INFINITY),
        value => {
            let mut hex = String::with_capacity(value.len());
            let has_0x = value.contains("0x");
            let has_p = value.contains('p');
            let has_dot = value.contains('.');
            let mut start = 0;

            if !has_0x && value.starts_with('-') {
                hex.push_str("-0x");
                start += 1;
            } else if !has_0x {
                hex.push_str("0x");
                if value.starts_with('+') {
                    start += 1;
                }
            }

            for (index, ch) in value.chars().enumerate() {
                if ch == 'p' {
                    if has_dot {
                        hex.push('p');
                    } else {
                        hex.push_str(".p");
                    }
                } else if index >= start {
                    hex.push(ch);
                }
            }

            if !has_p && has_dot {
                hex.push_str("p0");
            } else if !has_p && !has_dot {
                hex.push_str(".p0");
            }

            hexf_parse::parse_hexf64(hex.as_str(), false).ok()
        }
    }
}
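`from_hex` above leans on the `hexf-parse` crate and only normalizes the string first: ensure a `0x` prefix (after any sign), a decimal point, and a `p` exponent. The normalization step alone, as a standalone sketch in pure std (helper name `normalize_hex` is invented for illustration):

```rust
fn normalize_hex(value: &str) -> String {
    let mut hex = String::with_capacity(value.len() + 5);
    let has_0x = value.contains("0x");
    let has_p = value.contains('p');
    let has_dot = value.contains('.');
    let mut start = 0;

    // Move the sign in front of an inserted `0x` prefix.
    if !has_0x && value.starts_with('-') {
        hex.push_str("-0x");
        start += 1;
    } else if !has_0x {
        hex.push_str("0x");
        if value.starts_with('+') {
            start += 1;
        }
    }

    for (index, ch) in value.chars().enumerate() {
        if ch == 'p' {
            // Guarantee a decimal point before the exponent marker.
            if has_dot {
                hex.push('p');
            } else {
                hex.push_str(".p");
            }
        } else if index >= start {
            hex.push(ch);
        }
    }

    // Guarantee an exponent (and a point) at the end.
    if !has_p && has_dot {
        hex.push_str("p0");
    } else if !has_p && !has_dot {
        hex.push_str(".p0");
    }

    hex
}

fn main() {
    assert_eq!(normalize_hex("1.8"), "0x1.8p0");
    assert_eq!(normalize_hex("-1p4"), "-0x1.p4");
    assert_eq!(normalize_hex("0x1.8"), "0x1.8p0");
}
```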
#[test]
fn test_remove_trailing_zeros() {
    assert!(remove_trailing_zeros(String::from("100")) == *"1");
    assert!(remove_trailing_zeros(String::from("100.00")) == *"100.");

    // leave leading zeros untouched
    assert!(remove_trailing_zeros(String::from("001")) == *"001");

    // leave strings untouched if they don't end with 0
    assert!(remove_trailing_zeros(String::from("101")) == *"101");
}

#[test]
fn test_remove_trailing_decimal_point() {
    assert!(remove_trailing_decimal_point(String::from("100.")) == *"100");
    assert!(remove_trailing_decimal_point(String::from("1.")) == *"1");

    // leave leading decimal points untouched
    assert!(remove_trailing_decimal_point(String::from(".5")) == *".5");
}

#[test]
fn test_maybe_remove_trailing_redundant_chars() {
    assert!(maybe_remove_trailing_redundant_chars(String::from("100."), true) == *"100.");
    assert!(maybe_remove_trailing_redundant_chars(String::from("100."), false) == *"100");
    assert!(maybe_remove_trailing_redundant_chars(String::from("1."), false) == *"1");
    assert!(maybe_remove_trailing_redundant_chars(String::from("10.0"), false) == *"10");

    // don't truncate integers
    assert!(maybe_remove_trailing_redundant_chars(String::from("1000"), false) == *"1000");
}
@@ -37,7 +37,7 @@ use unicode_normalization::UnicodeNormalization;
use ruff_python_ast::{
    str::Quote,
    str_prefix::{AnyStringPrefix, FStringPrefix},
-    AnyStringFlags, Int, IpyEscapeKind,
+    AnyStringFlags, Int, IpyEscapeKind, StringFlags,
};
use ruff_text_size::{TextLen, TextRange, TextSize};
@@ -1,4 +1,4 @@
-use ruff_python_ast::AnyStringFlags;
+use ruff_python_ast::{AnyStringFlags, StringFlags};

/// The context representing the current f-string that the lexer is in.
#[derive(Debug)]
@@ -36,13 +36,13 @@ impl FStringContext {
    }

    /// Returns the quote character for the current f-string.
-    pub(crate) const fn quote_char(&self) -> char {
+    pub(crate) fn quote_char(&self) -> char {
        self.flags.quote_style().as_char()
    }

    /// Returns the triple quotes for the current f-string if it is a triple-quoted
    /// f-string, `None` otherwise.
-    pub(crate) const fn triple_quotes(&self) -> Option<&'static str> {
+    pub(crate) fn triple_quotes(&self) -> Option<&'static str> {
        if self.is_triple_quoted() {
            Some(self.flags.quote_str())
        } else {
@@ -56,7 +56,7 @@ impl FStringContext {
    }

    /// Returns `true` if the current f-string is a triple-quoted f-string.
-    pub(crate) const fn is_triple_quoted(&self) -> bool {
+    pub(crate) fn is_triple_quoted(&self) -> bool {
        self.flags.is_triple_quoted()
    }
@@ -113,15 +113,14 @@
use std::iter::FusedIterator;
use std::ops::Deref;

-use crate::lexer::{lex, lex_starts_at, LexResult};
-use ruff_python_ast::{Expr, Mod, ModModule, PySourceType, Suite};
-use ruff_text_size::{TextRange, TextSize};

pub use crate::error::{FStringErrorType, ParseError, ParseErrorType};
+use crate::lexer::{lex, lex_starts_at, LexResult};
pub use crate::parser::Program;
pub use crate::token::{Tok, TokenKind};

+use ruff_python_ast::{Expr, Mod, ModModule, PySourceType, Suite};
+use ruff_text_size::{Ranged, TextRange, TextSize};

mod error;
pub mod lexer;
mod parser;

@@ -355,44 +354,6 @@ impl Tokens {
        TokenKindIter::new(&self.0)
    }

    /// Returns an iterator over the [`TokenKind`] and its range for all the tokens that are
    /// within the given `range`.
    ///
    /// The start and end position of the given range should correspond to the start position of
    /// the first token and the end position of the last token in the returned iterator.
    ///
    /// For example, if the struct contains the following tokens:
    /// ```txt
    /// (Def, 0..3)
    /// (Name, 4..7)
    /// (Lpar, 7..8)
    /// (Rpar, 8..9)
    /// (Colon, 9..10)
    /// (Ellipsis, 11..14)
    /// (Newline, 14..14)
    /// ```
    ///
    /// Then, the range `4..10` returns an iterator which yields `Name`, `Lpar`, `Rpar`, and
    /// `Colon` token. But, if the given position doesn't match any of the tokens, an empty
    /// iterator is returned.
    pub fn kinds_within_range<T: Ranged>(&self, ranged: T) -> TokenKindIter {
        let Ok(start_index) = self.binary_search_by_key(&ranged.start(), |result| match result {
            Ok((_, range)) => range.start(),
            Err(error) => error.location().start(),
        }) else {
            return TokenKindIter::default();
        };

        let Ok(end_index) = self.binary_search_by_key(&ranged.end(), |result| match result {
            Ok((_, range)) => range.end(),
            Err(error) => error.location().end(),
        }) else {
            return TokenKindIter::default();
        };

        TokenKindIter::new(self.get(start_index..=end_index).unwrap_or(&[]))
    }

    /// Consumes the [`Tokens`], returning the underlying vector of [`LexResult`].
    pub fn into_inner(self) -> Vec<LexResult> {
        self.0
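`kinds_within_range` above exploits that tokens are sorted by position, so both endpoints can be found with `binary_search_by_key`. A simplified standalone sketch (tokens as plain tuples, error variants omitted):

```rust
type Token = (&'static str, u32, u32); // (kind, start, end)

fn tokens_in_range(tokens: &[Token], start: u32, end: u32) -> &[Token] {
    // Both searches must hit exact token boundaries, else the result is empty.
    let Ok(start_index) = tokens.binary_search_by_key(&start, |&(_, s, _)| s) else {
        return &[];
    };
    let Ok(end_index) = tokens.binary_search_by_key(&end, |&(_, _, e)| e) else {
        return &[];
    };
    tokens.get(start_index..=end_index).unwrap_or(&[])
}

fn main() {
    // The token stream from the doc comment above.
    let tokens = [
        ("Def", 0, 3), ("Name", 4, 7), ("Lpar", 7, 8), ("Rpar", 8, 9),
        ("Colon", 9, 10), ("Ellipsis", 11, 14), ("Newline", 14, 14),
    ];
    let hit = tokens_in_range(&tokens, 4, 10);
    assert_eq!(hit.len(), 4);
    assert_eq!(hit[0].0, "Name");
    assert_eq!(hit[3].0, "Colon");
    // A position that matches no token boundary yields nothing.
    assert!(tokens_in_range(&tokens, 5, 10).is_empty());
}
```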
@@ -1298,7 +1298,7 @@ impl<'src> Parser<'src> {
    ///
    /// If the parser isn't positioned at a `{` or `FStringMiddle` token.
    fn parse_fstring_elements(&mut self) -> FStringElements {
-        let mut elements = FStringElements::default();
+        let mut elements = vec![];

        self.parse_list(RecoveryContextKind::FStringElements, |parser| {
            let element = match parser.current_token_kind() {
@@ -1348,7 +1348,7 @@ impl<'src> Parser<'src> {
            elements.push(element);
        });

-        elements
+        FStringElements::from(elements)
    }

    /// Parses a f-string expression element.
@@ -2,7 +2,7 @@

use bstr::ByteSlice;

-use ruff_python_ast::{self as ast, AnyStringFlags, Expr};
+use ruff_python_ast::{self as ast, AnyStringFlags, Expr, StringFlags};
use ruff_text_size::{Ranged, TextRange, TextSize};

use crate::lexer::{LexicalError, LexicalErrorType};

@@ -7,7 +7,7 @@

use std::fmt;

-use ruff_python_ast::{AnyStringFlags, BoolOp, Int, IpyEscapeKind, Operator, UnaryOp};
+use ruff_python_ast::{AnyStringFlags, BoolOp, Int, IpyEscapeKind, Operator, StringFlags, UnaryOp};

/// The set of tokens the Python source code can be tokenized in.
#[derive(Clone, Debug, PartialEq, is_macro::Is)]
@@ -470,14 +470,14 @@ pub enum BindingKind<'a> {
    /// def foo():
    ///     global x
    /// ```
-    Global,
+    Global(Option<BindingId>),

    /// A binding for a nonlocal variable, like `x` in:
    /// ```python
    /// def foo():
    ///     nonlocal x
    /// ```
-    Nonlocal(ScopeId),
+    Nonlocal(BindingId, ScopeId),

    /// A binding for a builtin, like `print` or `bool`.
    Builtin,
@@ -670,3 +670,14 @@ impl<'a, 'ast> Imported<'ast> for AnyImport<'a, 'ast> {
        }
    }
}
+
+#[cfg(test)]
+mod tests {
+    use crate::BindingKind;
+
+    #[test]
+    #[cfg(target_pointer_width = "64")]
+    fn size() {
+        assert!(std::mem::size_of::<BindingKind>() <= 24);
+    }
+}
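The new size test above guards against payload growth. A standalone illustration with stand-in types (`BindingId`/`ScopeId` in ruff are `u32` index newtypes, so the payloads added in this diff stay within the 24-byte budget):

```rust
#[allow(dead_code)]
#[derive(Clone, Copy)]
struct BindingId(u32);

#[allow(dead_code)]
#[derive(Clone, Copy)]
struct ScopeId(u32);

// Stand-in mirroring the variants changed in the diff above.
#[allow(dead_code)]
enum BindingKind {
    Global(Option<BindingId>),
    Nonlocal(BindingId, ScopeId),
    Builtin,
}

fn main() {
    // Discriminant + largest payload (two u32s) fits comfortably in 24 bytes.
    assert!(std::mem::size_of::<BindingKind>() <= 24);
}
```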
@@ -540,6 +540,23 @@ impl<'a> SemanticModel<'a> {
                    return ReadResult::Resolved(binding_id);
                }

+                BindingKind::Global(Some(binding_id))
+                | BindingKind::Nonlocal(binding_id, _) => {
+                    // Mark the shadowed binding as used.
+                    let reference_id = self.resolved_references.push(
+                        self.scope_id,
+                        self.node_id,
+                        ExprContext::Load,
+                        self.flags,
+                        name.range,
+                    );
+                    self.bindings[binding_id].references.push(reference_id);
+
+                    // Treat it as resolved.
+                    self.resolved_names.insert(name.into(), binding_id);
+                    return ReadResult::Resolved(binding_id);
+                }

                _ => {
                    // Otherwise, treat it as resolved.
                    self.resolved_names.insert(name.into(), binding_id);
@@ -107,14 +107,6 @@ impl<'a> Scope<'a> {
        })
    }

-    /// Like [`Scope::binding_ids`], but returns all bindings that were added to the scope,
-    /// including those that were shadowed by later bindings.
-    pub fn all_binding_ids(&self) -> impl Iterator<Item = BindingId> + '_ {
-        self.bindings.values().copied().flat_map(|id| {
-            std::iter::successors(Some(id), |id| self.shadowed_bindings.get(id).copied())
-        })
-    }
-
    /// Like [`Scope::bindings`], but returns all bindings added to the scope, including those that
    /// were shadowed by later bindings.
    pub fn all_bindings(&self) -> impl Iterator<Item = (&str, BindingId)> + '_ {
@@ -144,11 +136,6 @@ impl<'a> Scope<'a> {
        !self.star_imports.is_empty()
    }

-    /// Returns an iterator over all star imports (e.g., `from sys import *`) in this scope.
-    pub fn star_imports(&self) -> impl Iterator<Item = &StarImport<'a>> {
-        self.star_imports.iter()
-    }
-
    /// Set the globals pointer for this scope.
    pub(crate) fn set_globals_id(&mut self, globals: GlobalsId) {
        self.globals_id = Some(globals);
@@ -16,18 +16,19 @@ license = { workspace = true }
ruff_diagnostics = { workspace = true }
ruff_formatter = { workspace = true }
ruff_linter = { workspace = true }
+ruff_notebook = { workspace = true }
ruff_python_ast = { workspace = true }
ruff_python_codegen = { workspace = true }
ruff_python_formatter = { workspace = true }
ruff_python_index = { workspace = true }
ruff_python_parser = { workspace = true }
-ruff_notebook = { path = "../ruff_notebook" }
ruff_source_file = { workspace = true }
ruff_text_size = { workspace = true }
ruff_workspace = { workspace = true }

anyhow = { workspace = true }
crossbeam = { workspace = true }
globset = { workspace = true }
jod-thread = { workspace = true }
lsp-server = { workspace = true }
lsp-types = { workspace = true }
@@ -1,15 +1,17 @@
{
-    "codeAction": {
-        "disableRuleComment": {
-            "enable": false
-        }
-    },
-    "lint": {
-        "ignore": ["RUF001"],
-        "run": "onSave"
-    },
-    "fixAll": false,
-    "logLevel": "warn",
-    "lineLength": 80,
-    "exclude": ["third_party"]
+    "settings": {
+        "codeAction": {
+            "disableRuleComment": {
+                "enable": false
+            }
+        },
+        "lint": {
+            "ignore": ["RUF001"],
+            "run": "onSave"
+        },
+        "fixAll": false,
+        "logLevel": "warn",
+        "lineLength": 80,
+        "exclude": ["third_party"]
+    }
}
@@ -1,18 +1,18 @@
//! Types and utilities for working with text, modifying source files, and `Ruff <-> LSP` type conversion.

-mod document;
mod notebook;
mod range;
mod replacement;
+mod text_document;

use std::{collections::HashMap, path::PathBuf};

-pub(crate) use document::DocumentVersion;
-pub use document::TextDocument;
use lsp_types::PositionEncodingKind;
pub(crate) use notebook::NotebookDocument;
pub(crate) use range::{NotebookRange, RangeExt, ToRangeExt};
pub(crate) use replacement::Replacement;
+pub(crate) use text_document::DocumentVersion;
+pub use text_document::TextDocument;

use crate::{fix::Fixes, session::ResolvedClientCapabilities};
@@ -45,30 +45,27 @@ impl super::BackgroundDocumentRequestHandler for CodeActions {
        response.extend(noqa_comments(&snapshot, &fixes));
    }

-    if snapshot.client_settings().fix_all()
-        && supported_code_actions.contains(&SupportedCodeAction::SourceFixAll)
-    {
-        response.push(fix_all(&snapshot).with_failure_code(ErrorCode::InternalError)?);
+    if snapshot.client_settings().fix_all() {
+        if supported_code_actions.contains(&SupportedCodeAction::SourceFixAll) {
+            response.push(fix_all(&snapshot).with_failure_code(ErrorCode::InternalError)?);
+        } else if supported_code_actions.contains(&SupportedCodeAction::NotebookSourceFixAll) {
+            response
+                .push(notebook_fix_all(&snapshot).with_failure_code(ErrorCode::InternalError)?);
+        }
    }

-    if snapshot.client_settings().organize_imports()
-        && supported_code_actions.contains(&SupportedCodeAction::SourceOrganizeImports)
-    {
-        response.push(organize_imports(&snapshot).with_failure_code(ErrorCode::InternalError)?);
-    }
-
-    if snapshot.client_settings().fix_all()
-        && supported_code_actions.contains(&SupportedCodeAction::NotebookSourceFixAll)
-    {
-        response.push(notebook_fix_all(&snapshot).with_failure_code(ErrorCode::InternalError)?);
-    }
-
-    if snapshot.client_settings().organize_imports()
-        && supported_code_actions.contains(&SupportedCodeAction::NotebookSourceOrganizeImports)
-    {
-        response.push(
-            notebook_organize_imports(&snapshot).with_failure_code(ErrorCode::InternalError)?,
-        );
-    }
+    if snapshot.client_settings().organize_imports() {
+        if supported_code_actions.contains(&SupportedCodeAction::SourceOrganizeImports) {
+            response
+                .push(organize_imports(&snapshot).with_failure_code(ErrorCode::InternalError)?);
+        } else if supported_code_actions
+            .contains(&SupportedCodeAction::NotebookSourceOrganizeImports)
+        {
+            response.push(
+                notebook_organize_imports(&snapshot)
+                    .with_failure_code(ErrorCode::InternalError)?,
+            );
+        }
+    }

    Ok(Some(response))
@@ -465,7 +465,7 @@ impl DocumentQuery {
    /// Get the source type of the document associated with this query.
    pub(crate) fn source_type(&self) -> ruff_python_ast::PySourceType {
        match self {
-            Self::Text { .. } => ruff_python_ast::PySourceType::Python,
+            Self::Text { .. } => ruff_python_ast::PySourceType::from(self.file_path()),
            Self::Notebook { .. } => ruff_python_ast::PySourceType::Ipynb,
        }
    }
@@ -1,7 +1,9 @@
+use globset::Candidate;
use ruff_linter::{
    display_settings, fs::normalize_path_to, settings::types::FilePattern,
    settings::types::PreviewMode,
};
+use ruff_workspace::resolver::match_candidate_exclusion;
use ruff_workspace::{
    configuration::{Configuration, FormatConfiguration, LintConfiguration, RuleSelection},
    pyproject::{find_user_settings_toml, settings_toml},
@@ -12,12 +14,13 @@ use std::{
    path::{Path, PathBuf},
    sync::Arc,
};
-use walkdir::{DirEntry, WalkDir};
+use walkdir::WalkDir;

use crate::session::settings::{ConfigurationPreference, ResolvedEditorSettings};

#[derive(Default)]
pub(crate) struct RuffSettings {
+    /// Settings used to manage file inclusion and exclusion.
+    file_resolver: ruff_workspace::FileResolverSettings,
    /// Settings to pass into the Ruff linter.
    linter: ruff_linter::settings::LinterSettings,
    /// Settings to pass into the Ruff formatter.
@@ -54,7 +57,7 @@ impl RuffSettings {
            .ok()
        })
        .unwrap_or_else(|| {
-            let default_configuration = ruff_workspace::configuration::Configuration::default();
+            let default_configuration = Configuration::default();
            EditorConfigurationTransformer(editor_settings, root)
                .transform(default_configuration)
                .into_settings(root)
@@ -64,6 +67,7 @@ impl RuffSettings {
        });

        RuffSettings {
+            file_resolver: fallback.file_resolver,
            formatter: fallback.formatter,
            linter: fallback.linter,
        }
@@ -82,12 +86,77 @@ impl RuffSettingsIndex {
    pub(super) fn new(root: &Path, editor_settings: &ResolvedEditorSettings) -> Self {
        let mut index = BTreeMap::default();

-        for directory in WalkDir::new(root)
-            .into_iter()
-            .filter_map(Result::ok)
-            .filter(|entry| entry.file_type().is_dir())
-            .map(DirEntry::into_path)
-        {
+        // Add any settings from above the workspace root.
+        for directory in root.ancestors() {
+            if let Some(pyproject) = settings_toml(directory).ok().flatten() {
+                let Ok(settings) = ruff_workspace::resolver::resolve_root_settings(
+                    &pyproject,
+                    Relativity::Parent,
+                    &EditorConfigurationTransformer(editor_settings, root),
+                ) else {
+                    continue;
+                };
+
+                index.insert(
+                    directory.to_path_buf(),
+                    Arc::new(RuffSettings {
+                        file_resolver: settings.file_resolver,
+                        linter: settings.linter,
+                        formatter: settings.formatter,
+                    }),
+                );
+                break;
+            }
+        }
+
+        // Add any settings within the workspace itself
+        let mut walker = WalkDir::new(root).into_iter();
+
+        while let Some(entry) = walker.next() {
+            let Ok(entry) = entry else {
+                continue;
+            };
+
+            // Skip non-directories.
+            if !entry.file_type().is_dir() {
+                continue;
+            }
+
+            let directory = entry.into_path();
+
+            // If the directory is excluded from the workspace, skip it.
+            if let Some(file_name) = directory.file_name() {
+                if let Some((_, settings)) = index
+                    .range(..directory.clone())
+                    .rfind(|(path, _)| directory.starts_with(path))
+                {
+                    let candidate = Candidate::new(&directory);
+                    let basename = Candidate::new(file_name);
+                    if match_candidate_exclusion(
+                        &candidate,
+                        &basename,
+                        &settings.file_resolver.exclude,
+                    ) {
+                        tracing::debug!("Ignored path via `exclude`: {}", directory.display());
+
+                        walker.skip_current_dir();
+                        continue;
+                    } else if match_candidate_exclusion(
+                        &candidate,
+                        &basename,
+                        &settings.file_resolver.extend_exclude,
+                    ) {
+                        tracing::debug!(
+                            "Ignored path via `extend-exclude`: {}",
+                            directory.display()
+                        );
+
+                        walker.skip_current_dir();
+                        continue;
+                    }
+                }
+            }

            if let Some(pyproject) = settings_toml(&directory).ok().flatten() {
                let Ok(settings) = ruff_workspace::resolver::resolve_root_settings(
                    &pyproject,
@@ -99,6 +168,7 @@ impl RuffSettingsIndex {
                index.insert(
                    directory,
                    Arc::new(RuffSettings {
+                        file_resolver: settings.file_resolver,
                        linter: settings.linter,
                        formatter: settings.formatter,
                    }),
@@ -115,8 +185,7 @@ impl RuffSettingsIndex {
        if let Some((_, settings)) = self
            .index
            .range(..document_path.to_path_buf())
-            .rev()
-            .find(|(path, _)| document_path.starts_with(path))
+            .rfind(|(path, _)| document_path.starts_with(path))
        {
            return settings.clone();
        }
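The nearest-ancestor lookup used twice above (`range(..path)` followed by `rfind` with a `starts_with` check) works because `BTreeMap` keys are ordered, so every ancestor directory of a path sorts before the path itself. A standalone sketch with string stand-ins for the settings:

```rust
use std::collections::BTreeMap;
use std::path::{Path, PathBuf};

fn nearest<'a>(index: &'a BTreeMap<PathBuf, &'a str>, path: &Path) -> Option<&'a str> {
    index
        .range(..path.to_path_buf())
        // Walk keys ordered before `path`, from closest to furthest.
        .rfind(|(dir, _)| path.starts_with(dir))
        .map(|(_, settings)| *settings)
}

fn main() {
    let mut index = BTreeMap::new();
    index.insert(PathBuf::from("/repo"), "root");
    index.insert(PathBuf::from("/repo/pkg"), "pkg");
    assert_eq!(nearest(&index, Path::new("/repo/pkg/a.py")), Some("pkg"));
    assert_eq!(nearest(&index, Path::new("/repo/other/b.py")), Some("root"));
}
```

The `rfind` iterates backwards, so the deepest matching ancestor wins, which is exactly the "most specific settings file applies" behaviour wanted here.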
@@ -130,7 +130,7 @@ enum InitializationOptions {
        workspace_settings: Vec<WorkspaceSettings>,
    },
    GlobalOnly {
-        #[serde(flatten)]
+        #[serde(default)]
        settings: ClientSettings,
    },
}
@@ -14,7 +14,7 @@ Ruff can be used as a [pre-commit](https://pre-commit.com) hook via [`ruff-pre-c
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
  # Ruff version.
-  rev: v0.4.5
+  rev: v0.4.6
  hooks:
    # Run the linter.
    - id: ruff
@@ -27,7 +27,7 @@ To enable lint fixes, add the `--fix` argument to the lint hook:
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
  # Ruff version.
-  rev: v0.4.5
+  rev: v0.4.6
  hooks:
    # Run the linter.
    - id: ruff
@@ -41,7 +41,7 @@ To run the hooks over Jupyter Notebooks too, add `jupyter` to the list of allowe
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
  # Ruff version.
-  rev: v0.4.5
+  rev: v0.4.6
  hooks:
    # Run the linter.
    - id: ruff
@@ -4,7 +4,7 @@ build-backend = "maturin"

[project]
name = "ruff"
-version = "0.4.5"
+version = "0.4.6"
description = "An extremely fast Python linter and code formatter, written in Rust."
authors = [{ name = "Astral Software Inc.", email = "hey@astral.sh" }]
readme = "README.md"
@@ -36,7 +36,6 @@ if TYPE_CHECKING:
        Project,
    )


# Matches lines that are summaries rather than diagnostics
CHECK_SUMMARY_LINE_RE = re.compile(r"^(Found \d+ error.*)|(.* fixable with .*)$")
2 ruff.schema.json generated
@@ -2685,6 +2685,8 @@
      "ASYNC100",
      "ASYNC101",
      "ASYNC102",
+     "ASYNC11",
+     "ASYNC116",
      "B",
      "B0",
      "B00",
@@ -1,6 +1,6 @@
[tool.poetry]
name = "scripts"
-version = "0.4.5"
+version = "0.4.6"
description = ""
authors = ["Charles Marsh <charlie.r.marsh@gmail.com>"]