Compare commits

...

27 Commits

Author SHA1 Message Date
Dhruv Manilawala
9059d24d96 Commit the lockfile for fuzz crate 2024-07-29 10:10:17 +05:30
renovate[bot]
1986c9e8e2 Update NPM Development dependencies (#12556) 2024-07-28 22:17:44 -04:00
renovate[bot]
d7e80dc955 Update pre-commit dependencies (#12555) 2024-07-28 22:17:34 -04:00
renovate[bot]
87d09f77cd Update Rust crate imperative to v1.0.6 (#12552) 2024-07-28 22:17:28 -04:00
renovate[bot]
bd37ef13b8 Update Rust crate bstr to v1.10.0 (#12557) 2024-07-28 22:17:11 -04:00
renovate[bot]
ec23c974db Update Rust crate toml to v0.8.16 (#12554) 2024-07-28 22:17:01 -04:00
renovate[bot]
122e5ab428 Update Rust crate serde_json to v1.0.121 (#12553) 2024-07-28 22:16:55 -04:00
renovate[bot]
2f2149aca8 Update Rust crate env_logger to v0.11.5 (#12550) 2024-07-28 22:16:49 -04:00
renovate[bot]
9d5c31e7da Update Rust crate imara-diff to v0.1.7 (#12551) 2024-07-28 22:16:42 -04:00
renovate[bot]
25f3ad6238 Update Rust crate clap to v4.5.11 (#12549) 2024-07-28 22:16:36 -04:00
renovate[bot]
79926329a4 Update Rust crate argfile to v0.2.1 (#12548) 2024-07-28 22:16:31 -04:00
Aleksei Latyshev
9cdc578dd9 [flake8-builtins] Implement import, lambda, and module shadowing (#12546)
## Summary

Extend `flake8-builtins` to cover imports, lambda arguments, and modules, for
consistency with the original checker
[flake8_builtins](https://github.com/gforcada/flake8-builtins/blob/main/flake8_builtins.py).

closes #12540 

## Details

- Implement builtin-import-shadowing (A004)
- Stop tracking import shadowing in builtin-variable-shadowing (A001) in
preview mode.
- Implement builtin-lambda-argument-shadowing (A005)
- Implement builtin-module-shadowing (A006)
  - Add new option `linter.flake8_builtins.builtins_allowed_modules`
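The new shadowing rules can be illustrated with plain Python (a hypothetical fixture, not ruff's own test files); the alias and parameter names below are the kind of code A004 and A005 as described above would flag:

```python
# A004 builtin-import-shadowing: the alias `list` hides the builtin type.
import collections as list

# A005 builtin-lambda-argument-shadowing: the parameter `id` hides builtins.id.
get_id = lambda id: id + 1

# Inside the lambda, `id` is the argument, not the builtin function:
assert get_id(41) == 42

# After the shadowing import, `list` names the module, not the builtin:
assert list.__name__ == "collections"
```

A006 (builtin-module-shadowing) is analogous but applies to the module's own file name, so it can't be shown in a single snippet.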

## Test Plan

cargo test
2024-07-29 01:42:42 +00:00
Charlie Marsh
665c75f7ab Add documentation for executable determination (#12547)
Closes https://github.com/astral-sh/ruff/issues/12505.
2024-07-28 16:23:00 -04:00
Micha Reiser
f37b39d6cc Allow downloading ecosystem results from forks (#12544) 2024-07-27 19:57:19 +02:00
Charlie Marsh
e18c45c310 Avoid marking required imports as unused (#12537)
## Summary

If an import is marked as "required", we should never flag it as unused.
In practice, this is rare, since required imports are typically used for
`__future__` annotations, which are always considered "used".

Closes https://github.com/astral-sh/ruff/issues/12458.
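As a sketch of why this particular import is typically marked "required": with postponed evaluation enabled, annotations are stored as strings and may reference names defined later (the `Node` example is illustrative, not from ruff):

```python
# Comments may precede a __future__ import; other statements may not.
from __future__ import annotations

def link(node: Node) -> Node:  # `Node` is not defined yet -- still fine
    return node

class Node:
    pass

# With postponed evaluation, the raw annotation is the string "Node".
assert link.__annotations__["node"] == "Node"
```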
2024-07-26 14:23:43 -04:00
Charlie Marsh
d930052de8 Move required import parsing out of lint rule (#12536)
## Summary

Instead, make it part of the serialization and deserialization itself.
This makes it _much_ easier to reuse when solving
https://github.com/astral-sh/ruff/issues/12458.
2024-07-26 13:35:45 -04:00
Sigurd Spieckermann
7ad4df9e9f Complete FBT002 example with Enum argument (#12525)
## Summary

I've extended the `FBT002` rule example with an `Enum` argument to show the
full usage in this case.
2024-07-26 11:50:19 -04:00
Charlie Marsh
425761e960 Use colon rather than dot formatting for integer-only types (#12534)
## Summary

Closes https://github.com/astral-sh/ruff/issues/12421.
2024-07-26 15:48:19 +00:00
Carl Meyer
4b69271809 [red-knot] resolve int/list/dict/set/tuple to builtin type (#12521)
Now that we have builtins available, resolve some simple cases to the
right builtin type.

We should also adjust the display for types to include their module
name; that's not done yet here.
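A plain-Python analogy for the cases being resolved (the runtime checks below are illustrative; red-knot's own display format for these, e.g. `Literal[int]`, appears in its tests):

```python
# An int literal too large for an i64 now resolves to the builtin `int`
# class rather than Unknown; container literals resolve to their builtins.
big = 10_000_000_000_000_000_000  # does not fit in a 64-bit signed integer
assert type(big) is int
assert type(()) is tuple
assert type([]) is list
assert type({1, 2}) is set
assert type({}) is dict
```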
2024-07-26 08:21:31 -07:00
Micha Reiser
bf23d38a21 Remove unnecessary clone in workspace API (#12529) 2024-07-26 17:19:05 +02:00
Charlie Marsh
49f51583fa Always allow explicit multi-line concatenations when implicit are banned (#12532)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11582.
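The distinction can be sketched in plain Python (illustrative strings, not ruff fixtures): implicit concatenation relies on adjacency, while explicit concatenation spells out `+`; after this change, the explicit form may span multiple lines even when implicit concatenation is banned:

```python
implicit = (
    "first part "
    "second part"  # adjacent literals: flagged when implicit concat is banned
)
explicit = (
    "first part "
    + "second part"  # explicit `+` across lines: always allowed
)
assert implicit == explicit == "first part second part"
```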
2024-07-26 10:36:35 -04:00
Charlie Marsh
1fe4a5faed Avoid recommending __slots__ for classes that inherit from more than namedtuple (#12531)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11887.
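The underlying pitfall can be shown with a minimal namedtuple subclass (hypothetical class names): without `__slots__ = ()`, instances silently regain a per-instance `__dict__`, which is what the rule's recommendation prevents when namedtuple is the sole base:

```python
from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])

class SlottedPoint(Point):
    __slots__ = ()  # keeps instances dict-free, as the rule recommends

class PlainPoint(Point):
    pass  # instances gain a wasteful __dict__

assert not hasattr(SlottedPoint(1, 2), "__dict__")
assert hasattr(PlainPoint(1, 2), "__dict__")
```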
2024-07-26 14:24:40 +00:00
Charlie Marsh
998bfe0847 Avoid recommending no-argument super in slots=True dataclasses (#12530)
## Summary

Closes https://github.com/astral-sh/ruff/issues/12506.
2024-07-26 10:09:51 -04:00
Dhruv Manilawala
6f4db8675b [red-knot] Add support for untitled files (#12492)
## Summary

This PR adds support for untitled files in the Red Knot project.

Refer to the [design
discussion](https://github.com/astral-sh/ruff/discussions/12336) for
more details.

### Changes
* The `parsed_module` always assumes that the `SystemVirtual` path is of
`PySourceType::Python`.
* For the module resolver, as suggested, I added a new enum by renaming
`FilePathRef` to `SystemOrVendoredPathRef` (happy to consider better names
here).
* The `file_to_module` query returns `None` for the `FilePath::SystemVirtual`
variant because a virtual file doesn't belong to any module.
* The sync implementation for a system virtual path is basically the same as
for a system path, except that it uses `virtual_path_metadata`. This is
because the system (language server) provides whether the file still exists
and, if it does, the corresponding metadata.

For the first point, VS Code uses `Untitled-1` for Python files and
`Untitled-1.ipynb` for Jupyter Notebooks. We can use this distinction
to determine whether the source type is `Python` or `Ipynb`.
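That distinction could be sketched like this (a hypothetical helper, not the actual red-knot API; the returned labels stand in for the `PySourceType` variants):

```python
def source_type(virtual_name: str) -> str:
    # Untitled VS Code buffers carry no real path; only notebooks get a
    # ".ipynb" suffix in their virtual name.
    return "Ipynb" if virtual_name.endswith(".ipynb") else "Python"

assert source_type("Untitled-1") == "Python"
assert source_type("Untitled-1.ipynb") == "Ipynb"
```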

## Test Plan

Added test cases in #12526
2024-07-26 18:13:31 +05:30
Micha Reiser
71f7aa4971 Remove criterion/codspeed compat layer (#12524) 2024-07-26 12:22:16 +02:00
Auguste Lalande
9f72f474e6 [pydoclint] Add docstring-missing-returns and docstring-extraneous-returns (DOC201, DOC202) (#12485)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-07-26 06:36:00 +00:00
Carl Meyer
10c993e21a [red-knot] remove wrong __init__.py from file-watching tests (#12519) 2024-07-26 07:14:01 +01:00
120 changed files with 4687 additions and 688 deletions

View File

@@ -616,7 +616,7 @@ jobs:
- uses: Swatinem/rust-cache@v2
- name: "Build benchmarks"
run: cargo codspeed build --features codspeed -p ruff_benchmark
run: cargo codspeed build -p ruff_benchmark
- name: "Run benchmarks"
uses: CodSpeedHQ/action@v2

View File

@@ -23,6 +23,7 @@ jobs:
name: pr-number
run_id: ${{ github.event.workflow_run.id || github.event.inputs.workflow_run_id }}
if_no_artifact_found: ignore
allow_forks: true
- name: Parse pull request number
id: pr-number
@@ -43,6 +44,7 @@ jobs:
path: pr/ecosystem
workflow_conclusion: completed
if_no_artifact_found: ignore
allow_forks: true
- name: Generate comment content
id: generate-comment

View File

@@ -43,7 +43,7 @@ repos:
)$
- repo: https://github.com/crate-ci/typos
rev: v1.23.2
rev: v1.23.5
hooks:
- id: typos
@@ -57,7 +57,7 @@ repos:
pass_filenames: false # This makes it a lot faster
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.5.4
rev: v0.5.5
hooks:
- id: ruff-format
- id: ruff

Cargo.lock generated
View File

@@ -141,9 +141,9 @@ checksum = "69f7f8c3906b62b754cd5326047894316021dcfe5a194c8ea52bdd94934a3457"
[[package]]
name = "argfile"
version = "0.2.0"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b7c5c8e418080ef8aa932039d12eda7b6f5043baf48f1523c166fbc32d004534"
checksum = "0a1cc0ba69de57db40674c66f7cf2caee3981ddef084388482c95c0e2133e5e8"
dependencies = [
"fs-err",
"os_str_bytes",
@@ -190,9 +190,9 @@ checksum = "b048fb63fd8b5923fc5aa7b340d8e156aec7ec02f0c78fa8a6ddc2613f6f71de"
[[package]]
name = "bstr"
version = "1.9.1"
version = "1.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "05efc5cfd9110c8416e471df0e96702d58690178e206e61b7173706673c93706"
checksum = "40723b8fb387abc38f4f4a37c09073622e41dd12327033091ef8950659e6dc0c"
dependencies = [
"memchr",
"regex-automata 0.4.6",
@@ -314,9 +314,9 @@ dependencies = [
[[package]]
name = "clap"
version = "4.5.9"
version = "4.5.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "64acc1846d54c1fe936a78dc189c34e28d3f5afc348403f28ecf53660b9b8462"
checksum = "35723e6a11662c2afb578bcf0b88bf6ea8e21282a953428f240574fcc3a2b5b3"
dependencies = [
"clap_builder",
"clap_derive",
@@ -324,9 +324,9 @@ dependencies = [
[[package]]
name = "clap_builder"
version = "4.5.9"
version = "4.5.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6fb8393d67ba2e7bfaf28a23458e4e2b543cc73a99595511eb207fdb8aede942"
checksum = "49eb96cbfa7cfa35017b7cd548c75b14c3118c98b423041d70562665e07fb0fa"
dependencies = [
"anstream",
"anstyle",
@@ -367,9 +367,9 @@ dependencies = [
[[package]]
name = "clap_derive"
version = "4.5.8"
version = "4.5.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2bac35c6dafb060fd4d275d9a4ffae97917c13a6327903a8be2153cd964f7085"
checksum = "5d029b67f89d30bbb547c89fd5161293c0aec155fc691d7924b64550662db93e"
dependencies = [
"heck",
"proc-macro2",
@@ -759,9 +759,9 @@ dependencies = [
[[package]]
name = "env_logger"
version = "0.11.3"
version = "0.11.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "38b35839ba51819680ba087cd351788c9a3c476841207e0b8cee0b04722343b9"
checksum = "e13fa619b91fb2381732789fc5de83b45675e882f66623b7d8cb4f643017018d"
dependencies = [
"anstream",
"anstyle",
@@ -1021,9 +1021,9 @@ dependencies = [
[[package]]
name = "imara-diff"
version = "0.1.6"
version = "0.1.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "af13c8ceb376860ff0c6a66d83a8cdd4ecd9e464da24621bbffcd02b49619434"
checksum = "fc9da1a252bd44cd341657203722352efc9bc0c847d06ea6d2dc1cd1135e0a01"
dependencies = [
"ahash",
"hashbrown",
@@ -1031,9 +1031,9 @@ dependencies = [
[[package]]
name = "imperative"
version = "1.0.5"
version = "1.0.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b70798296d538cdaa6d652941fcc795963f8b9878b9e300c9fab7a522bd2fc0"
checksum = "29a1f6526af721f9aec9ceed7ab8ebfca47f3399d08b80056c2acca3fcb694a9"
dependencies = [
"phf",
"rust-stemmers",
@@ -1527,9 +1527,9 @@ dependencies = [
[[package]]
name = "os_str_bytes"
version = "6.6.1"
version = "7.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e2355d85b9a3786f481747ced0e0ff2ba35213a1f9bd406ed906554d7af805a1"
checksum = "7ac44c994af577c799b1b4bd80dc214701e349873ad894d6cdf96f4f7526e0b9"
dependencies = [
"memchr",
]
@@ -2048,7 +2048,6 @@ name = "ruff_benchmark"
version = "0.0.0"
dependencies = [
"codspeed-criterion-compat",
"criterion",
"mimalloc",
"once_cell",
"red_knot",
@@ -2401,13 +2400,17 @@ version = "0.0.0"
dependencies = [
"bitflags 2.6.0",
"is-macro",
"ruff_cache",
"ruff_index",
"ruff_macros",
"ruff_python_ast",
"ruff_python_parser",
"ruff_python_stdlib",
"ruff_source_file",
"ruff_text_size",
"rustc-hash 2.0.0",
"schemars",
"serde",
]
[[package]]
@@ -2540,6 +2543,7 @@ dependencies = [
"ruff_macros",
"ruff_python_ast",
"ruff_python_formatter",
"ruff_python_semantic",
"ruff_source_file",
"rustc-hash 2.0.0",
"schemars",
@@ -2752,11 +2756,12 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.120"
version = "1.0.121"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4e0d21c9a8cae1235ad58a00c11cb40d4b1e5c784f1ef2c537876ed6ffd8b7c5"
checksum = "4ab380d7d9f22ef3f21ad3e6c1ebe8e4fc7a2000ccba2e4d71fc96f15b2cb609"
dependencies = [
"itoa",
"memchr",
"ryu",
"serde",
]
@@ -2774,9 +2779,9 @@ dependencies = [
[[package]]
name = "serde_spanned"
version = "0.6.6"
version = "0.6.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "79e674e01f999af37c49f70a6ede167a8a60b2503e56c5599532a65baa5969a0"
checksum = "eb5b1b31579f3811bf615c144393417496f152e12ac8b7663bf664f4a815306d"
dependencies = [
"serde",
]
@@ -3077,9 +3082,9 @@ checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20"
[[package]]
name = "toml"
version = "0.8.15"
version = "0.8.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac2caab0bf757388c6c0ae23b3293fdb463fee59434529014f85e3263b995c28"
checksum = "81967dd0dd2c1ab0bc3468bd7caecc32b8a4aa47d0c8c695d8c2b2108168d62c"
dependencies = [
"serde",
"serde_spanned",
@@ -3089,18 +3094,18 @@ dependencies = [
[[package]]
name = "toml_datetime"
version = "0.6.6"
version = "0.6.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4badfd56924ae69bcc9039335b2e017639ce3f9b001c393c1b2d1ef846ce2cbf"
checksum = "f8fb9f64314842840f1d940ac544da178732128f1c78c21772e876579e0da1db"
dependencies = [
"serde",
]
[[package]]
name = "toml_edit"
version = "0.22.16"
version = "0.22.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "278f3d518e152219c994ce877758516bca5e118eaed6996192a774fb9fbf0788"
checksum = "8d9f8729f5aea9562aac1cc0441f5d6de3cff1ee0c5d67293eeca5eb36ee7c16"
dependencies = [
"indexmap",
"serde",

View File

@@ -58,7 +58,6 @@ console_error_panic_hook = { version = "0.1.7" }
console_log = { version = "1.0.0" }
countme = { version = "3.0.1" }
compact_str = "0.8.0"
criterion = { version = "0.5.1", default-features = false }
crossbeam = { version = "0.8.4" }
dashmap = { version = "6.0.1" }
drop_bomb = { version = "0.1.5" }

View File

@@ -229,13 +229,11 @@ impl Workspace {
///
/// This changes the behavior of `check` to check all files in the workspace instead of just the open files.
pub fn take_open_files(self, db: &mut dyn Db) -> FxHashSet<File> {
let open_files = self.open_file_set(db).clone();
// Salsa will cancel any pending queries and remove its own reference to `open_files`
// so that the reference counter to `open_files` now drops to 1.
let open_files = self.set_open_file_set(db).to(None);
if let Some(open_files) = open_files {
// Salsa will cancel any pending queries and remove its own reference to `open_files`
// so that the reference counter to `open_files` now drops to 1.
self.set_open_file_set(db).to(None);
Arc::try_unwrap(open_files).unwrap()
} else {
FxHashSet::default()

View File

@@ -655,7 +655,6 @@ fn search_path() -> anyhow::Result<()> {
);
std::fs::write(site_packages.join("a.py").as_std_path(), "class A: ...")?;
std::fs::write(site_packages.join("__init__.py").as_std_path(), "")?;
let changes = case.stop_watch();
@@ -686,7 +685,6 @@ fn add_search_path() -> anyhow::Result<()> {
});
std::fs::write(site_packages.join("a.py").as_std_path(), "class A: ...")?;
std::fs::write(site_packages.join("__init__.py").as_std_path(), "")?;
let changes = case.stop_watch();

View File

@@ -5,7 +5,7 @@ use std::sync::Arc;
use camino::{Utf8Path, Utf8PathBuf};
use ruff_db::files::{system_path_to_file, vendored_path_to_file, File, FilePath};
use ruff_db::files::{system_path_to_file, vendored_path_to_file, File};
use ruff_db::system::{System, SystemPath, SystemPathBuf};
use ruff_db::vendored::{VendoredPath, VendoredPathBuf};
@@ -474,18 +474,21 @@ impl SearchPath {
matches!(&*self.0, SearchPathInner::SitePackages(_))
}
#[must_use]
pub(crate) fn relativize_path(&self, path: &FilePath) -> Option<ModulePath> {
let extension = path.extension();
fn is_valid_extension(&self, extension: &str) -> bool {
if self.is_standard_library() {
if extension.is_some_and(|extension| extension != "pyi") {
return None;
}
extension == "pyi"
} else {
if extension.is_some_and(|extension| !matches!(extension, "pyi" | "py")) {
return None;
}
matches!(extension, "pyi" | "py")
}
}
#[must_use]
pub(crate) fn relativize_system_path(&self, path: &SystemPath) -> Option<ModulePath> {
if path
.extension()
.is_some_and(|extension| !self.is_valid_extension(extension))
{
return None;
}
match &*self.0 {
@@ -493,16 +496,36 @@ impl SearchPath {
| SearchPathInner::FirstParty(search_path)
| SearchPathInner::StandardLibraryCustom(search_path)
| SearchPathInner::SitePackages(search_path)
| SearchPathInner::Editable(search_path) => path
.as_system_path()
.and_then(|absolute_path| absolute_path.strip_prefix(search_path).ok())
.map(|relative_path| ModulePath {
search_path: self.clone(),
relative_path: relative_path.as_utf8_path().to_path_buf(),
}),
| SearchPathInner::Editable(search_path) => {
path.strip_prefix(search_path)
.ok()
.map(|relative_path| ModulePath {
search_path: self.clone(),
relative_path: relative_path.as_utf8_path().to_path_buf(),
})
}
SearchPathInner::StandardLibraryVendored(_) => None,
}
}
#[must_use]
pub(crate) fn relativize_vendored_path(&self, path: &VendoredPath) -> Option<ModulePath> {
if path
.extension()
.is_some_and(|extension| !self.is_valid_extension(extension))
{
return None;
}
match &*self.0 {
SearchPathInner::Extra(_)
| SearchPathInner::FirstParty(_)
| SearchPathInner::StandardLibraryCustom(_)
| SearchPathInner::SitePackages(_)
| SearchPathInner::Editable(_) => None,
SearchPathInner::StandardLibraryVendored(search_path) => path
.as_vendored_path()
.and_then(|absolute_path| absolute_path.strip_prefix(search_path).ok())
.strip_prefix(search_path)
.ok()
.map(|relative_path| ModulePath {
search_path: self.clone(),
relative_path: relative_path.as_utf8_path().to_path_buf(),
@@ -792,14 +815,14 @@ mod tests {
let root = SearchPath::custom_stdlib(&db, stdlib.parent().unwrap().to_path_buf()).unwrap();
// Must have a `.pyi` extension or no extension:
let bad_absolute_path = FilePath::system("foo/stdlib/x.py");
assert_eq!(root.relativize_path(&bad_absolute_path), None);
let second_bad_absolute_path = FilePath::system("foo/stdlib/x.rs");
assert_eq!(root.relativize_path(&second_bad_absolute_path), None);
let bad_absolute_path = SystemPath::new("foo/stdlib/x.py");
assert_eq!(root.relativize_system_path(bad_absolute_path), None);
let second_bad_absolute_path = SystemPath::new("foo/stdlib/x.rs");
assert_eq!(root.relativize_system_path(second_bad_absolute_path), None);
// Must be a path that is a child of `root`:
let third_bad_absolute_path = FilePath::system("bar/stdlib/x.pyi");
assert_eq!(root.relativize_path(&third_bad_absolute_path), None);
let third_bad_absolute_path = SystemPath::new("bar/stdlib/x.pyi");
assert_eq!(root.relativize_system_path(third_bad_absolute_path), None);
}
#[test]
@@ -808,19 +831,21 @@ mod tests {
let root = SearchPath::extra(db.system(), src.clone()).unwrap();
// Must have a `.py` extension, a `.pyi` extension, or no extension:
let bad_absolute_path = FilePath::System(src.join("x.rs"));
assert_eq!(root.relativize_path(&bad_absolute_path), None);
let bad_absolute_path = src.join("x.rs");
assert_eq!(root.relativize_system_path(&bad_absolute_path), None);
// Must be a path that is a child of `root`:
let second_bad_absolute_path = FilePath::system("bar/src/x.pyi");
assert_eq!(root.relativize_path(&second_bad_absolute_path), None);
let second_bad_absolute_path = SystemPath::new("bar/src/x.pyi");
assert_eq!(root.relativize_system_path(second_bad_absolute_path), None);
}
#[test]
fn relativize_path() {
let TestCase { db, src, .. } = TestCaseBuilder::new().build();
let src_search_path = SearchPath::first_party(db.system(), src.clone()).unwrap();
let eggs_package = FilePath::System(src.join("eggs/__init__.pyi"));
let module_path = src_search_path.relativize_path(&eggs_package).unwrap();
let eggs_package = src.join("eggs/__init__.pyi");
let module_path = src_search_path
.relativize_system_path(&eggs_package)
.unwrap();
assert_eq!(
&module_path.relative_path,
Utf8Path::new("eggs/__init__.pyi")

View File

@@ -7,6 +7,7 @@ use rustc_hash::{FxBuildHasher, FxHashSet};
use ruff_db::files::{File, FilePath};
use ruff_db::program::{Program, SearchPathSettings, TargetVersion};
use ruff_db::system::{DirectoryEntry, System, SystemPath, SystemPathBuf};
use ruff_db::vendored::VendoredPath;
use crate::db::Db;
use crate::module::{Module, ModuleKind};
@@ -57,6 +58,12 @@ pub(crate) fn path_to_module(db: &dyn Db, path: &FilePath) -> Option<Module> {
file_to_module(db, file)
}
#[derive(Debug, Clone, Copy)]
enum SystemOrVendoredPathRef<'a> {
System(&'a SystemPath),
Vendored(&'a VendoredPath),
}
/// Resolves the module for the file with the given id.
///
/// Returns `None` if the file is not a module locatable via any of the known search paths.
@@ -64,7 +71,11 @@ pub(crate) fn path_to_module(db: &dyn Db, path: &FilePath) -> Option<Module> {
pub(crate) fn file_to_module(db: &dyn Db, file: File) -> Option<Module> {
let _span = tracing::trace_span!("file_to_module", ?file).entered();
let path = file.path(db.upcast());
let path = match file.path(db.upcast()) {
FilePath::System(system) => SystemOrVendoredPathRef::System(system),
FilePath::Vendored(vendored) => SystemOrVendoredPathRef::Vendored(vendored),
FilePath::SystemVirtual(_) => return None,
};
let settings = module_resolution_settings(db);
@@ -72,7 +83,11 @@ pub(crate) fn file_to_module(db: &dyn Db, file: File) -> Option<Module> {
let module_name = loop {
let candidate = search_paths.next()?;
if let Some(relative_path) = candidate.relativize_path(path) {
let relative_path = match path {
SystemOrVendoredPathRef::System(path) => candidate.relativize_system_path(path),
SystemOrVendoredPathRef::Vendored(path) => candidate.relativize_vendored_path(path),
};
if let Some(relative_path) = relative_path {
break relative_path.to_module_name()?;
}
};

View File

@@ -43,14 +43,14 @@ pub(crate) fn symbol_ty_by_name<'db>(
.unwrap_or(Type::Unbound)
}
/// Shorthand for `symbol_ty` that looks up a module-global symbol in a file.
/// Shorthand for `symbol_ty` that looks up a module-global symbol by name in a file.
pub(crate) fn global_symbol_ty_by_name<'db>(db: &'db dyn Db, file: File, name: &str) -> Type<'db> {
symbol_ty_by_name(db, global_scope(db, file), name)
}
/// Shorthand for `symbol_ty` that looks up a symbol in the builtins.
///
/// Returns `None` if the builtins module isn't available for some reason.
/// Returns `Unbound` if the builtins module isn't available for some reason.
pub(crate) fn builtins_symbol_ty_by_name<'db>(db: &'db dyn Db, name: &str) -> Type<'db> {
builtins_scope(db)
.map(|builtins| symbol_ty_by_name(db, builtins, name))

View File

@@ -553,7 +553,6 @@ impl<'db> TypeInferenceBuilder<'db> {
pattern,
guard,
} = case;
// TODO infer case patterns; they aren't normal expressions
self.infer_match_pattern(pattern);
self.infer_optional_expression(guard.as_deref());
self.infer_body(body);
@@ -920,10 +919,10 @@ impl<'db> TypeInferenceBuilder<'db> {
let ast::ExprNumberLiteral { range: _, value } = literal;
match value {
ast::Number::Int(n) => {
// TODO support big int literals
n.as_i64().map(Type::IntLiteral).unwrap_or(Type::Unknown)
}
ast::Number::Int(n) => n
.as_i64()
.map(Type::IntLiteral)
.unwrap_or_else(|| builtins_symbol_ty_by_name(self.db, "int")),
// TODO builtins.float or builtins.complex
_ => Type::Unknown,
}
@@ -1004,8 +1003,8 @@ impl<'db> TypeInferenceBuilder<'db> {
self.infer_expression(elt);
}
// TODO tuple type
Type::Unknown
// TODO generic
builtins_symbol_ty_by_name(self.db, "tuple")
}
fn infer_list_expression(&mut self, list: &ast::ExprList) -> Type<'db> {
@@ -1019,8 +1018,8 @@ impl<'db> TypeInferenceBuilder<'db> {
self.infer_expression(elt);
}
// TODO list type
Type::Unknown
// TODO generic
builtins_symbol_ty_by_name(self.db, "list")
}
fn infer_set_expression(&mut self, set: &ast::ExprSet) -> Type<'db> {
@@ -1030,8 +1029,8 @@ impl<'db> TypeInferenceBuilder<'db> {
self.infer_expression(elt);
}
// TODO set type
Type::Unknown
// TODO generic
builtins_symbol_ty_by_name(self.db, "set")
}
fn infer_dict_expression(&mut self, dict: &ast::ExprDict) -> Type<'db> {
@@ -1042,8 +1041,8 @@ impl<'db> TypeInferenceBuilder<'db> {
self.infer_expression(&item.value);
}
// TODO dict type
Type::Unknown
// TODO generic
builtins_symbol_ty_by_name(self.db, "dict")
}
fn infer_generator_expression(&mut self, generator: &ast::ExprGenerator) -> Type<'db> {
@@ -1346,23 +1345,19 @@ impl<'db> TypeInferenceBuilder<'db> {
ast::Operator::Add => n
.checked_add(m)
.map(Type::IntLiteral)
// TODO builtins.int
.unwrap_or(Type::Unknown),
.unwrap_or_else(|| builtins_symbol_ty_by_name(self.db, "int")),
ast::Operator::Sub => n
.checked_sub(m)
.map(Type::IntLiteral)
// TODO builtins.int
.unwrap_or(Type::Unknown),
.unwrap_or_else(|| builtins_symbol_ty_by_name(self.db, "int")),
ast::Operator::Mult => n
.checked_mul(m)
.map(Type::IntLiteral)
// TODO builtins.int
.unwrap_or(Type::Unknown),
.unwrap_or_else(|| builtins_symbol_ty_by_name(self.db, "int")),
ast::Operator::Div => n
.checked_div(m)
.map(Type::IntLiteral)
// TODO builtins.int
.unwrap_or(Type::Unknown),
.unwrap_or_else(|| builtins_symbol_ty_by_name(self.db, "int")),
ast::Operator::Mod => n
.checked_rem(m)
.map(Type::IntLiteral)
@@ -2236,6 +2231,90 @@ mod tests {
Ok(())
}
#[test]
fn big_int() -> anyhow::Result<()> {
let mut db = setup_db();
db.write_dedented(
"/src/a.py",
"
x = 10_000_000_000_000_000_000
",
)?;
assert_public_ty(&db, "/src/a.py", "x", "Literal[int]");
Ok(())
}
#[test]
fn tuple_literal() -> anyhow::Result<()> {
let mut db = setup_db();
db.write_dedented(
"/src/a.py",
"
x = ()
",
)?;
// TODO should be a generic type
assert_public_ty(&db, "/src/a.py", "x", "Literal[tuple]");
Ok(())
}
#[test]
fn list_literal() -> anyhow::Result<()> {
let mut db = setup_db();
db.write_dedented(
"/src/a.py",
"
x = []
",
)?;
// TODO should be a generic type
assert_public_ty(&db, "/src/a.py", "x", "Literal[list]");
Ok(())
}
#[test]
fn set_literal() -> anyhow::Result<()> {
let mut db = setup_db();
db.write_dedented(
"/src/a.py",
"
x = {1, 2}
",
)?;
// TODO should be a generic type
assert_public_ty(&db, "/src/a.py", "x", "Literal[set]");
Ok(())
}
#[test]
fn dict_literal() -> anyhow::Result<()> {
let mut db = setup_db();
db.write_dedented(
"/src/a.py",
"
x = {}
",
)?;
// TODO should be a generic type
assert_public_ty(&db, "/src/a.py", "x", "Literal[dict]");
Ok(())
}
fn first_public_def<'db>(db: &'db TestDb, file: File, name: &str) -> Definition<'db> {
let scope = global_scope(db, file);
*use_def_map(db, scope)

View File

@@ -232,6 +232,7 @@ linter.flake8_bandit.hardcoded_tmp_directory = [
]
linter.flake8_bandit.check_typed_exception = false
linter.flake8_bugbear.extend_immutable_calls = []
linter.flake8_builtins.builtins_allowed_modules = []
linter.flake8_builtins.builtins_ignorelist = []
linter.flake8_comprehensions.allow_dict_calls_with_keyword_arguments = false
linter.flake8_copyright.notice_rgx = (?i)Copyright\s+((?:\(C\)|©)\s+)?\d{4}((-|,\s)\d{4})*

View File

@@ -41,8 +41,7 @@ serde = { workspace = true }
serde_json = { workspace = true }
url = { workspace = true }
ureq = { workspace = true }
criterion = { workspace = true, default-features = false }
codspeed-criterion-compat = { workspace = true, default-features = false, optional = true }
codspeed-criterion-compat = { workspace = true, default-features = false }
[dev-dependencies]
ruff_db = { workspace = true }
@@ -56,9 +55,6 @@ red_knot = { workspace = true }
[lints]
workspace = true
[features]
codspeed = ["codspeed-criterion-compat"]
[target.'cfg(target_os = "windows")'.dev-dependencies]
mimalloc = { workspace = true }

View File

@@ -1,8 +1,9 @@
use std::path::Path;
use ruff_benchmark::criterion::{
use codspeed_criterion_compat::{
criterion_group, criterion_main, BenchmarkId, Criterion, Throughput,
};
use ruff_benchmark::{TestCase, TestFile, TestFileDownloadError};
use ruff_python_formatter::{format_module_ast, PreviewMode, PyFormatOptions};
use ruff_python_parser::{parse, Mode};

View File

@@ -1,6 +1,7 @@
use ruff_benchmark::criterion::{
use codspeed_criterion_compat::{
criterion_group, criterion_main, measurement::WallTime, BenchmarkId, Criterion, Throughput,
};
use ruff_benchmark::{TestCase, TestFile, TestFileDownloadError};
use ruff_python_parser::{lexer, Mode, TokenKind};

View File

@@ -1,6 +1,8 @@
use ruff_benchmark::criterion::{
criterion_group, criterion_main, BenchmarkGroup, BenchmarkId, Criterion, Throughput,
use codspeed_criterion_compat::{
self as criterion, criterion_group, criterion_main, BenchmarkGroup, BenchmarkId, Criterion,
Throughput,
};
use criterion::measurement;
use ruff_benchmark::{TestCase, TestFile, TestFileDownloadError};
use ruff_linter::linter::{lint_only, ParseSource};
use ruff_linter::rule_selector::PreviewOptions;
@@ -44,7 +46,7 @@ fn create_test_cases() -> Result<Vec<TestCase>, TestFileDownloadError> {
])
}
fn benchmark_linter(mut group: BenchmarkGroup, settings: &LinterSettings) {
fn benchmark_linter(mut group: BenchmarkGroup<measurement::WallTime>, settings: &LinterSettings) {
let test_cases = create_test_cases().unwrap();
for case in test_cases {

View File

@@ -1,6 +1,7 @@
use ruff_benchmark::criterion::{
use codspeed_criterion_compat::{
criterion_group, criterion_main, measurement::WallTime, BenchmarkId, Criterion, Throughput,
};
use ruff_benchmark::{TestCase, TestFile, TestFileDownloadError};
use ruff_python_ast::statement_visitor::{walk_stmt, StatementVisitor};
use ruff_python_ast::Stmt;

View File

@@ -1,10 +1,9 @@
#![allow(clippy::disallowed_names)]
use codspeed_criterion_compat::{criterion_group, criterion_main, BatchSize, Criterion};
use red_knot::db::RootDatabase;
use red_knot::workspace::WorkspaceMetadata;
use ruff_benchmark::criterion::{
criterion_group, criterion_main, BatchSize, Criterion, Throughput,
};
use ruff_db::files::{system_path_to_file, vendored_path_to_file, File};
use ruff_db::parsed::parsed_module;
use ruff_db::program::{ProgramSettings, SearchPathSettings, TargetVersion};
@@ -100,10 +99,7 @@ fn setup_case() -> Case {
}
fn benchmark_without_parse(criterion: &mut Criterion) {
let mut group = criterion.benchmark_group("red_knot/check_file");
group.throughput(Throughput::Bytes(FOO_CODE.len() as u64));
group.bench_function("red_knot_check_file[without_parse]", |b| {
criterion.bench_function("red_knot_check_file[without_parse]", |b| {
b.iter_batched_ref(
|| {
let case = setup_case();
@@ -123,15 +119,10 @@ fn benchmark_without_parse(criterion: &mut Criterion) {
BatchSize::SmallInput,
);
});
group.finish();
}
fn benchmark_incremental(criterion: &mut Criterion) {
let mut group = criterion.benchmark_group("red_knot/check_file");
group.throughput(Throughput::Bytes(FOO_CODE.len() as u64));
group.bench_function("red_knot_check_file[incremental]", |b| {
criterion.bench_function("red_knot_check_file[incremental]", |b| {
b.iter_batched_ref(
|| {
let mut case = setup_case();
@@ -156,15 +147,10 @@ fn benchmark_incremental(criterion: &mut Criterion) {
BatchSize::SmallInput,
);
});
group.finish();
}
fn benchmark_cold(criterion: &mut Criterion) {
let mut group = criterion.benchmark_group("red_knot/check_file");
group.throughput(Throughput::Bytes(FOO_CODE.len() as u64));
group.bench_function("red_knot_check_file[cold]", |b| {
criterion.bench_function("red_knot_check_file[cold]", |b| {
b.iter_batched_ref(
setup_case,
|case| {
@@ -176,11 +162,12 @@ fn benchmark_cold(criterion: &mut Criterion) {
BatchSize::SmallInput,
);
});
group.finish();
}
criterion_group!(cold, benchmark_cold);
criterion_group!(without_parse, benchmark_without_parse);
criterion_group!(incremental, benchmark_incremental);
criterion_main!(without_parse, cold, incremental);
criterion_group!(
check_file,
benchmark_cold,
benchmark_without_parse,
benchmark_incremental
);
criterion_main!(check_file);

View File

@@ -1,11 +0,0 @@
//! This module re-exports the criterion API but picks the right backend depending on whether
//! the benchmarks are built to run locally or with codspeed
#[cfg(not(codspeed))]
pub use criterion::*;
#[cfg(not(codspeed))]
pub type BenchmarkGroup<'a> = criterion::BenchmarkGroup<'a, measurement::WallTime>;
#[cfg(codspeed)]
pub use codspeed_criterion_compat::*;

View File

@@ -1,5 +1,3 @@
pub mod criterion;
use std::fmt::{Display, Formatter};
use std::path::PathBuf;
use std::process::Command;

View File

@@ -5,7 +5,7 @@ use dashmap::mapref::entry::Entry;
use crate::file_revision::FileRevision;
use crate::files::private::FileStatus;
use crate::system::{SystemPath, SystemPathBuf};
use crate::system::{Metadata, SystemPath, SystemPathBuf, SystemVirtualPath, SystemVirtualPathBuf};
use crate::vendored::{VendoredPath, VendoredPathBuf};
use crate::{Db, FxDashMap};
pub use path::FilePath;
@@ -47,6 +47,9 @@ struct FilesInner {
/// so that queries that depend on the existence of a file are re-executed when the file is created.
system_by_path: FxDashMap<SystemPathBuf, File>,
/// Lookup table that maps [`SystemVirtualPathBuf`]s to salsa interned [`File`] instances.
system_virtual_by_path: FxDashMap<SystemVirtualPathBuf, File>,
/// Lookup table that maps vendored files to the salsa [`File`] ingredients.
vendored_by_path: FxDashMap<VendoredPathBuf, File>,
}
@@ -126,6 +129,36 @@ impl Files {
Some(file)
}
/// Looks up a virtual file by its `path`.
///
/// For a non-existing file, creates a new salsa [`File`] ingredient and stores it for future lookups.
///
/// The operation fails if the system cannot provide metadata for the path.
#[tracing::instrument(level = "trace", skip(self, db), ret)]
pub fn add_virtual_file(&self, db: &dyn Db, path: &SystemVirtualPath) -> Option<File> {
let file = match self.inner.system_virtual_by_path.entry(path.to_path_buf()) {
Entry::Occupied(entry) => *entry.get(),
Entry::Vacant(entry) => {
let metadata = db.system().virtual_path_metadata(path).ok()?;
let file = File::new(
db,
FilePath::SystemVirtual(path.to_path_buf()),
metadata.permissions(),
metadata.revision(),
FileStatus::Exists,
Count::default(),
);
entry.insert(file);
file
}
};
Some(file)
}
/// Refreshes the state of all known files under `path` recursively.
///
/// The most common use case is to update the [`Files`] state after removing or moving a directory.
@@ -227,6 +260,9 @@ impl File {
db.system().read_to_string(system)
}
FilePath::Vendored(vendored) => db.vendored().read_to_string(vendored),
FilePath::SystemVirtual(system_virtual) => {
db.system().read_virtual_path_to_string(system_virtual)
}
}
}
@@ -248,6 +284,9 @@ impl File {
std::io::ErrorKind::InvalidInput,
"Reading a notebook from the vendored file system is not supported.",
))),
FilePath::SystemVirtual(system_virtual) => {
db.system().read_virtual_path_to_notebook(system_virtual)
}
}
}
@@ -255,7 +294,7 @@ impl File {
#[tracing::instrument(level = "debug", skip(db))]
pub fn sync_path(db: &mut dyn Db, path: &SystemPath) {
let absolute = SystemPath::absolute(path, db.system().current_directory());
Self::sync_impl(db, &absolute, None);
Self::sync_system_path(db, &absolute, None);
}
/// Syncs the [`File`]'s state with the state of the file on the system.
@@ -265,22 +304,33 @@ impl File {
match path {
FilePath::System(system) => {
Self::sync_impl(db, &system, Some(self));
Self::sync_system_path(db, &system, Some(self));
}
FilePath::Vendored(_) => {
// Readonly, can never be out of date.
}
FilePath::SystemVirtual(system_virtual) => {
Self::sync_system_virtual_path(db, &system_virtual, self);
}
}
}
/// Private method providing the implementation for [`Self::sync_path`] and [`Self::sync`].
fn sync_impl(db: &mut dyn Db, path: &SystemPath, file: Option<File>) {
fn sync_system_path(db: &mut dyn Db, path: &SystemPath, file: Option<File>) {
let Some(file) = file.or_else(|| db.files().try_system(db, path)) else {
return;
};
let metadata = db.system().path_metadata(path);
Self::sync_impl(db, metadata, file);
}
fn sync_system_virtual_path(db: &mut dyn Db, path: &SystemVirtualPath, file: File) {
let metadata = db.system().virtual_path_metadata(path);
Self::sync_impl(db, metadata, file);
}
/// Private method providing the implementation for [`Self::sync_system_path`] and
/// [`Self::sync_system_virtual_path`].
fn sync_impl(db: &mut dyn Db, metadata: crate::system::Result<Metadata>, file: File) {
let (status, revision, permission) = match metadata {
Ok(metadata) if metadata.file_type().is_file() => (
FileStatus::Exists,

View File

@@ -1,5 +1,5 @@
use crate::files::{system_path_to_file, vendored_path_to_file, File};
use crate::system::{SystemPath, SystemPathBuf};
use crate::system::{SystemPath, SystemPathBuf, SystemVirtualPath, SystemVirtualPathBuf};
use crate::vendored::{VendoredPath, VendoredPathBuf};
use crate::Db;
@@ -8,11 +8,14 @@ use crate::Db;
/// The path abstracts over the fact that files in Ruff can come from different sources:
///
/// * a file stored on the [host system](crate::system::System).
/// * a virtual file stored on the [host system](crate::system::System).
/// * a vendored file stored in the [vendored file system](crate::vendored::VendoredFileSystem).
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
pub enum FilePath {
/// Path to a file on the [host system](crate::system::System).
System(SystemPathBuf),
/// Path to a virtual file on the [host system](crate::system::System).
SystemVirtual(SystemVirtualPathBuf),
/// Path to a file vendored as part of Ruff. Stored in the [vendored file system](crate::vendored::VendoredFileSystem).
Vendored(VendoredPathBuf),
}
@@ -30,7 +33,7 @@ impl FilePath {
pub fn into_system_path_buf(self) -> Option<SystemPathBuf> {
match self {
FilePath::System(path) => Some(path),
FilePath::Vendored(_) => None,
FilePath::Vendored(_) | FilePath::SystemVirtual(_) => None,
}
}
@@ -39,7 +42,7 @@ impl FilePath {
pub fn as_system_path(&self) -> Option<&SystemPath> {
match self {
FilePath::System(path) => Some(path.as_path()),
FilePath::Vendored(_) => None,
FilePath::Vendored(_) | FilePath::SystemVirtual(_) => None,
}
}
@@ -50,6 +53,14 @@ impl FilePath {
matches!(self, FilePath::System(_))
}
/// Returns `true` if the path is a file system path that is virtual, i.e., it doesn't exist on
/// disk.
#[must_use]
#[inline]
pub const fn is_system_virtual_path(&self) -> bool {
matches!(self, FilePath::SystemVirtual(_))
}
/// Returns `true` if the path is a vendored path.
#[must_use]
#[inline]
@@ -62,7 +73,7 @@ impl FilePath {
pub fn as_vendored_path(&self) -> Option<&VendoredPath> {
match self {
FilePath::Vendored(path) => Some(path.as_path()),
FilePath::System(_) => None,
FilePath::System(_) | FilePath::SystemVirtual(_) => None,
}
}
@@ -71,6 +82,7 @@ impl FilePath {
match self {
FilePath::System(path) => path.as_str(),
FilePath::Vendored(path) => path.as_str(),
FilePath::SystemVirtual(path) => path.as_str(),
}
}
@@ -78,12 +90,14 @@ impl FilePath {
///
/// Returns `Some` if a file for `path` exists and is accessible by the user. Returns `None` otherwise.
///
/// See [`system_path_to_file`] and [`vendored_path_to_file`] if you always have either a file system or vendored path.
/// See [`system_path_to_file`] or [`vendored_path_to_file`] if you always have either a file
/// system or vendored path.
#[inline]
pub fn to_file(&self, db: &dyn Db) -> Option<File> {
match self {
FilePath::System(path) => system_path_to_file(db, path),
FilePath::Vendored(path) => vendored_path_to_file(db, path),
FilePath::SystemVirtual(_) => None,
}
}
@@ -92,6 +106,7 @@ impl FilePath {
match self {
FilePath::System(path) => path.extension(),
FilePath::Vendored(path) => path.extension(),
FilePath::SystemVirtual(_) => None,
}
}
}
@@ -126,6 +141,18 @@ impl From<&VendoredPath> for FilePath {
}
}
impl From<&SystemVirtualPath> for FilePath {
fn from(value: &SystemVirtualPath) -> Self {
FilePath::SystemVirtual(value.to_path_buf())
}
}
impl From<SystemVirtualPathBuf> for FilePath {
fn from(value: SystemVirtualPathBuf) -> Self {
FilePath::SystemVirtual(value)
}
}
impl PartialEq<SystemPath> for FilePath {
#[inline]
fn eq(&self, other: &SystemPath) -> bool {

View File

@@ -32,6 +32,9 @@ pub fn parsed_module(db: &dyn Db, file: File) -> ParsedModule {
.extension()
.map_or(PySourceType::Python, PySourceType::from_extension),
FilePath::Vendored(_) => PySourceType::Stub,
FilePath::SystemVirtual(path) => path
.extension()
.map_or(PySourceType::Python, PySourceType::from_extension),
};
ParsedModule::new(parse_unchecked_source(&source, ty))
@@ -74,9 +77,10 @@ impl std::fmt::Debug for ParsedModule {
mod tests {
use crate::files::{system_path_to_file, vendored_path_to_file};
use crate::parsed::parsed_module;
use crate::system::{DbWithTestSystem, SystemPath};
use crate::system::{DbWithTestSystem, SystemPath, SystemVirtualPath};
use crate::tests::TestDb;
use crate::vendored::{tests::VendoredFileSystemBuilder, VendoredPath};
use crate::Db;
#[test]
fn python_file() -> crate::system::Result<()> {
@@ -110,6 +114,38 @@ mod tests {
Ok(())
}
#[test]
fn virtual_python_file() -> crate::system::Result<()> {
let mut db = TestDb::new();
let path = SystemVirtualPath::new("untitled:Untitled-1");
db.write_virtual_file(path, "x = 10");
let file = db.files().add_virtual_file(&db, path).unwrap();
let parsed = parsed_module(&db, file);
assert!(parsed.is_valid());
Ok(())
}
#[test]
fn virtual_ipynb_file() -> crate::system::Result<()> {
let mut db = TestDb::new();
let path = SystemVirtualPath::new("untitled:Untitled-1.ipynb");
db.write_virtual_file(path, "%timeit a = b");
let file = db.files().add_virtual_file(&db, path).unwrap();
let parsed = parsed_module(&db, file);
assert!(parsed.is_valid());
Ok(())
}
#[test]
fn vendored_file() {
let mut db = TestDb::new();

View File

@@ -8,7 +8,7 @@ use ruff_notebook::Notebook;
use ruff_python_ast::PySourceType;
use ruff_source_file::LineIndex;
use crate::files::File;
use crate::files::{File, FilePath};
use crate::Db;
/// Reads the source text of a python text file (must be valid UTF8) or notebook.
@@ -16,25 +16,33 @@ use crate::Db;
pub fn source_text(db: &dyn Db, file: File) -> SourceText {
let _span = tracing::trace_span!("source_text", ?file).entered();
if let Some(path) = file.path(db).as_system_path() {
if path.extension().is_some_and(|extension| {
let is_notebook = match file.path(db) {
FilePath::System(system) => system.extension().is_some_and(|extension| {
PySourceType::try_from_extension(extension) == Some(PySourceType::Ipynb)
}) {
// TODO(micha): Proper error handling and emit a diagnostic. Tackle it together with `source_text`.
let notebook = file.read_to_notebook(db).unwrap_or_else(|error| {
tracing::error!("Failed to load notebook: {error}");
Notebook::empty()
});
return SourceText {
inner: Arc::new(SourceTextInner {
kind: SourceTextKind::Notebook(notebook),
count: Count::new(),
}),
};
}),
FilePath::SystemVirtual(system_virtual) => {
system_virtual.extension().is_some_and(|extension| {
PySourceType::try_from_extension(extension) == Some(PySourceType::Ipynb)
})
}
FilePath::Vendored(_) => false,
};
if is_notebook {
// TODO(micha): Proper error handling and emit a diagnostic. Tackle it together with `source_text`.
let notebook = file.read_to_notebook(db).unwrap_or_else(|error| {
tracing::error!("Failed to load notebook: {error}");
Notebook::empty()
});
return SourceText {
inner: Arc::new(SourceTextInner {
kind: SourceTextKind::Notebook(notebook),
count: Count::new(),
}),
};
}
let content = file.read_to_string(db).unwrap_or_else(|error| {
tracing::error!("Failed to load file: {error}");
String::default()

View File

@@ -11,6 +11,7 @@ use crate::file_revision::FileRevision;
pub use self::path::{
deduplicate_nested_paths, DeduplicatedNestedPathsIter, SystemPath, SystemPathBuf,
SystemVirtualPath, SystemVirtualPathBuf,
};
mod memory_fs;
@@ -50,6 +51,18 @@ pub trait System: Debug {
/// representation, fall back to deserializing the notebook from a string.
fn read_to_notebook(&self, path: &SystemPath) -> std::result::Result<Notebook, NotebookError>;
/// Reads the metadata of the virtual file at `path`.
fn virtual_path_metadata(&self, path: &SystemVirtualPath) -> Result<Metadata>;
/// Reads the content of the virtual file at `path` into a [`String`].
fn read_virtual_path_to_string(&self, path: &SystemVirtualPath) -> Result<String>;
/// Reads the content of the virtual file at `path` as a [`Notebook`].
fn read_virtual_path_to_notebook(
&self,
path: &SystemVirtualPath,
) -> std::result::Result<Notebook, NotebookError>;
/// Returns `true` if `path` exists.
fn path_exists(&self, path: &SystemPath) -> bool {
self.path_metadata(path).is_ok()

View File

@@ -4,9 +4,13 @@ use std::sync::{Arc, RwLock, RwLockWriteGuard};
use camino::{Utf8Path, Utf8PathBuf};
use filetime::FileTime;
use rustc_hash::FxHashMap;
use ruff_notebook::{Notebook, NotebookError};
use crate::system::{
walk_directory, DirectoryEntry, FileType, Metadata, Result, SystemPath, SystemPathBuf,
SystemVirtualPath, SystemVirtualPathBuf,
};
use super::walk_directory::{
@@ -50,6 +54,7 @@ impl MemoryFileSystem {
let fs = Self {
inner: Arc::new(MemoryFileSystemInner {
by_path: RwLock::new(BTreeMap::default()),
virtual_files: RwLock::new(FxHashMap::default()),
cwd: cwd.clone(),
}),
};
@@ -134,6 +139,42 @@ impl MemoryFileSystem {
ruff_notebook::Notebook::from_source_code(&content)
}
pub(crate) fn virtual_path_metadata(
&self,
path: impl AsRef<SystemVirtualPath>,
) -> Result<Metadata> {
let virtual_files = self.inner.virtual_files.read().unwrap();
let file = virtual_files
.get(&path.as_ref().to_path_buf())
.ok_or_else(not_found)?;
Ok(Metadata {
revision: file.last_modified.into(),
permissions: Some(MemoryFileSystem::PERMISSION),
file_type: FileType::File,
})
}
pub(crate) fn read_virtual_path_to_string(
&self,
path: impl AsRef<SystemVirtualPath>,
) -> Result<String> {
let virtual_files = self.inner.virtual_files.read().unwrap();
let file = virtual_files
.get(&path.as_ref().to_path_buf())
.ok_or_else(not_found)?;
Ok(file.content.clone())
}
pub(crate) fn read_virtual_path_to_notebook(
&self,
path: &SystemVirtualPath,
) -> std::result::Result<Notebook, NotebookError> {
let content = self.read_virtual_path_to_string(path)?;
ruff_notebook::Notebook::from_source_code(&content)
}
pub fn exists(&self, path: &SystemPath) -> bool {
let by_path = self.inner.by_path.read().unwrap();
let normalized = self.normalize_path(path);
@@ -141,6 +182,11 @@ impl MemoryFileSystem {
by_path.contains_key(&normalized)
}
pub fn virtual_path_exists(&self, path: &SystemVirtualPath) -> bool {
let virtual_files = self.inner.virtual_files.read().unwrap();
virtual_files.contains_key(&path.to_path_buf())
}
/// Writes the files to the file system.
///
/// The operation overwrites existing files with the same normalized path.
@@ -173,6 +219,26 @@ impl MemoryFileSystem {
Ok(())
}
/// Stores a new virtual file in the file system.
///
/// The operation overwrites the content of an existing virtual file with the same `path`.
pub fn write_virtual_file(&self, path: impl AsRef<SystemVirtualPath>, content: impl ToString) {
let path = path.as_ref();
let mut virtual_files = self.inner.virtual_files.write().unwrap();
match virtual_files.entry(path.to_path_buf()) {
std::collections::hash_map::Entry::Vacant(entry) => {
entry.insert(File {
content: content.to_string(),
last_modified: FileTime::now(),
});
}
std::collections::hash_map::Entry::Occupied(mut entry) => {
entry.get_mut().content = content.to_string();
}
}
}
/// Returns a builder for walking the directory tree of `path`.
///
/// The only files that are ignored when setting `WalkDirectoryBuilder::standard_filters`
@@ -201,6 +267,17 @@ impl MemoryFileSystem {
remove_file(self, path.as_ref())
}
pub fn remove_virtual_file(&self, path: impl AsRef<SystemVirtualPath>) -> Result<()> {
let mut virtual_files = self.inner.virtual_files.write().unwrap();
match virtual_files.entry(path.as_ref().to_path_buf()) {
std::collections::hash_map::Entry::Occupied(entry) => {
entry.remove();
Ok(())
}
std::collections::hash_map::Entry::Vacant(_) => Err(not_found()),
}
}
/// Sets the last modified timestamp of the file stored at `path` to now.
///
/// Creates a new file if the file at `path` doesn't exist.
@@ -309,6 +386,7 @@ impl std::fmt::Debug for MemoryFileSystem {
struct MemoryFileSystemInner {
by_path: RwLock<BTreeMap<Utf8PathBuf, Entry>>,
virtual_files: RwLock<FxHashMap<SystemVirtualPathBuf, File>>,
cwd: SystemPathBuf,
}
@@ -586,6 +664,7 @@ mod tests {
use crate::system::walk_directory::WalkState;
use crate::system::{
DirectoryEntry, FileType, MemoryFileSystem, Result, SystemPath, SystemPathBuf,
SystemVirtualPath,
};
/// Creates a file system with the given files.
@@ -724,6 +803,18 @@ mod tests {
Ok(())
}
#[test]
fn write_virtual_file() {
let fs = MemoryFileSystem::new();
fs.write_virtual_file("a", "content");
let error = fs.read_to_string("a").unwrap_err();
assert_eq!(error.kind(), ErrorKind::NotFound);
assert_eq!(fs.read_virtual_path_to_string("a").unwrap(), "content");
}
#[test]
fn read() -> Result<()> {
let fs = MemoryFileSystem::new();
@@ -760,6 +851,15 @@ mod tests {
Ok(())
}
#[test]
fn read_fails_if_virtual_path_doesnt_exist() {
let fs = MemoryFileSystem::new();
let error = fs.read_virtual_path_to_string("a").unwrap_err();
assert_eq!(error.kind(), ErrorKind::NotFound);
}
#[test]
fn remove_file() -> Result<()> {
let fs = with_files(["a/a.py", "b.py"]);
@@ -777,6 +877,18 @@ mod tests {
Ok(())
}
#[test]
fn remove_virtual_file() {
let fs = MemoryFileSystem::new();
fs.write_virtual_file("a", "content");
fs.write_virtual_file("b", "content");
fs.remove_virtual_file("a").unwrap();
assert!(!fs.virtual_path_exists(SystemVirtualPath::new("a")));
assert!(fs.virtual_path_exists(SystemVirtualPath::new("b")));
}
#[test]
fn remove_non_existing_file() {
let fs = with_files(["b.py"]);

View File

@@ -7,6 +7,7 @@ use ruff_notebook::{Notebook, NotebookError};
use crate::system::{
DirectoryEntry, FileType, Metadata, Result, System, SystemPath, SystemPathBuf,
SystemVirtualPath,
};
use super::walk_directory::{
@@ -76,6 +77,21 @@ impl System for OsSystem {
Notebook::from_path(path.as_std_path())
}
fn virtual_path_metadata(&self, _path: &SystemVirtualPath) -> Result<Metadata> {
Err(not_found())
}
fn read_virtual_path_to_string(&self, _path: &SystemVirtualPath) -> Result<String> {
Err(not_found())
}
fn read_virtual_path_to_notebook(
&self,
_path: &SystemVirtualPath,
) -> std::result::Result<Notebook, NotebookError> {
Err(NotebookError::from(not_found()))
}
fn path_exists(&self, path: &SystemPath) -> bool {
path.as_std_path().exists()
}
@@ -275,6 +291,10 @@ impl From<WalkState> for ignore::WalkState {
}
}
fn not_found() -> std::io::Error {
std::io::Error::new(std::io::ErrorKind::NotFound, "No such file or directory")
}
#[cfg(test)]
mod tests {
use tempfile::TempDir;

View File

@@ -593,6 +593,137 @@ impl ruff_cache::CacheKey for SystemPathBuf {
}
}
/// A slice of a virtual path on [`System`](super::System) (akin to [`str`]).
#[repr(transparent)]
pub struct SystemVirtualPath(str);
impl SystemVirtualPath {
pub fn new(path: &str) -> &SystemVirtualPath {
// SAFETY: SystemVirtualPath is marked as #[repr(transparent)] so the conversion from a
// *const str to a *const SystemVirtualPath is valid.
unsafe { &*(path as *const str as *const SystemVirtualPath) }
}
/// Converts the path to an owned [`SystemVirtualPathBuf`].
pub fn to_path_buf(&self) -> SystemVirtualPathBuf {
SystemVirtualPathBuf(self.0.to_string())
}
/// Extracts the file extension, if possible.
///
/// # Examples
///
/// ```
/// use ruff_db::system::SystemVirtualPath;
///
/// assert_eq!(None, SystemVirtualPath::new("untitled:Untitled-1").extension());
/// assert_eq!("ipynb", SystemVirtualPath::new("untitled:Untitled-1.ipynb").extension().unwrap());
/// assert_eq!("ipynb", SystemVirtualPath::new("vscode-notebook-cell:Untitled-1.ipynb").extension().unwrap());
/// ```
///
/// See [`Path::extension`] for more details.
pub fn extension(&self) -> Option<&str> {
Path::new(&self.0).extension().and_then(|ext| ext.to_str())
}
/// Returns the path as a string slice.
#[inline]
pub fn as_str(&self) -> &str {
&self.0
}
}
/// An owned, virtual path on [`System`](`super::System`) (akin to [`String`]).
#[derive(Eq, PartialEq, Clone, Hash, PartialOrd, Ord)]
pub struct SystemVirtualPathBuf(String);
impl SystemVirtualPathBuf {
#[inline]
pub fn as_path(&self) -> &SystemVirtualPath {
SystemVirtualPath::new(&self.0)
}
}
impl From<String> for SystemVirtualPathBuf {
fn from(value: String) -> Self {
SystemVirtualPathBuf(value)
}
}
impl AsRef<SystemVirtualPath> for SystemVirtualPathBuf {
#[inline]
fn as_ref(&self) -> &SystemVirtualPath {
self.as_path()
}
}
impl AsRef<SystemVirtualPath> for SystemVirtualPath {
#[inline]
fn as_ref(&self) -> &SystemVirtualPath {
self
}
}
impl AsRef<SystemVirtualPath> for str {
#[inline]
fn as_ref(&self) -> &SystemVirtualPath {
SystemVirtualPath::new(self)
}
}
impl AsRef<SystemVirtualPath> for String {
#[inline]
fn as_ref(&self) -> &SystemVirtualPath {
SystemVirtualPath::new(self)
}
}
impl Deref for SystemVirtualPathBuf {
type Target = SystemVirtualPath;
fn deref(&self) -> &Self::Target {
self.as_path()
}
}
impl std::fmt::Debug for SystemVirtualPath {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
self.0.fmt(f)
}
}
impl std::fmt::Display for SystemVirtualPath {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
self.0.fmt(f)
}
}
impl std::fmt::Debug for SystemVirtualPathBuf {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
self.0.fmt(f)
}
}
impl std::fmt::Display for SystemVirtualPathBuf {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
self.0.fmt(f)
}
}
#[cfg(feature = "cache")]
impl ruff_cache::CacheKey for SystemVirtualPath {
fn cache_key(&self, hasher: &mut ruff_cache::CacheKeyHasher) {
self.as_str().cache_key(hasher);
}
}
#[cfg(feature = "cache")]
impl ruff_cache::CacheKey for SystemVirtualPathBuf {
fn cache_key(&self, hasher: &mut ruff_cache::CacheKeyHasher) {
self.as_path().cache_key(hasher);
}
}
/// Deduplicates identical paths and removes nested paths.
///
/// # Examples

View File

@@ -2,7 +2,9 @@ use ruff_notebook::{Notebook, NotebookError};
use ruff_python_trivia::textwrap;
use crate::files::File;
use crate::system::{DirectoryEntry, MemoryFileSystem, Metadata, Result, System, SystemPath};
use crate::system::{
DirectoryEntry, MemoryFileSystem, Metadata, Result, System, SystemPath, SystemVirtualPath,
};
use crate::Db;
use std::any::Any;
use std::panic::RefUnwindSafe;
@@ -71,6 +73,30 @@ impl System for TestSystem {
}
}
fn virtual_path_metadata(&self, path: &SystemVirtualPath) -> Result<Metadata> {
match &self.inner {
TestSystemInner::Stub(fs) => fs.virtual_path_metadata(path),
TestSystemInner::System(system) => system.virtual_path_metadata(path),
}
}
fn read_virtual_path_to_string(&self, path: &SystemVirtualPath) -> Result<String> {
match &self.inner {
TestSystemInner::Stub(fs) => fs.read_virtual_path_to_string(path),
TestSystemInner::System(system) => system.read_virtual_path_to_string(path),
}
}
fn read_virtual_path_to_notebook(
&self,
path: &SystemVirtualPath,
) -> std::result::Result<Notebook, NotebookError> {
match &self.inner {
TestSystemInner::Stub(fs) => fs.read_virtual_path_to_notebook(path),
TestSystemInner::System(system) => system.read_virtual_path_to_notebook(path),
}
}
fn path_exists(&self, path: &SystemPath) -> bool {
match &self.inner {
TestSystemInner::Stub(fs) => fs.exists(path),
@@ -151,6 +177,14 @@ pub trait DbWithTestSystem: Db + Sized {
result
}
/// Writes the content of the given virtual file.
fn write_virtual_file(&mut self, path: impl AsRef<SystemVirtualPath>, content: impl ToString) {
let path = path.as_ref();
self.test_system()
.memory_file_system()
.write_virtual_file(path, content);
}
/// Writes auto-dedented text to a file.
fn write_dedented(&mut self, path: &str, content: &str) -> crate::system::Result<()> {
self.write_file(path, textwrap::dedent(content))?;

View File

@@ -0,0 +1,5 @@
import some as sum
import float
from some import other as int
from some import input, exec
from directory import new as dir

View File

@@ -0,0 +1,5 @@
lambda print, copyright: print
lambda x, float, y: x + y
lambda min, max: min
lambda id: id
lambda dir: dir

View File

@@ -1,4 +1,5 @@
from collections import namedtuple
from enum import Enum
from typing import NamedTuple
@@ -20,3 +21,15 @@ class Good(namedtuple("foo", ["str", "int"])): # OK
class Good(NamedTuple): # Ok
pass
class Good(namedtuple("foo", ["str", "int"]), Enum):
pass
class UnusualButStillBad(namedtuple("foo", ["str", "int"]), NamedTuple("foo", [("x", int, "y", int)])):
pass
class UnusualButStillBad(namedtuple("foo", ["str", "int"]), object):
pass

View File

@@ -0,0 +1,20 @@
# Unused, but marked as required.
import os
# Unused, _not_ marked as required.
import sys
# Unused, _not_ marked as required (due to the alias).
import pathlib as non_alias
# Unused, marked as required.
import shelve as alias
# Unused, but marked as required.
from typing import List
# Unused, but marked as required.
from typing import Set as SetAlias
# Unused, but marked as required.
import urllib.parse

View File

@@ -0,0 +1,73 @@
# DOC201
def foo(num: int) -> str:
"""
Do something
Args:
num (int): A number
"""
return 'test'
# OK
def foo(num: int) -> str:
"""
Do something
Args:
num (int): A number
Returns:
str: A string
"""
return 'test'
class Bar:
# OK
def foo(self) -> str:
"""
Do something
Args:
num (int): A number
Returns:
str: A string
"""
return 'test'
# DOC201
def bar(self) -> str:
"""
Do something
Args:
num (int): A number
"""
return 'test'
# OK
@property
def baz(self) -> str:
"""
Do something
Args:
num (int): A number
"""
return 'test'
# OK
def test():
"""Do something."""
# DOC201
def nested():
"""Do something nested."""
return 5
print("I never return")

View File

@@ -0,0 +1,76 @@
# DOC201
def foo(num: int) -> str:
"""
Do something
Parameters
----------
num : int
A number
"""
return 'test'
# OK
def foo(num: int) -> str:
"""
Do something
Parameters
----------
num : int
A number
Returns
-------
str
A string
"""
return 'test'
class Bar:
# OK
def foo(self) -> str:
"""
Do something
Parameters
----------
num : int
A number
Returns
-------
str
A string
"""
return 'test'
# DOC201
def bar(self) -> str:
"""
Do something
Parameters
----------
num : int
A number
"""
return 'test'
# OK
@property
def baz(self) -> str:
"""
Do something
Parameters
----------
num : int
A number
"""
return 'test'

View File

@@ -0,0 +1,50 @@
# OK
def foo(num: int) -> str:
"""
Do something
Args:
num (int): A number
"""
print('test')
# DOC202
def foo(num: int) -> str:
"""
Do something
Args:
num (int): A number
Returns:
str: A string
"""
print('test')
class Bar:
# DOC202
def foo(self) -> str:
"""
Do something
Args:
num (int): A number
Returns:
str: A string
"""
print('test')
# OK
def bar(self) -> str:
"""
Do something
Args:
num (int): A number
"""
print('test')

View File

@@ -0,0 +1,62 @@
# OK
def foo(num: int) -> str:
"""
Do something
Parameters
----------
num : int
A number
"""
print('test')
# DOC202
def foo(num: int) -> str:
"""
Do something
Parameters
----------
num : int
A number
Returns
-------
str
A string
"""
print('test')
class Bar:
# DOC202
def foo(self) -> str:
"""
Do something
Parameters
----------
num : int
A number
Returns
-------
str
A string
"""
print('test')
# OK
def bar(self) -> str:
"""
Do something
Parameters
----------
num : int
A number
"""
print('test')

View File

@@ -63,3 +63,19 @@ class MyClass(BaseClass):
InnerClass().method()
defined_outside = defined_outside
from dataclasses import dataclass
@dataclass
class DataClass:
def normal(self):
super(DataClass, self).f() # Error
super().f() # OK
@dataclass(slots=True)
def normal(self):
super(DataClass, self).f() # OK
super().f() # OK (`TypeError` in practice)

View File

@@ -129,3 +129,13 @@ path = "%s-%s-%s.pem" % (
# Not a valid type annotation but this test shouldn't result in a panic.
# Refer: https://github.com/astral-sh/ruff/issues/11736
x: "'%s + %s' % (1, 2)"
# See: https://github.com/astral-sh/ruff/issues/12421
print("%.2X" % 1)
print("%.02X" % 1)
print("%02X" % 1)
print("%.00002X" % 1)
print("%.20X" % 1)
print("%2X" % 1)
print("%02X" % 1)

View File

@@ -2,7 +2,7 @@ use ruff_python_ast::Expr;
use crate::checkers::ast::Checker;
use crate::codes::Rule;
use crate::rules::{flake8_pie, pylint, refurb};
use crate::rules::{flake8_builtins, flake8_pie, pylint, refurb};
/// Run lint rules over all deferred lambdas in the [`SemanticModel`].
pub(crate) fn deferred_lambdas(checker: &mut Checker) {
@@ -24,6 +24,9 @@ pub(crate) fn deferred_lambdas(checker: &mut Checker) {
if checker.enabled(Rule::ReimplementedOperator) {
refurb::rules::reimplemented_operator(checker, &lambda.into());
}
if checker.enabled(Rule::BuiltinLambdaArgumentShadowing) {
flake8_builtins::rules::builtin_lambda_argument_shadowing(checker, lambda);
}
}
}
}

View File

@@ -84,6 +84,8 @@ pub(crate) fn definitions(checker: &mut Checker) {
Rule::UndocumentedPublicPackage,
]);
let enforce_pydoclint = checker.any_enabled(&[
Rule::DocstringMissingReturns,
Rule::DocstringExtraneousReturns,
Rule::DocstringMissingException,
Rule::DocstringExtraneousException,
]);

View File

@@ -1197,9 +1197,11 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
op: Operator::Add, ..
}) => {
if checker.enabled(Rule::ExplicitStringConcatenation) {
if let Some(diagnostic) =
flake8_implicit_str_concat::rules::explicit(expr, checker.locator)
{
if let Some(diagnostic) = flake8_implicit_str_concat::rules::explicit(
expr,
checker.locator,
checker.settings,
) {
checker.diagnostics.push(diagnostic);
}
}

View File

@@ -597,8 +597,11 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::NonAsciiImportName) {
pylint::rules::non_ascii_module_import(checker, alias);
}
// TODO(charlie): Remove when stabilizing A004.
if let Some(asname) = &alias.asname {
if checker.enabled(Rule::BuiltinVariableShadowing) {
if checker.settings.preview.is_disabled()
&& checker.enabled(Rule::BuiltinVariableShadowing)
{
flake8_builtins::rules::builtin_variable_shadowing(
checker,
asname,
@@ -739,6 +742,9 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
checker.diagnostics.push(diagnostic);
}
}
if checker.enabled(Rule::BuiltinImportShadowing) {
flake8_builtins::rules::builtin_import_shadowing(checker, alias);
}
}
}
Stmt::ImportFrom(
@@ -917,8 +923,11 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
));
}
} else {
// TODO(charlie): Remove when stabilizing A004.
if let Some(asname) = &alias.asname {
if checker.enabled(Rule::BuiltinVariableShadowing) {
if checker.settings.preview.is_disabled()
&& checker.enabled(Rule::BuiltinVariableShadowing)
{
flake8_builtins::rules::builtin_variable_shadowing(
checker,
asname,
@@ -1030,6 +1039,9 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
}
}
}
if checker.enabled(Rule::BuiltinImportShadowing) {
flake8_builtins::rules::builtin_import_shadowing(checker, alias);
}
}
if checker.enabled(Rule::ImportSelf) {
if let Some(diagnostic) = pylint::rules::import_from_self(

View File

@@ -5,6 +5,7 @@ use ruff_python_trivia::CommentRanges;
use ruff_source_file::Locator;
use crate::registry::Rule;
use crate::rules::flake8_builtins::rules::builtin_module_shadowing;
use crate::rules::flake8_no_pep420::rules::implicit_namespace_package;
use crate::rules::pep8_naming::rules::invalid_module_name;
use crate::settings::LinterSettings;
@@ -41,5 +42,17 @@ pub(crate) fn check_file_path(
}
}
// flake8-builtins
if settings.rules.enabled(Rule::BuiltinModuleShadowing) {
if let Some(diagnostic) = builtin_module_shadowing(
path,
package,
&settings.flake8_builtins.builtins_allowed_modules,
settings.target_version,
) {
diagnostics.push(diagnostic);
}
}
diagnostics
}

View File

@@ -125,9 +125,9 @@ pub(crate) fn check_tokens(
flake8_implicit_str_concat::rules::implicit(
&mut diagnostics,
tokens,
settings,
locator,
indexer,
settings,
);
}

View File

@@ -310,6 +310,10 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Builtins, "001") => (RuleGroup::Stable, rules::flake8_builtins::rules::BuiltinVariableShadowing),
(Flake8Builtins, "002") => (RuleGroup::Stable, rules::flake8_builtins::rules::BuiltinArgumentShadowing),
(Flake8Builtins, "003") => (RuleGroup::Stable, rules::flake8_builtins::rules::BuiltinAttributeShadowing),
// TODO(charlie): When stabilizing, remove preview gating for A001's treatment of imports.
(Flake8Builtins, "004") => (RuleGroup::Preview, rules::flake8_builtins::rules::BuiltinImportShadowing),
(Flake8Builtins, "005") => (RuleGroup::Preview, rules::flake8_builtins::rules::BuiltinModuleShadowing),
(Flake8Builtins, "006") => (RuleGroup::Preview, rules::flake8_builtins::rules::BuiltinLambdaArgumentShadowing),
// flake8-bugbear
(Flake8Bugbear, "002") => (RuleGroup::Stable, rules::flake8_bugbear::rules::UnaryPrefixIncrementDecrement),
@@ -917,6 +921,8 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(FastApi, "002") => (RuleGroup::Preview, rules::fastapi::rules::FastApiNonAnnotatedDependency),
// pydoclint
(Pydoclint, "201") => (RuleGroup::Preview, rules::pydoclint::rules::DocstringMissingReturns),
(Pydoclint, "202") => (RuleGroup::Preview, rules::pydoclint::rules::DocstringExtraneousReturns),
(Pydoclint, "501") => (RuleGroup::Preview, rules::pydoclint::rules::DocstringMissingException),
(Pydoclint, "502") => (RuleGroup::Preview, rules::pydoclint::rules::DocstringExtraneousException),

View File

@@ -9,11 +9,12 @@ use anyhow::Result;
use libcst_native::{ImportAlias, Name as cstName, NameOrAttribute};
use ruff_diagnostics::Edit;
use ruff_python_ast::imports::{AnyImport, Import, ImportFrom};
use ruff_python_ast::{self as ast, ModModule, Stmt};
use ruff_python_codegen::Stylist;
use ruff_python_parser::{Parsed, Tokens};
use ruff_python_semantic::{ImportedName, SemanticModel};
use ruff_python_semantic::{
ImportedName, MemberNameImport, ModuleNameImport, NameImport, SemanticModel,
};
use ruff_python_trivia::textwrap::indent;
use ruff_source_file::Locator;
use ruff_text_size::{Ranged, TextSize};
@@ -71,7 +72,7 @@ impl<'a> Importer<'a> {
/// If there are no existing imports, the new import will be added at the top
/// of the file. Otherwise, it will be added after the most recent top-level
/// import statement.
pub(crate) fn add_import(&self, import: &AnyImport, at: TextSize) -> Edit {
pub(crate) fn add_import(&self, import: &NameImport, at: TextSize) -> Edit {
let required_import = import.to_string();
if let Some(stmt) = self.preceding_import(at) {
// Insert after the last top-level import.
@@ -359,8 +360,12 @@ impl<'a> Importer<'a> {
// Case 2a: No `functools` import is in scope; thus, we add `import functools`,
// and return `"functools.cache"` as the bound name.
if semantic.is_available(symbol.module) {
let import_edit =
self.add_import(&AnyImport::Import(Import::module(symbol.module)), at);
let import_edit = self.add_import(
&NameImport::Import(ModuleNameImport::module(
symbol.module.to_string(),
)),
at,
);
Ok((
import_edit,
format!(
@@ -378,9 +383,9 @@ impl<'a> Importer<'a> {
// `from functools import cache`, and return `"cache"` as the bound name.
if semantic.is_available(symbol.member) {
let import_edit = self.add_import(
&AnyImport::ImportFrom(ImportFrom::member(
symbol.module,
symbol.member,
&NameImport::ImportFrom(MemberNameImport::member(
symbol.module.to_string(),
symbol.member.to_string(),
)),
at,
);

View File

@@ -304,7 +304,9 @@ impl Rule {
| Rule::UTF8EncodingDeclaration => LintSource::Tokens,
Rule::IOError => LintSource::Io,
Rule::UnsortedImports | Rule::MissingRequiredImport => LintSource::Imports,
Rule::ImplicitNamespacePackage | Rule::InvalidModuleName => LintSource::Filesystem,
Rule::ImplicitNamespacePackage
| Rule::InvalidModuleName
| Rule::BuiltinModuleShadowing => LintSource::Filesystem,
Rule::IndentationWithInvalidMultiple
| Rule::IndentationWithInvalidMultipleComment
| Rule::MissingWhitespace

View File

@@ -66,7 +66,11 @@ use crate::rules::flake8_boolean_trap::helpers::is_allowed_func_def;
///
///
/// def round_number(value, method):
/// ...
/// return ceil(value) if method is RoundingMethod.UP else floor(value)
///
///
/// round_number(1.5, RoundingMethod.UP)
/// round_number(1.5, RoundingMethod.DOWN)
/// ```
///
/// Or, make the argument a keyword-only argument:

View File

@@ -18,6 +18,25 @@ mod tests {
#[test_case(Rule::BuiltinVariableShadowing, Path::new("A001.py"))]
#[test_case(Rule::BuiltinArgumentShadowing, Path::new("A002.py"))]
#[test_case(Rule::BuiltinAttributeShadowing, Path::new("A003.py"))]
#[test_case(Rule::BuiltinImportShadowing, Path::new("A004.py"))]
#[test_case(
Rule::BuiltinModuleShadowing,
Path::new("A005/modules/non_builtin/__init__.py")
)]
#[test_case(
Rule::BuiltinModuleShadowing,
Path::new("A005/modules/logging/__init__.py")
)]
#[test_case(
Rule::BuiltinModuleShadowing,
Path::new("A005/modules/string/__init__.py")
)]
#[test_case(
Rule::BuiltinModuleShadowing,
Path::new("A005/modules/package/bisect.py")
)]
#[test_case(Rule::BuiltinModuleShadowing, Path::new("A005/modules/package/xml.py"))]
#[test_case(Rule::BuiltinLambdaArgumentShadowing, Path::new("A006.py"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!("{}_{}", rule_code.noqa_code(), path.to_string_lossy());
let diagnostics = test_path(
@@ -31,6 +50,8 @@ mod tests {
#[test_case(Rule::BuiltinVariableShadowing, Path::new("A001.py"))]
#[test_case(Rule::BuiltinArgumentShadowing, Path::new("A002.py"))]
#[test_case(Rule::BuiltinAttributeShadowing, Path::new("A003.py"))]
#[test_case(Rule::BuiltinImportShadowing, Path::new("A004.py"))]
#[test_case(Rule::BuiltinLambdaArgumentShadowing, Path::new("A006.py"))]
fn builtins_ignorelist(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!(
"{}_{}_builtins_ignorelist",
@@ -43,6 +64,46 @@ mod tests {
&LinterSettings {
flake8_builtins: super::settings::Settings {
builtins_ignorelist: vec!["id".to_string(), "dir".to_string()],
..Default::default()
},
..LinterSettings::for_rules(vec![rule_code])
},
)?;
assert_messages!(snapshot, diagnostics);
Ok(())
}
#[test_case(
Rule::BuiltinModuleShadowing,
Path::new("A005/modules/non_builtin/__init__.py")
)]
#[test_case(
Rule::BuiltinModuleShadowing,
Path::new("A005/modules/logging/__init__.py")
)]
#[test_case(
Rule::BuiltinModuleShadowing,
Path::new("A005/modules/string/__init__.py")
)]
#[test_case(
Rule::BuiltinModuleShadowing,
Path::new("A005/modules/package/bisect.py")
)]
#[test_case(Rule::BuiltinModuleShadowing, Path::new("A005/modules/package/xml.py"))]
fn builtins_allowed_modules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!(
"{}_{}_builtins_allowed_modules",
rule_code.noqa_code(),
path.to_string_lossy()
);
let diagnostics = test_path(
Path::new("flake8_builtins").join(path).as_path(),
&LinterSettings {
flake8_builtins: super::settings::Settings {
builtins_allowed_modules: vec!["xml".to_string(), "logging".to_string()],
..Default::default()
},
..LinterSettings::for_rules(vec![rule_code])
},

View File

@@ -1,8 +1,7 @@
use ruff_python_ast::Parameter;
use ruff_diagnostics::Diagnostic;
use ruff_diagnostics::Violation;
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Parameter;
use ruff_python_semantic::analyze::visibility::{is_overload, is_override};
use ruff_text_size::Ranged;
@@ -11,7 +10,7 @@ use crate::checkers::ast::Checker;
use super::super::helpers::shadows_builtin;
/// ## What it does
/// Checks for any function arguments that use the same name as a builtin.
/// Checks for function arguments that use the same names as builtins.
///
/// ## Why is this bad?
/// Reusing a builtin name for the name of an argument increases the

View File

@@ -10,8 +10,8 @@ use crate::checkers::ast::Checker;
use crate::rules::flake8_builtins::helpers::shadows_builtin;
/// ## What it does
/// Checks for any class attributes or methods that use the same name as a
/// builtin.
/// Checks for class attributes and methods that use the same names as
/// Python builtins.
///
/// ## Why is this bad?
/// Reusing a builtin name for the name of an attribute increases the

View File

@@ -0,0 +1,49 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Alias;
use crate::checkers::ast::Checker;
use crate::rules::flake8_builtins::helpers::shadows_builtin;
/// ## What it does
/// Checks for imports that use the same names as builtins.
///
/// ## Why is this bad?
/// Reusing a builtin name for the name of an import increases the difficulty
/// of reading and maintaining the code, and can cause non-obvious errors,
/// as readers may mistake the variable for the builtin and vice versa.
///
/// Builtins can be marked as exceptions to this rule via the
/// [`lint.flake8-builtins.builtins-ignorelist`] configuration option.
///
/// ## Options
/// - `lint.flake8-builtins.builtins-ignorelist`
#[violation]
pub struct BuiltinImportShadowing {
name: String,
}
impl Violation for BuiltinImportShadowing {
#[derive_message_formats]
fn message(&self) -> String {
let BuiltinImportShadowing { name } = self;
format!("Import `{name}` is shadowing a Python builtin")
}
}
/// A004
pub(crate) fn builtin_import_shadowing(checker: &mut Checker, alias: &Alias) {
let name = alias.asname.as_ref().unwrap_or(&alias.name);
if shadows_builtin(
name.as_str(),
&checker.settings.flake8_builtins.builtins_ignorelist,
checker.source_type,
) {
checker.diagnostics.push(Diagnostic::new(
BuiltinImportShadowing {
name: name.to_string(),
},
name.range,
));
}
}
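For context, a hypothetical snippet (not part of this diff) of the pattern the new A004 rule flags: aliasing an import to a builtin's name makes the builtin unreachable in that scope.

```python
import builtins
import json as dict  # what A004 (builtin-import-shadowing) would flag

# The name `dict` now refers to the json module, not the builtin type:
assert dict is not builtins.dict
assert dict.loads('{"x": 1}') == {"x": 1}
```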

View File

@@ -0,0 +1,56 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::ExprLambda;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
use crate::rules::flake8_builtins::helpers::shadows_builtin;
/// ## What it does
/// Checks for lambda arguments that use the same names as Python builtins.
///
/// ## Why is this bad?
/// Reusing a builtin name for the name of a lambda argument increases the
/// difficulty of reading and maintaining the code, and can cause
/// non-obvious errors, as readers may mistake the variable for the
/// builtin and vice versa.
///
/// Builtins can be marked as exceptions to this rule via the
/// [`lint.flake8-builtins.builtins-ignorelist`] configuration option.
///
/// ## Options
/// - `lint.flake8-builtins.builtins-ignorelist`
#[violation]
pub struct BuiltinLambdaArgumentShadowing {
name: String,
}
impl Violation for BuiltinLambdaArgumentShadowing {
#[derive_message_formats]
fn message(&self) -> String {
let BuiltinLambdaArgumentShadowing { name } = self;
format!("Lambda argument `{name}` is shadowing a Python builtin")
}
}
/// A006
pub(crate) fn builtin_lambda_argument_shadowing(checker: &mut Checker, lambda: &ExprLambda) {
let Some(parameters) = lambda.parameters.as_ref() else {
return;
};
for param in parameters.iter_non_variadic_params() {
let name = &param.parameter.name;
if shadows_builtin(
name.as_ref(),
&checker.settings.flake8_builtins.builtins_ignorelist,
checker.source_type,
) {
checker.diagnostics.push(Diagnostic::new(
BuiltinLambdaArgumentShadowing {
name: name.to_string(),
},
name.range(),
));
}
}
}
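A hypothetical snippet (not from the fixtures) of what A006 flags: inside the lambda body, `min` resolves to the argument, so the builtin is unusable there.

```python
# A006 (builtin-lambda-argument-shadowing) would flag the `min` parameter:
clamp = lambda min, value: min if value < min else value

assert clamp(3, 1) == 3  # value below the floor is clamped up
assert clamp(3, 5) == 5  # value above the floor passes through
```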

View File

@@ -0,0 +1,73 @@
use std::path::Path;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_stdlib::path::is_module_file;
use ruff_python_stdlib::sys::is_known_standard_library;
use ruff_text_size::TextRange;
use crate::settings::types::PythonVersion;
/// ## What it does
/// Checks for modules that use the same names as Python builtin modules.
///
/// ## Why is this bad?
/// Reusing a builtin module name for the name of a module increases the
/// difficulty of reading and maintaining the code, and can cause
/// non-obvious errors, as readers may mistake the variable for the
/// builtin and vice versa.
///
/// Builtin modules can be marked as exceptions to this rule via the
/// [`lint.flake8-builtins.builtins-allowed-modules`] configuration option.
///
/// ## Options
/// - `lint.flake8-builtins.builtins-allowed-modules`
#[violation]
pub struct BuiltinModuleShadowing {
name: String,
}
impl Violation for BuiltinModuleShadowing {
#[derive_message_formats]
fn message(&self) -> String {
let BuiltinModuleShadowing { name } = self;
format!("Module `{name}` is shadowing a Python builtin module")
}
}
/// A005
pub(crate) fn builtin_module_shadowing(
path: &Path,
package: Option<&Path>,
allowed_modules: &[String],
target_version: PythonVersion,
) -> Option<Diagnostic> {
if !path
.extension()
.is_some_and(|ext| ext == "py" || ext == "pyi")
{
return None;
}
if let Some(package) = package {
let module_name = if is_module_file(path) {
package.file_name().unwrap().to_string_lossy()
} else {
path.file_stem().unwrap().to_string_lossy()
};
if is_known_standard_library(target_version.minor(), &module_name)
&& allowed_modules
.iter()
.all(|allowed_module| allowed_module != &module_name)
{
return Some(Diagnostic::new(
BuiltinModuleShadowing {
name: module_name.to_string(),
},
TextRange::default(),
));
}
}
None
}
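A sketch (not part of this diff) of why A005 matters: a first-party module named after a standard-library module wins the import when its directory comes first on `sys.path`, silently hiding the stdlib version.

```python
import sys
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    # A first-party `bisect.py` shadowing the stdlib module of the same name:
    Path(tmp, "bisect.py").write_text("MARKER = 'shadowed'\n")
    sys.modules.pop("bisect", None)  # forget any cached stdlib copy
    sys.path.insert(0, tmp)
    try:
        import bisect
        assert bisect.MARKER == "shadowed"  # stdlib `bisect` is unreachable
    finally:
        sys.path.remove(tmp)
        sys.modules.pop("bisect", None)
```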

View File

@@ -1,15 +1,14 @@
use ruff_text_size::TextRange;
use ruff_diagnostics::Diagnostic;
use ruff_diagnostics::Violation;
use ruff_macros::{derive_message_formats, violation};
use ruff_text_size::TextRange;
use crate::checkers::ast::Checker;
use crate::rules::flake8_builtins::helpers::shadows_builtin;
/// ## What it does
/// Checks for variable (and function) assignments that use the same name
/// as a builtin.
/// Checks for variable (and function) assignments that use the same names
/// as builtins.
///
/// ## Why is this bad?
/// Reusing a builtin name for the name of a variable increases the

View File

@@ -1,7 +1,13 @@
pub(crate) use builtin_argument_shadowing::*;
pub(crate) use builtin_attribute_shadowing::*;
pub(crate) use builtin_import_shadowing::*;
pub(crate) use builtin_lambda_argument_shadowing::*;
pub(crate) use builtin_module_shadowing::*;
pub(crate) use builtin_variable_shadowing::*;
mod builtin_argument_shadowing;
mod builtin_attribute_shadowing;
mod builtin_import_shadowing;
mod builtin_lambda_argument_shadowing;
mod builtin_module_shadowing;
mod builtin_variable_shadowing;

View File

@@ -7,6 +7,7 @@ use std::fmt::{Display, Formatter};
#[derive(Debug, Clone, Default, CacheKey)]
pub struct Settings {
pub builtins_ignorelist: Vec<String>,
pub builtins_allowed_modules: Vec<String>,
}
impl Display for Settings {
@@ -15,7 +16,8 @@ impl Display for Settings {
formatter = f,
namespace = "linter.flake8_builtins",
fields = [
self.builtins_ignorelist | array
self.builtins_allowed_modules | array,
self.builtins_ignorelist | array,
]
}
Ok(())

View File

@@ -0,0 +1,55 @@
---
source: crates/ruff_linter/src/rules/flake8_builtins/mod.rs
---
A004.py:1:16: A004 Import `sum` is shadowing a Python builtin
|
1 | import some as sum
| ^^^ A004
2 | import float
3 | from some import other as int
|
A004.py:2:8: A004 Import `float` is shadowing a Python builtin
|
1 | import some as sum
2 | import float
| ^^^^^ A004
3 | from some import other as int
4 | from some import input, exec
|
A004.py:3:27: A004 Import `int` is shadowing a Python builtin
|
1 | import some as sum
2 | import float
3 | from some import other as int
| ^^^ A004
4 | from some import input, exec
5 | from directory import new as dir
|
A004.py:4:18: A004 Import `input` is shadowing a Python builtin
|
2 | import float
3 | from some import other as int
4 | from some import input, exec
| ^^^^^ A004
5 | from directory import new as dir
|
A004.py:4:25: A004 Import `exec` is shadowing a Python builtin
|
2 | import float
3 | from some import other as int
4 | from some import input, exec
| ^^^^ A004
5 | from directory import new as dir
|
A004.py:5:30: A004 Import `dir` is shadowing a Python builtin
|
3 | from some import other as int
4 | from some import input, exec
5 | from directory import new as dir
| ^^^ A004
|

View File

@@ -0,0 +1,47 @@
---
source: crates/ruff_linter/src/rules/flake8_builtins/mod.rs
---
A004.py:1:16: A004 Import `sum` is shadowing a Python builtin
|
1 | import some as sum
| ^^^ A004
2 | import float
3 | from some import other as int
|
A004.py:2:8: A004 Import `float` is shadowing a Python builtin
|
1 | import some as sum
2 | import float
| ^^^^^ A004
3 | from some import other as int
4 | from some import input, exec
|
A004.py:3:27: A004 Import `int` is shadowing a Python builtin
|
1 | import some as sum
2 | import float
3 | from some import other as int
| ^^^ A004
4 | from some import input, exec
5 | from directory import new as dir
|
A004.py:4:18: A004 Import `input` is shadowing a Python builtin
|
2 | import float
3 | from some import other as int
4 | from some import input, exec
| ^^^^^ A004
5 | from directory import new as dir
|
A004.py:4:25: A004 Import `exec` is shadowing a Python builtin
|
2 | import float
3 | from some import other as int
4 | from some import input, exec
| ^^^^ A004
5 | from directory import new as dir
|

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_builtins/mod.rs
---
__init__.py:1:1: A005 Module `logging` is shadowing a Python builtin module

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_builtins/mod.rs
---

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_builtins/mod.rs
---
bisect.py:1:1: A005 Module `bisect` is shadowing a Python builtin module

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_builtins/mod.rs
---
bisect.py:1:1: A005 Module `bisect` is shadowing a Python builtin module

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_builtins/mod.rs
---
xml.py:1:1: A005 Module `xml` is shadowing a Python builtin module

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_builtins/mod.rs
---

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_builtins/mod.rs
---
__init__.py:1:1: A005 Module `string` is shadowing a Python builtin module

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_builtins/mod.rs
---
__init__.py:1:1: A005 Module `string` is shadowing a Python builtin module

View File

@@ -0,0 +1,64 @@
---
source: crates/ruff_linter/src/rules/flake8_builtins/mod.rs
---
A006.py:1:8: A006 Lambda argument `print` is shadowing a Python builtin
|
1 | lambda print, copyright: print
| ^^^^^ A006
2 | lambda x, float, y: x + y
3 | lambda min, max: min
|
A006.py:1:15: A006 Lambda argument `copyright` is shadowing a Python builtin
|
1 | lambda print, copyright: print
| ^^^^^^^^^ A006
2 | lambda x, float, y: x + y
3 | lambda min, max: min
|
A006.py:2:11: A006 Lambda argument `float` is shadowing a Python builtin
|
1 | lambda print, copyright: print
2 | lambda x, float, y: x + y
| ^^^^^ A006
3 | lambda min, max: min
4 | lambda id: id
|
A006.py:3:8: A006 Lambda argument `min` is shadowing a Python builtin
|
1 | lambda print, copyright: print
2 | lambda x, float, y: x + y
3 | lambda min, max: min
| ^^^ A006
4 | lambda id: id
5 | lambda dir: dir
|
A006.py:3:13: A006 Lambda argument `max` is shadowing a Python builtin
|
1 | lambda print, copyright: print
2 | lambda x, float, y: x + y
3 | lambda min, max: min
| ^^^ A006
4 | lambda id: id
5 | lambda dir: dir
|
A006.py:4:8: A006 Lambda argument `id` is shadowing a Python builtin
|
2 | lambda x, float, y: x + y
3 | lambda min, max: min
4 | lambda id: id
| ^^ A006
5 | lambda dir: dir
|
A006.py:5:8: A006 Lambda argument `dir` is shadowing a Python builtin
|
3 | lambda min, max: min
4 | lambda id: id
5 | lambda dir: dir
| ^^^ A006
|

View File

@@ -0,0 +1,47 @@
---
source: crates/ruff_linter/src/rules/flake8_builtins/mod.rs
---
A006.py:1:8: A006 Lambda argument `print` is shadowing a Python builtin
|
1 | lambda print, copyright: print
| ^^^^^ A006
2 | lambda x, float, y: x + y
3 | lambda min, max: min
|
A006.py:1:15: A006 Lambda argument `copyright` is shadowing a Python builtin
|
1 | lambda print, copyright: print
| ^^^^^^^^^ A006
2 | lambda x, float, y: x + y
3 | lambda min, max: min
|
A006.py:2:11: A006 Lambda argument `float` is shadowing a Python builtin
|
1 | lambda print, copyright: print
2 | lambda x, float, y: x + y
| ^^^^^ A006
3 | lambda min, max: min
4 | lambda id: id
|
A006.py:3:8: A006 Lambda argument `min` is shadowing a Python builtin
|
1 | lambda print, copyright: print
2 | lambda x, float, y: x + y
3 | lambda min, max: min
| ^^^ A006
4 | lambda id: id
5 | lambda dir: dir
|
A006.py:3:13: A006 Lambda argument `max` is shadowing a Python builtin
|
1 | lambda print, copyright: print
2 | lambda x, float, y: x + y
3 | lambda min, max: min
| ^^^ A006
4 | lambda id: id
5 | lambda dir: dir
|

View File

@@ -22,10 +22,16 @@ use crate::rules::flake8_executable::helpers::is_executable;
/// If a `.py` file is executable, but does not have a shebang, it may be run
/// with the wrong interpreter, or fail to run at all.
///
/// If the file is meant to be executable, add a shebang; otherwise, remove the
/// executable bit from the file.
/// If the file is meant to be executable, add a shebang, as in:
/// ```python
/// #!/usr/bin/env python
/// ```
///
/// _This rule is only available on Unix-like systems._
/// Otherwise, remove the executable bit from the file (e.g., `chmod -x __main__.py`).
///
/// A file is considered executable if it has the executable bit set (i.e., its
/// permissions mode intersects with `0o111`). As such, _this rule is only
/// available on Unix-like systems_, and is not enforced on Windows or WSL.
///
/// ## References
/// - [Python documentation: Executable Python Scripts](https://docs.python.org/3/tutorial/appendix.html#executable-python-scripts)

View File

@@ -22,10 +22,21 @@ use crate::rules::flake8_executable::helpers::is_executable;
/// executable. If a file contains a shebang but is not executable, then the
/// shebang is misleading, or the file is missing the executable bit.
///
/// If the file is meant to be executable, add a shebang; otherwise, remove the
/// executable bit from the file.
/// If the file is meant to be executable, add a shebang, as in:
/// ```python
/// #!/usr/bin/env python
/// ```
///
/// _This rule is only available on Unix-like systems._
/// Otherwise, remove the executable bit from the file (e.g., `chmod -x __main__.py`).
///
/// A file is considered executable if it has the executable bit set (i.e., its
/// permissions mode intersects with `0o111`). As such, _this rule is only
/// available on Unix-like systems_, and is not enforced on Windows or WSL.
///
/// ## Example
/// ```python
/// #!/usr/bin/env python
/// ```
///
/// ## References
/// - [Python documentation: Executable Python Scripts](https://docs.python.org/3/tutorial/appendix.html#executable-python-scripts)

View File

@@ -2,8 +2,8 @@ use std::fmt;
use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::imports::{AnyImport, ImportFrom};
use ruff_python_ast::Expr;
use ruff_python_semantic::{MemberNameImport, NameImport};
use ruff_text_size::{Ranged, TextSize};
use crate::checkers::ast::Checker;
@@ -86,7 +86,10 @@ impl AlwaysFixableViolation for FutureRequiredTypeAnnotation {
/// FA102
pub(crate) fn future_required_type_annotation(checker: &mut Checker, expr: &Expr, reason: Reason) {
let mut diagnostic = Diagnostic::new(FutureRequiredTypeAnnotation { reason }, expr.range());
let required_import = AnyImport::ImportFrom(ImportFrom::member("__future__", "annotations"));
let required_import = NameImport::ImportFrom(MemberNameImport::member(
"__future__".to_string(),
"annotations".to_string(),
));
diagnostic.set_fix(Fix::unsafe_edit(
checker
.importer()

View File

@@ -1,5 +1,6 @@
use ruff_python_ast::{self as ast, Expr, Operator};
use crate::settings::LinterSettings;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_source_file::Locator;
@@ -40,7 +41,19 @@ impl Violation for ExplicitStringConcatenation {
}
/// ISC003
pub(crate) fn explicit(expr: &Expr, locator: &Locator) -> Option<Diagnostic> {
pub(crate) fn explicit(
expr: &Expr,
locator: &Locator,
settings: &LinterSettings,
) -> Option<Diagnostic> {
// If the user sets `allow-multiline` to `false`, then we should allow explicitly concatenated
// strings that span multiple lines even if this rule is enabled. Otherwise, there's no way
// for the user to write multiline strings, and that setting is "more explicit" than this rule
// being enabled.
if !settings.flake8_implicit_str_concat.allow_multiline {
return None;
}
if let Expr::BinOp(ast::ExprBinOp {
left,
op,

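A hypothetical example (not from the fixtures) of the case this early return preserves: with `allow-multiline = false`, explicit `+` concatenation across lines is the user's only way to split a long string, so ISC003 stays silent.

```python
# With `lint.flake8-implicit-str-concat.allow-multiline = false`,
# ISC003 no longer flags explicit concatenation spanning lines:
message = (
    "a fairly long first part "
    + "and a second part"
)
assert message == "a fairly long first part and a second part"
```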
View File

@@ -93,9 +93,9 @@ impl Violation for MultiLineImplicitStringConcatenation {
pub(crate) fn implicit(
diagnostics: &mut Vec<Diagnostic>,
tokens: &Tokens,
settings: &LinterSettings,
locator: &Locator,
indexer: &Indexer,
settings: &LinterSettings,
) {
for (a_token, b_token) in tokens
.iter()

View File

@@ -1,55 +1,4 @@
---
source: crates/ruff_linter/src/rules/flake8_implicit_str_concat/mod.rs
---
ISC.py:9:3: ISC003 Explicitly concatenated string should be implicitly concatenated
|
8 | _ = (
9 | "abc" +
| ___^
10 | | "def"
| |_______^ ISC003
11 | )
|
ISC.py:14:3: ISC003 Explicitly concatenated string should be implicitly concatenated
|
13 | _ = (
14 | f"abc" +
| ___^
15 | | "def"
| |_______^ ISC003
16 | )
|
ISC.py:19:3: ISC003 Explicitly concatenated string should be implicitly concatenated
|
18 | _ = (
19 | b"abc" +
| ___^
20 | | b"def"
| |________^ ISC003
21 | )
|
ISC.py:78:10: ISC003 Explicitly concatenated string should be implicitly concatenated
|
77 | # Explicitly concatenated nested f-strings
78 | _ = f"a {f"first"
| __________^
79 | | + f"second"} d"
| |_______________^ ISC003
80 | _ = f"a {f"first {f"middle"}"
81 | + f"second"} d"
|
ISC.py:80:10: ISC003 Explicitly concatenated string should be implicitly concatenated
|
78 | _ = f"a {f"first"
79 | + f"second"} d"
80 | _ = f"a {f"first {f"middle"}"
| __________^
81 | | + f"second"} d"
| |_______________^ ISC003
|

View File

@@ -92,23 +92,25 @@ pub(crate) fn no_slots_in_namedtuple_subclass(
}
}
/// If the class has a call-based namedtuple in its bases,
/// return the kind of namedtuple it is
/// (either `collections.namedtuple()`, or `typing.NamedTuple()`).
/// Else, return `None`.
/// If the class's bases consist solely of call-based named tuples (and, optionally,
/// `object`), return the kind of named tuple
/// (either `collections.namedtuple()`, or `typing.NamedTuple()`). Otherwise, return `None`.
fn namedtuple_base(bases: &[Expr], semantic: &SemanticModel) -> Option<NamedTupleKind> {
let mut kind = None;
for base in bases {
let Expr::Call(ast::ExprCall { func, .. }) = base else {
continue;
};
let Some(qualified_name) = semantic.resolve_qualified_name(func) else {
continue;
};
match qualified_name.segments() {
["collections", "namedtuple"] => return Some(NamedTupleKind::Collections),
["typing", "NamedTuple"] => return Some(NamedTupleKind::Typing),
_ => continue,
if let Expr::Call(ast::ExprCall { func, .. }) = base {
// Ex) `collections.namedtuple()`
let qualified_name = semantic.resolve_qualified_name(func)?;
match qualified_name.segments() {
["collections", "namedtuple"] => kind = kind.or(Some(NamedTupleKind::Collections)),
["typing", "NamedTuple"] => kind = kind.or(Some(NamedTupleKind::Typing)),
// Ex) `enum.Enum`
_ => return None,
}
} else if !semantic.match_builtin_expr(base, "object") {
// Allow inheriting from `object`.
return None;
}
}
None
kind
}
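A hypothetical snippet mirroring the updated SLOT002 fixtures: a class whose bases are a call-based named tuple plus `object` is now still treated as a named-tuple subclass (and so should define `__slots__`).

```python
from collections import namedtuple

# Bases: a call-based namedtuple plus `object` -- SLOT002 would flag the
# missing `__slots__` under the revised base check.
class Bad(namedtuple("Foo", ["x", "y"]), object):
    pass

assert Bad(1, 2).y == 2  # behaves as a normal named tuple
```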

View File

@@ -1,16 +1,30 @@
---
source: crates/ruff_linter/src/rules/flake8_slots/mod.rs
---
SLOT002.py:5:7: SLOT002 Subclasses of `collections.namedtuple()` should define `__slots__`
SLOT002.py:6:7: SLOT002 Subclasses of `collections.namedtuple()` should define `__slots__`
|
5 | class Bad(namedtuple("foo", ["str", "int"])): # SLOT002
6 | class Bad(namedtuple("foo", ["str", "int"])): # SLOT002
| ^^^ SLOT002
6 | pass
7 | pass
|
SLOT002.py:9:7: SLOT002 Subclasses of call-based `typing.NamedTuple()` should define `__slots__`
SLOT002.py:10:7: SLOT002 Subclasses of call-based `typing.NamedTuple()` should define `__slots__`
|
9 | class UnusualButStillBad(NamedTuple("foo", [("x", int, "y", int)])): # SLOT002
10 | class UnusualButStillBad(NamedTuple("foo", [("x", int, "y", int)])): # SLOT002
| ^^^^^^^^^^^^^^^^^^ SLOT002
10 | pass
11 | pass
|
SLOT002.py:30:7: SLOT002 Subclasses of `collections.namedtuple()` should define `__slots__`
|
30 | class UnusualButStillBad(namedtuple("foo", ["str", "int"]), NamedTuple("foo", [("x", int, "y", int)])):
| ^^^^^^^^^^^^^^^^^^ SLOT002
31 | pass
|
SLOT002.py:34:7: SLOT002 Subclasses of `collections.namedtuple()` should define `__slots__`
|
34 | class UnusualButStillBad(namedtuple("foo", ["str", "int"]), object):
| ^^^^^^^^^^^^^^^^^^ SLOT002
35 | pass
|

View File

@@ -282,11 +282,11 @@ mod tests {
use std::path::Path;
use anyhow::Result;
use ruff_python_semantic::{MemberNameImport, ModuleNameImport, NameImport};
use ruff_text_size::Ranged;
use rustc_hash::{FxHashMap, FxHashSet};
use test_case::test_case;
use ruff_text_size::Ranged;
use crate::assert_messages;
use crate::registry::Rule;
use crate::rules::isort::categorize::{ImportSection, KnownModules};
@@ -804,9 +804,12 @@ mod tests {
&LinterSettings {
src: vec![test_resource_path("fixtures/isort")],
isort: super::settings::Settings {
required_imports: BTreeSet::from_iter([
"from __future__ import annotations".to_string()
]),
required_imports: BTreeSet::from_iter([NameImport::ImportFrom(
MemberNameImport::member(
"__future__".to_string(),
"annotations".to_string(),
),
)]),
..super::settings::Settings::default()
},
..LinterSettings::for_rule(Rule::MissingRequiredImport)
@@ -834,9 +837,13 @@ mod tests {
&LinterSettings {
src: vec![test_resource_path("fixtures/isort")],
isort: super::settings::Settings {
required_imports: BTreeSet::from_iter([
"from __future__ import annotations as _annotations".to_string(),
]),
required_imports: BTreeSet::from_iter([NameImport::ImportFrom(
MemberNameImport::alias(
"__future__".to_string(),
"annotations".to_string(),
"_annotations".to_string(),
),
)]),
..super::settings::Settings::default()
},
..LinterSettings::for_rule(Rule::MissingRequiredImport)
@@ -858,8 +865,14 @@ mod tests {
src: vec![test_resource_path("fixtures/isort")],
isort: super::settings::Settings {
required_imports: BTreeSet::from_iter([
"from __future__ import annotations".to_string(),
"from __future__ import generator_stop".to_string(),
NameImport::ImportFrom(MemberNameImport::member(
"__future__".to_string(),
"annotations".to_string(),
)),
NameImport::ImportFrom(MemberNameImport::member(
"__future__".to_string(),
"generator_stop".to_string(),
)),
]),
..super::settings::Settings::default()
},
@@ -870,29 +883,6 @@ mod tests {
Ok(())
}
#[test_case(Path::new("docstring.py"))]
#[test_case(Path::new("docstring.pyi"))]
#[test_case(Path::new("docstring_only.py"))]
#[test_case(Path::new("empty.py"))]
fn combined_required_imports(path: &Path) -> Result<()> {
let snapshot = format!("combined_required_imports_{}", path.to_string_lossy());
let diagnostics = test_path(
Path::new("isort/required_imports").join(path).as_path(),
&LinterSettings {
src: vec![test_resource_path("fixtures/isort")],
isort: super::settings::Settings {
required_imports: BTreeSet::from_iter(["from __future__ import annotations, \
generator_stop"
.to_string()]),
..super::settings::Settings::default()
},
..LinterSettings::for_rule(Rule::MissingRequiredImport)
},
)?;
assert_messages!(snapshot, diagnostics);
Ok(())
}
#[test_case(Path::new("docstring.py"))]
#[test_case(Path::new("docstring.pyi"))]
#[test_case(Path::new("docstring_only.py"))]
@@ -904,7 +894,9 @@ mod tests {
&LinterSettings {
src: vec![test_resource_path("fixtures/isort")],
isort: super::settings::Settings {
required_imports: BTreeSet::from_iter(["import os".to_string()]),
required_imports: BTreeSet::from_iter([NameImport::Import(
ModuleNameImport::module("os".to_string()),
)]),
..super::settings::Settings::default()
},
..LinterSettings::for_rule(Rule::MissingRequiredImport)
@@ -914,6 +906,40 @@ mod tests {
Ok(())
}
#[test_case(Path::new("unused.py"))]
fn required_import_unused(path: &Path) -> Result<()> {
let snapshot = format!("required_import_{}", path.to_string_lossy());
let diagnostics = test_path(
Path::new("isort/required_imports").join(path).as_path(),
&LinterSettings {
src: vec![test_resource_path("fixtures/isort")],
isort: super::settings::Settings {
required_imports: BTreeSet::from_iter([
NameImport::Import(ModuleNameImport::module("os".to_string())),
NameImport::Import(ModuleNameImport::alias(
"shelve".to_string(),
"alias".to_string(),
)),
NameImport::ImportFrom(MemberNameImport::member(
"typing".to_string(),
"List".to_string(),
)),
NameImport::ImportFrom(MemberNameImport::alias(
"typing".to_string(),
"Set".to_string(),
"SetAlias".to_string(),
)),
NameImport::Import(ModuleNameImport::module("urllib.parse".to_string())),
]),
..super::settings::Settings::default()
},
..LinterSettings::for_rules([Rule::MissingRequiredImport, Rule::UnusedImport])
},
)?;
assert_messages!(snapshot, diagnostics);
Ok(())
}
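The tests above now build `required_imports` from structured `NameImport` values instead of raw strings. As a rough standalone sketch of the idea, using toy types (the names mirror but are not Ruff's actual definitions), a structured entry can always be rendered back to source:

```rust
// Toy model of structured "required import" settings. Illustrative
// types only, not Ruff's real `NameImport`/`ModuleNameImport`.
#[derive(Debug, PartialEq)]
enum NameImport {
    Import { module: String, alias: Option<String> },
    ImportFrom { module: String, member: String, alias: Option<String> },
}

impl NameImport {
    /// Render the structured import back to Python source.
    fn to_source(&self) -> String {
        match self {
            NameImport::Import { module, alias } => match alias {
                Some(alias) => format!("import {module} as {alias}"),
                None => format!("import {module}"),
            },
            NameImport::ImportFrom { module, member, alias } => match alias {
                Some(alias) => format!("from {module} import {member} as {alias}"),
                None => format!("from {module} import {member}"),
            },
        }
    }
}

fn main() {
    let os = NameImport::Import { module: "os".to_string(), alias: None };
    let set_alias = NameImport::ImportFrom {
        module: "typing".to_string(),
        member: "Set".to_string(),
        alias: Some("SetAlias".to_string()),
    };
    println!("{}", os.to_source()); // import os
    println!("{}", set_alias.to_source()); // from typing import Set as SetAlias
}
```

Storing structured values means invalid specs are rejected when settings are built, rather than on every lint run.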
#[test_case(Path::new("from_first.py"))]
fn from_first(path: &Path) -> Result<()> {
let snapshot = format!("from_first_{}", path.to_string_lossy());


@@ -1,12 +1,10 @@
use log::error;
use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::helpers::is_docstring_stmt;
use ruff_python_ast::imports::{Alias, AnyImport, FutureImport, Import, ImportFrom};
use ruff_python_ast::{self as ast, ModModule, PySourceType, Stmt};
use ruff_python_codegen::Stylist;
use ruff_python_parser::{parse_module, Parsed};
use ruff_python_parser::Parsed;
use ruff_python_semantic::{FutureImport, NameImport};
use ruff_source_file::Locator;
use ruff_text_size::{TextRange, TextSize};
@@ -53,18 +51,19 @@ impl AlwaysFixableViolation for MissingRequiredImport {
}
}
/// Return `true` if the [`Stmt`] includes the given [`AnyImport`].
fn includes_import(stmt: &Stmt, target: &AnyImport) -> bool {
/// Return `true` if the [`Stmt`] includes the given [`AnyImportRef`].
fn includes_import(stmt: &Stmt, target: &NameImport) -> bool {
match target {
AnyImport::Import(target) => {
NameImport::Import(target) => {
let Stmt::Import(ast::StmtImport { names, range: _ }) = &stmt else {
return false;
};
names.iter().any(|alias| {
&alias.name == target.name.name && alias.asname.as_deref() == target.name.as_name
alias.name == target.name.name
&& alias.asname.as_deref() == target.name.as_name.as_deref()
})
}
AnyImport::ImportFrom(target) => {
NameImport::ImportFrom(target) => {
let Stmt::ImportFrom(ast::StmtImportFrom {
module,
names,
@@ -74,11 +73,11 @@ fn includes_import(stmt: &Stmt, target: &AnyImport) -> bool {
else {
return false;
};
module.as_deref() == target.module
module.as_deref() == target.module.as_deref()
&& *level == target.level
&& names.iter().any(|alias| {
&alias.name == target.name.name
&& alias.asname.as_deref() == target.name.as_name
alias.name == target.name.name
&& alias.asname.as_deref() == target.name.as_name.as_deref()
})
}
}
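The matching above compares every alias in an `import`/`from` statement against the required name and its optional `as` name. A minimal sketch of that alias comparison (toy `Alias` struct standing in for Ruff's AST node):

```rust
// Sketch of the alias comparison inside `includes_import` (toy `Alias`
// struct; Ruff matches real AST nodes against structured `NameImport`s).
struct Alias {
    name: String,
    asname: Option<String>,
}

/// Does any alias in an import statement match the required name and
/// (optional) `as` name? Both must agree, so `import sys as system`
/// does not satisfy a required `import sys`.
fn statement_includes(aliases: &[Alias], name: &str, as_name: Option<&str>) -> bool {
    aliases
        .iter()
        .any(|alias| alias.name == name && alias.asname.as_deref() == as_name)
}

fn main() {
    // Models `import os, sys as system`.
    let aliases = [
        Alias { name: "os".to_string(), asname: None },
        Alias { name: "sys".to_string(), asname: Some("system".to_string()) },
    ];
    println!("{}", statement_includes(&aliases, "os", None)); // true
    println!("{}", statement_includes(&aliases, "sys", None)); // false: alias differs
    println!("{}", statement_includes(&aliases, "sys", Some("system"))); // true
}
```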
@@ -86,7 +85,7 @@ fn includes_import(stmt: &Stmt, target: &AnyImport) -> bool {
#[allow(clippy::too_many_arguments)]
fn add_required_import(
required_import: &AnyImport,
required_import: &NameImport,
parsed: &Parsed<ModModule>,
locator: &Locator,
stylist: &Stylist,
@@ -134,69 +133,8 @@ pub(crate) fn add_required_imports(
.isort
.required_imports
.iter()
.flat_map(|required_import| {
let Ok(body) = parse_module(required_import).map(Parsed::into_suite) else {
error!("Failed to parse required import: `{}`", required_import);
return vec![];
};
if body.is_empty() || body.len() > 1 {
error!(
"Expected require import to contain a single statement: `{}`",
required_import
);
return vec![];
}
let stmt = &body[0];
match stmt {
Stmt::ImportFrom(ast::StmtImportFrom {
module,
names,
level,
range: _,
}) => names
.iter()
.filter_map(|name| {
add_required_import(
&AnyImport::ImportFrom(ImportFrom {
module: module.as_deref(),
name: Alias {
name: name.name.as_str(),
as_name: name.asname.as_deref(),
},
level: *level,
}),
parsed,
locator,
stylist,
source_type,
)
})
.collect(),
Stmt::Import(ast::StmtImport { names, range: _ }) => names
.iter()
.filter_map(|name| {
add_required_import(
&AnyImport::Import(Import {
name: Alias {
name: name.name.as_str(),
as_name: name.asname.as_deref(),
},
}),
parsed,
locator,
stylist,
source_type,
)
})
.collect(),
_ => {
error!(
"Expected required import to be in import-from style: `{}`",
required_import
);
vec![]
}
}
.filter_map(|required_import| {
add_required_import(required_import, parsed, locator, stylist, source_type)
})
.collect()
}
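The deleted block shows work that has moved into settings construction: a single required-import string such as `from __future__ import annotations, generator_stop` expanded into one required entry per imported name, on every lint run. A toy sketch of that per-name expansion (hypothetical helper over plain strings; Ruff now builds structured `NameImport` values when settings are parsed):

```rust
// Sketch of the expansion the deleted code performed at lint time: one
// `from` import listing several names becomes one required entry per
// name. (Toy helper, illustrative only.)
fn expand_from_import(module: &str, members: &[&str]) -> Vec<String> {
    members
        .iter()
        .map(|member| format!("from {module} import {member}"))
        .collect()
}

fn main() {
    for entry in expand_from_import("__future__", &["annotations", "generator_stop"]) {
        println!("{entry}");
    }
}
```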


@@ -9,11 +9,11 @@ use rustc_hash::FxHashSet;
use serde::{Deserialize, Serialize};
use strum::IntoEnumIterator;
use ruff_macros::CacheKey;
use crate::display_settings;
use crate::rules::isort::categorize::KnownModules;
use crate::rules::isort::ImportType;
use ruff_macros::CacheKey;
use ruff_python_semantic::NameImport;
use super::categorize::ImportSection;
@@ -47,7 +47,7 @@ impl Display for RelativeImportsOrder {
#[derive(Debug, Clone, CacheKey)]
#[allow(clippy::struct_excessive_bools)]
pub struct Settings {
pub required_imports: BTreeSet<String>,
pub required_imports: BTreeSet<NameImport>,
pub combine_as_imports: bool,
pub force_single_line: bool,
pub force_sort_within_sections: bool,


@@ -1,16 +0,0 @@
---
source: crates/ruff_linter/src/rules/isort/mod.rs
---
docstring.py:1:1: I002 [*] Missing required import: `from __future__ import annotations`
Safe fix
1 1 | """Hello, world!"""
2 |+from __future__ import annotations
2 3 |
3 4 | x = 1
docstring.py:1:1: I002 [*] Missing required import: `from __future__ import generator_stop`
Safe fix
1 1 | """Hello, world!"""
2 |+from __future__ import generator_stop
2 3 |
3 4 | x = 1


@@ -1,4 +0,0 @@
---
source: crates/ruff_linter/src/rules/isort/mod.rs
---


@@ -1,4 +0,0 @@
---
source: crates/ruff_linter/src/rules/isort/mod.rs
---


@@ -1,4 +0,0 @@
---
source: crates/ruff_linter/src/rules/isort/mod.rs
---


@@ -0,0 +1,40 @@
---
source: crates/ruff_linter/src/rules/isort/mod.rs
---
unused.py:5:8: F401 [*] `sys` imported but unused
|
4 | # Unused, _not_ marked as required.
5 | import sys
| ^^^ F401
6 |
7 | # Unused, _not_ marked as required (due to the alias).
|
= help: Remove unused import: `sys`
Safe fix
2 2 | import os
3 3 |
4 4 | # Unused, _not_ marked as required.
5 |-import sys
6 5 |
7 6 | # Unused, _not_ marked as required (due to the alias).
8 7 | import pathlib as non_alias
unused.py:8:19: F401 [*] `pathlib` imported but unused
|
7 | # Unused, _not_ marked as required (due to the alias).
8 | import pathlib as non_alias
| ^^^^^^^^^ F401
9 |
10 | # Unused, marked as required.
|
= help: Remove unused import: `pathlib`
Safe fix
5 5 | import sys
6 6 |
7 7 | # Unused, _not_ marked as required (due to the alias).
8 |-import pathlib as non_alias
9 8 |
10 9 | # Unused, marked as required.
11 10 | import shelve as alias


@@ -4,6 +4,7 @@ use std::path::Path;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_stdlib::identifiers::{is_migration_name, is_module_name};
use ruff_python_stdlib::path::is_module_file;
use ruff_text_size::TextRange;
use crate::rules::pep8_naming::settings::IgnoreNames;
@@ -92,16 +93,6 @@ pub(crate) fn invalid_module_name(
None
}
/// Return `true` if a [`Path`] should use the name of its parent directory as its module name.
fn is_module_file(path: &Path) -> bool {
path.file_name().is_some_and(|file_name| {
file_name == "__init__.py"
|| file_name == "__init__.pyi"
|| file_name == "__main__.py"
|| file_name == "__main__.pyi"
})
}
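Per the new import at the top of this file, the removed helper now lives in `ruff_python_stdlib::path` as `is_module_file`. Its logic is small enough to reproduce as a standalone sketch: package entry-point files take their module name from the parent directory.

```rust
use std::path::Path;

// Sketch of the helper the diff relocates into `ruff_python_stdlib`:
// a file uses its parent directory's name as its module name when it
// is a package entry point.
fn is_module_file(path: &Path) -> bool {
    path.file_name().is_some_and(|file_name| {
        file_name == "__init__.py"
            || file_name == "__init__.pyi"
            || file_name == "__main__.py"
            || file_name == "__main__.pyi"
    })
}

fn main() {
    println!("{}", is_module_file(Path::new("pkg/__init__.py"))); // true
    println!("{}", is_module_file(Path::new("pkg/utils.py"))); // false
}
```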
/// Return `true` if a [`Path`] refers to a migration file.
fn is_migration_file(path: &Path) -> bool {
path.parent()


@@ -26,6 +26,8 @@ mod tests {
Ok(())
}
#[test_case(Rule::DocstringMissingReturns, Path::new("DOC201_google.py"))]
#[test_case(Rule::DocstringExtraneousReturns, Path::new("DOC202_google.py"))]
#[test_case(Rule::DocstringMissingException, Path::new("DOC501_google.py"))]
#[test_case(Rule::DocstringExtraneousException, Path::new("DOC502_google.py"))]
fn rules_google_style(rule_code: Rule, path: &Path) -> Result<()> {
@@ -45,6 +47,8 @@ mod tests {
Ok(())
}
#[test_case(Rule::DocstringMissingReturns, Path::new("DOC201_numpy.py"))]
#[test_case(Rule::DocstringExtraneousReturns, Path::new("DOC202_numpy.py"))]
#[test_case(Rule::DocstringMissingException, Path::new("DOC501_numpy.py"))]
#[test_case(Rule::DocstringExtraneousException, Path::new("DOC502_numpy.py"))]
fn rules_numpy_style(rule_code: Rule, path: &Path) -> Result<()> {


@@ -3,17 +3,105 @@ use ruff_diagnostics::Diagnostic;
use ruff_diagnostics::Violation;
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::name::QualifiedName;
use ruff_python_ast::visitor::{self, Visitor};
use ruff_python_ast::{self as ast, Expr, Stmt};
use ruff_python_ast::statement_visitor::StatementVisitor;
use ruff_python_ast::{self as ast, statement_visitor, Expr, Stmt};
use ruff_python_semantic::{Definition, MemberKind, SemanticModel};
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;
use crate::docstrings::sections::{SectionContexts, SectionKind};
use crate::docstrings::sections::{SectionContext, SectionContexts, SectionKind};
use crate::docstrings::styles::SectionStyle;
use crate::registry::Rule;
use crate::rules::pydocstyle::settings::Convention;
/// ## What it does
/// Checks for functions with explicit returns missing a returns section in
/// their docstring.
///
/// ## Why is this bad?
/// Docstrings missing return sections are a sign of incomplete documentation
/// or refactors.
///
/// ## Example
/// ```python
/// def calculate_speed(distance: float, time: float) -> float:
/// """Calculate speed as distance divided by time.
///
/// Args:
/// distance: Distance traveled.
/// time: Time spent traveling.
/// """
/// return distance / time
/// ```
///
/// Use instead:
/// ```python
/// def calculate_speed(distance: float, time: float) -> float:
/// """Calculate speed as distance divided by time.
///
/// Args:
/// distance: Distance traveled.
/// time: Time spent traveling.
///
/// Returns:
/// Speed as distance divided by time.
/// """
/// return distance / time
/// ```
#[violation]
pub struct DocstringMissingReturns;
impl Violation for DocstringMissingReturns {
#[derive_message_formats]
fn message(&self) -> String {
format!("`return` is not documented in docstring")
}
}
/// ## What it does
/// Checks for function docstrings that have a returns section without
/// needing one.
///
/// ## Why is this bad?
/// Functions without an explicit return should not have a returns section
/// in their docstrings.
///
/// ## Example
/// ```python
/// def say_hello(n: int) -> None:
/// """Says hello to the user.
///
/// Args:
/// n: Number of times to say hello.
///
/// Returns:
/// Doesn't return anything.
/// """
/// for _ in range(n):
/// print("Hello!")
/// ```
///
/// Use instead:
/// ```python
/// def say_hello(n: int) -> None:
/// """Says hello to the user.
///
/// Args:
/// n: Number of times to say hello.
/// """
/// for _ in range(n):
/// print("Hello!")
/// ```
#[violation]
pub struct DocstringExtraneousReturns;
impl Violation for DocstringExtraneousReturns {
#[derive_message_formats]
fn message(&self) -> String {
format!("Docstring should not have a returns section because the function doesn't return anything")
}
}
/// ## What it does
/// Checks for function docstrings that do not include documentation for all
/// explicitly-raised exceptions.
@@ -135,31 +223,68 @@ impl Violation for DocstringExtraneousException {
}
}
// A generic docstring section.
#[derive(Debug)]
struct DocstringEntries<'a> {
raised_exceptions: Vec<QualifiedName<'a>>,
raised_exceptions_range: TextRange,
struct GenericSection {
range: TextRange,
}
impl<'a> DocstringEntries<'a> {
/// Return the raised exceptions for the docstring, or `None` if the docstring does not contain
/// a `Raises` section.
fn from_sections(sections: &'a SectionContexts, style: SectionStyle) -> Option<Self> {
for section in sections.iter() {
if section.kind() == SectionKind::Raises {
return Some(Self {
raised_exceptions: parse_entries(section.following_lines_str(), style),
raised_exceptions_range: section.range(),
});
}
}
None
impl Ranged for GenericSection {
fn range(&self) -> TextRange {
self.range
}
}
impl Ranged for DocstringEntries<'_> {
impl GenericSection {
fn from_section(section: &SectionContext) -> Self {
Self {
range: section.range(),
}
}
}
// A Raises docstring section.
#[derive(Debug)]
struct RaisesSection<'a> {
raised_exceptions: Vec<QualifiedName<'a>>,
range: TextRange,
}
impl Ranged for RaisesSection<'_> {
fn range(&self) -> TextRange {
self.raised_exceptions_range
self.range
}
}
impl<'a> RaisesSection<'a> {
/// Return the raised exceptions for the docstring, or `None` if the docstring does not contain
/// a `Raises` section.
fn from_section(section: &SectionContext<'a>, style: SectionStyle) -> Self {
Self {
raised_exceptions: parse_entries(section.following_lines_str(), style),
range: section.range(),
}
}
}
#[derive(Debug)]
struct DocstringSections<'a> {
returns: Option<GenericSection>,
raises: Option<RaisesSection<'a>>,
}
impl<'a> DocstringSections<'a> {
fn from_sections(sections: &'a SectionContexts, style: SectionStyle) -> Self {
let mut returns: Option<GenericSection> = None;
let mut raises: Option<RaisesSection> = None;
for section in sections.iter() {
match section.kind() {
SectionKind::Raises => raises = Some(RaisesSection::from_section(&section, style)),
SectionKind::Returns => returns = Some(GenericSection::from_section(&section)),
_ => continue,
}
}
Self { returns, raises }
}
}
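`DocstringSections::from_sections` replaces the old `Raises`-only scan with a single pass that remembers both the `Returns` and `Raises` sections. A toy version of that pass (simplified kinds, with section indices standing in for Ruff's section types):

```rust
// Toy version of `DocstringSections::from_sections`: one loop over the
// parsed docstring sections, recording `Returns` and `Raises` when seen.
#[derive(Debug)]
enum SectionKind {
    Args,
    Returns,
    Raises,
}

#[derive(Debug, Default)]
struct DocstringSections {
    returns: Option<usize>, // index of the `Returns` section, if any
    raises: Option<usize>,  // index of the `Raises` section, if any
}

fn collect_sections(kinds: &[SectionKind]) -> DocstringSections {
    let mut out = DocstringSections::default();
    for (index, kind) in kinds.iter().enumerate() {
        match kind {
            SectionKind::Returns => out.returns = Some(index),
            SectionKind::Raises => out.raises = Some(index),
            _ => continue, // other sections are irrelevant to DOC2xx/DOC5xx
        }
    }
    out
}

fn main() {
    let sections =
        collect_sections(&[SectionKind::Args, SectionKind::Returns, SectionKind::Raises]);
    println!("{sections:?}");
}
```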
@@ -219,34 +344,49 @@ fn parse_entries_numpy(content: &str) -> Vec<QualifiedName> {
entries
}
/// An individual exception raised in a function body.
/// An individual documentable statement in a function body.
#[derive(Debug)]
struct Entry<'a> {
qualified_name: QualifiedName<'a>,
struct Entry {
range: TextRange,
}
impl Ranged for Entry<'_> {
impl Ranged for Entry {
fn range(&self) -> TextRange {
self.range
}
}
/// The exceptions raised in a function body.
/// An individual exception raised in a function body.
#[derive(Debug)]
struct BodyEntries<'a> {
raised_exceptions: Vec<Entry<'a>>,
struct ExceptionEntry<'a> {
qualified_name: QualifiedName<'a>,
range: TextRange,
}
/// An AST visitor to extract the raised exceptions from a function body.
impl Ranged for ExceptionEntry<'_> {
fn range(&self) -> TextRange {
self.range
}
}
/// A summary of documentable statements from the function body
#[derive(Debug)]
struct BodyEntries<'a> {
returns: Vec<Entry>,
raised_exceptions: Vec<ExceptionEntry<'a>>,
}
/// An AST visitor to extract a summary of documentable statements from a function body.
struct BodyVisitor<'a> {
raised_exceptions: Vec<Entry<'a>>,
returns: Vec<Entry>,
raised_exceptions: Vec<ExceptionEntry<'a>>,
semantic: &'a SemanticModel<'a>,
}
impl<'a> BodyVisitor<'a> {
fn new(semantic: &'a SemanticModel) -> Self {
Self {
returns: Vec::new(),
raised_exceptions: Vec::new(),
semantic,
}
@@ -254,22 +394,35 @@ impl<'a> BodyVisitor<'a> {
fn finish(self) -> BodyEntries<'a> {
BodyEntries {
returns: self.returns,
raised_exceptions: self.raised_exceptions,
}
}
}
impl<'a> Visitor<'a> for BodyVisitor<'a> {
impl<'a> StatementVisitor<'a> for BodyVisitor<'a> {
fn visit_stmt(&mut self, stmt: &'a Stmt) {
if let Stmt::Raise(ast::StmtRaise { exc: Some(exc), .. }) = stmt {
if let Some(qualified_name) = extract_raised_exception(self.semantic, exc.as_ref()) {
self.raised_exceptions.push(Entry {
qualified_name,
range: exc.as_ref().range(),
});
match stmt {
Stmt::Raise(ast::StmtRaise { exc: Some(exc), .. }) => {
if let Some(qualified_name) = extract_raised_exception(self.semantic, exc.as_ref())
{
self.raised_exceptions.push(ExceptionEntry {
qualified_name,
range: exc.as_ref().range(),
});
}
}
Stmt::Return(ast::StmtReturn {
range,
value: Some(_),
}) => {
self.returns.push(Entry { range: *range });
}
Stmt::FunctionDef(_) | Stmt::ClassDef(_) => return,
_ => {}
}
visitor::walk_stmt(self, stmt);
statement_visitor::walk_stmt(self, stmt);
}
}
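Two details of the visitor above are easy to miss: only `return` statements that carry a value count as documentable, and traversal stops at nested `FunctionDef`/`ClassDef` so an inner function's `return` is never attributed to the outer one. A toy AST sketch of that traversal (not Ruff's real `StatementVisitor`):

```rust
// Sketch of the `BodyVisitor` return-collection rules over a toy AST.
#[derive(Debug)]
enum Stmt {
    Return { has_value: bool },
    Raise,
    FunctionDef { body: Vec<Stmt> }, // nested scope: do not descend
    Other { body: Vec<Stmt> },       // e.g. `if`/`for`: do descend
}

fn collect_returns(body: &[Stmt], returns: &mut usize) {
    for stmt in body {
        match stmt {
            // Only `return <expr>` is documentable; bare `return` is not.
            Stmt::Return { has_value: true } => *returns += 1,
            // Stop at nested function scopes, mirroring the early
            // `return` on `FunctionDef`/`ClassDef` in the visitor.
            Stmt::FunctionDef { .. } => continue,
            Stmt::Other { body } => collect_returns(body, returns),
            _ => {}
        }
    }
}

fn main() {
    let body = vec![
        Stmt::Other { body: vec![Stmt::Return { has_value: true }, Stmt::Raise] },
        Stmt::FunctionDef { body: vec![Stmt::Return { has_value: true }] },
        Stmt::Return { has_value: false }, // bare `return`
    ];
    let mut returns = 0;
    collect_returns(&body, &mut returns);
    println!("documentable returns: {returns}"); // 1: the nested def's return is skipped
}
```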
@@ -286,7 +439,28 @@ fn extract_raised_exception<'a>(
None
}
/// DOC501, DOC502
// Checks if a function has a `@property` decorator
fn is_property(definition: &Definition, checker: &Checker) -> bool {
let Some(function) = definition.as_function_def() else {
return false;
};
let Some(last_decorator) = function.decorator_list.last() else {
return false;
};
checker
.semantic()
.resolve_qualified_name(&last_decorator.expression)
.is_some_and(|qualified_name| {
matches!(
qualified_name.segments(),
["", "property"] | ["functools", "cached_property"]
)
})
}
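The check above resolves the last decorator to a qualified name and compares its segments; the empty leading segment denotes the builtin namespace. The comparison itself, extracted as a standalone sketch (plain string slices in place of Ruff's `QualifiedName`):

```rust
// Sketch of the segment match in `is_property`: the resolved decorator
// must be the builtin `property` or `functools.cached_property`.
fn is_property_decorator(segments: &[&str]) -> bool {
    matches!(segments, ["", "property"] | ["functools", "cached_property"])
}

fn main() {
    println!("{}", is_property_decorator(&["", "property"])); // true
    println!("{}", is_property_decorator(&["functools", "cached_property"])); // true
    println!("{}", is_property_decorator(&["abc", "abstractmethod"])); // false
}
```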
/// DOC201, DOC202, DOC501, DOC502
pub(crate) fn check_docstring(
checker: &mut Checker,
definition: &Definition,
@@ -307,22 +481,43 @@ pub(crate) fn check_docstring(
}
// Prioritize the specified convention over the determined style.
let docstring_entries = match convention {
let docstring_sections = match convention {
Some(Convention::Google) => {
DocstringEntries::from_sections(section_contexts, SectionStyle::Google)
DocstringSections::from_sections(section_contexts, SectionStyle::Google)
}
Some(Convention::Numpy) => {
DocstringEntries::from_sections(section_contexts, SectionStyle::Numpy)
DocstringSections::from_sections(section_contexts, SectionStyle::Numpy)
}
_ => DocstringEntries::from_sections(section_contexts, section_contexts.style()),
_ => DocstringSections::from_sections(section_contexts, section_contexts.style()),
};
let body_entries = {
let mut visitor = BodyVisitor::new(checker.semantic());
visitor::walk_body(&mut visitor, member.body());
visitor.visit_body(member.body());
visitor.finish()
};
// DOC201
if checker.enabled(Rule::DocstringMissingReturns) {
if !is_property(definition, checker) && docstring_sections.returns.is_none() {
if let Some(body_return) = body_entries.returns.first() {
let diagnostic = Diagnostic::new(DocstringMissingReturns, body_return.range());
diagnostics.push(diagnostic);
}
}
}
// DOC202
if checker.enabled(Rule::DocstringExtraneousReturns) {
if let Some(docstring_returns) = docstring_sections.returns {
if body_entries.returns.is_empty() {
let diagnostic =
Diagnostic::new(DocstringExtraneousReturns, docstring_returns.range());
diagnostics.push(diagnostic);
}
}
}
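The DOC201/DOC202 dispatch above reduces to two small predicates over three facts: whether the function is a property, whether the docstring has a `Returns` section, and how many value-carrying `return` statements the body has. A condensed sketch (booleans and a count standing in for Ruff's section and body types):

```rust
// DOC201: the body returns a value, the docstring has no `Returns`
// section, and the function is not a property (properties are exempt).
fn doc201_fires(is_property: bool, has_returns_section: bool, body_return_count: usize) -> bool {
    !is_property && !has_returns_section && body_return_count > 0
}

// DOC202: the docstring has a `Returns` section, but the body never
// returns a value.
fn doc202_fires(has_returns_section: bool, body_return_count: usize) -> bool {
    has_returns_section && body_return_count == 0
}

fn main() {
    println!("{}", doc201_fires(false, false, 1)); // true
    println!("{}", doc201_fires(true, false, 1)); // false: property exemption
    println!("{}", doc202_fires(true, 0)); // true
    println!("{}", doc202_fires(true, 2)); // false
}
```

Note that the two rules can never fire together on the same function: one requires the `Returns` section to be absent, the other requires it to be present.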
// DOC501
if checker.enabled(Rule::DocstringMissingException) {
for body_raise in &body_entries.raised_exceptions {
@@ -334,8 +529,8 @@ pub(crate) fn check_docstring(
continue;
}
if !docstring_entries.as_ref().is_some_and(|entries| {
entries.raised_exceptions.iter().any(|exception| {
if !docstring_sections.raises.as_ref().is_some_and(|section| {
section.raised_exceptions.iter().any(|exception| {
body_raise
.qualified_name
.segments()
@@ -355,9 +550,9 @@ pub(crate) fn check_docstring(
// DOC502
if checker.enabled(Rule::DocstringExtraneousException) {
if let Some(docstring_entries) = docstring_entries {
if let Some(docstring_raises) = docstring_sections.raises {
let mut extraneous_exceptions = Vec::new();
for docstring_raise in &docstring_entries.raised_exceptions {
for docstring_raise in &docstring_raises.raised_exceptions {
if !body_entries.raised_exceptions.iter().any(|exception| {
exception
.qualified_name
@@ -372,7 +567,7 @@ pub(crate) fn check_docstring(
DocstringExtraneousException {
ids: extraneous_exceptions,
},
docstring_entries.range(),
docstring_raises.range(),
);
diagnostics.push(diagnostic);
}


@@ -0,0 +1,24 @@
---
source: crates/ruff_linter/src/rules/pydoclint/mod.rs
---
DOC202_google.py:20:1: DOC202 Docstring should not have a returns section because the function doesn't return anything
|
18 | num (int): A number
19 |
20 | / Returns:
21 | | str: A string
22 | | """
| |____^ DOC202
23 | print('test')
|
DOC202_google.py:36:1: DOC202 Docstring should not have a returns section because the function doesn't return anything
|
34 | num (int): A number
35 |
36 | / Returns:
37 | | str: A string
38 | | """
| |________^ DOC202
39 | print('test')
|


@@ -0,0 +1,28 @@
---
source: crates/ruff_linter/src/rules/pydoclint/mod.rs
---
DOC202_numpy.py:24:1: DOC202 Docstring should not have a returns section because the function doesn't return anything
|
22 | A number
23 |
24 | / Returns
25 | | -------
26 | | str
27 | | A string
28 | | """
| |____^ DOC202
29 | print('test')
|
DOC202_numpy.py:44:1: DOC202 Docstring should not have a returns section because the function doesn't return anything
|
42 | A number
43 |
44 | / Returns
45 | | -------
46 | | str
47 | | A string
48 | | """
| |________^ DOC202
49 | print('test')
|


@@ -0,0 +1,28 @@
---
source: crates/ruff_linter/src/rules/pydoclint/mod.rs
---
DOC201_google.py:9:5: DOC201 `return` is not documented in docstring
|
7 | num (int): A number
8 | """
9 | return 'test'
| ^^^^^^^^^^^^^ DOC201
|
DOC201_google.py:50:9: DOC201 `return` is not documented in docstring
|
48 | num (int): A number
49 | """
50 | return 'test'
| ^^^^^^^^^^^^^ DOC201
|
DOC201_google.py:71:9: DOC201 `return` is not documented in docstring
|
69 | def nested():
70 | """Do something nested."""
71 | return 5
| ^^^^^^^^ DOC201
72 |
73 | print("I never return")
|
