Compare commits
1 Commits
split-comp
...
cjm/phis
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
bccc33c6fe |
@@ -1,43 +1,5 @@
|
||||
# Breaking Changes
|
||||
|
||||
## 0.6.0
|
||||
|
||||
- Detect imports in `src` layouts by default for `isort` rules ([#12848](https://github.com/astral-sh/ruff/pull/12848))
|
||||
|
||||
- The pytest rules `PT001` and `PT023` now default to omitting the decorator parentheses when there are no arguments ([#12838](https://github.com/astral-sh/ruff/pull/12838)).
|
||||
|
||||
- Lint and format Jupyter Notebook by default ([#12878](https://github.com/astral-sh/ruff/pull/12878)).
|
||||
|
||||
You can disable specific rules for notebooks using [`per-file-ignores`](https://docs.astral.sh/ruff/settings/#lint_per-file-ignores):
|
||||
|
||||
```toml
|
||||
[tool.ruff.lint.per-file-ignores]
|
||||
"*.ipynb" = ["E501"] # disable line-too-long in notebooks
|
||||
```
|
||||
|
||||
If you'd prefer to either only lint or only format Jupyter Notebook files, you can use the
|
||||
section-specific `exclude` option to do so. For example, the following would only lint Jupyter
|
||||
Notebook files and not format them:
|
||||
|
||||
```toml
|
||||
[tool.ruff.format]
|
||||
exclude = ["*.ipynb"]
|
||||
```
|
||||
|
||||
And, conversely, the following would only format Jupyter Notebook files and not lint them:
|
||||
|
||||
```toml
|
||||
[tool.ruff.lint]
|
||||
exclude = ["*.ipynb"]
|
||||
```
|
||||
|
||||
You can completely disable Jupyter Notebook support by updating the [`extend-exclude`](https://docs.astral.sh/ruff/settings/#extend-exclude) setting:
|
||||
|
||||
```toml
|
||||
[tool.ruff]
|
||||
extend-exclude = ["*.ipynb"]
|
||||
```
|
||||
|
||||
## 0.5.0
|
||||
|
||||
- Follow the XDG specification to discover user-level configurations on macOS (same as on other Unix platforms)
|
||||
|
||||
91
CHANGELOG.md
91
CHANGELOG.md
@@ -1,96 +1,5 @@
|
||||
# Changelog
|
||||
|
||||
## 0.6.0
|
||||
|
||||
Check out the [blog post](https://astral.sh/blog/ruff-v0.6.0) for a migration guide and overview of the changes!
|
||||
|
||||
### Breaking changes
|
||||
|
||||
See also, the "Remapped rules" section which may result in disabled rules.
|
||||
|
||||
- Lint and format Jupyter Notebook by default ([#12878](https://github.com/astral-sh/ruff/pull/12878)).
|
||||
- Detect imports in `src` layouts by default for `isort` rules ([#12848](https://github.com/astral-sh/ruff/pull/12848))
|
||||
- The pytest rules `PT001` and `PT023` now default to omitting the decorator parentheses when there are no arguments ([#12838](https://github.com/astral-sh/ruff/pull/12838)).
|
||||
|
||||
### Deprecations
|
||||
|
||||
The following rules are now deprecated:
|
||||
|
||||
- [`pytest-missing-fixture-name-underscore`](https://docs.astral.sh/ruff/rules/pytest-missing-fixture-name-underscore/) (`PT004`)
|
||||
- [`pytest-incorrect-fixture-name-underscore`](https://docs.astral.sh/ruff/rules/pytest-incorrect-fixture-name-underscore/) (`PT005`)
|
||||
- [`unpacked-list-comprehension`](https://docs.astral.sh/ruff/rules/unpacked-list-comprehension/) (`UP027`)
|
||||
|
||||
### Remapped rules
|
||||
|
||||
The following rules have been remapped to new rule codes:
|
||||
|
||||
- [`unnecessary-dict-comprehension-for-iterable`](https://docs.astral.sh/ruff/rules/unnecessary-dict-comprehension-for-iterable/): `RUF025` to `C420`
|
||||
|
||||
### Stabilization
|
||||
|
||||
The following rules have been stabilized and are no longer in preview:
|
||||
|
||||
- [`singledispatch-method`](https://docs.astral.sh/ruff/rules/singledispatch-method/) (`PLE1519`)
|
||||
- [`singledispatchmethod-function`](https://docs.astral.sh/ruff/rules/singledispatchmethod-function/) (`PLE1520`)
|
||||
- [`bad-staticmethod-argument`](https://docs.astral.sh/ruff/rules/bad-staticmethod-argument/) (`PLW0211`)
|
||||
- [`if-stmt-min-max`](https://docs.astral.sh/ruff/rules/if-stmt-min-max/) (`PLR1730`)
|
||||
- [`invalid-bytes-return-type`](https://docs.astral.sh/ruff/rules/invalid-bytes-return-type/) (`PLE0308`)
|
||||
- [`invalid-hash-return-type`](https://docs.astral.sh/ruff/rules/invalid-hash-return-type/) (`PLE0309`)
|
||||
- [`invalid-index-return-type`](https://docs.astral.sh/ruff/rules/invalid-index-return-type/) (`PLE0305`)
|
||||
- [`invalid-length-return-type`](https://docs.astral.sh/ruff/rules/invalid-length-return-type/) (`E303`)
|
||||
- [`self-or-cls-assignment`](https://docs.astral.sh/ruff/rules/self-or-cls-assignment/) (`PLW0642`)
|
||||
- [`byte-string-usage`](https://docs.astral.sh/ruff/rules/byte-string-usage/) (`PYI057`)
|
||||
- [`duplicate-literal-member`](https://docs.astral.sh/ruff/rules/duplicate-literal-member/) (`PYI062`)
|
||||
- [`redirected-noqa`](https://docs.astral.sh/ruff/rules/redirected-noqa/) (`RUF101`)
|
||||
|
||||
The following behaviors have been stabilized:
|
||||
|
||||
- [`cancel-scope-no-checkpoint`](https://docs.astral.sh/ruff/rules/cancel-scope-no-checkpoint/) (`ASYNC100`): Support `asyncio` and `anyio` context mangers.
|
||||
- [`async-function-with-timeout`](https://docs.astral.sh/ruff/rules/async-function-with-timeout/) (`ASYNC109`): Support `asyncio` and `anyio` context mangers.
|
||||
- [`async-busy-wait`](https://docs.astral.sh/ruff/rules/async-busy-wait/) (`ASYNC110`): Support `asyncio` and `anyio` context mangers.
|
||||
- [`async-zero-sleep`](https://docs.astral.sh/ruff/rules/async-zero-sleep/) (`ASYNC115`): Support `anyio` context mangers.
|
||||
- [`long-sleep-not-forever`](https://docs.astral.sh/ruff/rules/long-sleep-not-forever/) (`ASYNC116`): Support `anyio` context mangers.
|
||||
|
||||
The following fixes have been stabilized:
|
||||
|
||||
- [`superfluous-else-return`](https://docs.astral.sh/ruff/rules/superfluous-else-return/) (`RET505`)
|
||||
- [`superfluous-else-raise`](https://docs.astral.sh/ruff/rules/superfluous-else-raise/) (`RET506`)
|
||||
- [`superfluous-else-continue`](https://docs.astral.sh/ruff/rules/superfluous-else-continue/) (`RET507`)
|
||||
- [`superfluous-else-break`](https://docs.astral.sh/ruff/rules/superfluous-else-break/) (`RET508`)
|
||||
|
||||
### Preview features
|
||||
|
||||
- \[`flake8-simplify`\] Further simplify to binary in preview for (`SIM108`) ([#12796](https://github.com/astral-sh/ruff/pull/12796))
|
||||
- \[`pyupgrade`\] Show violations without auto-fix (`UP031`) ([#11229](https://github.com/astral-sh/ruff/pull/11229))
|
||||
|
||||
### Rule changes
|
||||
|
||||
- \[`flake8-import-conventions`\] Add `xml.etree.ElementTree` to default conventions ([#12455](https://github.com/astral-sh/ruff/pull/12455))
|
||||
- \[`flake8-pytest-style`\] Add a space after comma in CSV output (`PT006`) ([#12853](https://github.com/astral-sh/ruff/pull/12853))
|
||||
|
||||
### Server
|
||||
|
||||
- Show a message for incorrect settings ([#12781](https://github.com/astral-sh/ruff/pull/12781))
|
||||
|
||||
### Bug fixes
|
||||
|
||||
- \[`flake8-async`\] Do not lint yield in context manager (`ASYNC100`) ([#12896](https://github.com/astral-sh/ruff/pull/12896))
|
||||
- \[`flake8-comprehensions`\] Do not lint `async for` comprehensions (`C419`) ([#12895](https://github.com/astral-sh/ruff/pull/12895))
|
||||
- \[`flake8-return`\] Only add return `None` at end of a function (`RET503`) ([#11074](https://github.com/astral-sh/ruff/pull/11074))
|
||||
- \[`flake8-type-checking`\] Avoid treating `dataclasses.KW_ONLY` as typing-only (`TCH003`) ([#12863](https://github.com/astral-sh/ruff/pull/12863))
|
||||
- \[`pep8-naming`\] Treat `type(Protocol)` et al as metaclass base (`N805`) ([#12770](https://github.com/astral-sh/ruff/pull/12770))
|
||||
- \[`pydoclint`\] Don't enforce returns and yields in abstract methods (`DOC201`, `DOC202`) ([#12771](https://github.com/astral-sh/ruff/pull/12771))
|
||||
- \[`ruff`\] Skip tuples with slice expressions in (`RUF031`) ([#12768](https://github.com/astral-sh/ruff/pull/12768))
|
||||
- \[`ruff`\] Ignore unparenthesized tuples in subscripts when the subscript is a type annotation or type alias (`RUF031`) ([#12762](https://github.com/astral-sh/ruff/pull/12762))
|
||||
- \[`ruff`\] Ignore template strings passed to logging and `builtins._()` calls (`RUF027`) ([#12889](https://github.com/astral-sh/ruff/pull/12889))
|
||||
- \[`ruff`\] Do not remove parens for tuples with starred expressions in Python \<=3.10 (`RUF031`) ([#12784](https://github.com/astral-sh/ruff/pull/12784))
|
||||
- Evaluate default parameter values for a function in that function's enclosing scope ([#12852](https://github.com/astral-sh/ruff/pull/12852))
|
||||
|
||||
### Other changes
|
||||
|
||||
- Respect VS Code cell metadata when detecting the language of Jupyter Notebook cells ([#12864](https://github.com/astral-sh/ruff/pull/12864))
|
||||
- Respect `kernelspec` notebook metadata when detecting the preferred language for a Jupyter Notebook ([#12875](https://github.com/astral-sh/ruff/pull/12875))
|
||||
|
||||
## 0.5.7
|
||||
|
||||
### Preview features
|
||||
|
||||
7
Cargo.lock
generated
7
Cargo.lock
generated
@@ -1904,6 +1904,7 @@ dependencies = [
|
||||
"ruff_text_size",
|
||||
"rustc-hash 2.0.0",
|
||||
"salsa",
|
||||
"smallvec",
|
||||
"tempfile",
|
||||
"tracing",
|
||||
"walkdir",
|
||||
@@ -2060,7 +2061,7 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "ruff"
|
||||
version = "0.6.0"
|
||||
version = "0.5.7"
|
||||
dependencies = [
|
||||
"anyhow",
|
||||
"argfile",
|
||||
@@ -2252,7 +2253,7 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "ruff_linter"
|
||||
version = "0.6.0"
|
||||
version = "0.5.7"
|
||||
dependencies = [
|
||||
"aho-corasick",
|
||||
"annotate-snippets 0.9.2",
|
||||
@@ -2572,7 +2573,7 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "ruff_wasm"
|
||||
version = "0.6.0"
|
||||
version = "0.5.7"
|
||||
dependencies = [
|
||||
"console_error_panic_hook",
|
||||
"console_log",
|
||||
|
||||
@@ -136,8 +136,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
|
||||
powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
|
||||
|
||||
# For a specific version.
|
||||
curl -LsSf https://astral.sh/ruff/0.6.0/install.sh | sh
|
||||
powershell -c "irm https://astral.sh/ruff/0.6.0/install.ps1 | iex"
|
||||
curl -LsSf https://astral.sh/ruff/0.5.7/install.sh | sh
|
||||
powershell -c "irm https://astral.sh/ruff/0.5.7/install.ps1 | iex"
|
||||
```
|
||||
|
||||
You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
|
||||
@@ -170,7 +170,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
|
||||
```yaml
|
||||
- repo: https://github.com/astral-sh/ruff-pre-commit
|
||||
# Ruff version.
|
||||
rev: v0.6.0
|
||||
rev: v0.5.7
|
||||
hooks:
|
||||
# Run the linter.
|
||||
- id: ruff
|
||||
|
||||
@@ -26,6 +26,7 @@ countme = { workspace = true }
|
||||
once_cell = { workspace = true }
|
||||
ordermap = { workspace = true }
|
||||
salsa = { workspace = true }
|
||||
smallvec = { workspace = true }
|
||||
tracing = { workspace = true }
|
||||
rustc-hash = { workspace = true }
|
||||
hashbrown = { workspace = true }
|
||||
|
||||
@@ -126,7 +126,7 @@ impl<'db> SemanticIndex<'db> {
|
||||
///
|
||||
/// Use the Salsa cached [`use_def_map()`] query if you only need the
|
||||
/// use-def map for a single scope.
|
||||
pub(super) fn use_def_map(&self, scope_id: FileScopeId) -> Arc<UseDefMap> {
|
||||
pub(super) fn use_def_map(&self, scope_id: FileScopeId) -> Arc<UseDefMap<'db>> {
|
||||
self.use_def_maps[scope_id].clone()
|
||||
}
|
||||
|
||||
@@ -311,7 +311,7 @@ mod tests {
|
||||
|
||||
use crate::db::tests::TestDb;
|
||||
use crate::semantic_index::ast_ids::HasScopedUseId;
|
||||
use crate::semantic_index::definition::DefinitionKind;
|
||||
use crate::semantic_index::definition::{DefinitionKind, DefinitionNode};
|
||||
use crate::semantic_index::symbol::{FileScopeId, Scope, ScopeKind, SymbolTable};
|
||||
use crate::semantic_index::{global_scope, semantic_index, symbol_table, use_def_map};
|
||||
use crate::Db;
|
||||
@@ -374,10 +374,11 @@ mod tests {
|
||||
let foo = global_table.symbol_id_by_name("foo").unwrap();
|
||||
|
||||
let use_def = use_def_map(&db, scope);
|
||||
let [definition] = use_def.public_definitions(foo) else {
|
||||
panic!("expected one definition");
|
||||
};
|
||||
assert!(matches!(definition.node(&db), DefinitionKind::Import(_)));
|
||||
let definition = use_def.public_definition(foo).unwrap();
|
||||
assert!(matches!(
|
||||
definition.kind(&db),
|
||||
DefinitionKind::Node(DefinitionNode::Import(_))
|
||||
));
|
||||
}
|
||||
|
||||
#[test]
|
||||
@@ -411,16 +412,16 @@ mod tests {
|
||||
);
|
||||
|
||||
let use_def = use_def_map(&db, scope);
|
||||
let [definition] = use_def.public_definitions(
|
||||
global_table
|
||||
.symbol_id_by_name("foo")
|
||||
.expect("symbol to exist"),
|
||||
) else {
|
||||
panic!("expected one definition");
|
||||
};
|
||||
let definition = use_def
|
||||
.public_definition(
|
||||
global_table
|
||||
.symbol_id_by_name("foo")
|
||||
.expect("symbol to exist"),
|
||||
)
|
||||
.unwrap();
|
||||
assert!(matches!(
|
||||
definition.node(&db),
|
||||
DefinitionKind::ImportFrom(_)
|
||||
definition.kind(&db),
|
||||
DefinitionKind::Node(DefinitionNode::ImportFrom(_))
|
||||
));
|
||||
}
|
||||
|
||||
@@ -438,14 +439,12 @@ mod tests {
|
||||
"a symbol used but not defined in a scope should have only the used flag"
|
||||
);
|
||||
let use_def = use_def_map(&db, scope);
|
||||
let [definition] =
|
||||
use_def.public_definitions(global_table.symbol_id_by_name("x").expect("symbol exists"))
|
||||
else {
|
||||
panic!("expected one definition");
|
||||
};
|
||||
let definition = use_def
|
||||
.public_definition(global_table.symbol_id_by_name("x").expect("symbol exists"))
|
||||
.unwrap();
|
||||
assert!(matches!(
|
||||
definition.node(&db),
|
||||
DefinitionKind::Assignment(_)
|
||||
definition.kind(&db),
|
||||
DefinitionKind::Node(DefinitionNode::Assignment(_))
|
||||
));
|
||||
}
|
||||
|
||||
@@ -477,14 +476,12 @@ y = 2
|
||||
assert_eq!(names(&class_table), vec!["x"]);
|
||||
|
||||
let use_def = index.use_def_map(class_scope_id);
|
||||
let [definition] =
|
||||
use_def.public_definitions(class_table.symbol_id_by_name("x").expect("symbol exists"))
|
||||
else {
|
||||
panic!("expected one definition");
|
||||
};
|
||||
let definition = use_def
|
||||
.public_definition(class_table.symbol_id_by_name("x").expect("symbol exists"))
|
||||
.unwrap();
|
||||
assert!(matches!(
|
||||
definition.node(&db),
|
||||
DefinitionKind::Assignment(_)
|
||||
definition.kind(&db),
|
||||
DefinitionKind::Node(DefinitionNode::Assignment(_))
|
||||
));
|
||||
}
|
||||
|
||||
@@ -515,16 +512,16 @@ y = 2
|
||||
assert_eq!(names(&function_table), vec!["x"]);
|
||||
|
||||
let use_def = index.use_def_map(function_scope_id);
|
||||
let [definition] = use_def.public_definitions(
|
||||
function_table
|
||||
.symbol_id_by_name("x")
|
||||
.expect("symbol exists"),
|
||||
) else {
|
||||
panic!("expected one definition");
|
||||
};
|
||||
let definition = use_def
|
||||
.public_definition(
|
||||
function_table
|
||||
.symbol_id_by_name("x")
|
||||
.expect("symbol exists"),
|
||||
)
|
||||
.unwrap();
|
||||
assert!(matches!(
|
||||
definition.node(&db),
|
||||
DefinitionKind::Assignment(_)
|
||||
definition.kind(&db),
|
||||
DefinitionKind::Node(DefinitionNode::Assignment(_))
|
||||
));
|
||||
}
|
||||
|
||||
@@ -594,10 +591,10 @@ y = 2
|
||||
let element_use_id =
|
||||
element.scoped_use_id(&db, comprehension_scope_id.to_scope_id(&db, file));
|
||||
|
||||
let [definition] = use_def.use_definitions(element_use_id) else {
|
||||
panic!("expected one definition")
|
||||
};
|
||||
let DefinitionKind::Comprehension(comprehension) = definition.node(&db) else {
|
||||
let definition = use_def.definition_for_use(element_use_id).unwrap();
|
||||
let DefinitionKind::Node(DefinitionNode::Comprehension(comprehension)) =
|
||||
definition.kind(&db)
|
||||
else {
|
||||
panic!("expected generator definition")
|
||||
};
|
||||
let ast::Comprehension { target, .. } = comprehension.node();
|
||||
@@ -611,7 +608,7 @@ y = 2
|
||||
/// the outer comprehension scope and the variables are correctly defined in the respective
|
||||
/// scopes.
|
||||
#[test]
|
||||
fn nested_generators() {
|
||||
fn nested_comprehensions() {
|
||||
let TestCase { db, file } = test_case(
|
||||
"
|
||||
[{x for x in iter2} for y in iter1]
|
||||
@@ -644,7 +641,7 @@ y = 2
|
||||
.child_scopes(comprehension_scope_id)
|
||||
.collect::<Vec<_>>()[..]
|
||||
else {
|
||||
panic!("expected one inner generator scope")
|
||||
panic!("expected one inner comprehension scope")
|
||||
};
|
||||
|
||||
assert_eq!(inner_comprehension_scope.kind(), ScopeKind::Comprehension);
|
||||
@@ -693,14 +690,17 @@ def func():
|
||||
assert_eq!(names(&func2_table), vec!["y"]);
|
||||
|
||||
let use_def = index.use_def_map(FileScopeId::global());
|
||||
let [definition] = use_def.public_definitions(
|
||||
global_table
|
||||
.symbol_id_by_name("func")
|
||||
.expect("symbol exists"),
|
||||
) else {
|
||||
panic!("expected one definition");
|
||||
};
|
||||
assert!(matches!(definition.node(&db), DefinitionKind::Function(_)));
|
||||
let definition = use_def
|
||||
.public_definition(
|
||||
global_table
|
||||
.symbol_id_by_name("func")
|
||||
.expect("symbol exists"),
|
||||
)
|
||||
.unwrap();
|
||||
assert!(matches!(
|
||||
definition.kind(&db),
|
||||
DefinitionKind::Node(DefinitionNode::Function(_))
|
||||
));
|
||||
}
|
||||
|
||||
#[test]
|
||||
@@ -800,10 +800,9 @@ class C[T]:
|
||||
};
|
||||
let x_use_id = x_use_expr_name.scoped_use_id(&db, scope);
|
||||
let use_def = use_def_map(&db, scope);
|
||||
let [definition] = use_def.use_definitions(x_use_id) else {
|
||||
panic!("expected one definition");
|
||||
};
|
||||
let DefinitionKind::Assignment(assignment) = definition.node(&db) else {
|
||||
let definition = use_def.definition_for_use(x_use_id).unwrap();
|
||||
let DefinitionKind::Node(DefinitionNode::Assignment(assignment)) = definition.kind(&db)
|
||||
else {
|
||||
panic!("should be an assignment definition")
|
||||
};
|
||||
let ast::Expr::NumberLiteral(ast::ExprNumberLiteral {
|
||||
|
||||
@@ -13,15 +13,15 @@ use crate::ast_node_ref::AstNodeRef;
|
||||
use crate::semantic_index::ast_ids::node_key::ExpressionNodeKey;
|
||||
use crate::semantic_index::ast_ids::AstIdsBuilder;
|
||||
use crate::semantic_index::definition::{
|
||||
AssignmentDefinitionNodeRef, ComprehensionDefinitionNodeRef, Definition, DefinitionNodeKey,
|
||||
DefinitionNodeRef, ImportFromDefinitionNodeRef,
|
||||
AssignmentDefinitionNodeRef, ComprehensionDefinitionNodeRef, Definition, DefinitionKind,
|
||||
DefinitionNodeKey, DefinitionNodeRef, ImportFromDefinitionNodeRef,
|
||||
};
|
||||
use crate::semantic_index::expression::Expression;
|
||||
use crate::semantic_index::symbol::{
|
||||
FileScopeId, NodeWithScopeKey, NodeWithScopeRef, Scope, ScopeId, ScopedSymbolId, SymbolFlags,
|
||||
SymbolTableBuilder,
|
||||
};
|
||||
use crate::semantic_index::use_def::{FlowSnapshot, UseDefMapBuilder};
|
||||
use crate::semantic_index::use_def::{BasicBlockId, UseDefMapBuilder};
|
||||
use crate::semantic_index::SemanticIndex;
|
||||
use crate::Db;
|
||||
|
||||
@@ -33,8 +33,8 @@ pub(super) struct SemanticIndexBuilder<'db> {
|
||||
scope_stack: Vec<FileScopeId>,
|
||||
/// The assignment we're currently visiting.
|
||||
current_assignment: Option<CurrentAssignment<'db>>,
|
||||
/// Flow states at each `break` in the current loop.
|
||||
loop_break_states: Vec<FlowSnapshot>,
|
||||
/// Basic block ending at each `break` in the current loop.
|
||||
loop_breaks: Vec<BasicBlockId>,
|
||||
|
||||
// Semantic Index fields
|
||||
scopes: IndexVec<FileScopeId, Scope>,
|
||||
@@ -56,7 +56,7 @@ impl<'db> SemanticIndexBuilder<'db> {
|
||||
module: parsed,
|
||||
scope_stack: Vec::new(),
|
||||
current_assignment: None,
|
||||
loop_break_states: vec![],
|
||||
loop_breaks: vec![],
|
||||
|
||||
scopes: IndexVec::new(),
|
||||
symbol_tables: IndexVec::new(),
|
||||
@@ -98,7 +98,8 @@ impl<'db> SemanticIndexBuilder<'db> {
|
||||
|
||||
let file_scope_id = self.scopes.push(scope);
|
||||
self.symbol_tables.push(SymbolTableBuilder::new());
|
||||
self.use_def_maps.push(UseDefMapBuilder::new());
|
||||
self.use_def_maps
|
||||
.push(UseDefMapBuilder::new(self.db, self.file, file_scope_id));
|
||||
let ast_id_scope = self.ast_ids.push(AstIdsBuilder::new());
|
||||
|
||||
#[allow(unsafe_code)]
|
||||
@@ -132,41 +133,50 @@ impl<'db> SemanticIndexBuilder<'db> {
|
||||
&mut self.symbol_tables[scope_id]
|
||||
}
|
||||
|
||||
fn current_use_def_map_mut(&mut self) -> &mut UseDefMapBuilder<'db> {
|
||||
fn current_use_def_map(&mut self) -> &mut UseDefMapBuilder<'db> {
|
||||
let scope_id = self.current_scope();
|
||||
&mut self.use_def_maps[scope_id]
|
||||
}
|
||||
|
||||
fn current_use_def_map(&self) -> &UseDefMapBuilder<'db> {
|
||||
let scope_id = self.current_scope();
|
||||
&self.use_def_maps[scope_id]
|
||||
}
|
||||
|
||||
fn current_ast_ids(&mut self) -> &mut AstIdsBuilder {
|
||||
let scope_id = self.current_scope();
|
||||
&mut self.ast_ids[scope_id]
|
||||
}
|
||||
|
||||
fn flow_snapshot(&self) -> FlowSnapshot {
|
||||
self.current_use_def_map().snapshot()
|
||||
/// Start a new basic block and return the previous block's ID.
|
||||
fn next_block(&mut self) -> BasicBlockId {
|
||||
self.current_use_def_map().next_block(/* sealed */ true)
|
||||
}
|
||||
|
||||
fn flow_restore(&mut self, state: FlowSnapshot) {
|
||||
self.current_use_def_map_mut().restore(state);
|
||||
/// Start a new unsealed basic block and return the previous block's ID.
|
||||
fn next_block_unsealed(&mut self) -> BasicBlockId {
|
||||
self.current_use_def_map().next_block(/* sealed */ false)
|
||||
}
|
||||
|
||||
fn flow_merge(&mut self, state: &FlowSnapshot) {
|
||||
self.current_use_def_map_mut().merge(state);
|
||||
/// Seal an unsealed basic block.
|
||||
fn seal_block(&mut self) {
|
||||
self.current_use_def_map().seal_current_block();
|
||||
}
|
||||
|
||||
/// Start a new basic block with the given block as predecessor.
|
||||
fn new_block_from(&mut self, predecessor: BasicBlockId) {
|
||||
self.current_use_def_map()
|
||||
.new_block_from(predecessor, /* sealed */ true);
|
||||
}
|
||||
|
||||
/// Add a predecessor to the current block.
|
||||
fn merge_block(&mut self, predecessor: BasicBlockId) {
|
||||
self.current_use_def_map().merge_block(predecessor);
|
||||
}
|
||||
|
||||
/// Add predecessors to the current block.
|
||||
fn merge_blocks(&mut self, predecessors: Vec<BasicBlockId>) {
|
||||
self.current_use_def_map().merge_blocks(predecessors);
|
||||
}
|
||||
|
||||
fn add_or_update_symbol(&mut self, name: Name, flags: SymbolFlags) -> ScopedSymbolId {
|
||||
let symbol_table = self.current_symbol_table();
|
||||
let (symbol_id, added) = symbol_table.add_or_update_symbol(name, flags);
|
||||
if added {
|
||||
let use_def_map = self.current_use_def_map_mut();
|
||||
use_def_map.add_symbol(symbol_id);
|
||||
}
|
||||
symbol_id
|
||||
symbol_table.add_or_update_symbol(name, flags)
|
||||
}
|
||||
|
||||
fn add_definition<'a>(
|
||||
@@ -181,15 +191,13 @@ impl<'db> SemanticIndexBuilder<'db> {
|
||||
self.current_scope(),
|
||||
symbol,
|
||||
#[allow(unsafe_code)]
|
||||
unsafe {
|
||||
definition_node.into_owned(self.module.clone())
|
||||
},
|
||||
DefinitionKind::Node(unsafe { definition_node.into_owned(self.module.clone()) }),
|
||||
countme::Count::default(),
|
||||
);
|
||||
|
||||
self.definitions_by_node
|
||||
.insert(definition_node.key(), definition);
|
||||
self.current_use_def_map_mut()
|
||||
self.current_use_def_map()
|
||||
.record_definition(symbol, definition);
|
||||
|
||||
definition
|
||||
@@ -455,21 +463,19 @@ where
|
||||
}
|
||||
ast::Stmt::If(node) => {
|
||||
self.visit_expr(&node.test);
|
||||
let pre_if = self.flow_snapshot();
|
||||
let pre_if = self.next_block();
|
||||
self.visit_body(&node.body);
|
||||
let mut post_clauses: Vec<FlowSnapshot> = vec![];
|
||||
let mut post_clauses: Vec<BasicBlockId> = vec![];
|
||||
for clause in &node.elif_else_clauses {
|
||||
// snapshot after every block except the last; the last one will just become
|
||||
// the state that we merge the other snapshots into
|
||||
post_clauses.push(self.flow_snapshot());
|
||||
post_clauses.push(self.next_block());
|
||||
// we can only take an elif/else branch if none of the previous ones were
|
||||
// taken, so the block entry state is always `pre_if`
|
||||
self.flow_restore(pre_if.clone());
|
||||
self.new_block_from(pre_if);
|
||||
self.visit_elif_else_clause(clause);
|
||||
}
|
||||
for post_clause_state in post_clauses {
|
||||
self.flow_merge(&post_clause_state);
|
||||
}
|
||||
self.next_block_unsealed();
|
||||
let has_else = node
|
||||
.elif_else_clauses
|
||||
.last()
|
||||
@@ -477,35 +483,39 @@ where
|
||||
if !has_else {
|
||||
// if there's no else clause, then it's possible we took none of the branches,
|
||||
// and the pre_if state can reach here
|
||||
self.flow_merge(&pre_if);
|
||||
self.merge_block(pre_if);
|
||||
}
|
||||
self.merge_blocks(post_clauses);
|
||||
self.seal_block();
|
||||
}
|
||||
ast::Stmt::While(node) => {
|
||||
self.visit_expr(&node.test);
|
||||
|
||||
let pre_loop = self.flow_snapshot();
|
||||
let pre_loop = self.next_block();
|
||||
|
||||
// Save aside any break states from an outer loop
|
||||
let saved_break_states = std::mem::take(&mut self.loop_break_states);
|
||||
let saved_break_states = std::mem::take(&mut self.loop_breaks);
|
||||
self.visit_body(&node.body);
|
||||
// Get the break states from the body of this loop, and restore the saved outer
|
||||
// ones.
|
||||
let break_states =
|
||||
std::mem::replace(&mut self.loop_break_states, saved_break_states);
|
||||
let break_states = std::mem::replace(&mut self.loop_breaks, saved_break_states);
|
||||
|
||||
// We may execute the `else` clause without ever executing the body, so merge in
|
||||
// the pre-loop state before visiting `else`.
|
||||
self.flow_merge(&pre_loop);
|
||||
self.next_block_unsealed();
|
||||
self.merge_block(pre_loop);
|
||||
self.seal_block();
|
||||
self.visit_body(&node.orelse);
|
||||
|
||||
// Breaking out of a while loop bypasses the `else` clause, so merge in the break
|
||||
// states after visiting `else`.
|
||||
for break_state in break_states {
|
||||
self.flow_merge(&break_state);
|
||||
}
|
||||
self.next_block_unsealed();
|
||||
self.merge_blocks(break_states);
|
||||
self.seal_block();
|
||||
}
|
||||
ast::Stmt::Break(_) => {
|
||||
self.loop_break_states.push(self.flow_snapshot());
|
||||
let block_id = self.next_block();
|
||||
self.loop_breaks.push(block_id);
|
||||
}
|
||||
_ => {
|
||||
walk_stmt(self, stmt);
|
||||
@@ -559,7 +569,7 @@ where
|
||||
|
||||
if flags.contains(SymbolFlags::IS_USED) {
|
||||
let use_id = self.current_ast_ids().record_use(expr);
|
||||
self.current_use_def_map_mut().record_use(symbol, use_id);
|
||||
self.current_use_def_map().record_use(symbol, use_id);
|
||||
}
|
||||
|
||||
walk_expr(self, expr);
|
||||
@@ -586,12 +596,14 @@ where
|
||||
// AST inspection, so we can't simplify here, need to record test expression for
|
||||
// later checking)
|
||||
self.visit_expr(test);
|
||||
let pre_if = self.flow_snapshot();
|
||||
let pre_if = self.next_block();
|
||||
self.visit_expr(body);
|
||||
let post_body = self.flow_snapshot();
|
||||
self.flow_restore(pre_if);
|
||||
let post_body = self.next_block();
|
||||
self.new_block_from(pre_if);
|
||||
self.visit_expr(orelse);
|
||||
self.flow_merge(&post_body);
|
||||
self.next_block_unsealed();
|
||||
self.merge_block(post_body);
|
||||
self.seal_block();
|
||||
}
|
||||
ast::Expr::ListComp(
|
||||
list_comprehension @ ast::ExprListComp {
|
||||
|
||||
@@ -1,5 +1,6 @@
|
||||
use ruff_db::files::File;
|
||||
use ruff_db::parsed::ParsedModule;
|
||||
use ruff_index::newtype_index;
|
||||
use ruff_python_ast as ast;
|
||||
|
||||
use crate::ast_node_ref::AstNodeRef;
|
||||
@@ -8,7 +9,7 @@ use crate::semantic_index::symbol::{FileScopeId, ScopeId, ScopedSymbolId};
|
||||
use crate::Db;
|
||||
|
||||
#[salsa::tracked]
|
||||
pub struct Definition<'db> {
|
||||
pub(crate) struct Definition<'db> {
|
||||
/// The file in which the definition occurs.
|
||||
#[id]
|
||||
pub(crate) file: File,
|
||||
@@ -23,7 +24,7 @@ pub struct Definition<'db> {
|
||||
|
||||
#[no_eq]
|
||||
#[return_ref]
|
||||
pub(crate) node: DefinitionKind,
|
||||
pub(crate) kind: DefinitionKind,
|
||||
|
||||
#[no_eq]
|
||||
count: countme::Count<Definition<'static>>,
|
||||
@@ -35,6 +36,22 @@ impl<'db> Definition<'db> {
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Clone, Debug)]
|
||||
pub(crate) enum DefinitionKind {
|
||||
/// Inserted at control-flow merge points, if multiple definitions can reach the merge point.
|
||||
///
|
||||
/// Operands are not kept inline, since it's not possible to construct cyclically-referential
|
||||
/// Salsa tracked structs; they are kept instead in the
|
||||
/// [`UseDefMap`](super::use_def::UseDefMap).
|
||||
Phi(ScopedPhiId),
|
||||
|
||||
/// An assignment to the symbol.
|
||||
Node(DefinitionNode),
|
||||
}
|
||||
|
||||
#[newtype_index]
|
||||
pub(crate) struct ScopedPhiId;
|
||||
|
||||
#[derive(Copy, Clone, Debug)]
|
||||
pub(crate) enum DefinitionNodeRef<'a> {
|
||||
Import(&'a ast::Alias),
|
||||
@@ -115,37 +132,37 @@ pub(crate) struct ComprehensionDefinitionNodeRef<'a> {
|
||||
|
||||
impl DefinitionNodeRef<'_> {
|
||||
#[allow(unsafe_code)]
|
||||
pub(super) unsafe fn into_owned(self, parsed: ParsedModule) -> DefinitionKind {
|
||||
pub(super) unsafe fn into_owned(self, parsed: ParsedModule) -> DefinitionNode {
|
||||
match self {
|
||||
DefinitionNodeRef::Import(alias) => {
|
||||
DefinitionKind::Import(AstNodeRef::new(parsed, alias))
|
||||
DefinitionNode::Import(AstNodeRef::new(parsed, alias))
|
||||
}
|
||||
DefinitionNodeRef::ImportFrom(ImportFromDefinitionNodeRef { node, alias_index }) => {
|
||||
DefinitionKind::ImportFrom(ImportFromDefinitionKind {
|
||||
DefinitionNode::ImportFrom(ImportFromDefinitionNode {
|
||||
node: AstNodeRef::new(parsed, node),
|
||||
alias_index,
|
||||
})
|
||||
}
|
||||
DefinitionNodeRef::Function(function) => {
|
||||
DefinitionKind::Function(AstNodeRef::new(parsed, function))
|
||||
DefinitionNode::Function(AstNodeRef::new(parsed, function))
|
||||
}
|
||||
DefinitionNodeRef::Class(class) => {
|
||||
DefinitionKind::Class(AstNodeRef::new(parsed, class))
|
||||
DefinitionNode::Class(AstNodeRef::new(parsed, class))
|
||||
}
|
||||
DefinitionNodeRef::NamedExpression(named) => {
|
||||
DefinitionKind::NamedExpression(AstNodeRef::new(parsed, named))
|
||||
DefinitionNode::NamedExpression(AstNodeRef::new(parsed, named))
|
||||
}
|
||||
DefinitionNodeRef::Assignment(AssignmentDefinitionNodeRef { assignment, target }) => {
|
||||
DefinitionKind::Assignment(AssignmentDefinitionKind {
|
||||
DefinitionNode::Assignment(AssignmentDefinitionNode {
|
||||
assignment: AstNodeRef::new(parsed.clone(), assignment),
|
||||
target: AstNodeRef::new(parsed, target),
|
||||
})
|
||||
}
|
||||
DefinitionNodeRef::AnnotatedAssignment(assign) => {
|
||||
DefinitionKind::AnnotatedAssignment(AstNodeRef::new(parsed, assign))
|
||||
DefinitionNode::AnnotatedAssignment(AstNodeRef::new(parsed, assign))
|
||||
}
|
||||
DefinitionNodeRef::Comprehension(ComprehensionDefinitionNodeRef { node, first }) => {
|
||||
DefinitionKind::Comprehension(ComprehensionDefinitionKind {
|
||||
DefinitionNode::Comprehension(ComprehensionDefinitionNode {
|
||||
node: AstNodeRef::new(parsed, node),
|
||||
first,
|
||||
})
|
||||
@@ -173,24 +190,24 @@ impl DefinitionNodeRef<'_> {
|
||||
}
|
||||
|
||||
#[derive(Clone, Debug)]
|
||||
pub enum DefinitionKind {
|
||||
pub enum DefinitionNode {
|
||||
Import(AstNodeRef<ast::Alias>),
|
||||
ImportFrom(ImportFromDefinitionKind),
|
||||
ImportFrom(ImportFromDefinitionNode),
|
||||
Function(AstNodeRef<ast::StmtFunctionDef>),
|
||||
Class(AstNodeRef<ast::StmtClassDef>),
|
||||
NamedExpression(AstNodeRef<ast::ExprNamed>),
|
||||
Assignment(AssignmentDefinitionKind),
|
||||
Assignment(AssignmentDefinitionNode),
|
||||
AnnotatedAssignment(AstNodeRef<ast::StmtAnnAssign>),
|
||||
Comprehension(ComprehensionDefinitionKind),
|
||||
Comprehension(ComprehensionDefinitionNode),
|
||||
}
|
||||
|
||||
#[derive(Clone, Debug)]
|
||||
pub struct ComprehensionDefinitionKind {
|
||||
pub struct ComprehensionDefinitionNode {
|
||||
node: AstNodeRef<ast::Comprehension>,
|
||||
first: bool,
|
||||
}
|
||||
|
||||
impl ComprehensionDefinitionKind {
|
||||
impl ComprehensionDefinitionNode {
|
||||
pub(crate) fn node(&self) -> &ast::Comprehension {
|
||||
self.node.node()
|
||||
}
|
||||
@@ -201,12 +218,12 @@ impl ComprehensionDefinitionKind {
|
||||
}
|
||||
|
||||
#[derive(Clone, Debug)]
|
||||
pub struct ImportFromDefinitionKind {
|
||||
pub struct ImportFromDefinitionNode {
|
||||
node: AstNodeRef<ast::StmtImportFrom>,
|
||||
alias_index: usize,
|
||||
}
|
||||
|
||||
impl ImportFromDefinitionKind {
|
||||
impl ImportFromDefinitionNode {
|
||||
pub(crate) fn import(&self) -> &ast::StmtImportFrom {
|
||||
self.node.node()
|
||||
}
|
||||
@@ -218,12 +235,12 @@ impl ImportFromDefinitionKind {
|
||||
|
||||
#[derive(Clone, Debug)]
|
||||
#[allow(dead_code)]
|
||||
pub struct AssignmentDefinitionKind {
|
||||
pub struct AssignmentDefinitionNode {
|
||||
assignment: AstNodeRef<ast::StmtAssign>,
|
||||
target: AstNodeRef<ast::ExprName>,
|
||||
}
|
||||
|
||||
impl AssignmentDefinitionKind {
|
||||
impl AssignmentDefinitionNode {
|
||||
pub(crate) fn assignment(&self) -> &ast::StmtAssign {
|
||||
self.assignment.node()
|
||||
}
|
||||
|
||||
@@ -272,7 +272,7 @@ impl SymbolTableBuilder {
|
||||
&mut self,
|
||||
name: Name,
|
||||
flags: SymbolFlags,
|
||||
) -> (ScopedSymbolId, bool) {
|
||||
) -> ScopedSymbolId {
|
||||
let hash = SymbolTable::hash_name(&name);
|
||||
let entry = self
|
||||
.table
|
||||
@@ -285,7 +285,7 @@ impl SymbolTableBuilder {
|
||||
let symbol = &mut self.table.symbols[*entry.key()];
|
||||
symbol.insert_flags(flags);
|
||||
|
||||
(*entry.key(), false)
|
||||
*entry.key()
|
||||
}
|
||||
RawEntryMut::Vacant(entry) => {
|
||||
let mut symbol = Symbol::new(name);
|
||||
@@ -295,7 +295,7 @@ impl SymbolTableBuilder {
|
||||
entry.insert_with_hasher(hash, id, (), |id| {
|
||||
SymbolTable::hash_name(self.table.symbols[*id].name().as_str())
|
||||
});
|
||||
(id, true)
|
||||
id
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -56,299 +56,323 @@
|
||||
//! visible at the end of the scope.
|
||||
//!
|
||||
//! The data structure we build to answer these two questions is the `UseDefMap`. It has a
|
||||
//! `definitions_by_use` vector indexed by [`ScopedUseId`] and a `public_definitions` vector
|
||||
//! indexed by [`ScopedSymbolId`]. The values in each of these vectors are (in principle) a list of
|
||||
//! visible definitions at that use, or at the end of the scope for that symbol.
|
||||
//! `definitions_by_use` vector indexed by [`ScopedUseId`] and a `public_definitions` map
|
||||
//! indexed by [`ScopedSymbolId`]. The values in each are the visible definition of a symbol at
|
||||
//! that use, or at the end of the scope.
|
||||
//!
|
||||
//! In order to avoid vectors-of-vectors and all the allocations that would entail, we don't
|
||||
//! actually store these "list of visible definitions" as a vector of [`Definition`] IDs. Instead,
|
||||
//! the values in `definitions_by_use` and `public_definitions` are a [`Definitions`] struct that
|
||||
//! keeps a [`Range`] into a third vector of [`Definition`] IDs, `all_definitions`. The trick with
|
||||
//! this representation is that it requires that the definitions visible at any given use of a
|
||||
//! symbol are stored sequentially in `all_definitions`.
|
||||
//!
|
||||
//! There is another special kind of possible "definition" for a symbol: it might be unbound in the
|
||||
//! scope. (This isn't equivalent to "zero visible definitions", since we may go through an `if`
|
||||
//! that has a definition for the symbol, leaving us with one visible definition, but still also
|
||||
//! the "unbound" possibility, since we might not have taken the `if` branch.)
|
||||
//!
|
||||
//! The simplest way to model "unbound" would be as an actual [`Definition`] itself: the initial
|
||||
//! visible [`Definition`] for each symbol in a scope. But actually modeling it this way would
|
||||
//! dramatically increase the number of [`Definition`] that Salsa must track. Since "unbound" is a
|
||||
//! special definition in that all symbols share it, and it doesn't have any additional per-symbol
|
||||
//! state, we can represent it more efficiently: we use the `may_be_unbound` boolean on the
|
||||
//! [`Definitions`] struct. If this flag is `true`, it means the symbol/use really has one
|
||||
//! additional visible "definition", which is the unbound state. If this flag is `false`, it means
|
||||
//! we've eliminated the possibility of unbound: every path we've followed includes a definition
|
||||
//! for this symbol.
|
||||
//!
|
||||
//! To build a [`UseDefMap`], the [`UseDefMapBuilder`] is notified of each new use and definition
|
||||
//! as they are encountered by the
|
||||
//! [`SemanticIndexBuilder`](crate::semantic_index::builder::SemanticIndexBuilder) AST visit. For
|
||||
//! each symbol, the builder tracks the currently-visible definitions for that symbol. When we hit
|
||||
//! a use of a symbol, it records the currently-visible definitions for that symbol as the visible
|
||||
//! definitions for that use. When we reach the end of the scope, it records the currently-visible
|
||||
//! definitions for each symbol as the public definitions of that symbol.
|
||||
//!
|
||||
//! Let's walk through the above example. Initially we record for `x` that it has no visible
|
||||
//! definitions, and may be unbound. When we see `x = 1`, we record that as the sole visible
|
||||
//! definition of `x`, and flip `may_be_unbound` to `false`. Then we see `x = 2`, and it replaces
|
||||
//! `x = 1` as the sole visible definition of `x`. When we get to `y = x`, we record that the
|
||||
//! visible definitions for that use of `x` are just the `x = 2` definition.
|
||||
//!
|
||||
//! Then we hit the `if` branch. We visit the `test` node (`flag` in this case), since that will
|
||||
//! happen regardless. Then we take a pre-branch snapshot of the currently visible definitions for
|
||||
//! all symbols, which we'll need later. Then we go ahead and visit the `if` body. When we see `x =
|
||||
//! 3`, it replaces `x = 2` as the sole visible definition of `x`. At the end of the `if` body, we
|
||||
//! take another snapshot of the currently-visible definitions; we'll call this the post-if-body
|
||||
//! snapshot.
|
||||
//!
|
||||
//! Now we need to visit the `else` clause. The conditions when entering the `else` clause should
|
||||
//! be the pre-if conditions; if we are entering the `else` clause, we know that the `if` test
|
||||
//! failed and we didn't execute the `if` body. So we first reset the builder to the pre-if state,
|
||||
//! using the snapshot we took previously (meaning we now have `x = 2` as the sole visible
|
||||
//! definition for `x` again), then visit the `else` clause, where `x = 4` replaces `x = 2` as the
|
||||
//! sole visible definition of `x`.
|
||||
//!
|
||||
//! Now we reach the end of the if/else, and want to visit the following code. The state here needs
|
||||
//! to reflect that we might have gone through the `if` branch, or we might have gone through the
|
||||
//! `else` branch, and we don't know which. So we need to "merge" our current builder state
|
||||
//! (reflecting the end-of-else state, with `x = 4` as the only visible definition) with our
|
||||
//! post-if-body snapshot (which has `x = 3` as the only visible definition). The result of this
|
||||
//! merge is that we now have two visible definitions of `x`: `x = 3` and `x = 4`.
|
||||
//!
|
||||
//! The [`UseDefMapBuilder`] itself just exposes methods for taking a snapshot, resetting to a
|
||||
//! snapshot, and merging a snapshot into the current state. The logic using these methods lives in
|
||||
//! [`SemanticIndexBuilder`](crate::semantic_index::builder::SemanticIndexBuilder), e.g. where it
|
||||
//! visits a `StmtIf` node.
|
||||
//!
|
||||
//! (In the future we may have some other questions we want to answer as well, such as "is this
|
||||
//! definition used?", which will require tracking a bit more info in our map, e.g. a "used" bit
|
||||
//! for each [`Definition`] which is flipped to true when we record that definition for a use.)
|
||||
//! Rather than have multiple definitions, we use a Phi definition at control flow join points to
|
||||
//! merge the visible definition in each path. This means at any given point we always have exactly
|
||||
//! one definition for a symbol. (This is analogous to static-single-assignment, or SSA, form, and
|
||||
//! in fact we use the algorithm from [Simple and efficient construction of static single
|
||||
//! assignment form](https://dl.acm.org/doi/10.1007/978-3-642-37051-9_6) here.)
|
||||
use crate::semantic_index::ast_ids::ScopedUseId;
|
||||
use crate::semantic_index::definition::Definition;
|
||||
use crate::semantic_index::symbol::ScopedSymbolId;
|
||||
use ruff_index::IndexVec;
|
||||
use std::ops::Range;
|
||||
use crate::semantic_index::definition::{Definition, DefinitionKind, ScopedPhiId};
|
||||
use crate::semantic_index::symbol::{FileScopeId, ScopedSymbolId};
|
||||
use crate::Db;
|
||||
use ruff_db::files::File;
|
||||
use ruff_index::{newtype_index, IndexVec};
|
||||
use rustc_hash::{FxHashMap, FxHashSet};
|
||||
use smallvec::{smallvec, SmallVec};
|
||||
|
||||
/// All definitions that can reach a given use of a name.
|
||||
/// Number of basic block predecessors we store inline.
|
||||
const PREDECESSORS: usize = 2;
|
||||
|
||||
/// Input operands (definitions) for a Phi definition. None means not defined.
|
||||
// TODO would like to use SmallVec here but can't due to lifetime invariance issue.
|
||||
type PhiOperands<'db> = Vec<Option<Definition<'db>>>;
|
||||
|
||||
/// Definition for each use of a name.
|
||||
#[derive(Debug, PartialEq, Eq)]
|
||||
pub(crate) struct UseDefMap<'db> {
|
||||
// TODO store constraints with definitions for type narrowing
|
||||
/// Definition IDs array for `definitions_by_use` and `public_definitions` to slice into.
|
||||
all_definitions: Vec<Definition<'db>>,
|
||||
/// Definition that reaches each [`ScopedUseId`].
|
||||
definitions_by_use: IndexVec<ScopedUseId, Option<Definition<'db>>>,
|
||||
|
||||
/// Definitions that can reach a [`ScopedUseId`].
|
||||
definitions_by_use: IndexVec<ScopedUseId, Definitions>,
|
||||
/// Definition of each symbol visible at end of scope.
|
||||
///
|
||||
/// Sparse, because it only includes symbols defined in the scope.
|
||||
public_definitions: FxHashMap<ScopedSymbolId, Definition<'db>>,
|
||||
|
||||
/// Definitions of each symbol visible at end of scope.
|
||||
public_definitions: IndexVec<ScopedSymbolId, Definitions>,
|
||||
/// Operands for each Phi definition in this scope.
|
||||
phi_operands: IndexVec<ScopedPhiId, PhiOperands<'db>>,
|
||||
}
|
||||
|
||||
impl<'db> UseDefMap<'db> {
|
||||
pub(crate) fn use_definitions(&self, use_id: ScopedUseId) -> &[Definition<'db>] {
|
||||
&self.all_definitions[self.definitions_by_use[use_id].definitions_range.clone()]
|
||||
/// Return the dominating definition for a given use of a name; None means not-defined.
|
||||
pub(crate) fn definition_for_use(&self, use_id: ScopedUseId) -> Option<Definition<'db>> {
|
||||
self.definitions_by_use[use_id]
|
||||
}
|
||||
|
||||
pub(crate) fn use_may_be_unbound(&self, use_id: ScopedUseId) -> bool {
|
||||
self.definitions_by_use[use_id].may_be_unbound
|
||||
/// Return the definition visible at end of scope for a symbol.
|
||||
///
|
||||
/// Return None if the symbol is never defined in the scope.
|
||||
pub(crate) fn public_definition(&self, symbol_id: ScopedSymbolId) -> Option<Definition<'db>> {
|
||||
self.public_definitions.get(&symbol_id).copied()
|
||||
}
|
||||
|
||||
pub(crate) fn public_definitions(&self, symbol: ScopedSymbolId) -> &[Definition<'db>] {
|
||||
&self.all_definitions[self.public_definitions[symbol].definitions_range.clone()]
|
||||
}
|
||||
|
||||
pub(crate) fn public_may_be_unbound(&self, symbol: ScopedSymbolId) -> bool {
|
||||
self.public_definitions[symbol].may_be_unbound
|
||||
/// Return the operands for a Phi in this scope; a None means not-defined.
|
||||
pub(crate) fn phi_operands<'s>(&'s self, phi_id: ScopedPhiId) -> &'s [Option<Definition<'db>>] {
|
||||
self.phi_operands[phi_id].as_slice()
|
||||
}
|
||||
}
|
||||
|
||||
/// Definitions visible for a symbol at a particular use (or end-of-scope).
|
||||
#[derive(Clone, Debug, PartialEq, Eq)]
|
||||
struct Definitions {
|
||||
/// [`Range`] in `all_definitions` of the visible definition IDs.
|
||||
definitions_range: Range<usize>,
|
||||
/// Is the symbol possibly unbound at this point?
|
||||
may_be_unbound: bool,
|
||||
}
|
||||
type PredecessorBlocks = SmallVec<[BasicBlockId; PREDECESSORS]>;
|
||||
|
||||
impl Definitions {
|
||||
/// The default state of a symbol is "no definitions, may be unbound", aka definitely-unbound.
|
||||
fn unbound() -> Self {
|
||||
Self {
|
||||
definitions_range: Range::default(),
|
||||
may_be_unbound: true,
|
||||
}
|
||||
}
|
||||
}
|
||||
/// A basic block is a linear region of code (no branches.)
|
||||
#[newtype_index]
|
||||
pub(super) struct BasicBlockId;
|
||||
|
||||
impl Default for Definitions {
|
||||
fn default() -> Self {
|
||||
Definitions::unbound()
|
||||
}
|
||||
}
|
||||
|
||||
/// A snapshot of the visible definitions for each symbol at a particular point in control flow.
|
||||
#[derive(Clone, Debug)]
|
||||
pub(super) struct FlowSnapshot {
|
||||
definitions_by_symbol: IndexVec<ScopedSymbolId, Definitions>,
|
||||
}
|
||||
|
||||
#[derive(Debug)]
|
||||
pub(super) struct UseDefMapBuilder<'db> {
|
||||
/// Definition IDs array for `definitions_by_use` and `definitions_by_symbol` to slice into.
|
||||
all_definitions: Vec<Definition<'db>>,
|
||||
db: &'db dyn Db,
|
||||
file: File,
|
||||
file_scope: FileScopeId,
|
||||
|
||||
/// Visible definitions at each so-far-recorded use.
|
||||
definitions_by_use: IndexVec<ScopedUseId, Definitions>,
|
||||
/// Predecessor blocks for each basic block.
|
||||
///
|
||||
/// Entry block has none, all other blocks have at least one, blocks that join control flow can
|
||||
/// have two or more.
|
||||
predecessors: IndexVec<BasicBlockId, PredecessorBlocks>,
|
||||
|
||||
/// Currently visible definitions for each symbol.
|
||||
definitions_by_symbol: IndexVec<ScopedSymbolId, Definitions>,
|
||||
/// The definition of each symbol which dominates each basic block.
|
||||
///
|
||||
/// No entry means "lazily unfilled"; we haven't had to query for it yet, and we may never have
|
||||
/// to, if the symbol isn't used in this block or any successor block.
|
||||
///
|
||||
/// Each block has an [`FxHashMap`] of symbols instead of an [`IndexVec`] because it is lazy
|
||||
/// and potentially sparse; it will only include a definition for a symbol that is actually
|
||||
/// used in that block or a successor. An [`IndexVec`] would have to be eagerly filled with
|
||||
/// placeholders.
|
||||
definitions_per_block:
|
||||
IndexVec<BasicBlockId, FxHashMap<ScopedSymbolId, Option<Definition<'db>>>>,
|
||||
|
||||
/// Incomplete Phi definitions in each block.
|
||||
///
|
||||
/// An incomplete Phi is used when we don't know, while processing a block's body, what new
|
||||
/// predecessors it may later gain (that is, backward jumps.)
|
||||
///
|
||||
/// Sparse, because relative few blocks (just loop headers) will have any incomplete Phis.
|
||||
incomplete_phis: FxHashMap<BasicBlockId, Vec<Definition<'db>>>,
|
||||
|
||||
/// Operands for each Phi definition in this scope.
|
||||
phi_operands: IndexVec<ScopedPhiId, PhiOperands<'db>>,
|
||||
|
||||
/// Are this block's predecessors fully populated?
|
||||
///
|
||||
/// If not, it isn't safe to recurse to predecessors yet; we might miss a predecessor block.
|
||||
sealed_blocks: IndexVec<BasicBlockId, bool>,
|
||||
|
||||
/// Definition for each so-far-recorded use.
|
||||
definitions_by_use: IndexVec<ScopedUseId, Option<Definition<'db>>>,
|
||||
|
||||
/// All symbols defined in this scope.
|
||||
defined_symbols: FxHashSet<ScopedSymbolId>,
|
||||
}
|
||||
|
||||
impl<'db> UseDefMapBuilder<'db> {
|
||||
pub(super) fn new() -> Self {
|
||||
Self {
|
||||
all_definitions: Vec::new(),
|
||||
pub(super) fn new(db: &'db dyn Db, file: File, file_scope: FileScopeId) -> Self {
|
||||
let mut new = Self {
|
||||
db,
|
||||
file,
|
||||
file_scope,
|
||||
predecessors: IndexVec::new(),
|
||||
definitions_per_block: IndexVec::new(),
|
||||
incomplete_phis: FxHashMap::default(),
|
||||
sealed_blocks: IndexVec::new(),
|
||||
definitions_by_use: IndexVec::new(),
|
||||
definitions_by_symbol: IndexVec::new(),
|
||||
}
|
||||
}
|
||||
|
||||
pub(super) fn add_symbol(&mut self, symbol: ScopedSymbolId) {
|
||||
let new_symbol = self.definitions_by_symbol.push(Definitions::unbound());
|
||||
debug_assert_eq!(symbol, new_symbol);
|
||||
phi_operands: IndexVec::new(),
|
||||
defined_symbols: FxHashSet::default(),
|
||||
};
|
||||
|
||||
// create the entry basic block
|
||||
new.predecessors.push(PredecessorBlocks::default());
|
||||
new.definitions_per_block.push(FxHashMap::default());
|
||||
new.sealed_blocks.push(true);
|
||||
|
||||
new
|
||||
}
|
||||
|
||||
/// Record a definition for a symbol.
|
||||
pub(super) fn record_definition(
|
||||
&mut self,
|
||||
symbol: ScopedSymbolId,
|
||||
symbol_id: ScopedSymbolId,
|
||||
definition: Definition<'db>,
|
||||
) {
|
||||
// We have a new definition of a symbol; this replaces any previous definitions in this
|
||||
// path.
|
||||
let def_idx = self.all_definitions.len();
|
||||
self.all_definitions.push(definition);
|
||||
self.definitions_by_symbol[symbol] = Definitions {
|
||||
#[allow(clippy::range_plus_one)]
|
||||
definitions_range: def_idx..(def_idx + 1),
|
||||
may_be_unbound: false,
|
||||
};
|
||||
self.memoize(self.current_block_id(), symbol_id, Some(definition));
|
||||
self.defined_symbols.insert(symbol_id);
|
||||
}
|
||||
|
||||
pub(super) fn record_use(&mut self, symbol: ScopedSymbolId, use_id: ScopedUseId) {
|
||||
// We have a use of a symbol; clone the currently visible definitions for that symbol, and
|
||||
// record them as the visible definitions for this use.
|
||||
let new_use = self
|
||||
.definitions_by_use
|
||||
.push(self.definitions_by_symbol[symbol].clone());
|
||||
/// Record a use of a symbol.
|
||||
pub(super) fn record_use(&mut self, symbol_id: ScopedSymbolId, use_id: ScopedUseId) {
|
||||
let definition_id = self.lookup(symbol_id);
|
||||
let new_use = self.definitions_by_use.push(definition_id);
|
||||
debug_assert_eq!(use_id, new_use);
|
||||
}
|
||||
|
||||
/// Take a snapshot of the current visible-symbols state.
|
||||
pub(super) fn snapshot(&self) -> FlowSnapshot {
|
||||
FlowSnapshot {
|
||||
definitions_by_symbol: self.definitions_by_symbol.clone(),
|
||||
/// Get the id of the current basic block.
|
||||
pub(super) fn current_block_id(&self) -> BasicBlockId {
|
||||
BasicBlockId::from(self.definitions_per_block.len() - 1)
|
||||
}
|
||||
|
||||
/// Push a new basic block, with given block as predecessor.
|
||||
pub(super) fn new_block_from(&mut self, block_id: BasicBlockId, sealed: bool) {
|
||||
self.new_block_with_predecessors(smallvec![block_id], sealed);
|
||||
}
|
||||
|
||||
/// Push a new basic block, with current block as predecessor; return the current block's ID.
|
||||
pub(super) fn next_block(&mut self, sealed: bool) -> BasicBlockId {
|
||||
let current_block_id = self.current_block_id();
|
||||
self.new_block_from(current_block_id, sealed);
|
||||
current_block_id
|
||||
}
|
||||
|
||||
/// Add a predecessor to the current block.
|
||||
pub(super) fn merge_block(&mut self, new_predecessor: BasicBlockId) {
|
||||
let block_id = self.current_block_id();
|
||||
debug_assert!(!self.sealed_blocks[block_id]);
|
||||
self.predecessors[block_id].push(new_predecessor);
|
||||
}
|
||||
|
||||
/// Add predecessors to the current block.
|
||||
pub(super) fn merge_blocks(&mut self, new_predecessors: Vec<BasicBlockId>) {
|
||||
let block_id = self.current_block_id();
|
||||
debug_assert!(!self.sealed_blocks[block_id]);
|
||||
self.predecessors[block_id].extend(new_predecessors);
|
||||
}
|
||||
|
||||
/// Mark the current block as sealed; it cannot have any more predecessors added.
|
||||
pub(super) fn seal_current_block(&mut self) {
|
||||
self.seal_block(self.current_block_id());
|
||||
}
|
||||
|
||||
/// Mark a block as sealed; it cannot have any more predecessors added.
|
||||
pub(super) fn seal_block(&mut self, block_id: BasicBlockId) {
|
||||
debug_assert!(!self.sealed_blocks[block_id]);
|
||||
if let Some(phis) = self.incomplete_phis.get(&block_id) {
|
||||
for phi in phis.clone() {
|
||||
self.add_phi_operands(block_id, phi);
|
||||
}
|
||||
self.incomplete_phis.remove(&block_id);
|
||||
}
|
||||
self.sealed_blocks[block_id] = true;
|
||||
}
|
||||
|
||||
pub(super) fn finish(mut self) -> UseDefMap<'db> {
debug_assert!(self.incomplete_phis.is_empty());
debug_assert!(self.sealed_blocks.iter().all(|&b| b));
self.definitions_by_use.shrink_to_fit();
self.phi_operands.shrink_to_fit();

let mut public_definitions: FxHashMap<ScopedSymbolId, Definition<'db>> =
FxHashMap::default();

for symbol_id in self.defined_symbols.clone() {
// SAFETY: We are only looking up defined symbols here, can't get None.
public_definitions.insert(symbol_id, self.lookup(symbol_id).unwrap());
}

UseDefMap {
definitions_by_use: self.definitions_by_use,
public_definitions,
phi_operands: self.phi_operands,
}
}

/// Restore the current builder visible-definitions state to the given snapshot.
pub(super) fn restore(&mut self, snapshot: FlowSnapshot) {
// We never remove symbols from `definitions_by_symbol` (it's an IndexVec, and the symbol
// IDs must line up), so the current number of known symbols must always be equal to or
// greater than the number of known symbols in a previously-taken snapshot.
let num_symbols = self.definitions_by_symbol.len();
debug_assert!(num_symbols >= snapshot.definitions_by_symbol.len());
/// Push a new basic block (with given predecessors) and return its ID.
fn new_block_with_predecessors(
&mut self,
predecessors: PredecessorBlocks,
sealed: bool,
) -> BasicBlockId {
let new_block_id = self.predecessors.push(predecessors);
self.definitions_per_block.push(FxHashMap::default());
self.sealed_blocks.push(sealed);

// Restore the current visible-definitions state to the given snapshot.
self.definitions_by_symbol = snapshot.definitions_by_symbol;

// If the snapshot we are restoring is missing some symbols we've recorded since, we need
// to fill them in so the symbol IDs continue to line up. Since they don't exist in the
// snapshot, the correct state to fill them in with is "unbound", the default.
self.definitions_by_symbol
.resize(num_symbols, Definitions::unbound());
new_block_id
}

/// Merge the given snapshot into the current state, reflecting that we might have taken either
/// path to get here. The new visible-definitions state for each symbol should include
/// definitions from both the prior state and the snapshot.
pub(super) fn merge(&mut self, snapshot: &FlowSnapshot) {
// The tricky thing about merging two Ranges pointing into `all_definitions` is that if the
// two Ranges aren't already adjacent in `all_definitions`, we will have to copy at least
// one or the other of the ranges to the end of `all_definitions` so as to make them
// adjacent. We can't ever move things around in `all_definitions` because previously
// recorded uses may still have ranges pointing to any part of it; all we can do is append.
// It's possible we may end up with some old entries in `all_definitions` that nobody is
// pointing to, but that's OK.
/// Look up the dominating definition for a symbol in the current block.
///
/// If there isn't a local definition, recursively look up the symbol in predecessor blocks,
/// memoizing the found symbol in each block.
fn lookup(&mut self, symbol_id: ScopedSymbolId) -> Option<Definition<'db>> {
self.lookup_impl(self.current_block_id(), symbol_id)
}

// We never remove symbols from `definitions_by_symbol` (it's an IndexVec, and the symbol
// IDs must line up), so the current number of known symbols must always be equal to or
// greater than the number of known symbols in a previously-taken snapshot.
debug_assert!(self.definitions_by_symbol.len() >= snapshot.definitions_by_symbol.len());

for (symbol_id, current) in self.definitions_by_symbol.iter_mut_enumerated() {
let Some(snapshot) = snapshot.definitions_by_symbol.get(symbol_id) else {
// Symbol not present in snapshot, so it's unbound from that path.
current.may_be_unbound = true;
continue;
};

// If the symbol can be unbound in either predecessor, it can be unbound post-merge.
current.may_be_unbound |= snapshot.may_be_unbound;

// Merge the definition ranges.
let current = &mut current.definitions_range;
let snapshot = &snapshot.definitions_range;

// We never create reversed ranges.
debug_assert!(current.end >= current.start);
debug_assert!(snapshot.end >= snapshot.start);

if current == snapshot {
// Ranges already identical, nothing to do.
} else if snapshot.is_empty() {
// Merging from an empty range; nothing to do.
} else if (*current).is_empty() {
// Merging to an empty range; just use the incoming range.
*current = snapshot.clone();
} else if snapshot.end >= current.start && snapshot.start <= current.end {
// Ranges are adjacent or overlapping, merge them in-place.
*current = current.start.min(snapshot.start)..current.end.max(snapshot.end);
} else if current.end == self.all_definitions.len() {
// Ranges are not adjacent or overlapping, `current` is at the end of
// `all_definitions`, we need to copy `snapshot` to the end so they are adjacent
// and can be merged into one range.
self.all_definitions.extend_from_within(snapshot.clone());
current.end = self.all_definitions.len();
} else if snapshot.end == self.all_definitions.len() {
// Ranges are not adjacent or overlapping, `snapshot` is at the end of
// `all_definitions`, we need to copy `current` to the end so they are adjacent and
// can be merged into one range.
self.all_definitions.extend_from_within(current.clone());
current.start = snapshot.start;
current.end = self.all_definitions.len();
} else {
// Ranges are not adjacent and neither one is at the end of `all_definitions`, we
// have to copy both to the end so they are adjacent and we can merge them.
let start = self.all_definitions.len();
self.all_definitions.extend_from_within(current.clone());
self.all_definitions.extend_from_within(snapshot.clone());
current.start = start;
current.end = self.all_definitions.len();
fn lookup_impl(
&mut self,
block_id: BasicBlockId,
symbol_id: ScopedSymbolId,
) -> Option<Definition<'db>> {
if let Some(local) = self.definitions_per_block[block_id].get(&symbol_id) {
return *local;
}
if !self.sealed_blocks[block_id] {
// we may still be missing predecessors; insert an incomplete Phi.
let definition = self.create_incomplete_phi(block_id, symbol_id);
self.incomplete_phis
.entry(block_id)
.or_default()
.push(definition);
return Some(definition);
}
match self.predecessors[block_id].as_slice() {
// entry block, no definition found: return None
[] => None,
// single predecessor, recurse
&[single_predecessor_id] => {
let definition = self.lookup_impl(single_predecessor_id, symbol_id);
self.memoize(block_id, symbol_id, definition);
definition
}
// multiple predecessors: create and memoize an incomplete Phi to break cycles, then
// recurse into predecessors and fill the Phi operands.
_ => {
let phi = self.create_incomplete_phi(block_id, symbol_id);
self.add_phi_operands(block_id, phi);
Some(phi)
}
}
}
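The diff fragments above are easier to follow as one piece: a use is resolved by checking the block's local definitions first, then recursing into predecessors, creating placeholder Phi definitions either when a block is not yet sealed or at join points with multiple predecessors. Below is a minimal, self-contained sketch of that lookup in the spirit of Braun et al.'s simple SSA construction; the `Builder`, `Def`, block and symbol types here are illustrative stand-ins, not the actual red-knot types.

```rust
use std::collections::HashMap;

type Block = usize;
type Symbol = usize;

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Def {
    Binding(u32), // a concrete definition site (e.g. an assignment)
    Phi(usize),   // index into `phi_operands`
}

#[derive(Default)]
struct Builder {
    predecessors: Vec<Vec<Block>>,                 // per block
    sealed: Vec<bool>,                             // per block: all predecessors known?
    local_defs: Vec<HashMap<Symbol, Option<Def>>>, // memoized per-block lookups
    phi_operands: Vec<Vec<Option<Def>>>,
    incomplete_phis: HashMap<Block, Vec<(Symbol, Def)>>,
}

impl Builder {
    fn new_block(&mut self, predecessors: Vec<Block>, sealed: bool) -> Block {
        self.predecessors.push(predecessors);
        self.sealed.push(sealed);
        self.local_defs.push(HashMap::new());
        self.predecessors.len() - 1
    }

    fn write(&mut self, block: Block, symbol: Symbol, def: Def) {
        self.local_defs[block].insert(symbol, Some(def));
    }

    fn new_phi(&mut self, block: Block, symbol: Symbol) -> Def {
        let phi = Def::Phi(self.phi_operands.len());
        self.phi_operands.push(vec![]);
        self.local_defs[block].insert(symbol, Some(phi));
        phi
    }

    /// Find the definition of `symbol` visible in `block`.
    fn lookup(&mut self, block: Block, symbol: Symbol) -> Option<Def> {
        if let Some(local) = self.local_defs[block].get(&symbol) {
            return *local;
        }
        if !self.sealed[block] {
            // Predecessors may still be added later; record an incomplete phi
            // whose operands get filled in once the block is sealed.
            let phi = self.new_phi(block, symbol);
            self.incomplete_phis.entry(block).or_default().push((symbol, phi));
            return Some(phi);
        }
        let preds = self.predecessors[block].clone();
        match preds.as_slice() {
            [] => None, // entry block: the symbol is unbound here
            &[single] => {
                let def = self.lookup(single, symbol);
                self.local_defs[block].insert(symbol, def); // memoize
                def
            }
            _ => {
                // Memoize the phi *before* recursing so cyclic lookups (loops) terminate.
                let phi = self.new_phi(block, symbol);
                let operands: Vec<_> = preds.iter().map(|&p| self.lookup(p, symbol)).collect();
                if let Def::Phi(id) = phi {
                    self.phi_operands[id] = operands;
                }
                Some(phi)
            }
        }
    }
}

fn main() {
    let mut b = Builder::default();
    let entry = b.new_block(vec![], true);
    let then_b = b.new_block(vec![entry], true);
    let else_b = b.new_block(vec![entry], true);
    let join = b.new_block(vec![then_b, else_b], true);
    b.write(then_b, 0, Def::Binding(1)); // e.g. `x = 1`
    b.write(else_b, 0, Def::Binding(2)); // e.g. `x = 2`
    // A use of `x` in the join block resolves to a phi over both bindings.
    assert!(matches!(b.lookup(join, 0), Some(Def::Phi(_))));
}
```

Memoizing the phi before recursing into its predecessors is what keeps the lookup finite when the control-flow graph has cycles; incomplete phis cover the case where a block's predecessors are not all known yet.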
pub(super) fn finish(mut self) -> UseDefMap<'db> {
self.all_definitions.shrink_to_fit();
self.definitions_by_symbol.shrink_to_fit();
self.definitions_by_use.shrink_to_fit();
/// Recurse into predecessors to add operands for an incomplete Phi.
fn add_phi_operands(&mut self, block_id: BasicBlockId, phi: Definition<'db>) {
let predecessors: PredecessorBlocks = self.predecessors[block_id].clone();
let operands: PhiOperands = predecessors
.iter()
.map(|pred_id| self.lookup_impl(*pred_id, phi.symbol(self.db)))
.collect();
let DefinitionKind::Phi(phi_id) = phi.kind(self.db) else {
unreachable!("add_phi_operands called with non-Phi");
};
self.phi_operands[*phi_id] = operands;
}

UseDefMap {
all_definitions: self.all_definitions,
definitions_by_use: self.definitions_by_use,
public_definitions: self.definitions_by_symbol,
}
/// Remember a given definition for a given symbol in the given block.
fn memoize(
&mut self,
block_id: BasicBlockId,
symbol_id: ScopedSymbolId,
definition_id: Option<Definition<'db>>,
) {
self.definitions_per_block[block_id].insert(symbol_id, definition_id);
}

/// Create an incomplete Phi for the given block and symbol, memoize it, and return its ID.
fn create_incomplete_phi(
&mut self,
block_id: BasicBlockId,
symbol_id: ScopedSymbolId,
) -> Definition<'db> {
let phi_id = self.phi_operands.push(vec![]);
let definition = Definition::new(
self.db,
self.file,
self.file_scope,
symbol_id,
DefinitionKind::Phi(phi_id),
countme::Count::default(),
);
self.memoize(block_id, symbol_id, Some(definition));
definition
}
}
@@ -151,7 +151,7 @@ impl HasTy for ast::StmtFunctionDef {
fn ty<'db>(&self, model: &SemanticModel<'db>) -> Type<'db> {
let index = semantic_index(model.db, model.file);
let definition = index.definition(self);
definition_ty(model.db, definition)
definition_ty(model.db, Some(definition))
}
}

@@ -159,7 +159,7 @@ impl HasTy for StmtClassDef {
fn ty<'db>(&self, model: &SemanticModel<'db>) -> Type<'db> {
let index = semantic_index(model.db, model.file);
let definition = index.definition(self);
definition_ty(model.db, definition)
definition_ty(model.db, Some(definition))
}
}

@@ -167,7 +167,7 @@ impl HasTy for ast::Alias {
fn ty<'db>(&self, model: &SemanticModel<'db>) -> Type<'db> {
let index = semantic_index(model.db, model.file);
let definition = index.definition(self);
definition_ty(model.db, definition)
definition_ty(model.db, Some(definition))
}
}
@@ -23,13 +23,7 @@ pub(crate) fn symbol_ty<'db>(
let _span = tracing::trace_span!("symbol_ty", ?symbol).entered();

let use_def = use_def_map(db, scope);
definitions_ty(
db,
use_def.public_definitions(symbol),
use_def
.public_may_be_unbound(symbol)
.then_some(Type::Unbound),
)
definition_ty(db, use_def.public_definition(symbol))
}

/// Shorthand for `symbol_ty` that takes a symbol name instead of an ID.
@@ -60,49 +54,16 @@ pub(crate) fn builtins_symbol_ty_by_name<'db>(db: &'db dyn Db, name: &str) -> Ty
}

/// Infer the type of a [`Definition`].
pub(crate) fn definition_ty<'db>(db: &'db dyn Db, definition: Definition<'db>) -> Type<'db> {
let inference = infer_definition_types(db, definition);
inference.definition_ty(definition)
}

/// Infer the combined type of an array of [`Definition`]s, plus one optional "unbound type".
///
/// Will return a union if there is more than one definition, or at least one plus an unbound
/// type.
///
/// The "unbound type" represents the type in case control flow may not have passed through any
/// definitions in this scope. If this isn't possible, then it will be `None`. If it is possible,
/// and the result in that case should be Unbound (e.g. an unbound function local), then it will be
/// `Some(Type::Unbound)`. If it is possible and the result should be something else (e.g. an
/// implicit global lookup), then `unbound_type` will be `Some(the_global_symbol_type)`.
///
/// # Panics
/// Will panic if called with zero definitions and no `unbound_ty`. This is a logic error,
/// as any symbol with zero visible definitions clearly may be unbound, and the caller should
/// provide an `unbound_ty`.
pub(crate) fn definitions_ty<'db>(
pub(crate) fn definition_ty<'db>(
db: &'db dyn Db,
definitions: &[Definition<'db>],
unbound_ty: Option<Type<'db>>,
definition: Option<Definition<'db>>,
) -> Type<'db> {
let def_types = definitions.iter().map(|def| definition_ty(db, *def));
let mut all_types = unbound_ty.into_iter().chain(def_types);

let Some(first) = all_types.next() else {
panic!("definitions_ty should never be called with zero definitions and no unbound_ty.")
};

if let Some(second) = all_types.next() {
let mut builder = UnionBuilder::new(db);
builder = builder.add(first).add(second);

for variant in all_types {
builder = builder.add(variant);
match definition {
Some(definition) => {
let inference = infer_definition_types(db, definition);
inference.definition_ty(definition)
}

builder.build()
} else {
first
None => Type::Unbound,
}
}
@@ -145,8 +106,27 @@ impl<'db> Type<'db> {
matches!(self, Type::Unbound)
}

pub const fn is_unknown(&self) -> bool {
matches!(self, Type::Unknown)
pub fn may_be_unbound(&self, db: &'db dyn Db) -> bool {
match self {
Type::Unbound => true,
Type::Union(union) => union.contains(db, Type::Unbound),
_ => false,
}
}

#[must_use]
pub fn replace_unbound_with(&self, db: &'db dyn Db, replacement: Type<'db>) -> Type<'db> {
match self {
Type::Unbound => replacement,
Type::Union(union) => union
.elements(db)
.into_iter()
.fold(UnionBuilder::new(db), |builder, ty| {
builder.add(ty.replace_unbound_with(db, replacement))
})
.build(),
ty => *ty,
}
}

#[must_use]
@@ -33,13 +33,17 @@ use crate::builtins::builtins_scope;
use crate::module_name::ModuleName;
use crate::module_resolver::resolve_module;
use crate::semantic_index::ast_ids::{HasScopedAstId, HasScopedUseId, ScopedExpressionId};
use crate::semantic_index::definition::{Definition, DefinitionKind, DefinitionNodeKey};
use crate::semantic_index::definition::{
Definition, DefinitionKind, DefinitionNode, DefinitionNodeKey, ScopedPhiId,
};
use crate::semantic_index::expression::Expression;
use crate::semantic_index::semantic_index;
use crate::semantic_index::symbol::{FileScopeId, NodeWithScopeKind, NodeWithScopeRef, ScopeId};
use crate::semantic_index::symbol::{
FileScopeId, NodeWithScopeKind, NodeWithScopeRef, ScopeId, Symbol,
};
use crate::semantic_index::SemanticIndex;
use crate::types::{
builtins_symbol_ty_by_name, definitions_ty, global_symbol_ty_by_name, ClassType, FunctionType,
builtins_symbol_ty_by_name, definition_ty, global_symbol_ty_by_name, ClassType, FunctionType,
Name, Type, UnionBuilder,
};
use crate::Db;
@@ -276,37 +280,45 @@ impl<'db> TypeInferenceBuilder<'db> {
}

fn infer_region_definition(&mut self, definition: Definition<'db>) {
match definition.node(self.db) {
DefinitionKind::Function(function) => {
self.infer_function_definition(function.node(), definition);
}
DefinitionKind::Class(class) => self.infer_class_definition(class.node(), definition),
DefinitionKind::Import(import) => {
self.infer_import_definition(import.node(), definition);
}
DefinitionKind::ImportFrom(import_from) => {
self.infer_import_from_definition(
import_from.import(),
import_from.alias(),
definition,
);
}
DefinitionKind::Assignment(assignment) => {
self.infer_assignment_definition(assignment.assignment(), definition);
}
DefinitionKind::AnnotatedAssignment(annotated_assignment) => {
self.infer_annotated_assignment_definition(annotated_assignment.node(), definition);
}
DefinitionKind::NamedExpression(named_expression) => {
self.infer_named_expression_definition(named_expression.node(), definition);
}
DefinitionKind::Comprehension(comprehension) => {
self.infer_comprehension_definition(
comprehension.node(),
comprehension.is_first(),
definition,
);
}
match definition.kind(self.db) {
DefinitionKind::Phi(phi_id) => self.infer_phi_definition(*phi_id, definition),
DefinitionKind::Node(node) => match node {
DefinitionNode::Function(function) => {
self.infer_function_definition(function.node(), definition);
}
DefinitionNode::Class(class) => {
self.infer_class_definition(class.node(), definition);
}
DefinitionNode::Import(import) => {
self.infer_import_definition(import.node(), definition);
}
DefinitionNode::ImportFrom(import_from) => {
self.infer_import_from_definition(
import_from.import(),
import_from.alias(),
definition,
);
}
DefinitionNode::Assignment(assignment) => {
self.infer_assignment_definition(assignment.assignment(), definition);
}
DefinitionNode::AnnotatedAssignment(annotated_assignment) => {
self.infer_annotated_assignment_definition(
annotated_assignment.node(),
definition,
);
}
DefinitionNode::NamedExpression(named_expression) => {
self.infer_named_expression_definition(named_expression.node(), definition);
}
DefinitionNode::Comprehension(comprehension) => {
self.infer_comprehension_definition(
comprehension.node(),
comprehension.is_first(),
definition,
);
}
},
}
}
@@ -396,6 +408,18 @@ impl<'db> TypeInferenceBuilder<'db> {
self.extend(result);
}

fn infer_phi_definition(&mut self, phi_id: ScopedPhiId, definition: Definition<'db>) {
let file_scope_id = self.scope.file_scope_id(self.db);
let use_def = self.index.use_def_map(file_scope_id);
let ty = use_def
.phi_operands(phi_id)
.iter()
.map(|&definition| definition_ty(self.db, definition))
.fold(UnionBuilder::new(self.db), UnionBuilder::add)
.build();
self.types.definitions.insert(definition, ty);
}

fn infer_function_definition_statement(&mut self, function: &ast::StmtFunctionDef) {
self.infer_definition(function);
}
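In other words, the type of a Phi definition is just the union of its operands' types, and an operand that is `None` (a path on which the symbol was never assigned) is mapped through `definition_ty` to `Type::Unbound`. A toy sketch of that folding, using an illustrative `Ty` enum rather than the real `Type`/`UnionBuilder` API:

```rust
#[derive(Clone, PartialEq, Debug)]
enum Ty {
    Unbound,
    IntLiteral(i64),
    Union(Vec<Ty>),
}

/// Union a sequence of types, flattening nested unions and skipping duplicates.
fn union_of(types: impl IntoIterator<Item = Ty>) -> Ty {
    let mut elements: Vec<Ty> = Vec::new();
    for ty in types {
        let flattened = match ty {
            Ty::Union(inner) => inner,
            other => vec![other],
        };
        for t in flattened {
            if !elements.contains(&t) {
                elements.push(t);
            }
        }
    }
    match elements.len() {
        0 => Ty::Unbound,
        1 => elements.pop().unwrap(),
        _ => Ty::Union(elements),
    }
}

fn main() {
    // A phi for `x` after `if flag: x = 1` has two operands: the binding from the
    // `then` branch, and `None` for the path where `x` was never assigned.
    let phi_operands = vec![Some(Ty::IntLiteral(1)), None];
    let ty = union_of(phi_operands.into_iter().map(|op| op.unwrap_or(Ty::Unbound)));
    assert_eq!(ty, Ty::Union(vec![Ty::IntLiteral(1), Ty::Unbound]));
}
```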
@@ -1338,6 +1362,22 @@ impl<'db> TypeInferenceBuilder<'db> {
Type::Unknown
}

fn infer_global_name_reference(&self, symbol: &Symbol) -> Type<'db> {
let file_scope_id = self.scope.file_scope_id(self.db);
// implicit global
let mut ty = if file_scope_id == FileScopeId::global() {
Type::Unbound
} else {
global_symbol_ty_by_name(self.db, self.file, symbol.name())
};
// fallback to builtins
if ty.may_be_unbound(self.db) && Some(self.scope) != builtins_scope(self.db) {
ty = ty
.replace_unbound_with(self.db, builtins_symbol_ty_by_name(self.db, symbol.name()));
}
ty
}

fn infer_name_expression(&mut self, name: &ast::ExprName) -> Type<'db> {
let ast::ExprName { range: _, id, ctx } = name;
@@ -1346,34 +1386,18 @@ impl<'db> TypeInferenceBuilder<'db> {
let file_scope_id = self.scope.file_scope_id(self.db);
let use_def = self.index.use_def_map(file_scope_id);
let use_id = name.scoped_use_id(self.db, self.scope);
let may_be_unbound = use_def.use_may_be_unbound(use_id);

let unbound_ty = if may_be_unbound {
let mut ty = definition_ty(self.db, use_def.definition_for_use(use_id));
if ty.may_be_unbound(self.db) {
let symbols = self.index.symbol_table(file_scope_id);
// SAFETY: the symbol table always creates a symbol for every Name node.
let symbol = symbols.symbol_by_name(id).unwrap();
if !symbol.is_defined() || !self.scope.is_function_like(self.db) {
// implicit global
let mut unbound_ty = if file_scope_id == FileScopeId::global() {
Type::Unbound
} else {
global_symbol_ty_by_name(self.db, self.file, id)
};
// fallback to builtins
if matches!(unbound_ty, Type::Unbound)
&& Some(self.scope) != builtins_scope(self.db)
{
unbound_ty = builtins_symbol_ty_by_name(self.db, id);
}
Some(unbound_ty)
} else {
Some(Type::Unbound)
ty = ty.replace_unbound_with(
self.db,
self.infer_global_name_reference(symbol),
);
}
} else {
None
};

definitions_ty(self.db, use_def.use_definitions(use_id), unbound_ty)
}
ty
}
ExprContext::Store | ExprContext::Del => Type::None,
ExprContext::Invalid => Type::Unknown,
@@ -2163,6 +2187,38 @@ mod tests {
Ok(())
}

#[test]
fn conditionally_global_or_builtin() -> anyhow::Result<()> {
let mut db = setup_db();

db.write_dedented(
"/src/a.py",
"
if flag:
    copyright = 1
def f():
    y = copyright
",
)?;

let file = system_path_to_file(&db, "src/a.py").expect("Expected file to exist.");
let index = semantic_index(&db, file);
let function_scope = index
.child_scopes(FileScopeId::global())
.next()
.unwrap()
.0
.to_scope_id(&db, file);
let y_ty = symbol_ty_by_name(&db, function_scope, "y");

assert_eq!(
y_ty.display(&db).to_string(),
"Literal[1] | Literal[copyright]"
);

Ok(())
}

/// Class name lookups do fall back to globals, but the public type never does.
#[test]
fn unbound_class_local() -> anyhow::Result<()> {
@@ -2386,11 +2442,10 @@ mod tests {
Ok(())
}

fn first_public_def<'db>(db: &'db TestDb, file: File, name: &str) -> Definition<'db> {
fn public_def<'db>(db: &'db TestDb, file: File, name: &str) -> Definition<'db> {
let scope = global_scope(db, file);
*use_def_map(db, scope)
.public_definitions(symbol_table(db, scope).symbol_id_by_name(name).unwrap())
.first()
use_def_map(db, scope)
.public_definition(symbol_table(db, scope).symbol_id_by_name(name).unwrap())
.unwrap()
}

@@ -2533,7 +2588,7 @@ mod tests {
assert_function_query_was_not_run(
&db,
infer_definition_types,
first_public_def(&db, a, "x"),
public_def(&db, a, "x"),
&events,
);

@@ -2569,7 +2624,7 @@ mod tests {
assert_function_query_was_not_run(
&db,
infer_definition_types,
first_public_def(&db, a, "x"),
public_def(&db, a, "x"),
&events,
);
Ok(())
@@ -1 +1 @@
1ace5718deaf3041f8e3d1dc9c9e8a8e830e517f
4ef2d66663fc080fefa379e6ae5fc45d4f8b54eb
@@ -753,11 +753,9 @@ class Constant(expr):
__match_args__ = ("value", "kind")
value: Any # None, str, bytes, bool, int, float, complex, Ellipsis
kind: str | None
if sys.version_info < (3, 14):
# Aliases for value, for backwards compatibility
s: Any
n: int | float | complex

# Aliases for value, for backwards compatibility
s: Any
n: int | float | complex
def __init__(self, value: Any, kind: str | None = None, **kwargs: Unpack[_Attributes]) -> None: ...

class NamedExpr(expr):
@@ -1,12 +1,13 @@
import sys
from abc import abstractmethod
from types import MappingProxyType
from typing import ( # noqa: Y022,Y038
from typing import ( # noqa: Y022,Y038,Y057
AbstractSet as Set,
AsyncGenerator as AsyncGenerator,
AsyncIterable as AsyncIterable,
AsyncIterator as AsyncIterator,
Awaitable as Awaitable,
ByteString as ByteString,
Callable as Callable,
Collection as Collection,
Container as Container,
@@ -58,12 +59,8 @@ __all__ = [
"ValuesView",
"Sequence",
"MutableSequence",
"ByteString",
]
if sys.version_info < (3, 14):
from typing import ByteString as ByteString # noqa: Y057

__all__ += ["ByteString"]

if sys.version_info >= (3, 12):
__all__ += ["Buffer"]
@@ -51,8 +51,8 @@ class _CDataMeta(type):
|
||||
# By default mypy complains about the following two methods, because strictly speaking cls
|
||||
# might not be a Type[_CT]. However this can never actually happen, because the only class that
|
||||
# uses _CDataMeta as its metaclass is _CData. So it's safe to ignore the errors here.
|
||||
def __mul__(cls: type[_CT], other: int) -> type[Array[_CT]]: ... # type: ignore[misc] # pyright: ignore[reportGeneralTypeIssues]
|
||||
def __rmul__(cls: type[_CT], other: int) -> type[Array[_CT]]: ... # type: ignore[misc] # pyright: ignore[reportGeneralTypeIssues]
|
||||
def __mul__(cls: type[_CT], other: int) -> type[Array[_CT]]: ... # type: ignore[misc]
|
||||
def __rmul__(cls: type[_CT], other: int) -> type[Array[_CT]]: ... # type: ignore[misc]
|
||||
|
||||
class _CData(metaclass=_CDataMeta):
|
||||
_b_base_: int
|
||||
|
||||
@@ -357,17 +357,7 @@ class Action(_AttributeHolder):
|
||||
|
||||
if sys.version_info >= (3, 12):
|
||||
class BooleanOptionalAction(Action):
|
||||
if sys.version_info >= (3, 14):
|
||||
def __init__(
|
||||
self,
|
||||
option_strings: Sequence[str],
|
||||
dest: str,
|
||||
default: bool | None = None,
|
||||
required: bool = False,
|
||||
help: str | None = None,
|
||||
deprecated: bool = False,
|
||||
) -> None: ...
|
||||
elif sys.version_info >= (3, 13):
|
||||
if sys.version_info >= (3, 13):
|
||||
@overload
|
||||
def __init__(
|
||||
self,
|
||||
|
||||
@@ -10,28 +10,27 @@ class _ABC(type):
|
||||
if sys.version_info >= (3, 9):
|
||||
def __init__(cls, *args: Unused) -> None: ...
|
||||
|
||||
if sys.version_info < (3, 14):
|
||||
@deprecated("Replaced by ast.Constant; removed in Python 3.14")
|
||||
class Num(Constant, metaclass=_ABC):
|
||||
value: int | float | complex
|
||||
@deprecated("Replaced by ast.Constant; removal scheduled for Python 3.14")
|
||||
class Num(Constant, metaclass=_ABC):
|
||||
value: int | float | complex
|
||||
|
||||
@deprecated("Replaced by ast.Constant; removed in Python 3.14")
|
||||
class Str(Constant, metaclass=_ABC):
|
||||
value: str
|
||||
# Aliases for value, for backwards compatibility
|
||||
s: str
|
||||
@deprecated("Replaced by ast.Constant; removal scheduled for Python 3.14")
|
||||
class Str(Constant, metaclass=_ABC):
|
||||
value: str
|
||||
# Aliases for value, for backwards compatibility
|
||||
s: str
|
||||
|
||||
@deprecated("Replaced by ast.Constant; removed in Python 3.14")
|
||||
class Bytes(Constant, metaclass=_ABC):
|
||||
value: bytes
|
||||
# Aliases for value, for backwards compatibility
|
||||
s: bytes
|
||||
@deprecated("Replaced by ast.Constant; removal scheduled for Python 3.14")
|
||||
class Bytes(Constant, metaclass=_ABC):
|
||||
value: bytes
|
||||
# Aliases for value, for backwards compatibility
|
||||
s: bytes
|
||||
|
||||
@deprecated("Replaced by ast.Constant; removed in Python 3.14")
|
||||
class NameConstant(Constant, metaclass=_ABC): ...
|
||||
@deprecated("Replaced by ast.Constant; removal scheduled for Python 3.14")
|
||||
class NameConstant(Constant, metaclass=_ABC): ...
|
||||
|
||||
@deprecated("Replaced by ast.Constant; removed in Python 3.14")
|
||||
class Ellipsis(Constant, metaclass=_ABC): ...
|
||||
@deprecated("Replaced by ast.Constant; removal scheduled for Python 3.14")
|
||||
class Ellipsis(Constant, metaclass=_ABC): ...
|
||||
|
||||
if sys.version_info >= (3, 9):
|
||||
class slice(AST): ...
|
||||
|
||||
@@ -151,13 +151,13 @@ if sys.version_info >= (3, 10):
|
||||
@overload
|
||||
def gather(*coros_or_futures: _FutureLike[_T], return_exceptions: Literal[False] = False) -> Future[list[_T]]: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def gather(coro_or_future1: _FutureLike[_T1], /, *, return_exceptions: bool) -> Future[tuple[_T1 | BaseException]]: ...
|
||||
def gather(coro_or_future1: _FutureLike[_T1], /, *, return_exceptions: bool) -> Future[tuple[_T1 | BaseException]]: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def gather(
|
||||
def gather( # type: ignore[overload-overlap]
|
||||
coro_or_future1: _FutureLike[_T1], coro_or_future2: _FutureLike[_T2], /, *, return_exceptions: bool
|
||||
) -> Future[tuple[_T1 | BaseException, _T2 | BaseException]]: ...
|
||||
@overload
|
||||
def gather(
|
||||
def gather( # type: ignore[overload-overlap]
|
||||
coro_or_future1: _FutureLike[_T1],
|
||||
coro_or_future2: _FutureLike[_T2],
|
||||
coro_or_future3: _FutureLike[_T3],
|
||||
@@ -166,7 +166,7 @@ if sys.version_info >= (3, 10):
|
||||
return_exceptions: bool,
|
||||
) -> Future[tuple[_T1 | BaseException, _T2 | BaseException, _T3 | BaseException]]: ...
|
||||
@overload
|
||||
def gather(
|
||||
def gather( # type: ignore[overload-overlap]
|
||||
coro_or_future1: _FutureLike[_T1],
|
||||
coro_or_future2: _FutureLike[_T2],
|
||||
coro_or_future3: _FutureLike[_T3],
|
||||
@@ -176,7 +176,7 @@ if sys.version_info >= (3, 10):
|
||||
return_exceptions: bool,
|
||||
) -> Future[tuple[_T1 | BaseException, _T2 | BaseException, _T3 | BaseException, _T4 | BaseException]]: ...
|
||||
@overload
|
||||
def gather(
|
||||
def gather( # type: ignore[overload-overlap]
|
||||
coro_or_future1: _FutureLike[_T1],
|
||||
coro_or_future2: _FutureLike[_T2],
|
||||
coro_or_future3: _FutureLike[_T3],
|
||||
@@ -189,7 +189,7 @@ if sys.version_info >= (3, 10):
|
||||
tuple[_T1 | BaseException, _T2 | BaseException, _T3 | BaseException, _T4 | BaseException, _T5 | BaseException]
|
||||
]: ...
|
||||
@overload
|
||||
def gather(
|
||||
def gather( # type: ignore[overload-overlap]
|
||||
coro_or_future1: _FutureLike[_T1],
|
||||
coro_or_future2: _FutureLike[_T2],
|
||||
coro_or_future3: _FutureLike[_T3],
|
||||
|
||||
@@ -159,7 +159,7 @@ if sys.platform != "win32":
|
||||
|
||||
class _UnixSelectorEventLoop(BaseSelectorEventLoop):
|
||||
if sys.version_info >= (3, 13):
|
||||
async def create_unix_server(
|
||||
async def create_unix_server( # type: ignore[override]
|
||||
self,
|
||||
protocol_factory: _ProtocolFactory,
|
||||
path: StrPath | None = None,
|
||||
|
||||
@@ -1744,7 +1744,7 @@ _SupportsSumNoDefaultT = TypeVar("_SupportsSumNoDefaultT", bound=_SupportsSumWit
|
||||
# without creating many false-positive errors (see #7578).
|
||||
# Instead, we special-case the most common examples of this: bool and literal integers.
|
||||
@overload
|
||||
def sum(iterable: Iterable[bool | _LiteralInteger], /, start: int = 0) -> int: ...
|
||||
def sum(iterable: Iterable[bool | _LiteralInteger], /, start: int = 0) -> int: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def sum(iterable: Iterable[_SupportsSumNoDefaultT], /) -> _SupportsSumNoDefaultT | Literal[0]: ...
|
||||
@overload
|
||||
@@ -1752,8 +1752,9 @@ def sum(iterable: Iterable[_AddableT1], /, start: _AddableT2) -> _AddableT1 | _A
|
||||
|
||||
# The argument to `vars()` has to have a `__dict__` attribute, so the second overload can't be annotated with `object`
|
||||
# (A "SupportsDunderDict" protocol doesn't work)
|
||||
# Use a type: ignore to make complaints about overlapping overloads go away
|
||||
@overload
|
||||
def vars(object: type, /) -> types.MappingProxyType[str, Any]: ...
|
||||
def vars(object: type, /) -> types.MappingProxyType[str, Any]: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def vars(object: Any = ..., /) -> dict[str, Any]: ...
|
||||
|
||||
|
||||
@@ -55,7 +55,6 @@ class AbstractAsyncContextManager(Protocol[_T_co, _ExitT_co]):
|
||||
) -> _ExitT_co: ...
|
||||
|
||||
class ContextDecorator:
|
||||
def _recreate_cm(self) -> Self: ...
|
||||
def __call__(self, func: _F) -> _F: ...
|
||||
|
||||
class _GeneratorContextManager(AbstractContextManager[_T_co, bool | None], ContextDecorator):
|
||||
@@ -81,7 +80,6 @@ if sys.version_info >= (3, 10):
|
||||
_AF = TypeVar("_AF", bound=Callable[..., Awaitable[Any]])
|
||||
|
||||
class AsyncContextDecorator:
|
||||
def _recreate_cm(self) -> Self: ...
|
||||
def __call__(self, func: _AF) -> _AF: ...
|
||||
|
||||
class _AsyncGeneratorContextManager(AbstractAsyncContextManager[_T_co, bool | None], AsyncContextDecorator):
|
||||
|
||||
@@ -1,5 +1,12 @@
|
||||
import sys
|
||||
from ctypes import Structure, Union
|
||||
from _ctypes import RTLD_GLOBAL as RTLD_GLOBAL, RTLD_LOCAL as RTLD_LOCAL, Structure, Union
|
||||
from ctypes import DEFAULT_MODE as DEFAULT_MODE, cdll as cdll, pydll as pydll, pythonapi as pythonapi
|
||||
|
||||
if sys.version_info >= (3, 12):
|
||||
from _ctypes import SIZEOF_TIME_T as SIZEOF_TIME_T
|
||||
|
||||
if sys.platform == "win32":
|
||||
from ctypes import oledll as oledll, windll as windll
|
||||
|
||||
# At runtime, the native endianness is an alias for Structure,
|
||||
# while the other is a subclass with a metaclass added in.
|
||||
|
||||
@@ -5,7 +5,7 @@ from _typeshed import DataclassInstance
|
||||
from builtins import type as Type # alias to avoid name clashes with fields named "type"
|
||||
from collections.abc import Callable, Iterable, Mapping
|
||||
from typing import Any, Generic, Literal, Protocol, TypeVar, overload
|
||||
from typing_extensions import Never, TypeAlias, TypeIs
|
||||
from typing_extensions import TypeAlias, TypeIs
|
||||
|
||||
if sys.version_info >= (3, 9):
|
||||
from types import GenericAlias
|
||||
@@ -213,10 +213,6 @@ else:
|
||||
) -> Any: ...
|
||||
|
||||
def fields(class_or_instance: DataclassInstance | type[DataclassInstance]) -> tuple[Field[Any], ...]: ...
|
||||
|
||||
# HACK: `obj: Never` typing matches if object argument is using `Any` type.
|
||||
@overload
|
||||
def is_dataclass(obj: Never) -> TypeIs[DataclassInstance | type[DataclassInstance]]: ... # type: ignore[narrowed-type-not-subtype] # pyright: ignore[reportGeneralTypeIssues]
|
||||
@overload
|
||||
def is_dataclass(obj: type) -> TypeIs[type[DataclassInstance]]: ...
|
||||
@overload
|
||||
|
||||
@@ -1,26 +1,6 @@
|
||||
from _typeshed import BytesPath, Incomplete, StrOrBytesPath, StrPath, Unused
|
||||
from abc import abstractmethod
|
||||
from collections.abc import Callable, Iterable
|
||||
from distutils.command.bdist import bdist
|
||||
from distutils.command.bdist_dumb import bdist_dumb
|
||||
from distutils.command.bdist_rpm import bdist_rpm
|
||||
from distutils.command.build import build
|
||||
from distutils.command.build_clib import build_clib
|
||||
from distutils.command.build_ext import build_ext
|
||||
from distutils.command.build_py import build_py
|
||||
from distutils.command.build_scripts import build_scripts
|
||||
from distutils.command.check import check
|
||||
from distutils.command.clean import clean
|
||||
from distutils.command.config import config
|
||||
from distutils.command.install import install
|
||||
from distutils.command.install_data import install_data
|
||||
from distutils.command.install_egg_info import install_egg_info
|
||||
from distutils.command.install_headers import install_headers
|
||||
from distutils.command.install_lib import install_lib
|
||||
from distutils.command.install_scripts import install_scripts
|
||||
from distutils.command.register import register
|
||||
from distutils.command.sdist import sdist
|
||||
from distutils.command.upload import upload
|
||||
from distutils.dist import Distribution
|
||||
from distutils.file_util import _BytesPathT, _StrPathT
|
||||
from typing import Any, ClassVar, Literal, TypeVar, overload
|
||||
@@ -48,108 +28,8 @@ class Command:
|
||||
def ensure_dirname(self, option: str) -> None: ...
|
||||
def get_command_name(self) -> str: ...
|
||||
def set_undefined_options(self, src_cmd: str, *option_pairs: tuple[str, str]) -> None: ...
|
||||
# NOTE: This list comes directly from the distutils/command folder. Minus bdist_msi and bdist_wininst.
|
||||
@overload
|
||||
def get_finalized_command(self, command: Literal["bdist"], create: bool | Literal[0, 1] = 1) -> bdist: ...
|
||||
@overload
|
||||
def get_finalized_command(self, command: Literal["bdist_dumb"], create: bool | Literal[0, 1] = 1) -> bdist_dumb: ...
|
||||
@overload
|
||||
def get_finalized_command(self, command: Literal["bdist_rpm"], create: bool | Literal[0, 1] = 1) -> bdist_rpm: ...
|
||||
@overload
|
||||
def get_finalized_command(self, command: Literal["build"], create: bool | Literal[0, 1] = 1) -> build: ...
|
||||
@overload
|
||||
def get_finalized_command(self, command: Literal["build_clib"], create: bool | Literal[0, 1] = 1) -> build_clib: ...
|
||||
@overload
|
||||
def get_finalized_command(self, command: Literal["build_ext"], create: bool | Literal[0, 1] = 1) -> build_ext: ...
|
||||
@overload
|
||||
def get_finalized_command(self, command: Literal["build_py"], create: bool | Literal[0, 1] = 1) -> build_py: ...
|
||||
@overload
|
||||
def get_finalized_command(self, command: Literal["build_scripts"], create: bool | Literal[0, 1] = 1) -> build_scripts: ...
|
||||
@overload
|
||||
def get_finalized_command(self, command: Literal["check"], create: bool | Literal[0, 1] = 1) -> check: ...
|
||||
@overload
|
||||
def get_finalized_command(self, command: Literal["clean"], create: bool | Literal[0, 1] = 1) -> clean: ...
|
||||
@overload
|
||||
def get_finalized_command(self, command: Literal["config"], create: bool | Literal[0, 1] = 1) -> config: ...
|
||||
@overload
|
||||
def get_finalized_command(self, command: Literal["install"], create: bool | Literal[0, 1] = 1) -> install: ...
|
||||
@overload
|
||||
def get_finalized_command(self, command: Literal["install_data"], create: bool | Literal[0, 1] = 1) -> install_data: ...
|
||||
@overload
|
||||
def get_finalized_command(
|
||||
self, command: Literal["install_egg_info"], create: bool | Literal[0, 1] = 1
|
||||
) -> install_egg_info: ...
|
||||
@overload
|
||||
def get_finalized_command(self, command: Literal["install_headers"], create: bool | Literal[0, 1] = 1) -> install_headers: ...
|
||||
@overload
|
||||
def get_finalized_command(self, command: Literal["install_lib"], create: bool | Literal[0, 1] = 1) -> install_lib: ...
|
||||
@overload
|
||||
def get_finalized_command(self, command: Literal["install_scripts"], create: bool | Literal[0, 1] = 1) -> install_scripts: ...
|
||||
@overload
|
||||
def get_finalized_command(self, command: Literal["register"], create: bool | Literal[0, 1] = 1) -> register: ...
|
||||
@overload
|
||||
def get_finalized_command(self, command: Literal["sdist"], create: bool | Literal[0, 1] = 1) -> sdist: ...
|
||||
@overload
|
||||
def get_finalized_command(self, command: Literal["upload"], create: bool | Literal[0, 1] = 1) -> upload: ...
|
||||
@overload
|
||||
def get_finalized_command(self, command: str, create: bool | Literal[0, 1] = 1) -> Command: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["bdist"], reinit_subcommands: bool | Literal[0, 1] = 0) -> bdist: ...
|
||||
@overload
|
||||
def reinitialize_command(
|
||||
self, command: Literal["bdist_dumb"], reinit_subcommands: bool | Literal[0, 1] = 0
|
||||
) -> bdist_dumb: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["bdist_rpm"], reinit_subcommands: bool | Literal[0, 1] = 0) -> bdist_rpm: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["build"], reinit_subcommands: bool | Literal[0, 1] = 0) -> build: ...
|
||||
@overload
|
||||
def reinitialize_command(
|
||||
self, command: Literal["build_clib"], reinit_subcommands: bool | Literal[0, 1] = 0
|
||||
) -> build_clib: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["build_ext"], reinit_subcommands: bool | Literal[0, 1] = 0) -> build_ext: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["build_py"], reinit_subcommands: bool | Literal[0, 1] = 0) -> build_py: ...
|
||||
@overload
|
||||
def reinitialize_command(
|
||||
self, command: Literal["build_scripts"], reinit_subcommands: bool | Literal[0, 1] = 0
|
||||
) -> build_scripts: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["check"], reinit_subcommands: bool | Literal[0, 1] = 0) -> check: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["clean"], reinit_subcommands: bool | Literal[0, 1] = 0) -> clean: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["config"], reinit_subcommands: bool | Literal[0, 1] = 0) -> config: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["install"], reinit_subcommands: bool | Literal[0, 1] = 0) -> install: ...
|
||||
@overload
|
||||
def reinitialize_command(
|
||||
self, command: Literal["install_data"], reinit_subcommands: bool | Literal[0, 1] = 0
|
||||
) -> install_data: ...
|
||||
@overload
|
||||
def reinitialize_command(
|
||||
self, command: Literal["install_egg_info"], reinit_subcommands: bool | Literal[0, 1] = 0
|
||||
) -> install_egg_info: ...
|
||||
@overload
|
||||
def reinitialize_command(
|
||||
self, command: Literal["install_headers"], reinit_subcommands: bool | Literal[0, 1] = 0
|
||||
) -> install_headers: ...
|
||||
@overload
|
||||
def reinitialize_command(
|
||||
self, command: Literal["install_lib"], reinit_subcommands: bool | Literal[0, 1] = 0
|
||||
) -> install_lib: ...
|
||||
@overload
|
||||
def reinitialize_command(
|
||||
self, command: Literal["install_scripts"], reinit_subcommands: bool | Literal[0, 1] = 0
|
||||
) -> install_scripts: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["register"], reinit_subcommands: bool | Literal[0, 1] = 0) -> register: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["sdist"], reinit_subcommands: bool | Literal[0, 1] = 0) -> sdist: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["upload"], reinit_subcommands: bool | Literal[0, 1] = 0) -> upload: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: str, reinit_subcommands: bool | Literal[0, 1] = 0) -> Command: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: _CommandT, reinit_subcommands: bool | Literal[0, 1] = 0) -> _CommandT: ...
|
||||
|
||||
@@ -1,48 +0,0 @@
|
||||
import sys
|
||||
|
||||
from . import (
|
||||
bdist,
|
||||
bdist_dumb,
|
||||
bdist_rpm,
|
||||
build,
|
||||
build_clib,
|
||||
build_ext,
|
||||
build_py,
|
||||
build_scripts,
|
||||
check,
|
||||
clean,
|
||||
install,
|
||||
install_data,
|
||||
install_headers,
|
||||
install_lib,
|
||||
install_scripts,
|
||||
register,
|
||||
sdist,
|
||||
upload,
|
||||
)
|
||||
|
||||
__all__ = [
|
||||
"build",
|
||||
"build_py",
|
||||
"build_ext",
|
||||
"build_clib",
|
||||
"build_scripts",
|
||||
"clean",
|
||||
"install",
|
||||
"install_lib",
|
||||
"install_headers",
|
||||
"install_scripts",
|
||||
"install_data",
|
||||
"sdist",
|
||||
"register",
|
||||
"bdist",
|
||||
"bdist_dumb",
|
||||
"bdist_rpm",
|
||||
"check",
|
||||
"upload",
|
||||
]
|
||||
|
||||
if sys.version_info < (3, 10):
|
||||
from . import bdist_wininst
|
||||
|
||||
__all__ += ["bdist_wininst"]
|
||||
|
||||
@@ -1,26 +1,6 @@
|
||||
from _typeshed import Incomplete, StrOrBytesPath, StrPath, SupportsWrite
|
||||
from collections.abc import Iterable, MutableMapping
|
||||
from distutils.cmd import Command
|
||||
from distutils.command.bdist import bdist
|
||||
from distutils.command.bdist_dumb import bdist_dumb
|
||||
from distutils.command.bdist_rpm import bdist_rpm
|
||||
from distutils.command.build import build
|
||||
from distutils.command.build_clib import build_clib
|
||||
from distutils.command.build_ext import build_ext
|
||||
from distutils.command.build_py import build_py
|
||||
from distutils.command.build_scripts import build_scripts
|
||||
from distutils.command.check import check
|
||||
from distutils.command.clean import clean
|
||||
from distutils.command.config import config
|
||||
from distutils.command.install import install
|
||||
from distutils.command.install_data import install_data
|
||||
from distutils.command.install_egg_info import install_egg_info
|
||||
from distutils.command.install_headers import install_headers
|
||||
from distutils.command.install_lib import install_lib
|
||||
from distutils.command.install_scripts import install_scripts
|
||||
from distutils.command.register import register
|
||||
from distutils.command.sdist import sdist
|
||||
from distutils.command.upload import upload
|
||||
from re import Pattern
|
||||
from typing import IO, ClassVar, Literal, TypeVar, overload
|
||||
from typing_extensions import TypeAlias
|
||||
@@ -83,6 +63,10 @@ class Distribution:
|
||||
def __init__(self, attrs: MutableMapping[str, Incomplete] | None = None) -> None: ...
|
||||
def get_option_dict(self, command: str) -> dict[str, tuple[str, str]]: ...
|
||||
def parse_config_files(self, filenames: Iterable[str] | None = None) -> None: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: str, create: Literal[1, True] = 1) -> Command: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: str, create: Literal[0, False]) -> Command | None: ...
|
||||
global_options: ClassVar[_OptionsList]
|
||||
common_usage: ClassVar[str]
|
||||
display_options: ClassVar[_OptionsList]
|
||||
@@ -124,137 +108,8 @@ class Distribution:
|
||||
def print_commands(self) -> None: ...
|
||||
def get_command_list(self): ...
|
||||
def get_command_packages(self): ...
|
||||
# NOTE: This list comes directly from the distutils/command folder. Minus bdist_msi and bdist_wininst.
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["bdist"], create: Literal[1, True] = 1) -> bdist: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["bdist_dumb"], create: Literal[1, True] = 1) -> bdist_dumb: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["bdist_rpm"], create: Literal[1, True] = 1) -> bdist_rpm: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["build"], create: Literal[1, True] = 1) -> build: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["build_clib"], create: Literal[1, True] = 1) -> build_clib: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["build_ext"], create: Literal[1, True] = 1) -> build_ext: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["build_py"], create: Literal[1, True] = 1) -> build_py: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["build_scripts"], create: Literal[1, True] = 1) -> build_scripts: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["check"], create: Literal[1, True] = 1) -> check: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["clean"], create: Literal[1, True] = 1) -> clean: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["config"], create: Literal[1, True] = 1) -> config: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["install"], create: Literal[1, True] = 1) -> install: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["install_data"], create: Literal[1, True] = 1) -> install_data: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["install_egg_info"], create: Literal[1, True] = 1) -> install_egg_info: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["install_headers"], create: Literal[1, True] = 1) -> install_headers: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["install_lib"], create: Literal[1, True] = 1) -> install_lib: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["install_scripts"], create: Literal[1, True] = 1) -> install_scripts: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["register"], create: Literal[1, True] = 1) -> register: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["sdist"], create: Literal[1, True] = 1) -> sdist: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: Literal["upload"], create: Literal[1, True] = 1) -> upload: ...
|
||||
@overload
|
||||
def get_command_obj(self, command: str, create: Literal[1, True] = 1) -> Command: ...
|
||||
# Not replicating the overloads for "Command | None", user may use "isinstance"
|
||||
@overload
|
||||
def get_command_obj(self, command: str, create: Literal[0, False]) -> Command | None: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["bdist"]) -> type[bdist]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["bdist_dumb"]) -> type[bdist_dumb]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["bdist_rpm"]) -> type[bdist_rpm]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["build"]) -> type[build]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["build_clib"]) -> type[build_clib]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["build_ext"]) -> type[build_ext]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["build_py"]) -> type[build_py]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["build_scripts"]) -> type[build_scripts]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["check"]) -> type[check]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["clean"]) -> type[clean]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["config"]) -> type[config]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["install"]) -> type[install]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["install_data"]) -> type[install_data]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["install_egg_info"]) -> type[install_egg_info]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["install_headers"]) -> type[install_headers]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["install_lib"]) -> type[install_lib]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["install_scripts"]) -> type[install_scripts]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["register"]) -> type[register]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["sdist"]) -> type[sdist]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: Literal["upload"]) -> type[upload]: ...
|
||||
@overload
|
||||
def get_command_class(self, command: str) -> type[Command]: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["bdist"], reinit_subcommands: bool = False) -> bdist: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["bdist_dumb"], reinit_subcommands: bool = False) -> bdist_dumb: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["bdist_rpm"], reinit_subcommands: bool = False) -> bdist_rpm: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["build"], reinit_subcommands: bool = False) -> build: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["build_clib"], reinit_subcommands: bool = False) -> build_clib: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["build_ext"], reinit_subcommands: bool = False) -> build_ext: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["build_py"], reinit_subcommands: bool = False) -> build_py: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["build_scripts"], reinit_subcommands: bool = False) -> build_scripts: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["check"], reinit_subcommands: bool = False) -> check: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["clean"], reinit_subcommands: bool = False) -> clean: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["config"], reinit_subcommands: bool = False) -> config: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["install"], reinit_subcommands: bool = False) -> install: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["install_data"], reinit_subcommands: bool = False) -> install_data: ...
|
||||
@overload
|
||||
def reinitialize_command(
|
||||
self, command: Literal["install_egg_info"], reinit_subcommands: bool = False
|
||||
) -> install_egg_info: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["install_headers"], reinit_subcommands: bool = False) -> install_headers: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["install_lib"], reinit_subcommands: bool = False) -> install_lib: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["install_scripts"], reinit_subcommands: bool = False) -> install_scripts: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["register"], reinit_subcommands: bool = False) -> register: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["sdist"], reinit_subcommands: bool = False) -> sdist: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: Literal["upload"], reinit_subcommands: bool = False) -> upload: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: str, reinit_subcommands: bool = False) -> Command: ...
|
||||
@overload
|
||||
def reinitialize_command(self, command: _CommandT, reinit_subcommands: bool = False) -> _CommandT: ...
|
||||
|
||||
@@ -66,10 +66,7 @@ def mktime_tz(data: _PDTZ) -> int: ...
|
||||
def formatdate(timeval: float | None = None, localtime: bool = False, usegmt: bool = False) -> str: ...
|
||||
def format_datetime(dt: datetime.datetime, usegmt: bool = False) -> str: ...
|
||||
|
||||
if sys.version_info >= (3, 14):
|
||||
def localtime(dt: datetime.datetime | None = None) -> datetime.datetime: ...
|
||||
|
||||
elif sys.version_info >= (3, 12):
|
||||
if sys.version_info >= (3, 12):
|
||||
@overload
|
||||
def localtime(dt: datetime.datetime | None = None) -> datetime.datetime: ...
|
||||
@overload
|
||||
|
||||
@@ -17,24 +17,13 @@ def cmpfiles(
|
||||
) -> tuple[list[AnyStr], list[AnyStr], list[AnyStr]]: ...
|
||||
|
||||
class dircmp(Generic[AnyStr]):
|
||||
if sys.version_info >= (3, 13):
|
||||
def __init__(
|
||||
self,
|
||||
a: GenericPath[AnyStr],
|
||||
b: GenericPath[AnyStr],
|
||||
ignore: Sequence[AnyStr] | None = None,
|
||||
hide: Sequence[AnyStr] | None = None,
|
||||
*,
|
||||
shallow: bool = True,
|
||||
) -> None: ...
|
||||
else:
|
||||
def __init__(
|
||||
self,
|
||||
a: GenericPath[AnyStr],
|
||||
b: GenericPath[AnyStr],
|
||||
ignore: Sequence[AnyStr] | None = None,
|
||||
hide: Sequence[AnyStr] | None = None,
|
||||
) -> None: ...
|
||||
def __init__(
|
||||
self,
|
||||
a: GenericPath[AnyStr],
|
||||
b: GenericPath[AnyStr],
|
||||
ignore: Sequence[AnyStr] | None = None,
|
||||
hide: Sequence[AnyStr] | None = None,
|
||||
) -> None: ...
|
||||
left: AnyStr
|
||||
right: AnyStr
|
||||
hide: Sequence[AnyStr]
|
||||
|
||||
@@ -155,7 +155,7 @@ if sys.version_info >= (3, 10) and sys.version_info < (3, 12):
|
||||
@property
|
||||
def names(self) -> set[str]: ...
|
||||
@overload
|
||||
def select(self) -> Self: ...
|
||||
def select(self) -> Self: ... # type: ignore[misc]
|
||||
@overload
|
||||
def select(
|
||||
self,
|
||||
@@ -277,7 +277,7 @@ if sys.version_info >= (3, 12):
|
||||
|
||||
elif sys.version_info >= (3, 10):
|
||||
@overload
|
||||
def entry_points() -> SelectableGroups: ...
|
||||
def entry_points() -> SelectableGroups: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def entry_points(
|
||||
*, name: str = ..., value: str = ..., group: str = ..., module: str = ..., attr: str = ..., extras: list[str] = ...
|
||||
|
||||
@@ -6,7 +6,7 @@ from ..pytree import Node
|
||||
|
||||
class FixUnicode(fixer_base.BaseFix):
|
||||
BM_compatible: ClassVar[Literal[True]]
|
||||
PATTERN: ClassVar[str]
|
||||
PATTERN: ClassVar[Literal["STRING | 'unicode' | 'unichr'"]] # type: ignore[name-defined] # Name "STRING" is not defined
|
||||
unicode_literals: bool
|
||||
def start_tree(self, tree: Node, filename: StrPath) -> None: ...
|
||||
def transform(self, node, results): ...
|
||||
|
||||
@@ -55,9 +55,10 @@ __all__ = [
|
||||
"setLogRecordFactory",
|
||||
"lastResort",
|
||||
"raiseExceptions",
|
||||
"warn",
|
||||
]
|
||||
|
||||
if sys.version_info < (3, 13):
|
||||
__all__ += ["warn"]
|
||||
if sys.version_info >= (3, 11):
|
||||
__all__ += ["getLevelNamesMapping"]
|
||||
if sys.version_info >= (3, 12):
|
||||
@@ -156,16 +157,17 @@ class Logger(Filterer):
|
||||
stacklevel: int = 1,
|
||||
extra: Mapping[str, object] | None = None,
|
||||
) -> None: ...
|
||||
@deprecated("Deprecated; use warning() instead.")
|
||||
def warn(
|
||||
self,
|
||||
msg: object,
|
||||
*args: object,
|
||||
exc_info: _ExcInfoType = None,
|
||||
stack_info: bool = False,
|
||||
stacklevel: int = 1,
|
||||
extra: Mapping[str, object] | None = None,
|
||||
) -> None: ...
|
||||
if sys.version_info < (3, 13):
|
||||
def warn(
|
||||
self,
|
||||
msg: object,
|
||||
*args: object,
|
||||
exc_info: _ExcInfoType = None,
|
||||
stack_info: bool = False,
|
||||
stacklevel: int = 1,
|
||||
extra: Mapping[str, object] | None = None,
|
||||
) -> None: ...
|
||||
|
||||
def error(
|
||||
self,
|
||||
msg: object,
|
||||
@@ -410,17 +412,18 @@ class LoggerAdapter(Generic[_L]):
|
||||
extra: Mapping[str, object] | None = None,
|
||||
**kwargs: object,
|
||||
) -> None: ...
|
||||
@deprecated("Deprecated; use warning() instead.")
|
||||
def warn(
|
||||
self,
|
||||
msg: object,
|
||||
*args: object,
|
||||
exc_info: _ExcInfoType = None,
|
||||
stack_info: bool = False,
|
||||
stacklevel: int = 1,
|
||||
extra: Mapping[str, object] | None = None,
|
||||
**kwargs: object,
|
||||
) -> None: ...
|
||||
if sys.version_info < (3, 13):
|
||||
def warn(
|
||||
self,
|
||||
msg: object,
|
||||
*args: object,
|
||||
exc_info: _ExcInfoType = None,
|
||||
stack_info: bool = False,
|
||||
stacklevel: int = 1,
|
||||
extra: Mapping[str, object] | None = None,
|
||||
**kwargs: object,
|
||||
) -> None: ...
|
||||
|
||||
def error(
|
||||
self,
|
||||
msg: object,
|
||||
@@ -520,15 +523,17 @@ def warning(
|
||||
stacklevel: int = 1,
|
||||
extra: Mapping[str, object] | None = None,
|
||||
) -> None: ...
|
||||
@deprecated("Deprecated; use warning() instead.")
|
||||
def warn(
|
||||
msg: object,
|
||||
*args: object,
|
||||
exc_info: _ExcInfoType = None,
|
||||
stack_info: bool = False,
|
||||
stacklevel: int = 1,
|
||||
extra: Mapping[str, object] | None = None,
|
||||
) -> None: ...
|
||||
|
||||
if sys.version_info < (3, 13):
|
||||
def warn(
|
||||
msg: object,
|
||||
*args: object,
|
||||
exc_info: _ExcInfoType = None,
|
||||
stack_info: bool = False,
|
||||
stacklevel: int = 1,
|
||||
extra: Mapping[str, object] | None = None,
|
||||
) -> None: ...
|
||||
|
||||
def error(
|
||||
msg: object,
|
||||
*args: object,
|
||||
|
||||
@@ -73,7 +73,7 @@ def copy(obj: _CT) -> _CT: ...
|
||||
@overload
|
||||
def synchronized(obj: _SimpleCData[_T], lock: _LockLike | None = None, ctx: Any | None = None) -> Synchronized[_T]: ...
|
||||
@overload
|
||||
def synchronized(obj: ctypes.Array[c_char], lock: _LockLike | None = None, ctx: Any | None = None) -> SynchronizedString: ...
|
||||
def synchronized(obj: ctypes.Array[c_char], lock: _LockLike | None = None, ctx: Any | None = None) -> SynchronizedString: ... # type: ignore
|
||||
@overload
|
||||
def synchronized(
|
||||
obj: ctypes.Array[_SimpleCData[_T]], lock: _LockLike | None = None, ctx: Any | None = None
|
||||
@@ -115,12 +115,12 @@ class SynchronizedArray(SynchronizedBase[ctypes.Array[_SimpleCData[_T]]], Generi
|
||||
class SynchronizedString(SynchronizedArray[bytes]):
|
||||
@overload # type: ignore[override]
|
||||
def __getitem__(self, i: slice) -> bytes: ...
|
||||
@overload
|
||||
@overload # type: ignore[override]
|
||||
def __getitem__(self, i: int) -> bytes: ...
|
||||
@overload # type: ignore[override]
|
||||
def __setitem__(self, i: slice, value: bytes) -> None: ...
|
||||
@overload
|
||||
def __setitem__(self, i: int, value: bytes) -> None: ...
|
||||
@overload # type: ignore[override]
|
||||
def __setitem__(self, i: int, value: bytes) -> None: ... # type: ignore[override]
|
||||
def __getslice__(self, start: int, stop: int) -> bytes: ... # type: ignore[override]
|
||||
def __setslice__(self, start: int, stop: int, values: bytes) -> None: ... # type: ignore[override]
|
||||
|
||||
|
||||
@@ -159,20 +159,6 @@ class Path(PurePath):
|
||||
def lchmod(self, mode: int) -> None: ...
|
||||
def lstat(self) -> stat_result: ...
|
||||
def mkdir(self, mode: int = 0o777, parents: bool = False, exist_ok: bool = False) -> None: ...
|
||||
|
||||
if sys.version_info >= (3, 14):
|
||||
def copy(self, target: StrPath, *, follow_symlinks: bool = True, preserve_metadata: bool = False) -> None: ...
|
||||
def copytree(
|
||||
self,
|
||||
target: StrPath,
|
||||
*,
|
||||
follow_symlinks: bool = True,
|
||||
preserve_metadata: bool = False,
|
||||
dirs_exist_ok: bool = False,
|
||||
ignore: Callable[[Self], bool] | None = None,
|
||||
on_error: Callable[[OSError], object] | None = None,
|
||||
) -> None: ...
|
||||
|
||||
# Adapted from builtins.open
|
||||
# Text mode: always returns a TextIOWrapper
|
||||
# The Traversable .open in stdlib/importlib/abc.pyi should be kept in sync with this.
|
||||
@@ -246,18 +232,10 @@ class Path(PurePath):
|
||||
if sys.version_info >= (3, 9):
|
||||
def readlink(self) -> Self: ...
|
||||
|
||||
if sys.version_info >= (3, 10):
|
||||
def rename(self, target: StrPath) -> Self: ...
|
||||
def replace(self, target: StrPath) -> Self: ...
|
||||
else:
|
||||
def rename(self, target: str | PurePath) -> Self: ...
|
||||
def replace(self, target: str | PurePath) -> Self: ...
|
||||
|
||||
def rename(self, target: str | PurePath) -> Self: ...
|
||||
def replace(self, target: str | PurePath) -> Self: ...
|
||||
def resolve(self, strict: bool = False) -> Self: ...
|
||||
def rmdir(self) -> None: ...
|
||||
if sys.version_info >= (3, 14):
|
||||
def delete(self, ignore_errors: bool = False, on_error: Callable[[OSError], object] | None = None) -> None: ...
|
||||
|
||||
def symlink_to(self, target: StrOrBytesPath, target_is_directory: bool = False) -> None: ...
|
||||
if sys.version_info >= (3, 10):
|
||||
def hardlink_to(self, target: StrOrBytesPath) -> None: ...
|
||||
@@ -288,9 +266,6 @@ class Path(PurePath):
|
||||
self, top_down: bool = ..., on_error: Callable[[OSError], object] | None = ..., follow_symlinks: bool = ...
|
||||
) -> Iterator[tuple[Self, list[str], list[str]]]: ...
|
||||
|
||||
if sys.version_info >= (3, 14):
|
||||
def rmtree(self, ignore_errors: bool = False, on_error: Callable[[OSError], object] | None = None) -> None: ...
|
||||
|
||||
class PosixPath(Path, PurePosixPath): ...
|
||||
class WindowsPath(Path, PureWindowsPath): ...
|
||||
|
||||
|
||||
@@ -84,7 +84,7 @@ class Pdb(Bdb, Cmd):
|
||||
def _runscript(self, filename: str) -> None: ...
|
||||
|
||||
if sys.version_info >= (3, 13):
|
||||
def completedefault(self, text: str, line: str, begidx: int, endidx: int) -> list[str]: ...
|
||||
def completedefault(self, text: str, line: str, begidx: int, endidx: int) -> list[str]: ... # type: ignore[override]
|
||||
|
||||
def do_commands(self, arg: str) -> bool | None: ...
|
||||
def do_break(self, arg: str, temporary: bool = ...) -> bool | None: ...
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
import sys
|
||||
from collections.abc import Callable, Iterable
|
||||
from typing import Final
|
||||
from typing_extensions import TypeAlias, deprecated
|
||||
from typing_extensions import TypeAlias
|
||||
|
||||
if sys.platform != "win32":
|
||||
__all__ = ["openpty", "fork", "spawn"]
|
||||
@@ -13,12 +13,7 @@ if sys.platform != "win32":
|
||||
|
||||
CHILD: Final = 0
|
||||
def openpty() -> tuple[int, int]: ...
|
||||
|
||||
if sys.version_info < (3, 14):
|
||||
@deprecated("Deprecated in 3.12, to be removed in 3.14; use openpty() instead")
|
||||
def master_open() -> tuple[int, str]: ...
|
||||
@deprecated("Deprecated in 3.12, to be removed in 3.14; use openpty() instead")
|
||||
def slave_open(tty_name: str) -> int: ...
|
||||
|
||||
def master_open() -> tuple[int, str]: ... # deprecated, use openpty()
|
||||
def slave_open(tty_name: str) -> int: ... # deprecated, use openpty()
|
||||
def fork() -> tuple[int, int]: ...
|
||||
def spawn(argv: str | Iterable[str], master_read: _Reader = ..., stdin_read: _Reader = ...) -> int: ...
|
||||
|
||||
@@ -74,7 +74,7 @@ class Match(Generic[AnyStr]):
|
||||
@overload
|
||||
def expand(self: Match[str], template: str) -> str: ...
|
||||
@overload
|
||||
def expand(self: Match[bytes], template: ReadableBuffer) -> bytes: ...
|
||||
def expand(self: Match[bytes], template: ReadableBuffer) -> bytes: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def expand(self, template: AnyStr) -> AnyStr: ...
|
||||
# group() returns "AnyStr" or "AnyStr | None", depending on the pattern.
|
||||
@@ -124,21 +124,19 @@ class Pattern(Generic[AnyStr]):
|
||||
@overload
|
||||
def search(self: Pattern[str], string: str, pos: int = 0, endpos: int = sys.maxsize) -> Match[str] | None: ...
|
||||
@overload
|
||||
def search(self: Pattern[bytes], string: ReadableBuffer, pos: int = 0, endpos: int = sys.maxsize) -> Match[bytes] | None: ...
|
||||
def search(self: Pattern[bytes], string: ReadableBuffer, pos: int = 0, endpos: int = sys.maxsize) -> Match[bytes] | None: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def search(self, string: AnyStr, pos: int = 0, endpos: int = sys.maxsize) -> Match[AnyStr] | None: ...
|
||||
@overload
|
||||
def match(self: Pattern[str], string: str, pos: int = 0, endpos: int = sys.maxsize) -> Match[str] | None: ...
|
||||
@overload
|
||||
def match(self: Pattern[bytes], string: ReadableBuffer, pos: int = 0, endpos: int = sys.maxsize) -> Match[bytes] | None: ...
|
||||
def match(self: Pattern[bytes], string: ReadableBuffer, pos: int = 0, endpos: int = sys.maxsize) -> Match[bytes] | None: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def match(self, string: AnyStr, pos: int = 0, endpos: int = sys.maxsize) -> Match[AnyStr] | None: ...
|
||||
@overload
|
||||
def fullmatch(self: Pattern[str], string: str, pos: int = 0, endpos: int = sys.maxsize) -> Match[str] | None: ...
|
||||
@overload
|
||||
def fullmatch(
|
||||
self: Pattern[bytes], string: ReadableBuffer, pos: int = 0, endpos: int = sys.maxsize
|
||||
) -> Match[bytes] | None: ...
|
||||
def fullmatch(self: Pattern[bytes], string: ReadableBuffer, pos: int = 0, endpos: int = sys.maxsize) -> Match[bytes] | None: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def fullmatch(self, string: AnyStr, pos: int = 0, endpos: int = sys.maxsize) -> Match[AnyStr] | None: ...
|
||||
@overload
|
||||
@@ -157,15 +155,13 @@ class Pattern(Generic[AnyStr]):
|
||||
@overload
|
||||
def finditer(self: Pattern[str], string: str, pos: int = 0, endpos: int = sys.maxsize) -> Iterator[Match[str]]: ...
|
||||
@overload
|
||||
def finditer(
|
||||
self: Pattern[bytes], string: ReadableBuffer, pos: int = 0, endpos: int = sys.maxsize
|
||||
) -> Iterator[Match[bytes]]: ...
|
||||
def finditer(self: Pattern[bytes], string: ReadableBuffer, pos: int = 0, endpos: int = sys.maxsize) -> Iterator[Match[bytes]]: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def finditer(self, string: AnyStr, pos: int = 0, endpos: int = sys.maxsize) -> Iterator[Match[AnyStr]]: ...
|
||||
@overload
|
||||
def sub(self: Pattern[str], repl: str | Callable[[Match[str]], str], string: str, count: int = 0) -> str: ...
|
||||
@overload
|
||||
def sub(
|
||||
def sub( # type: ignore[overload-overlap]
|
||||
self: Pattern[bytes],
|
||||
repl: ReadableBuffer | Callable[[Match[bytes]], ReadableBuffer],
|
||||
string: ReadableBuffer,
|
||||
@@ -176,7 +172,7 @@ class Pattern(Generic[AnyStr]):
|
||||
@overload
|
||||
def subn(self: Pattern[str], repl: str | Callable[[Match[str]], str], string: str, count: int = 0) -> tuple[str, int]: ...
|
||||
@overload
|
||||
def subn(
|
||||
def subn( # type: ignore[overload-overlap]
|
||||
self: Pattern[bytes],
|
||||
repl: ReadableBuffer | Callable[[Match[bytes]], ReadableBuffer],
|
||||
string: ReadableBuffer,
|
||||
|
||||
@@ -29,10 +29,7 @@ def DateFromTicks(ticks: float) -> Date: ...
|
||||
def TimeFromTicks(ticks: float) -> Time: ...
|
||||
def TimestampFromTicks(ticks: float) -> Timestamp: ...
|
||||
|
||||
if sys.version_info < (3, 14):
|
||||
# Deprecated in 3.12, removed in 3.14.
|
||||
version_info: tuple[int, int, int]
|
||||
|
||||
version_info: tuple[int, int, int]
|
||||
sqlite_version_info: tuple[int, int, int]
|
||||
Binary = memoryview
|
||||
|
||||
@@ -93,10 +90,7 @@ SQLITE_UPDATE: Final[int]
|
||||
adapters: dict[tuple[type[Any], type[Any]], _Adapter[Any]]
|
||||
converters: dict[str, _Converter]
|
||||
sqlite_version: str
|
||||
|
||||
if sys.version_info < (3, 14):
|
||||
# Deprecated in 3.12, removed in 3.14.
|
||||
version: str
|
||||
version: str
|
||||
|
||||
if sys.version_info >= (3, 11):
|
||||
SQLITE_ABORT: Final[int]
|
||||
|
||||
@@ -2,7 +2,6 @@ import sys
|
||||
from _collections_abc import dict_keys
|
||||
from collections.abc import Sequence
|
||||
from typing import Any
|
||||
from typing_extensions import deprecated
|
||||
|
||||
__all__ = ["symtable", "SymbolTable", "Class", "Function", "Symbol"]
|
||||
|
||||
@@ -52,9 +51,7 @@ class Function(SymbolTable):
|
||||
def get_nonlocals(self) -> tuple[str, ...]: ...
|
||||
|
||||
class Class(SymbolTable):
|
||||
if sys.version_info < (3, 16):
|
||||
@deprecated("deprecated in Python 3.14, will be removed in Python 3.16")
|
||||
def get_methods(self) -> tuple[str, ...]: ...
|
||||
def get_methods(self) -> tuple[str, ...]: ...
|
||||
|
||||
class Symbol:
|
||||
def __init__(
|
||||
|
||||
@@ -423,7 +423,7 @@ class TarInfo:
|
||||
name: str
|
||||
path: str
|
||||
size: int
|
||||
mtime: int | float
|
||||
mtime: int
|
||||
chksum: int
|
||||
devmajor: int
|
||||
devminor: int
|
||||
|
||||
@@ -463,7 +463,7 @@ class TemporaryDirectory(Generic[AnyStr]):
|
||||
|
||||
# The overloads overlap, but they should still work fine.
|
||||
@overload
|
||||
def mkstemp(
|
||||
def mkstemp( # type: ignore[overload-overlap]
|
||||
suffix: str | None = None, prefix: str | None = None, dir: StrPath | None = None, text: bool = False
|
||||
) -> tuple[int, str]: ...
|
||||
@overload
|
||||
@@ -473,7 +473,7 @@ def mkstemp(
|
||||
|
||||
# The overloads overlap, but they should still work fine.
|
||||
@overload
|
||||
def mkdtemp(suffix: str | None = None, prefix: str | None = None, dir: StrPath | None = None) -> str: ...
|
||||
def mkdtemp(suffix: str | None = None, prefix: str | None = None, dir: StrPath | None = None) -> str: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def mkdtemp(suffix: bytes | None = None, prefix: bytes | None = None, dir: BytesPath | None = None) -> bytes: ...
|
||||
def mktemp(suffix: str = "", prefix: str = "tmp", dir: StrPath | None = None) -> str: ...
|
||||
|
||||
@@ -2148,12 +2148,11 @@ class Listbox(Widget, XView, YView):
|
||||
selectborderwidth: _ScreenUnits = 0,
|
||||
selectforeground: str = ...,
|
||||
# from listbox man page: "The value of the [selectmode] option may be
|
||||
# arbitrary, but the default bindings expect it to be either single,
|
||||
# browse, multiple, or extended"
|
||||
# arbitrary, but the default bindings expect it to be ..."
|
||||
#
|
||||
# I have never seen anyone setting this to something else than what
|
||||
# "the default bindings expect", but let's support it anyway.
|
||||
selectmode: str | Literal["single", "browse", "multiple", "extended"] = "browse", # noqa: Y051
|
||||
selectmode: str = "browse",
|
||||
setgrid: bool = False,
|
||||
state: Literal["normal", "disabled"] = "normal",
|
||||
takefocus: _TakeFocusValue = "",
|
||||
@@ -2188,7 +2187,7 @@ class Listbox(Widget, XView, YView):
|
||||
selectbackground: str = ...,
|
||||
selectborderwidth: _ScreenUnits = ...,
|
||||
selectforeground: str = ...,
|
||||
selectmode: str | Literal["single", "browse", "multiple", "extended"] = ..., # noqa: Y051
|
||||
selectmode: str = ...,
|
||||
setgrid: bool = ...,
|
||||
state: Literal["normal", "disabled"] = ...,
|
||||
takefocus: _TakeFocusValue = ...,
|
||||
@@ -2908,9 +2907,6 @@ class Scrollbar(Widget):
|
||||
def set(self, first: float | str, last: float | str) -> None: ...
|
||||
|
||||
_TextIndex: TypeAlias = _tkinter.Tcl_Obj | str | float | Misc
|
||||
_WhatToCount: TypeAlias = Literal[
|
||||
"chars", "displaychars", "displayindices", "displaylines", "indices", "lines", "xpixels", "ypixels"
|
||||
]
|
||||
|
||||
class Text(Widget, XView, YView):
|
||||
def __init__(
|
||||
@@ -3025,27 +3021,7 @@ class Text(Widget, XView, YView):
|
||||
config = configure
|
||||
def bbox(self, index: _TextIndex) -> tuple[int, int, int, int] | None: ... # type: ignore[override]
|
||||
def compare(self, index1: _TextIndex, op: Literal["<", "<=", "==", ">=", ">", "!="], index2: _TextIndex) -> bool: ...
|
||||
@overload
|
||||
def count(self, index1: _TextIndex, index2: _TextIndex) -> tuple[int] | None: ...
|
||||
@overload
|
||||
def count(self, index1: _TextIndex, index2: _TextIndex, arg: _WhatToCount | Literal["update"], /) -> tuple[int] | None: ...
|
||||
@overload
|
||||
def count(self, index1: _TextIndex, index2: _TextIndex, arg1: Literal["update"], arg2: _WhatToCount, /) -> int | None: ...
|
||||
@overload
|
||||
def count(self, index1: _TextIndex, index2: _TextIndex, arg1: _WhatToCount, arg2: Literal["update"], /) -> int | None: ...
|
||||
@overload
|
||||
def count(self, index1: _TextIndex, index2: _TextIndex, arg1: _WhatToCount, arg2: _WhatToCount, /) -> tuple[int, int]: ...
|
||||
@overload
|
||||
def count(
|
||||
self,
|
||||
index1: _TextIndex,
|
||||
index2: _TextIndex,
|
||||
arg1: _WhatToCount | Literal["update"],
|
||||
arg2: _WhatToCount | Literal["update"],
|
||||
arg3: _WhatToCount | Literal["update"],
|
||||
/,
|
||||
*args: _WhatToCount | Literal["update"],
|
||||
) -> tuple[int, ...]: ...
|
||||
def count(self, index1, index2, *args): ... # TODO
|
||||
@overload
|
||||
def debug(self, boolean: None = None) -> bool: ...
|
||||
@overload
|
||||
@@ -3588,7 +3564,7 @@ class Spinbox(Widget, XView):
|
||||
def scan_dragto(self, x): ...
|
||||
def selection(self, *args) -> tuple[int, ...]: ...
|
||||
def selection_adjust(self, index): ...
|
||||
def selection_clear(self): ... # type: ignore[override]
|
||||
def selection_clear(self): ...
|
||||
def selection_element(self, element: Incomplete | None = None): ...
|
||||
def selection_from(self, index: int) -> None: ...
|
||||
def selection_present(self) -> None: ...
|
||||
|
||||
@@ -1040,7 +1040,7 @@ class Treeview(Widget, tkinter.XView, tkinter.YView):
|
||||
@overload
|
||||
def heading(self, column: str | int, option: str) -> Any: ...
|
||||
@overload
|
||||
def heading(self, column: str | int, option: None = None) -> _TreeviewHeaderDict: ...
|
||||
def heading(self, column: str | int, option: None = None) -> _TreeviewHeaderDict: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def heading(
|
||||
self,
|
||||
@@ -1052,8 +1052,7 @@ class Treeview(Widget, tkinter.XView, tkinter.YView):
|
||||
anchor: tkinter._Anchor = ...,
|
||||
command: str | Callable[[], object] = ...,
|
||||
) -> None: ...
|
||||
# Internal Method. Leave untyped:
|
||||
def identify(self, component, x, y): ... # type: ignore[override]
|
||||
def identify(self, component, x, y): ... # Internal Method. Leave untyped
|
||||
def identify_row(self, y: int) -> str: ...
|
||||
def identify_column(self, x: int) -> str: ...
|
||||
def identify_region(self, x: int, y: int) -> Literal["heading", "separator", "tree", "cell", "nothing"]: ...
|
||||
@@ -1085,7 +1084,7 @@ class Treeview(Widget, tkinter.XView, tkinter.YView):
|
||||
@overload
|
||||
def item(self, item: str | int, option: str) -> Any: ...
|
||||
@overload
|
||||
def item(self, item: str | int, option: None = None) -> _TreeviewItemDict: ...
|
||||
def item(self, item: str | int, option: None = None) -> _TreeviewItemDict: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def item(
|
||||
self,
|
||||
|
||||
@@ -338,7 +338,7 @@ class TPen:
|
||||
def isvisible(self) -> bool: ...
|
||||
# Note: signatures 1 and 2 overlap unsafely when no arguments are provided
|
||||
@overload
|
||||
def pen(self) -> _PenState: ...
|
||||
def pen(self) -> _PenState: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def pen(
|
||||
self,
|
||||
@@ -384,7 +384,7 @@ class RawTurtle(TPen, TNavigator):
|
||||
def shape(self, name: str) -> None: ...
|
||||
# Unsafely overlaps when no arguments are provided
|
||||
@overload
|
||||
def shapesize(self) -> tuple[float, float, float]: ...
|
||||
def shapesize(self) -> tuple[float, float, float]: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def shapesize(
|
||||
self, stretch_wid: float | None = None, stretch_len: float | None = None, outline: float | None = None
|
||||
@@ -395,7 +395,7 @@ class RawTurtle(TPen, TNavigator):
|
||||
def shearfactor(self, shear: float) -> None: ...
|
||||
# Unsafely overlaps when no arguments are provided
|
||||
@overload
|
||||
def shapetransform(self) -> tuple[float, float, float, float]: ...
|
||||
def shapetransform(self) -> tuple[float, float, float, float]: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def shapetransform(
|
||||
self, t11: float | None = None, t12: float | None = None, t21: float | None = None, t22: float | None = None
|
||||
@@ -622,7 +622,7 @@ def isvisible() -> bool: ...
|
||||
|
||||
# Note: signatures 1 and 2 overlap unsafely when no arguments are provided
|
||||
@overload
|
||||
def pen() -> _PenState: ...
|
||||
def pen() -> _PenState: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def pen(
|
||||
pen: _PenState | None = None,
|
||||
@@ -661,7 +661,7 @@ if sys.version_info >= (3, 12):
|
||||
|
||||
# Unsafely overlaps when no arguments are provided
|
||||
@overload
|
||||
def shapesize() -> tuple[float, float, float]: ...
|
||||
def shapesize() -> tuple[float, float, float]: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def shapesize(stretch_wid: float | None = None, stretch_len: float | None = None, outline: float | None = None) -> None: ...
|
||||
@overload
|
||||
@@ -671,7 +671,7 @@ def shearfactor(shear: float) -> None: ...
|
||||
|
||||
# Unsafely overlaps when no arguments are provided
|
||||
@overload
|
||||
def shapetransform() -> tuple[float, float, float, float]: ...
|
||||
def shapetransform() -> tuple[float, float, float, float]: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def shapetransform(
|
||||
t11: float | None = None, t12: float | None = None, t21: float | None = None, t22: float | None = None
|
||||
|
||||
@@ -305,9 +305,9 @@ class MappingProxyType(Mapping[_KT, _VT_co]):
|
||||
def values(self) -> ValuesView[_VT_co]: ...
|
||||
def items(self) -> ItemsView[_KT, _VT_co]: ...
|
||||
@overload
|
||||
def get(self, key: _KT, /) -> _VT_co | None: ...
|
||||
def get(self, key: _KT, /) -> _VT_co | None: ... # type: ignore[override]
|
||||
@overload
|
||||
def get(self, key: _KT, default: _VT_co | _T2, /) -> _VT_co | _T2: ...
|
||||
def get(self, key: _KT, default: _VT_co | _T2, /) -> _VT_co | _T2: ... # type: ignore[override]
|
||||
if sys.version_info >= (3, 9):
|
||||
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
|
||||
def __reversed__(self) -> Iterator[_KT]: ...
|
||||
@@ -583,7 +583,7 @@ _P = ParamSpec("_P")
|
||||
|
||||
# it's not really an Awaitable, but can be used in an await expression. Real type: Generator & Awaitable
|
||||
@overload
|
||||
def coroutine(func: Callable[_P, Generator[Any, Any, _R]]) -> Callable[_P, Awaitable[_R]]: ...
|
||||
def coroutine(func: Callable[_P, Generator[Any, Any, _R]]) -> Callable[_P, Awaitable[_R]]: ... # type: ignore[overload-overlap]
|
||||
@overload
|
||||
def coroutine(func: _Fn) -> _Fn: ...
|
||||
|
||||
|
||||
@@ -846,8 +846,7 @@ class TextIO(IO[str]):
|
||||
@abstractmethod
|
||||
def __enter__(self) -> TextIO: ...
|
||||
|
||||
if sys.version_info < (3, 14):
|
||||
ByteString: typing_extensions.TypeAlias = bytes | bytearray | memoryview
|
||||
ByteString: typing_extensions.TypeAlias = bytes | bytearray | memoryview
|
||||
|
||||
# Functions
|
||||
|
||||
|
||||
@@ -299,7 +299,7 @@ class _patcher:
|
||||
# Ideally we'd be able to add an overload for it so that the return type is _patch[MagicMock],
|
||||
# but that's impossible with the current type system.
|
||||
@overload
|
||||
def __call__(
|
||||
def __call__( # type: ignore[overload-overlap]
|
||||
self,
|
||||
target: str,
|
||||
new: _T,
|
||||
|
||||
@@ -198,13 +198,13 @@ else:
|
||||
|
||||
# Requires an iterable of length 6
|
||||
@overload
|
||||
def urlunparse(components: Iterable[None]) -> Literal[b""]: ... # type: ignore[overload-overlap]
|
||||
def urlunparse(components: Iterable[None]) -> Literal[b""]: ...
|
||||
@overload
|
||||
def urlunparse(components: Iterable[AnyStr | None]) -> AnyStr: ...
|
||||
|
||||
# Requires an iterable of length 5
|
||||
@overload
|
||||
def urlunsplit(components: Iterable[None]) -> Literal[b""]: ... # type: ignore[overload-overlap]
|
||||
def urlunsplit(components: Iterable[None]) -> Literal[b""]: ...
|
||||
@overload
|
||||
def urlunsplit(components: Iterable[AnyStr | None]) -> AnyStr: ...
|
||||
def unwrap(url: str) -> str: ...
|
||||
|
||||
@@ -79,7 +79,6 @@ else:
|
||||
def pathname2url(pathname: str) -> str: ...
|
||||
|
||||
def getproxies() -> dict[str, str]: ...
|
||||
def getproxies_environment() -> dict[str, str]: ...
|
||||
def parse_http_list(s: str) -> list[str]: ...
|
||||
def parse_keqv_list(l: list[str]) -> dict[str, str]: ...
|
||||
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
from typing import Any, Final
|
||||
from typing import Any
|
||||
|
||||
from .domreg import getDOMImplementation as getDOMImplementation, registerDOMImplementation as registerDOMImplementation
|
||||
|
||||
@@ -17,22 +17,22 @@ class Node:
|
||||
NOTATION_NODE: int
|
||||
|
||||
# ExceptionCode
|
||||
INDEX_SIZE_ERR: Final[int]
|
||||
DOMSTRING_SIZE_ERR: Final[int]
|
||||
HIERARCHY_REQUEST_ERR: Final[int]
|
||||
WRONG_DOCUMENT_ERR: Final[int]
|
||||
INVALID_CHARACTER_ERR: Final[int]
|
||||
NO_DATA_ALLOWED_ERR: Final[int]
|
||||
NO_MODIFICATION_ALLOWED_ERR: Final[int]
|
||||
NOT_FOUND_ERR: Final[int]
|
||||
NOT_SUPPORTED_ERR: Final[int]
|
||||
INUSE_ATTRIBUTE_ERR: Final[int]
|
||||
INVALID_STATE_ERR: Final[int]
|
||||
SYNTAX_ERR: Final[int]
|
||||
INVALID_MODIFICATION_ERR: Final[int]
|
||||
NAMESPACE_ERR: Final[int]
|
||||
INVALID_ACCESS_ERR: Final[int]
|
||||
VALIDATION_ERR: Final[int]
|
||||
INDEX_SIZE_ERR: int
|
||||
DOMSTRING_SIZE_ERR: int
|
||||
HIERARCHY_REQUEST_ERR: int
|
||||
WRONG_DOCUMENT_ERR: int
|
||||
INVALID_CHARACTER_ERR: int
|
||||
NO_DATA_ALLOWED_ERR: int
|
||||
NO_MODIFICATION_ALLOWED_ERR: int
|
||||
NOT_FOUND_ERR: int
|
||||
NOT_SUPPORTED_ERR: int
|
||||
INUSE_ATTRIBUTE_ERR: int
|
||||
INVALID_STATE_ERR: int
|
||||
SYNTAX_ERR: int
|
||||
INVALID_MODIFICATION_ERR: int
|
||||
NAMESPACE_ERR: int
|
||||
INVALID_ACCESS_ERR: int
|
||||
VALIDATION_ERR: int
|
||||
|
||||
class DOMException(Exception):
|
||||
code: int
|
||||
@@ -62,8 +62,8 @@ class UserDataHandler:
|
||||
NODE_DELETED: int
|
||||
NODE_RENAMED: int
|
||||
|
||||
XML_NAMESPACE: Final[str]
|
||||
XMLNS_NAMESPACE: Final[str]
|
||||
XHTML_NAMESPACE: Final[str]
|
||||
EMPTY_NAMESPACE: Final[None]
|
||||
EMPTY_PREFIX: Final[None]
|
||||
XML_NAMESPACE: str
|
||||
XMLNS_NAMESPACE: str
|
||||
XHTML_NAMESPACE: str
|
||||
EMPTY_NAMESPACE: None
|
||||
EMPTY_PREFIX: None
|
||||
|
||||
@@ -1,15 +1,14 @@
|
||||
import sys
|
||||
from _typeshed import FileDescriptorOrPath
|
||||
from collections.abc import Callable
|
||||
from typing import Final
|
||||
from xml.etree.ElementTree import Element
|
||||
|
||||
XINCLUDE: Final[str]
|
||||
XINCLUDE_INCLUDE: Final[str]
|
||||
XINCLUDE_FALLBACK: Final[str]
|
||||
XINCLUDE: str
|
||||
XINCLUDE_INCLUDE: str
|
||||
XINCLUDE_FALLBACK: str
|
||||
|
||||
if sys.version_info >= (3, 9):
|
||||
DEFAULT_MAX_INCLUSION_DEPTH: Final = 6
|
||||
DEFAULT_MAX_INCLUSION_DEPTH: int
|
||||
|
||||
class FatalIncludeError(SyntaxError): ...
|
||||
|
||||
|
||||
@@ -2,7 +2,7 @@ import sys
|
||||
from _collections_abc import dict_keys
|
||||
from _typeshed import FileDescriptorOrPath, ReadableBuffer, SupportsRead, SupportsWrite
|
||||
from collections.abc import Callable, Generator, ItemsView, Iterable, Iterator, Mapping, Sequence
|
||||
from typing import Any, Final, Literal, SupportsIndex, TypeVar, overload
|
||||
from typing import Any, Literal, SupportsIndex, TypeVar, overload
|
||||
from typing_extensions import TypeAlias, TypeGuard, deprecated
|
||||
|
||||
__all__ = [
|
||||
@@ -41,7 +41,7 @@ _FileRead: TypeAlias = FileDescriptorOrPath | SupportsRead[bytes] | SupportsRead
|
||||
_FileWriteC14N: TypeAlias = FileDescriptorOrPath | SupportsWrite[bytes]
|
||||
_FileWrite: TypeAlias = _FileWriteC14N | SupportsWrite[str]
|
||||
|
||||
VERSION: Final[str]
|
||||
VERSION: str
|
||||
|
||||
class ParseError(SyntaxError):
|
||||
code: int
|
||||
|
||||
@@ -94,20 +94,6 @@ class ZipExtFile(io.BufferedIOBase):
|
||||
class _Writer(Protocol):
|
||||
def write(self, s: str, /) -> object: ...
|
||||
|
||||
class _ZipReadable(Protocol):
|
||||
def seek(self, offset: int, whence: int = 0, /) -> int: ...
|
||||
def read(self, n: int = -1, /) -> bytes: ...
|
||||
|
||||
class _ZipTellable(Protocol):
|
||||
def tell(self) -> int: ...
|
||||
|
||||
class _ZipReadableTellable(_ZipReadable, _ZipTellable, Protocol): ...
|
||||
|
||||
class _ZipWritable(Protocol):
|
||||
def flush(self) -> None: ...
|
||||
def close(self) -> None: ...
|
||||
def write(self, b: bytes, /) -> int: ...
|
||||
|
||||
class ZipFile:
|
||||
filename: str | None
|
||||
debug: int
|
||||
@@ -120,50 +106,24 @@ class ZipFile:
|
||||
compresslevel: int | None # undocumented
|
||||
mode: _ZipFileMode # undocumented
|
||||
pwd: bytes | None # undocumented
|
||||
# metadata_encoding is new in 3.11
|
||||
if sys.version_info >= (3, 11):
|
||||
@overload
|
||||
def __init__(
|
||||
self,
|
||||
file: StrPath | IO[bytes],
|
||||
mode: _ZipFileMode = "r",
|
||||
compression: int = 0,
|
||||
allowZip64: bool = True,
|
||||
compresslevel: int | None = None,
|
||||
*,
|
||||
strict_timestamps: bool = True,
|
||||
metadata_encoding: str | None = None,
|
||||
) -> None: ...
|
||||
# metadata_encoding is only allowed for read mode
|
||||
@overload
|
||||
def __init__(
|
||||
self,
|
||||
file: StrPath | _ZipReadable,
|
||||
mode: Literal["r"] = "r",
|
||||
compression: int = 0,
|
||||
allowZip64: bool = True,
|
||||
compresslevel: int | None = None,
|
||||
*,
|
||||
strict_timestamps: bool = True,
|
||||
metadata_encoding: str | None = None,
|
||||
metadata_encoding: str | None,
|
||||
) -> None: ...
|
||||
@overload
|
||||
def __init__(
|
||||
self,
|
||||
file: StrPath | _ZipWritable,
|
||||
mode: Literal["w", "x"] = ...,
|
||||
compression: int = 0,
|
||||
allowZip64: bool = True,
|
||||
compresslevel: int | None = None,
|
||||
*,
|
||||
strict_timestamps: bool = True,
|
||||
metadata_encoding: None = None,
|
||||
) -> None: ...
|
||||
@overload
|
||||
def __init__(
|
||||
self,
|
||||
file: StrPath | _ZipReadableTellable,
|
||||
mode: Literal["a"] = ...,
|
||||
file: StrPath | IO[bytes],
|
||||
mode: _ZipFileMode = "r",
|
||||
compression: int = 0,
|
||||
allowZip64: bool = True,
|
||||
compresslevel: int | None = None,
|
||||
@@ -172,7 +132,6 @@ class ZipFile:
|
||||
metadata_encoding: None = None,
|
||||
) -> None: ...
|
||||
else:
|
||||
@overload
|
||||
def __init__(
|
||||
self,
|
||||
file: StrPath | IO[bytes],
|
||||
@@ -183,39 +142,6 @@ class ZipFile:
|
||||
*,
|
||||
strict_timestamps: bool = True,
|
||||
) -> None: ...
|
||||
@overload
|
||||
def __init__(
|
||||
self,
|
||||
file: StrPath | _ZipReadable,
|
||||
mode: Literal["r"] = "r",
|
||||
compression: int = 0,
|
||||
allowZip64: bool = True,
|
||||
compresslevel: int | None = None,
|
||||
*,
|
||||
strict_timestamps: bool = True,
|
||||
) -> None: ...
|
||||
@overload
|
||||
def __init__(
|
||||
self,
|
||||
file: StrPath | _ZipWritable,
|
||||
mode: Literal["w", "x"] = ...,
|
||||
compression: int = 0,
|
||||
allowZip64: bool = True,
|
||||
compresslevel: int | None = None,
|
||||
*,
|
||||
strict_timestamps: bool = True,
|
||||
) -> None: ...
|
||||
@overload
|
||||
def __init__(
|
||||
self,
|
||||
file: StrPath | _ZipReadableTellable,
|
||||
mode: Literal["a"] = ...,
|
||||
compression: int = 0,
|
||||
allowZip64: bool = True,
|
||||
compresslevel: int | None = None,
|
||||
*,
|
||||
strict_timestamps: bool = True,
|
||||
) -> None: ...
|
||||
|
||||
def __enter__(self) -> Self: ...
|
||||
def __exit__(
|
||||
|
||||
@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.6.0"
version = "0.5.7"
publish = true
authors = { workspace = true }
edition = { workspace = true }

@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.6.0"
version = "0.5.7"
publish = false
authors = { workspace = true }
edition = { workspace = true }

@@ -89,26 +89,3 @@ async def func():
|
||||
async def func():
|
||||
async with asyncio.timeout(delay=0.2), asyncio.timeout(delay=0.2):
|
||||
...
|
||||
|
||||
|
||||
# Don't trigger for blocks with a yield statement
|
||||
async def foo():
|
||||
with trio.fail_after(1):
|
||||
yield
|
||||
|
||||
|
||||
async def foo(): # even if only one branch contains a yield, we skip the lint
|
||||
with trio.fail_after(1):
|
||||
if something:
|
||||
...
|
||||
else:
|
||||
yield
|
||||
|
||||
|
||||
# https://github.com/astral-sh/ruff/issues/12873
|
||||
@asynccontextmanager
|
||||
async def good_code():
|
||||
with anyio.fail_after(10):
|
||||
# There's no await keyword here, but we presume that there
|
||||
# will be in the caller we yield to, so this is safe.
|
||||
yield
|
||||
|
||||
@@ -55,14 +55,3 @@ max({x.id for x in bar})
|
||||
|
||||
# should not be linted...
|
||||
sum({x.id for x in bar})
|
||||
|
||||
|
||||
# https://github.com/astral-sh/ruff/issues/12891
|
||||
from collections.abc import AsyncGenerator
|
||||
|
||||
|
||||
async def test() -> None:
|
||||
async def async_gen() -> AsyncGenerator[bool, None]:
|
||||
yield True
|
||||
|
||||
assert all([v async for v in async_gen()]) # OK
|
||||
|
||||
@@ -263,8 +263,8 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
|
||||
if checker.enabled(Rule::TooManyArguments) {
|
||||
pylint::rules::too_many_arguments(checker, function_def);
|
||||
}
|
||||
if checker.enabled(Rule::TooManyPositionalArguments) {
|
||||
pylint::rules::too_many_positional_arguments(checker, function_def);
|
||||
if checker.enabled(Rule::TooManyPositional) {
|
||||
pylint::rules::too_many_positional(checker, function_def);
|
||||
}
|
||||
if checker.enabled(Rule::TooManyReturnStatements) {
|
||||
if let Some(diagnostic) = pylint::rules::too_many_return_statements(
|
||||
|
||||
@@ -248,7 +248,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
|
||||
(Pylint, "R0914") => (RuleGroup::Preview, rules::pylint::rules::TooManyLocals),
|
||||
(Pylint, "R0915") => (RuleGroup::Stable, rules::pylint::rules::TooManyStatements),
|
||||
(Pylint, "R0916") => (RuleGroup::Preview, rules::pylint::rules::TooManyBooleanExpressions),
|
||||
(Pylint, "R0917") => (RuleGroup::Preview, rules::pylint::rules::TooManyPositionalArguments),
|
||||
(Pylint, "R0917") => (RuleGroup::Preview, rules::pylint::rules::TooManyPositional),
|
||||
(Pylint, "R1701") => (RuleGroup::Removed, rules::pylint::rules::RepeatedIsinstanceCalls),
|
||||
(Pylint, "R1702") => (RuleGroup::Preview, rules::pylint::rules::TooManyNestedBlocks),
|
||||
(Pylint, "R1704") => (RuleGroup::Stable, rules::pylint::rules::RedefinedArgumentFromLocal),
|
||||
|
||||
@@ -1,8 +1,8 @@
|
||||
use ruff_diagnostics::{Diagnostic, Violation};
|
||||
use ruff_macros::{derive_message_formats, violation};
|
||||
use ruff_python_ast::helpers::{any_over_body, AwaitVisitor};
|
||||
use ruff_python_ast::helpers::AwaitVisitor;
|
||||
use ruff_python_ast::visitor::Visitor;
|
||||
use ruff_python_ast::{Expr, StmtWith, WithItem};
|
||||
use ruff_python_ast::{StmtWith, WithItem};
|
||||
|
||||
use crate::checkers::ast::Checker;
|
||||
use crate::rules::flake8_async::helpers::MethodName;
|
||||
@@ -10,9 +10,6 @@ use crate::rules::flake8_async::helpers::MethodName;
|
||||
/// ## What it does
|
||||
/// Checks for timeout context managers which do not contain a checkpoint.
|
||||
///
|
||||
/// For the purposes of this check, `yield` is considered a checkpoint,
|
||||
/// since checkpoints may occur in the caller to which we yield.
|
||||
///
|
||||
/// ## Why is this bad?
|
||||
/// Some asynchronous context managers, such as `asyncio.timeout` and
|
||||
/// `trio.move_on_after`, have no effect unless they contain a checkpoint.
|
||||
@@ -83,14 +80,6 @@ pub(crate) fn cancel_scope_no_checkpoint(
|
||||
return;
|
||||
}
|
||||
|
||||
// Treat yields as checkpoints, since checkpoints can happen
|
||||
// in the caller yielded to.
|
||||
// See: https://flake8-async.readthedocs.io/en/latest/rules.html#async100
|
||||
// See: https://github.com/astral-sh/ruff/issues/12873
|
||||
if any_over_body(&with_stmt.body, &Expr::is_yield_expr) {
|
||||
return;
|
||||
}
|
||||
|
||||
// If the body contains an `await` statement, the context manager is used correctly.
|
||||
let mut visitor = AwaitVisitor::default();
|
||||
visitor.visit_body(&with_stmt.body);
|
||||
|
||||
@@ -100,21 +100,14 @@ pub(crate) fn unnecessary_comprehension_in_call(
|
||||
let Some(arg) = args.first() else {
|
||||
return;
|
||||
};
|
||||
let (Expr::ListComp(ast::ExprListComp {
|
||||
elt, generators, ..
|
||||
})
|
||||
| Expr::SetComp(ast::ExprSetComp {
|
||||
elt, generators, ..
|
||||
})) = arg
|
||||
let (Expr::ListComp(ast::ExprListComp { elt, .. })
|
||||
| Expr::SetComp(ast::ExprSetComp { elt, .. })) = arg
|
||||
else {
|
||||
return;
|
||||
};
|
||||
if contains_await(elt) {
|
||||
return;
|
||||
}
|
||||
if generators.iter().any(|generator| generator.is_async) {
|
||||
return;
|
||||
}
|
||||
let Some(Ok(builtin_function)) = checker
|
||||
.semantic()
|
||||
.resolve_builtin_symbol(func)
|
||||
|
||||
@@ -123,10 +123,7 @@ mod tests {
|
||||
#[test_case(Rule::RedefinedLoopName, Path::new("redefined_loop_name.py"))]
|
||||
#[test_case(Rule::ReturnInInit, Path::new("return_in_init.py"))]
|
||||
#[test_case(Rule::TooManyArguments, Path::new("too_many_arguments.py"))]
|
||||
#[test_case(
|
||||
Rule::TooManyPositionalArguments,
|
||||
Path::new("too_many_positional_arguments.py")
|
||||
)]
|
||||
#[test_case(Rule::TooManyPositional, Path::new("too_many_positional.py"))]
|
||||
#[test_case(Rule::TooManyBranches, Path::new("too_many_branches.py"))]
|
||||
#[test_case(
|
||||
Rule::TooManyReturnStatements,
|
||||
@@ -297,7 +294,7 @@ mod tests {
|
||||
max_positional_args: 4,
|
||||
..pylint::settings::Settings::default()
|
||||
},
|
||||
..LinterSettings::for_rule(Rule::TooManyPositionalArguments)
|
||||
..LinterSettings::for_rule(Rule::TooManyPositional)
|
||||
},
|
||||
)?;
|
||||
assert_messages!(diagnostics);
|
||||
|
||||
@@ -79,7 +79,7 @@ pub(crate) use too_many_boolean_expressions::*;
|
||||
pub(crate) use too_many_branches::*;
|
||||
pub(crate) use too_many_locals::*;
|
||||
pub(crate) use too_many_nested_blocks::*;
|
||||
pub(crate) use too_many_positional_arguments::*;
|
||||
pub(crate) use too_many_positional::*;
|
||||
pub(crate) use too_many_public_methods::*;
|
||||
pub(crate) use too_many_return_statements::*;
|
||||
pub(crate) use too_many_statements::*;
|
||||
@@ -182,7 +182,7 @@ mod too_many_boolean_expressions;
|
||||
mod too_many_branches;
|
||||
mod too_many_locals;
|
||||
mod too_many_nested_blocks;
|
||||
mod too_many_positional_arguments;
|
||||
mod too_many_positional;
|
||||
mod too_many_public_methods;
|
||||
mod too_many_return_statements;
|
||||
mod too_many_statements;
|
||||
|
||||
@@ -42,24 +42,21 @@ use crate::checkers::ast::Checker;
|
||||
/// ## Options
|
||||
/// - `lint.pylint.max-positional-args`
|
||||
#[violation]
|
||||
pub struct TooManyPositionalArguments {
|
||||
pub struct TooManyPositional {
|
||||
c_pos: usize,
|
||||
max_pos: usize,
|
||||
}
|
||||
|
||||
impl Violation for TooManyPositionalArguments {
|
||||
impl Violation for TooManyPositional {
|
||||
#[derive_message_formats]
|
||||
fn message(&self) -> String {
|
||||
let TooManyPositionalArguments { c_pos, max_pos } = self;
|
||||
let TooManyPositional { c_pos, max_pos } = self;
|
||||
format!("Too many positional arguments ({c_pos}/{max_pos})")
|
||||
}
|
||||
}
|
||||
|
||||
/// PLR0917
|
||||
pub(crate) fn too_many_positional_arguments(
|
||||
checker: &mut Checker,
|
||||
function_def: &ast::StmtFunctionDef,
|
||||
) {
|
||||
pub(crate) fn too_many_positional(checker: &mut Checker, function_def: &ast::StmtFunctionDef) {
|
||||
let semantic = checker.semantic();
|
||||
|
||||
// Count the number of positional arguments.
|
||||
@@ -112,7 +109,7 @@ pub(crate) fn too_many_positional_arguments(
|
||||
}
|
||||
|
||||
checker.diagnostics.push(Diagnostic::new(
|
||||
TooManyPositionalArguments {
|
||||
TooManyPositional {
|
||||
c_pos: num_positional_args,
|
||||
max_pos: checker.settings.pylint.max_positional_args,
|
||||
},
|
||||
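For reference, the `lint.pylint.max-positional-args` setting named in the rule's doc comment above is configured in `pyproject.toml`; a minimal sketch (the limit of 5 matches the default reflected in the snapshot output below):

```toml
[tool.ruff.lint.pylint]
# PLR0917 ("too many positional arguments") fires when a function
# accepts more positional arguments than this limit.
max-positional-args = 5
```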
@@ -1,28 +1,28 @@
|
||||
---
|
||||
source: crates/ruff_linter/src/rules/pylint/mod.rs
|
||||
---
|
||||
too_many_positional_arguments.py:1:5: PLR0917 Too many positional arguments (8/5)
|
||||
too_many_positional.py:1:5: PLR0917 Too many positional arguments (8/5)
|
||||
|
|
||||
1 | def f(x, y, z, t, u, v, w, r): # Too many positional arguments (8/5)
|
||||
| ^ PLR0917
|
||||
2 | pass
|
||||
|
|
||||
|
||||
too_many_positional_arguments.py:21:5: PLR0917 Too many positional arguments (6/5)
|
||||
too_many_positional.py:21:5: PLR0917 Too many positional arguments (6/5)
|
||||
|
|
||||
21 | def f(x, y, z, /, u, v, w): # Too many positional arguments (6/5)
|
||||
| ^ PLR0917
|
||||
22 | pass
|
||||
|
|
||||
|
||||
too_many_positional_arguments.py:29:5: PLR0917 Too many positional arguments (6/5)
|
||||
too_many_positional.py:29:5: PLR0917 Too many positional arguments (6/5)
|
||||
|
|
||||
29 | def f(x, y, z, a, b, c, *, u, v, w): # Too many positional arguments (6/5)
|
||||
| ^ PLR0917
|
||||
30 | pass
|
||||
|
|
||||
|
||||
too_many_positional_arguments.py:43:9: PLR0917 Too many positional arguments (6/5)
|
||||
too_many_positional.py:43:9: PLR0917 Too many positional arguments (6/5)
|
||||
|
|
||||
41 | pass
|
||||
42 |
|
||||
@@ -31,10 +31,12 @@ too_many_positional_arguments.py:43:9: PLR0917 Too many positional arguments (6/
|
||||
44 | pass
|
||||
|
|
||||
|
||||
too_many_positional_arguments.py:47:9: PLR0917 Too many positional arguments (6/5)
|
||||
too_many_positional.py:47:9: PLR0917 Too many positional arguments (6/5)
|
||||
|
|
||||
46 | @staticmethod
|
||||
47 | def f(self, a, b, c, d, e): # Too many positional arguments (6/5)
|
||||
| ^ PLR0917
|
||||
48 | pass
|
||||
|
|
||||
|
||||
|
||||
@@ -84,13 +84,13 @@ impl FormatNodeRule<Comprehension> for FormatComprehension {
|
||||
let (trailing_in_comments, dangling_if_comments) = dangling_comments
|
||||
.split_at(dangling_comments.partition_point(|comment| comment.start() < iter.start()));
|
||||
|
||||
// let in_spacer = format_with(|f| {
|
||||
// // if before_in_comments.is_empty() {
|
||||
// // space().fmt(f)
|
||||
// // } else {
|
||||
// soft_line_break_or_space().fmt(f)
|
||||
// // }
|
||||
// });
|
||||
let in_spacer = format_with(|f| {
|
||||
if before_in_comments.is_empty() {
|
||||
space().fmt(f)
|
||||
} else {
|
||||
soft_line_break_or_space().fmt(f)
|
||||
}
|
||||
});
|
||||
|
||||
write!(
|
||||
f,
|
||||
@@ -101,12 +101,10 @@ impl FormatNodeRule<Comprehension> for FormatComprehension {
|
||||
expression: target,
|
||||
preserve_parentheses: !target.is_tuple_expr()
|
||||
},
|
||||
group(&format_args![
|
||||
ExprTupleWithoutParentheses(target),
|
||||
soft_line_break_or_space(),
|
||||
leading_comments(before_in_comments),
|
||||
token("in"),
|
||||
]),
|
||||
ExprTupleWithoutParentheses(target),
|
||||
in_spacer,
|
||||
leading_comments(before_in_comments),
|
||||
token("in"),
|
||||
trailing_comments(trailing_in_comments),
|
||||
Spacer {
|
||||
expression: iter,
|
||||
|
||||
@@ -266,17 +266,7 @@ last_call()
|
||||
```diff
|
||||
--- Black
|
||||
+++ Ruff
|
||||
@@ -101,7 +101,8 @@
|
||||
{a: b * -2 for a, b in dictionary.items()}
|
||||
{
|
||||
k: v
|
||||
- for k, v in this_is_a_very_long_variable_which_will_cause_a_trailing_comma_which_breaks_the_comprehension
|
||||
+ for k, v
|
||||
+ in this_is_a_very_long_variable_which_will_cause_a_trailing_comma_which_breaks_the_comprehension
|
||||
}
|
||||
Python3 > Python2 > COBOL
|
||||
Life is Life
|
||||
@@ -115,7 +116,7 @@
|
||||
@@ -115,7 +115,7 @@
|
||||
arg,
|
||||
another,
|
||||
kwarg="hey",
|
||||
@@ -393,8 +383,7 @@ str or None if (1 if True else 2) else str or bytes or None
|
||||
{a: b * -2 for a, b in dictionary.items()}
|
||||
{
|
||||
k: v
|
||||
for k, v
|
||||
in this_is_a_very_long_variable_which_will_cause_a_trailing_comma_which_breaks_the_comprehension
|
||||
for k, v in this_is_a_very_long_variable_which_will_cause_a_trailing_comma_which_breaks_the_comprehension
|
||||
}
|
||||
Python3 > Python2 > COBOL
|
||||
Life is Life
|
||||
@@ -1039,3 +1028,5 @@ bbbb >> bbbb * bbbb
|
||||
last_call()
|
||||
# standalone comment at ENDMARKER
|
||||
```
|
||||
|
||||
|
||||
|
||||
@@ -373,7 +373,7 @@ for foo in ["a", "b"]:
|
||||
|
||||
func(
|
||||
[
|
||||
@@ -114,13 +140,16 @@
|
||||
@@ -114,13 +140,15 @@
|
||||
func(
|
||||
[x for x in "long line long line long line long line long line long line long line"]
|
||||
)
|
||||
@@ -386,8 +386,7 @@ for foo in ["a", "b"]:
|
||||
- for x in "long line long line long line long line long line long line long line"
|
||||
+ for x in [
|
||||
+ x
|
||||
+ for x
|
||||
+ in "long line long line long line long line long line long line long line"
|
||||
+ for x in "long line long line long line long line long line long line long line"
|
||||
+ ]
|
||||
]
|
||||
-])
|
||||
@@ -395,7 +394,7 @@ for foo in ["a", "b"]:
|
||||
|
||||
foooooooooooooooooooo(
|
||||
[{c: n + 1 for c in range(256)} for n in range(100)] + [{}], {size}
|
||||
@@ -131,10 +160,12 @@
|
||||
@@ -131,10 +159,12 @@
|
||||
)
|
||||
|
||||
nested_mapping = {
|
||||
@@ -412,7 +411,7 @@ for foo in ["a", "b"]:
|
||||
}
|
||||
explicit_exploding = [
|
||||
[
|
||||
@@ -144,24 +175,34 @@
|
||||
@@ -144,24 +174,34 @@
|
||||
],
|
||||
],
|
||||
]
|
||||
@@ -462,7 +461,7 @@ for foo in ["a", "b"]:
|
||||
|
||||
# Edge case when deciding whether to hug the brackets without inner content.
|
||||
very_very_very_long_variable = very_very_very_long_module.VeryVeryVeryVeryLongClassName(
|
||||
@@ -169,11 +210,14 @@
|
||||
@@ -169,11 +209,13 @@
|
||||
)
|
||||
|
||||
for foo in ["a", "b"]:
|
||||
@@ -479,8 +478,7 @@ for foo in ["a", "b"]:
|
||||
+ individual
|
||||
+ for
|
||||
+ # Foobar
|
||||
+ container
|
||||
+ in xs_by_y[foo]
|
||||
+ container in xs_by_y[foo]
|
||||
+ # Foobar
|
||||
+ for individual in container["nested"]
|
||||
+ ]
|
||||
@@ -637,8 +635,7 @@ func(
|
||||
x
|
||||
for x in [
|
||||
x
|
||||
for x
|
||||
in "long line long line long line long line long line long line long line"
|
||||
for x in "long line long line long line long line long line long line long line"
|
||||
]
|
||||
]
|
||||
)
|
||||
@@ -707,8 +704,7 @@ for foo in ["a", "b"]:
|
||||
individual
|
||||
for
|
||||
# Foobar
|
||||
container
|
||||
in xs_by_y[foo]
|
||||
container in xs_by_y[foo]
|
||||
# Foobar
|
||||
for individual in container["nested"]
|
||||
]
|
||||
|
||||
@@ -198,8 +198,7 @@ query = {
|
||||
|
||||
{
|
||||
a: a # a
|
||||
for c # for # c
|
||||
in e # in # e
|
||||
for c in e # for # c # in # e
|
||||
}
|
||||
|
||||
{
|
||||
@@ -233,8 +232,7 @@ query = {
|
||||
for ccccccccccccccccccccccccccccccccccccccc, ddddddddddddddddddd, [
|
||||
eeeeeeeeeeeeeeeeeeeeee,
|
||||
fffffffffffffffffffffffff,
|
||||
]
|
||||
in eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeffffffffffffffffffffffffggggggggggggggggggggghhhhhhhhhhhhhhothermoreeand_even_moreddddddddddddddddddddd
|
||||
] in eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeffffffffffffffffffffffffggggggggggggggggggggghhhhhhhhhhhhhhothermoreeand_even_moreddddddddddddddddddddd
|
||||
if fffffffffffffffffffffffffffffffffffffffffff
|
||||
< gggggggggggggggggggggggggggggggggggggggggggggg
|
||||
< hhhhhhhhhhhhhhhhhhhhhhhhhh
|
||||
@@ -265,8 +263,7 @@ query = {
|
||||
a,
|
||||
a,
|
||||
a,
|
||||
]
|
||||
in this_is_a_very_long_variable_which_will_cause_a_trailing_comma_which_breaks_the_comprehension
|
||||
] in this_is_a_very_long_variable_which_will_cause_a_trailing_comma_which_breaks_the_comprehension
|
||||
}
|
||||
{
|
||||
k: v
|
||||
@@ -291,8 +288,7 @@ query = {
|
||||
a,
|
||||
a,
|
||||
a,
|
||||
)
|
||||
in this_is_a_very_long_variable_which_will_cause_a_trailing_comma_which_breaks_the_comprehension
|
||||
) in this_is_a_very_long_variable_which_will_cause_a_trailing_comma_which_breaks_the_comprehension
|
||||
}
|
||||
|
||||
# Leading
|
||||
@@ -319,8 +315,7 @@ query = {
|
||||
a,
|
||||
a,
|
||||
a, # Trailing
|
||||
)
|
||||
in this_is_a_very_long_variable_which_will_cause_a_trailing_comma_which_breaks_the_comprehension # Trailing
|
||||
) in this_is_a_very_long_variable_which_will_cause_a_trailing_comma_which_breaks_the_comprehension # Trailing
|
||||
} # Trailing
|
||||
# Trailing
|
||||
|
||||
@@ -341,8 +336,7 @@ selected_choices = {
|
||||
for ( # foo
|
||||
x,
|
||||
aaaayaaaayaaaayaaaayaaaayaaaayaaaayaaaayaaaayaaaayaaaayaaaayaaaayaaaay,
|
||||
)
|
||||
in z
|
||||
) in z
|
||||
}
|
||||
|
||||
a = {
|
||||
@@ -409,3 +403,6 @@ query = {
|
||||
for key, queries in self._filters.items()
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
@@ -189,8 +189,7 @@ y = [
|
||||
|
||||
[
|
||||
a # a
|
||||
for c # for # c
|
||||
in e # in # e
|
||||
for c in e # for # c # in # e
|
||||
]
|
||||
|
||||
[
|
||||
@@ -221,8 +220,7 @@ y = [
|
||||
for ccccccccccccccccccccccccccccccccccccccc, ddddddddddddddddddd, [
|
||||
eeeeeeeeeeeeeeeeeeeeee,
|
||||
fffffffffffffffffffffffff,
|
||||
]
|
||||
in eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeffffffffffffffffffffffffggggggggggggggggggggghhhhhhhhhhhhhhothermoreeand_even_moreddddddddddddddddddddd
|
||||
] in eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeffffffffffffffffffffffffggggggggggggggggggggghhhhhhhhhhhhhhothermoreeand_even_moreddddddddddddddddddddd
|
||||
if fffffffffffffffffffffffffffffffffffffffffff
|
||||
< gggggggggggggggggggggggggggggggggggggggggggggg
|
||||
< hhhhhhhhhhhhhhhhhhhhhhhhhh
|
||||
@@ -306,8 +304,7 @@ aaaaaaaaaaaaaaaaaaaaa = [
|
||||
for (
|
||||
x,
|
||||
y,
|
||||
)
|
||||
in z
|
||||
) in z
|
||||
if head_name
|
||||
]
|
||||
|
||||
@@ -329,8 +326,7 @@ y = [
|
||||
(
|
||||
# comment
|
||||
a
|
||||
)
|
||||
in
|
||||
) in
|
||||
(
|
||||
# comment
|
||||
x
|
||||
@@ -354,8 +350,7 @@ y = [
|
||||
a
|
||||
for
|
||||
# comment
|
||||
a, b
|
||||
in x
|
||||
a, b in x
|
||||
if True
|
||||
]
|
||||
|
||||
@@ -366,8 +361,7 @@ y = [
|
||||
# comment
|
||||
a,
|
||||
b,
|
||||
)
|
||||
in x
|
||||
) in x
|
||||
if True
|
||||
]
|
||||
|
||||
@@ -376,8 +370,7 @@ y = [
|
||||
a
|
||||
for
|
||||
# comment
|
||||
a
|
||||
in
|
||||
a in
|
||||
# comment
|
||||
x
|
||||
if
|
||||
@@ -395,7 +388,7 @@ y = [
|
||||
```diff
|
||||
--- Stable
|
||||
+++ Preview
|
||||
@@ -145,25 +145,21 @@
|
||||
@@ -142,24 +142,20 @@
|
||||
# Leading expression comments:
|
||||
y = [
|
||||
a
|
||||
@@ -404,10 +397,9 @@ y = [
|
||||
+ for (
|
||||
# comment
|
||||
a
|
||||
)
|
||||
- in
|
||||
- ) in
|
||||
- (
|
||||
+ in (
|
||||
+ ) in (
|
||||
# comment
|
||||
x
|
||||
)
|
||||
|
||||
@@ -74,8 +74,7 @@ selected_choices = {
|
||||
|
||||
{
|
||||
a # a
|
||||
for c # for # c
|
||||
in e # in # e
|
||||
for c in e # for # c # in # e
|
||||
}
|
||||
|
||||
{
|
||||
@@ -106,8 +105,7 @@ selected_choices = {
|
||||
for ccccccccccccccccccccccccccccccccccccccc, ddddddddddddddddddd, [
|
||||
eeeeeeeeeeeeeeeeeeeeee,
|
||||
fffffffffffffffffffffffff,
|
||||
]
|
||||
in eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeffffffffffffffffffffffffggggggggggggggggggggghhhhhhhhhhhhhhothermoreeand_even_moreddddddddddddddddddddd
|
||||
] in eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeffffffffffffffffffffffffggggggggggggggggggggghhhhhhhhhhhhhhothermoreeand_even_moreddddddddddddddddddddd
|
||||
if fffffffffffffffffffffffffffffffffffffffffff
|
||||
< gggggggggggggggggggggggggggggggggggggggggggggg
|
||||
< hhhhhhhhhhhhhhhhhhhhhhhhhh
|
||||
@@ -125,3 +123,6 @@ selected_choices = {
|
||||
if str(v) not in self.choices.field.empty_values
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
@@ -1,6 +1,6 @@
[package]
name = "ruff_wasm"
version = "0.6.0"
version = "0.5.7"
publish = false
authors = { workspace = true }
edition = { workspace = true }

@@ -1388,10 +1388,10 @@ impl Flake8ImportConventionsOptions {
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
pub struct Flake8PytestStyleOptions {
    /// Boolean flag specifying whether `@pytest.fixture()` without parameters
    /// should have parentheses. If the option is set to `false` (the default),
    /// `@pytest.fixture` is valid and `@pytest.fixture()` is invalid. If set
    /// to `true`, `@pytest.fixture()` is valid and `@pytest.fixture` is
    /// invalid.
    /// should have parentheses. If the option is set to `true` (the
    /// default), `@pytest.fixture()` is valid and `@pytest.fixture` is
    /// invalid. If set to `false`, `@pytest.fixture` is valid and
    /// `@pytest.fixture()` is invalid.
    #[option(
        default = "false",
        value_type = "bool",
@@ -1471,10 +1471,10 @@ pub struct Flake8PytestStyleOptions {
    pub raises_extend_require_match_for: Option<Vec<String>>,

    /// Boolean flag specifying whether `@pytest.mark.foo()` without parameters
    /// should have parentheses. If the option is set to `false` (the
    /// default), `@pytest.mark.foo` is valid and `@pytest.mark.foo()` is
    /// invalid. If set to `true`, `@pytest.mark.foo()` is valid and
    /// `@pytest.mark.foo` is invalid.
    /// should have parentheses. If the option is set to `true` (the
    /// default), `@pytest.mark.foo()` is valid and `@pytest.mark.foo` is
    /// invalid. If set to `false`, `@pytest.mark.foo` is valid and
    /// `@pytest.mark.foo()` is invalid.
    #[option(
        default = "false",
        value_type = "bool",
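The `fixture-parentheses` and `mark-parentheses` options documented above are set under the flake8-pytest-style section of the Ruff configuration; a minimal sketch (the diff shows the documented default changing between versions, so treat the values as illustrative):

```toml
[tool.ruff.lint.flake8-pytest-style]
# Whether bare @pytest.fixture / @pytest.mark.foo decorators must carry
# empty parentheses; see the option descriptions above for the default.
fixture-parentheses = false
mark-parentheses = false
```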
@@ -78,7 +78,7 @@ Ruff can be used as a [pre-commit](https://pre-commit.com) hook via [`ruff-pre-c
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
  # Ruff version.
  rev: v0.6.0
  rev: v0.5.7
  hooks:
    # Run the linter.
    - id: ruff
@@ -91,7 +91,7 @@ To enable lint fixes, add the `--fix` argument to the lint hook:
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
  # Ruff version.
  rev: v0.6.0
  rev: v0.5.7
  hooks:
    # Run the linter.
    - id: ruff
@@ -105,7 +105,7 @@ To run the hooks over Jupyter Notebooks too, add `jupyter` to the list of allowe
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
  # Ruff version.
  rev: v0.6.0
  rev: v0.5.7
  hooks:
    # Run the linter.
    - id: ruff

@@ -4,7 +4,7 @@ build-backend = "maturin"

[project]
name = "ruff"
version = "0.6.0"
version = "0.5.7"
description = "An extremely fast Python linter and code formatter, written in Rust."
authors = [{ name = "Astral Software Inc.", email = "hey@astral.sh" }]
readme = "README.md"

ruff.schema.json (generated)
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"fixture-parentheses": {
|
||||
"description": "Boolean flag specifying whether `@pytest.fixture()` without parameters should have parentheses. If the option is set to `false` (the default), `@pytest.fixture` is valid and `@pytest.fixture()` is invalid. If set to `true`, `@pytest.fixture()` is valid and `@pytest.fixture` is invalid.",
|
||||
"description": "Boolean flag specifying whether `@pytest.fixture()` without parameters should have parentheses. If the option is set to `true` (the default), `@pytest.fixture()` is valid and `@pytest.fixture` is invalid. If set to `false`, `@pytest.fixture` is valid and `@pytest.fixture()` is invalid.",
|
||||
"type": [
|
||||
"boolean",
|
||||
"null"
|
||||
]
|
||||
},
|
||||
"mark-parentheses": {
|
||||
"description": "Boolean flag specifying whether `@pytest.mark.foo()` without parameters should have parentheses. If the option is set to `false` (the default), `@pytest.mark.foo` is valid and `@pytest.mark.foo()` is invalid. If set to `true`, `@pytest.mark.foo()` is valid and `@pytest.mark.foo` is invalid.",
|
||||
"description": "Boolean flag specifying whether `@pytest.mark.foo()` without parameters should have parentheses. If the option is set to `true` (the default), `@pytest.mark.foo()` is valid and `@pytest.mark.foo` is invalid. If set to `false`, `@pytest.mark.foo` is valid and `@pytest.mark.foo()` is invalid.",
|
||||
"type": [
|
||||
"boolean",
|
||||
"null"
|
||||
|
||||
@@ -1,6 +1,6 @@
[tool.poetry]
name = "scripts"
version = "0.6.0"
version = "0.5.7"
description = ""
authors = ["Charles Marsh <charlie.r.marsh@gmail.com>"]