Compare commits
25 Commits
zb/dev-dri… ... alex/dynam…

| Author | SHA1 | Date |
|---|---|---|
| | 2d04d99315 | |
| | 08d8819c8a | |
| | 44f2f77748 | |
| | 1c710c2840 | |
| | f4bd74ab6a | |
| | a33cff2b12 | |
| | f48a34fbab | |
| | 411cccb35e | |
| | 7712c2fd15 | |
| | 25bdb67d9a | |
| | 3be83d36a5 | |
| | 333191b7f7 | |
| | 77a5c5ac80 | |
| | 9bee8376a1 | |
| | 1c6717b149 | |
| | 1b813cd5f1 | |
| | b00f68a23c | |
| | 710c60f713 | |
| | 811e25d16e | |
| | b78af2db48 | |
| | 4f36f0677f | |
| | 2589a2938e | |
| | 26bb8f7b71 | |
| | bf88fee428 | |
| | fc43d3c83e | |
CHANGELOG.md (56 changes)
@@ -1,5 +1,61 @@
 # Changelog
 
+## 0.12.2
+
+### Preview features
+
+- \[`flake8-pyi`\] Expand `Optional[A]` to `A | None` (`PYI016`) ([#18572](https://github.com/astral-sh/ruff/pull/18572))
+- \[`pyupgrade`\] Mark `UP008` fix safe if no comments are in range ([#18683](https://github.com/astral-sh/ruff/pull/18683))
+
+### Bug fixes
+
+- \[`flake8-comprehensions`\] Fix `C420` to prepend whitespace when needed ([#18616](https://github.com/astral-sh/ruff/pull/18616))
+- \[`perflint`\] Fix `PERF403` panic on attribute or subscription loop variable ([#19042](https://github.com/astral-sh/ruff/pull/19042))
+- \[`pydocstyle`\] Fix `D413` infinite loop for parenthesized docstring ([#18930](https://github.com/astral-sh/ruff/pull/18930))
+- \[`pylint`\] Fix `PLW0108` autofix introducing a syntax error when the lambda's body contains an assignment expression ([#18678](https://github.com/astral-sh/ruff/pull/18678))
+- \[`refurb`\] Fix false positive on empty tuples (`FURB168`) ([#19058](https://github.com/astral-sh/ruff/pull/19058))
+- \[`ruff`\] Allow more `field` calls from `attrs` (`RUF009`) ([#19021](https://github.com/astral-sh/ruff/pull/19021))
+- \[`ruff`\] Fix syntax error introduced for an empty string followed by a u-prefixed string (`UP025`) ([#18899](https://github.com/astral-sh/ruff/pull/18899))
+
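To make the `C420` entry above concrete: the rule rewrites dict comprehensions whose values are all the same constant into a `dict.fromkeys` call. A minimal illustration of the equivalence (not code from the PR):

```python
keys = ["a", "b", "c"]

# A dict comprehension that maps every key to the same constant value...
comprehension = {key: None for key in keys}

# ...is equivalent to `dict.fromkeys`, which C420 suggests instead.
assert comprehension == dict.fromkeys(keys)

# `fromkeys` also accepts an explicit default value.
assert {key: 0 for key in keys} == dict.fromkeys(keys, 0)
```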
+### Rule changes
+
+- \[`flake8-executable`\] Allow `uvx` in shebang line (`EXE003`) ([#18967](https://github.com/astral-sh/ruff/pull/18967))
+- \[`pandas`\] Avoid flagging `PD002` if `pandas` is not imported ([#18963](https://github.com/astral-sh/ruff/pull/18963))
+- \[`pyupgrade`\] Avoid PEP-604 unions with `typing.NamedTuple` (`UP007`, `UP045`) ([#18682](https://github.com/astral-sh/ruff/pull/18682))
+
+### Documentation
+
+- Document link between `import-outside-top-level (PLC0415)` and `lint.flake8-tidy-imports.banned-module-level-imports` ([#18733](https://github.com/astral-sh/ruff/pull/18733))
+- Fix description of the `format.skip-magic-trailing-comma` example ([#19095](https://github.com/astral-sh/ruff/pull/19095))
+- \[`airflow`\] Make `AIR302` example error out-of-the-box ([#18988](https://github.com/astral-sh/ruff/pull/18988))
+- \[`airflow`\] Make `AIR312` example error out-of-the-box ([#18989](https://github.com/astral-sh/ruff/pull/18989))
+- \[`flake8-annotations`\] Make `ANN401` example error out-of-the-box ([#18974](https://github.com/astral-sh/ruff/pull/18974))
+- \[`flake8-async`\] Make `ASYNC100` example error out-of-the-box ([#18993](https://github.com/astral-sh/ruff/pull/18993))
+- \[`flake8-async`\] Make `ASYNC105` example error out-of-the-box ([#19002](https://github.com/astral-sh/ruff/pull/19002))
+- \[`flake8-async`\] Make `ASYNC110` example error out-of-the-box ([#18975](https://github.com/astral-sh/ruff/pull/18975))
+- \[`flake8-async`\] Make `ASYNC210` example error out-of-the-box ([#18977](https://github.com/astral-sh/ruff/pull/18977))
+- \[`flake8-async`\] Make `ASYNC220`, `ASYNC221`, and `ASYNC222` examples error out-of-the-box ([#18978](https://github.com/astral-sh/ruff/pull/18978))
+- \[`flake8-async`\] Make `ASYNC251` example error out-of-the-box ([#18990](https://github.com/astral-sh/ruff/pull/18990))
+- \[`flake8-bandit`\] Make `S201` example error out-of-the-box ([#19017](https://github.com/astral-sh/ruff/pull/19017))
+- \[`flake8-bandit`\] Make `S604` and `S609` examples error out-of-the-box ([#19049](https://github.com/astral-sh/ruff/pull/19049))
+- \[`flake8-bugbear`\] Make `B028` example error out-of-the-box ([#19054](https://github.com/astral-sh/ruff/pull/19054))
+- \[`flake8-bugbear`\] Make `B911` example error out-of-the-box ([#19051](https://github.com/astral-sh/ruff/pull/19051))
+- \[`flake8-datetimez`\] Make `DTZ011` example error out-of-the-box ([#19055](https://github.com/astral-sh/ruff/pull/19055))
+- \[`flake8-datetimez`\] Make `DTZ901` example error out-of-the-box ([#19056](https://github.com/astral-sh/ruff/pull/19056))
+- \[`flake8-pyi`\] Make `PYI032` example error out-of-the-box ([#19061](https://github.com/astral-sh/ruff/pull/19061))
+- \[`flake8-pyi`\] Make example error out-of-the-box (`PYI014`, `PYI015`) ([#19097](https://github.com/astral-sh/ruff/pull/19097))
+- \[`flake8-pyi`\] Make example error out-of-the-box (`PYI042`) ([#19101](https://github.com/astral-sh/ruff/pull/19101))
+- \[`flake8-pyi`\] Make example error out-of-the-box (`PYI059`) ([#19080](https://github.com/astral-sh/ruff/pull/19080))
+- \[`flake8-pyi`\] Make example error out-of-the-box (`PYI062`) ([#19079](https://github.com/astral-sh/ruff/pull/19079))
+- \[`flake8-pytest-style`\] Make example error out-of-the-box (`PT023`) ([#19104](https://github.com/astral-sh/ruff/pull/19104))
+- \[`flake8-pytest-style`\] Make example error out-of-the-box (`PT030`) ([#19105](https://github.com/astral-sh/ruff/pull/19105))
+- \[`flake8-quotes`\] Make example error out-of-the-box (`Q003`) ([#19106](https://github.com/astral-sh/ruff/pull/19106))
+- \[`flake8-simplify`\] Make example error out-of-the-box (`SIM110`) ([#19113](https://github.com/astral-sh/ruff/pull/19113))
+- \[`flake8-simplify`\] Make example error out-of-the-box (`SIM113`) ([#19109](https://github.com/astral-sh/ruff/pull/19109))
+- \[`flake8-simplify`\] Make example error out-of-the-box (`SIM401`) ([#19110](https://github.com/astral-sh/ruff/pull/19110))
+- \[`pyflakes`\] Fix backslash in docs (`F621`) ([#19098](https://github.com/astral-sh/ruff/pull/19098))
+- \[`pylint`\] Fix `PLC0415` example ([#18970](https://github.com/astral-sh/ruff/pull/18970))
+
 ## 0.12.1
 
 ### Preview features
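As an illustration of the `UP008` fix-safety entry in the 0.12.2 changelog above: the rule replaces `super(C, self)` with the zero-argument `super()`, which behaves identically inside a Python 3 method body. A minimal sketch (hypothetical class names, not from the PR):

```python
class Base:
    def greet(self) -> str:
        return "hello"

class Child(Base):
    def old_style(self) -> str:
        # `UP008` flags the redundant Python 2 style arguments...
        return super(Child, self).greet()

    def new_style(self) -> str:
        # ...and rewrites the call to the zero-argument form.
        return super().greet()

# Both forms resolve to the same method on the parent class.
assert Child().old_style() == Child().new_style() == "hello"
```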
Cargo.lock (generated, 9 changes)
@@ -2724,7 +2724,7 @@ dependencies = [
 
 [[package]]
 name = "ruff"
-version = "0.12.1"
+version = "0.12.2"
 dependencies = [
  "anyhow",
  "argfile",
@@ -2854,6 +2854,7 @@ dependencies = [
  "path-slash",
  "ruff_annotate_snippets",
  "ruff_cache",
+ "ruff_diagnostics",
  "ruff_notebook",
  "ruff_python_ast",
  "ruff_python_parser",
@@ -2918,6 +2919,7 @@ dependencies = [
 name = "ruff_diagnostics"
 version = "0.0.0"
 dependencies = [
+ "get-size2",
  "is-macro",
  "ruff_text_size",
  "serde",
@@ -2970,7 +2972,7 @@ dependencies = [
 
 [[package]]
 name = "ruff_linter"
-version = "0.12.1"
+version = "0.12.2"
 dependencies = [
  "aho-corasick",
  "anyhow",
@@ -3256,6 +3258,7 @@ dependencies = [
  "lsp-server",
  "lsp-types",
  "regex",
  "ruff_db",
+ "ruff_diagnostics",
  "ruff_formatter",
  "ruff_linter",
@@ -3302,7 +3305,7 @@ dependencies = [
 
 [[package]]
 name = "ruff_wasm"
-version = "0.12.1"
+version = "0.12.2"
 dependencies = [
  "console_error_panic_hook",
  "console_log",
@@ -148,8 +148,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
 powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
 
 # For a specific version.
-curl -LsSf https://astral.sh/ruff/0.12.1/install.sh | sh
-powershell -c "irm https://astral.sh/ruff/0.12.1/install.ps1 | iex"
+curl -LsSf https://astral.sh/ruff/0.12.2/install.sh | sh
+powershell -c "irm https://astral.sh/ruff/0.12.2/install.ps1 | iex"
 ```
 
 You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),

@@ -182,7 +182,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
 ```yaml
 - repo: https://github.com/astral-sh/ruff-pre-commit
   # Ruff version.
-  rev: v0.12.1
+  rev: v0.12.2
   hooks:
     # Run the linter.
     - id: ruff-check

@@ -506,6 +506,7 @@ Ruff is used by a number of major open-source projects and companies, including:
 - [Streamlit](https://github.com/streamlit/streamlit)
 - [The Algorithms](https://github.com/TheAlgorithms/Python)
 - [Vega-Altair](https://github.com/altair-viz/altair)
+- [Weblate](https://weblate.org/)
 - WordPress ([Openverse](https://github.com/WordPress/openverse))
 - [ZenML](https://github.com/zenml-io/zenml)
 - [Zulip](https://github.com/zulip/zulip)
@@ -1,6 +1,6 @@
 [package]
 name = "ruff"
-version = "0.12.1"
+version = "0.12.2"
 publish = true
 authors = { workspace = true }
 edition = { workspace = true }
@@ -18,14 +18,15 @@ use rustc_hash::FxHashMap;
 use tempfile::NamedTempFile;
 
 use ruff_cache::{CacheKey, CacheKeyHasher};
+use ruff_db::diagnostic::Diagnostic;
 use ruff_diagnostics::Fix;
-use ruff_linter::message::OldDiagnostic;
+use ruff_linter::message::create_lint_diagnostic;
 use ruff_linter::package::PackageRoot;
 use ruff_linter::{VERSION, warn_user};
 use ruff_macros::CacheKey;
 use ruff_notebook::NotebookIndex;
 use ruff_source_file::SourceFileBuilder;
-use ruff_text_size::{Ranged, TextRange, TextSize};
+use ruff_text_size::{TextRange, TextSize};
 use ruff_workspace::Settings;
 use ruff_workspace::resolver::Resolver;
@@ -348,7 +349,7 @@ impl FileCache {
         lint.messages
             .iter()
             .map(|msg| {
-                OldDiagnostic::lint(
+                create_lint_diagnostic(
                     &msg.body,
                     msg.suggestion.as_ref(),
                     msg.range,
@@ -428,11 +429,11 @@ pub(crate) struct LintCacheData {
 
 impl LintCacheData {
     pub(crate) fn from_diagnostics(
-        diagnostics: &[OldDiagnostic],
+        diagnostics: &[Diagnostic],
         notebook_index: Option<NotebookIndex>,
     ) -> Self {
         let source = if let Some(msg) = diagnostics.first() {
-            msg.source_file().source_text().to_owned()
+            msg.expect_ruff_source_file().source_text().to_owned()
         } else {
            String::new() // No messages, no need to keep the source!
        };
@@ -446,16 +447,16 @@ impl LintCacheData {
            .map(|(rule, msg)| {
                // Make sure that all message use the same source file.
                assert_eq!(
-                    msg.source_file(),
-                    diagnostics.first().unwrap().source_file(),
+                    msg.expect_ruff_source_file(),
+                    diagnostics.first().unwrap().expect_ruff_source_file(),
                    "message uses a different source file"
                );
                CacheMessage {
                    rule,
                    body: msg.body().to_string(),
                    suggestion: msg.suggestion().map(ToString::to_string),
-                    range: msg.range(),
-                    parent: msg.parent,
+                    range: msg.expect_range(),
+                    parent: msg.parent(),
                    fix: msg.fix().cloned(),
                    noqa_offset: msg.noqa_offset(),
                }
@@ -608,12 +609,12 @@ mod tests {
     use anyhow::Result;
     use filetime::{FileTime, set_file_mtime};
     use itertools::Itertools;
-    use ruff_linter::settings::LinterSettings;
     use test_case::test_case;
 
     use ruff_cache::CACHE_DIR_NAME;
-    use ruff_linter::message::OldDiagnostic;
+    use ruff_db::diagnostic::Diagnostic;
     use ruff_linter::package::PackageRoot;
+    use ruff_linter::settings::LinterSettings;
     use ruff_linter::settings::flags;
     use ruff_linter::settings::types::UnsafeFixes;
     use ruff_python_ast::{PySourceType, PythonVersion};
@@ -680,7 +681,7 @@ mod tests {
             UnsafeFixes::Enabled,
         )
         .unwrap();
-        if diagnostics.inner.iter().any(OldDiagnostic::is_syntax_error) {
+        if diagnostics.inner.iter().any(Diagnostic::is_syntax_error) {
             parse_errors.push(path.clone());
         }
         paths.push(path);
@@ -9,10 +9,10 @@ use ignore::Error;
 use log::{debug, error, warn};
 #[cfg(not(target_family = "wasm"))]
 use rayon::prelude::*;
+use ruff_linter::message::diagnostic_from_violation;
 use rustc_hash::FxHashMap;
 
 use ruff_db::panic::catch_unwind;
-use ruff_linter::OldDiagnostic;
 use ruff_linter::package::PackageRoot;
 use ruff_linter::registry::Rule;
 use ruff_linter::settings::types::UnsafeFixes;
@@ -129,7 +129,7 @@ pub(crate) fn check(
     SourceFileBuilder::new(path.to_string_lossy().as_ref(), "").finish();
 
     Diagnostics::new(
-        vec![OldDiagnostic::new(
+        vec![diagnostic_from_violation(
             IOError { message },
             TextRange::default(),
             &dummy,
@@ -10,11 +10,10 @@ use std::path::Path;
 use anyhow::{Context, Result};
 use colored::Colorize;
 use log::{debug, warn};
-use rustc_hash::FxHashMap;
 
-use ruff_linter::OldDiagnostic;
+use ruff_db::diagnostic::Diagnostic;
 use ruff_linter::codes::Rule;
 use ruff_linter::linter::{FixTable, FixerResult, LinterResult, ParseSource, lint_fix, lint_only};
+use ruff_linter::message::{create_syntax_error_diagnostic, diagnostic_from_violation};
 use ruff_linter::package::PackageRoot;
 use ruff_linter::pyproject_toml::lint_pyproject_toml;
 use ruff_linter::settings::types::UnsafeFixes;
@@ -26,19 +25,20 @@ use ruff_python_ast::{PySourceType, SourceType, TomlSourceType};
 use ruff_source_file::SourceFileBuilder;
 use ruff_text_size::TextRange;
 use ruff_workspace::Settings;
+use rustc_hash::FxHashMap;
 
 use crate::cache::{Cache, FileCacheKey, LintCacheData};
 
 #[derive(Debug, Default, PartialEq)]
 pub(crate) struct Diagnostics {
-    pub(crate) inner: Vec<OldDiagnostic>,
+    pub(crate) inner: Vec<Diagnostic>,
     pub(crate) fixed: FixMap,
     pub(crate) notebook_indexes: FxHashMap<String, NotebookIndex>,
 }
 
 impl Diagnostics {
     pub(crate) fn new(
-        diagnostics: Vec<OldDiagnostic>,
+        diagnostics: Vec<Diagnostic>,
         notebook_indexes: FxHashMap<String, NotebookIndex>,
     ) -> Self {
         Self {
@@ -62,7 +62,7 @@ impl Diagnostics {
         let name = path.map_or_else(|| "-".into(), Path::to_string_lossy);
         let source_file = SourceFileBuilder::new(name, "").finish();
         Self::new(
-            vec![OldDiagnostic::new(
+            vec![diagnostic_from_violation(
                 IOError {
                     message: err.to_string(),
                 },
@@ -98,10 +98,10 @@ impl Diagnostics {
         let name = path.map_or_else(|| "-".into(), Path::to_string_lossy);
         let dummy = SourceFileBuilder::new(name, "").finish();
         Self::new(
-            vec![OldDiagnostic::syntax_error(
+            vec![create_syntax_error_diagnostic(
+                dummy,
                 err,
                 TextRange::default(),
-                dummy,
             )],
             FxHashMap::default(),
         )
@@ -9,12 +9,13 @@ use itertools::{Itertools, iterate};
 use ruff_linter::linter::FixTable;
 use serde::Serialize;
 
+use ruff_db::diagnostic::{Diagnostic, SecondaryCode};
 use ruff_linter::fs::relativize_path;
 use ruff_linter::logging::LogLevel;
 use ruff_linter::message::{
     AzureEmitter, Emitter, EmitterContext, GithubEmitter, GitlabEmitter, GroupedEmitter,
-    JsonEmitter, JsonLinesEmitter, JunitEmitter, OldDiagnostic, PylintEmitter, RdjsonEmitter,
-    SarifEmitter, SecondaryCode, TextEmitter,
+    JsonEmitter, JsonLinesEmitter, JunitEmitter, PylintEmitter, RdjsonEmitter, SarifEmitter,
+    TextEmitter,
 };
 use ruff_linter::notify_user;
 use ruff_linter::settings::flags::{self};
@@ -306,8 +307,7 @@ impl Printer {
     .sorted_by_key(|(code, message)| (*code, message.fixable()))
     .fold(
         vec![],
-        |mut acc: Vec<((Option<&SecondaryCode>, &OldDiagnostic), usize)>,
-         (code, message)| {
+        |mut acc: Vec<((Option<&SecondaryCode>, &Diagnostic), usize)>, (code, message)| {
             if let Some(((prev_code, _prev_message), count)) = acc.last_mut() {
                 if *prev_code == code {
                     *count += 1;
@@ -1067,7 +1067,7 @@ fn show_statistics_syntax_errors() {
     success: false
     exit_code: 1
     ----- stdout -----
-    1 syntax-error
+    1 invalid-syntax
     Found 1 error.
 
     ----- stderr -----
@@ -1080,7 +1080,7 @@ fn show_statistics_syntax_errors() {
     success: false
     exit_code: 1
     ----- stdout -----
-    1 syntax-error
+    1 invalid-syntax
     Found 1 error.
 
     ----- stderr -----
@@ -1093,7 +1093,7 @@ fn show_statistics_syntax_errors() {
     success: false
     exit_code: 1
     ----- stdout -----
-    1 syntax-error
+    1 invalid-syntax
     Found 1 error.
 
     ----- stderr -----
@@ -498,11 +498,8 @@ fn bench_project(benchmark: &ProjectBenchmark, criterion: &mut Criterion) {
     let diagnostics = result.len();
 
     assert!(
-        diagnostics > 1 && diagnostics <= max_diagnostics,
-        "Expected between {} and {} diagnostics but got {}",
-        1,
-        max_diagnostics,
-        diagnostics
+        diagnostics <= max_diagnostics,
+        "Expected <={max_diagnostics} diagnostics but got {diagnostics}"
     );
 }
 
@@ -570,6 +567,23 @@ fn anyio(criterion: &mut Criterion) {
     bench_project(&benchmark, criterion);
 }
 
+fn datetype(criterion: &mut Criterion) {
+    let benchmark = ProjectBenchmark::new(
+        RealWorldProject {
+            name: "DateType",
+            repository: "https://github.com/glyph/DateType",
+            commit: "57c9c93cf2468069f72945fc04bf27b64100dad8",
+            paths: vec![SystemPath::new("src")],
+            dependencies: vec![],
+            max_dep_date: "2025-07-04",
+            python_version: PythonVersion::PY313,
+        },
+        0,
+    );
+
+    bench_project(&benchmark, criterion);
+}
+
 criterion_group!(check_file, benchmark_cold, benchmark_incremental);
 criterion_group!(
     micro,
@@ -578,5 +592,5 @@ criterion_group!(
     benchmark_complex_constrained_attributes_1,
     benchmark_complex_constrained_attributes_2,
 );
-criterion_group!(project, anyio, attrs, hydra);
+criterion_group!(project, anyio, attrs, hydra, datetype);
 criterion_main!(check_file, micro, project);
@@ -242,19 +242,20 @@ fn large(bencher: Bencher, benchmark: &Benchmark) {
     run_single_threaded(bencher, benchmark);
 }
 
-#[bench(args=[&*PYDANTIC], sample_size=3, sample_count=3)]
-fn multithreaded(bencher: Bencher, benchmark: &Benchmark) {
-    let thread_pool = ThreadPoolBuilder::new().build().unwrap();
+// Currently disabled because the benchmark is too noisy (± 10%) to give useful feedback.
+// #[bench(args=[&*PYDANTIC], sample_size=3, sample_count=3)]
+// fn multithreaded(bencher: Bencher, benchmark: &Benchmark) {
+//     let thread_pool = ThreadPoolBuilder::new().build().unwrap();
 
-    bencher
-        .with_inputs(|| benchmark.setup_iteration())
-        .bench_local_values(|db| {
-            thread_pool.install(|| {
-                check_project(&db, benchmark.max_diagnostics);
-                db
-            })
-        });
-}
+//     bencher
+//         .with_inputs(|| benchmark.setup_iteration())
+//         .bench_local_values(|db| {
+//             thread_pool.install(|| {
+//                 check_project(&db, benchmark.max_diagnostics);
+//                 db
+//             })
+//         });
+// }
 
 fn main() {
     ThreadPoolBuilder::new()
@@ -13,6 +13,7 @@ license = { workspace = true }
 [dependencies]
 ruff_annotate_snippets = { workspace = true }
 ruff_cache = { workspace = true, optional = true }
+ruff_diagnostics = { workspace = true }
 ruff_notebook = { workspace = true }
 ruff_python_ast = { workspace = true, features = ["get-size"] }
 ruff_python_parser = { workspace = true }
@@ -1,10 +1,11 @@
 use std::{fmt::Formatter, sync::Arc};
 
 use render::{FileResolver, Input};
-use ruff_source_file::{SourceCode, SourceFile};
+use ruff_diagnostics::Fix;
+use ruff_source_file::{LineColumn, SourceCode, SourceFile};
 
 use ruff_annotate_snippets::Level as AnnotateLevel;
-use ruff_text_size::{Ranged, TextRange};
+use ruff_text_size::{Ranged, TextRange, TextSize};
 
 pub use self::render::DisplayDiagnostic;
 use crate::{Db, files::File};
@@ -62,10 +63,37 @@ impl Diagnostic {
             message: message.into_diagnostic_message(),
             annotations: vec![],
             subs: vec![],
+            fix: None,
+            parent: None,
+            noqa_offset: None,
+            secondary_code: None,
         });
         Diagnostic { inner }
     }
 
+    /// Creates a `Diagnostic` for a syntax error.
+    ///
+    /// Unlike the more general [`Diagnostic::new`], this requires a [`Span`] and a [`TextRange`]
+    /// attached to it.
+    ///
+    /// This should _probably_ be a method on the syntax errors, but
+    /// at time of writing, `ruff_db` depends on `ruff_python_parser` instead of
+    /// the other way around. And since we want to do this conversion in a couple
+    /// places, it makes sense to centralize it _somewhere_. So it's here for now.
+    ///
+    /// Note that `message` is stored in the primary annotation, _not_ in the primary diagnostic
+    /// message.
+    pub fn syntax_error(
+        span: impl Into<Span>,
+        message: impl IntoDiagnosticMessage,
+        range: impl Ranged,
+    ) -> Diagnostic {
+        let mut diag = Diagnostic::new(DiagnosticId::InvalidSyntax, Severity::Error, "");
+        let span = span.into().with_range(range.range());
+        diag.annotate(Annotation::primary(span).message(message));
+        diag
+    }
+
     /// Add an annotation to this diagnostic.
     ///
     /// Annotations for a diagnostic are optional, but if any are added,
@@ -226,6 +254,11 @@ impl Diagnostic {
         self.primary_annotation().map(|ann| ann.span.clone())
     }
 
+    /// Returns a reference to the primary span of this diagnostic.
+    pub fn primary_span_ref(&self) -> Option<&Span> {
+        self.primary_annotation().map(|ann| &ann.span)
+    }
+
     /// Returns the tags from the primary annotation of this diagnostic if it exists.
     pub fn primary_tags(&self) -> Option<&[DiagnosticTag]> {
         self.primary_annotation().map(|ann| ann.tags.as_slice())
@@ -268,6 +301,167 @@ impl Diagnostic {
     pub fn sub_diagnostics(&self) -> &[SubDiagnostic] {
         &self.inner.subs
     }
+
+    /// Returns the fix for this diagnostic if it exists.
+    pub fn fix(&self) -> Option<&Fix> {
+        self.inner.fix.as_ref()
+    }
+
+    /// Set the fix for this diagnostic.
+    pub fn set_fix(&mut self, fix: Fix) {
+        Arc::make_mut(&mut self.inner).fix = Some(fix);
+    }
+
+    /// Remove the fix for this diagnostic.
+    pub fn remove_fix(&mut self) {
+        Arc::make_mut(&mut self.inner).fix = None;
+    }
+
+    /// Returns `true` if the diagnostic contains a [`Fix`].
+    pub fn fixable(&self) -> bool {
+        self.fix().is_some()
+    }
+
+    /// Returns the offset of the parent statement for this diagnostic if it exists.
+    ///
+    /// This is primarily used for checking noqa/secondary code suppressions.
+    pub fn parent(&self) -> Option<TextSize> {
+        self.inner.parent
+    }
+
+    /// Set the offset of the diagnostic's parent statement.
+    pub fn set_parent(&mut self, parent: TextSize) {
+        Arc::make_mut(&mut self.inner).parent = Some(parent);
+    }
+
+    /// Returns the remapped offset for a suppression comment if it exists.
+    ///
+    /// Like [`Diagnostic::parent`], this is used for noqa code suppression comments in Ruff.
+    pub fn noqa_offset(&self) -> Option<TextSize> {
+        self.inner.noqa_offset
+    }
+
+    /// Set the remapped offset for a suppression comment.
+    pub fn set_noqa_offset(&mut self, noqa_offset: TextSize) {
+        Arc::make_mut(&mut self.inner).noqa_offset = Some(noqa_offset);
+    }
+
+    /// Returns the secondary code for the diagnostic if it exists.
+    ///
+    /// The "primary" code for the diagnostic is its lint name. Diagnostics in ty don't have
+    /// secondary codes (yet), but in Ruff the noqa code is used.
+    pub fn secondary_code(&self) -> Option<&SecondaryCode> {
+        self.inner.secondary_code.as_ref()
+    }
+
+    /// Set the secondary code for this diagnostic.
+    pub fn set_secondary_code(&mut self, code: SecondaryCode) {
+        Arc::make_mut(&mut self.inner).secondary_code = Some(code);
+    }
+
+    /// Returns the name used to represent the diagnostic.
+    pub fn name(&self) -> &'static str {
+        self.id().as_str()
+    }
+
+    /// Returns `true` if `self` is a syntax error message.
+    pub fn is_syntax_error(&self) -> bool {
+        self.id().is_invalid_syntax()
+    }
+
+    /// Returns the message body to display to the user.
+    pub fn body(&self) -> &str {
+        self.primary_message()
+    }
+
+    /// Returns the fix suggestion for the violation.
+    pub fn suggestion(&self) -> Option<&str> {
+        self.primary_annotation()?.get_message()
+    }
+
+    /// Returns the URL for the rule documentation, if it exists.
+    pub fn to_url(&self) -> Option<String> {
+        if self.is_syntax_error() {
+            None
+        } else {
+            Some(format!(
+                "{}/rules/{}",
+                env!("CARGO_PKG_HOMEPAGE"),
+                self.name()
+            ))
+        }
+    }
+
+    /// Returns the filename for the message.
+    ///
+    /// Panics if the diagnostic has no primary span, or if its file is not a `SourceFile`.
+    pub fn expect_ruff_filename(&self) -> String {
+        self.expect_primary_span()
+            .expect_ruff_file()
+            .name()
+            .to_string()
+    }
+
+    /// Computes the start source location for the message.
+    ///
+    /// Panics if the diagnostic has no primary span, if its file is not a `SourceFile`, or if the
+    /// span has no range.
+    pub fn expect_ruff_start_location(&self) -> LineColumn {
+        self.expect_primary_span()
+            .expect_ruff_file()
+            .to_source_code()
+            .line_column(self.expect_range().start())
+    }
+
+    /// Computes the end source location for the message.
+    ///
+    /// Panics if the diagnostic has no primary span, if its file is not a `SourceFile`, or if the
+    /// span has no range.
+    pub fn expect_ruff_end_location(&self) -> LineColumn {
+        self.expect_primary_span()
+            .expect_ruff_file()
+            .to_source_code()
+            .line_column(self.expect_range().end())
+    }
+
+    /// Returns the [`SourceFile`] which the message belongs to.
+    pub fn ruff_source_file(&self) -> Option<&SourceFile> {
+        self.primary_span_ref()?.as_ruff_file()
+    }
+
+    /// Returns the [`SourceFile`] which the message belongs to.
+    ///
+    /// Panics if the diagnostic has no primary span, or if its file is not a `SourceFile`.
+    pub fn expect_ruff_source_file(&self) -> SourceFile {
+        self.expect_primary_span().expect_ruff_file().clone()
+    }
+
+    /// Returns the [`TextRange`] for the diagnostic.
+    pub fn range(&self) -> Option<TextRange> {
+        self.primary_span()?.range()
+    }
+
+    /// Returns the [`TextRange`] for the diagnostic.
+    ///
+    /// Panics if the diagnostic has no primary span or if the span has no range.
+    pub fn expect_range(&self) -> TextRange {
+        self.range().expect("Expected a range for the primary span")
+    }
 }
 
+impl Ord for Diagnostic {
+    fn cmp(&self, other: &Self) -> std::cmp::Ordering {
+        self.partial_cmp(other).unwrap_or(std::cmp::Ordering::Equal)
+    }
+}
+
+impl PartialOrd for Diagnostic {
+    fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
+        Some(
+            (self.ruff_source_file()?, self.range()?.start())
+                .cmp(&(other.ruff_source_file()?, other.range()?.start())),
+        )
+    }
+}
+
 #[derive(Debug, Clone, Eq, PartialEq, get_size2::GetSize)]
@@ -277,6 +471,10 @@ struct DiagnosticInner {
     message: DiagnosticMessage,
     annotations: Vec<Annotation>,
     subs: Vec<SubDiagnostic>,
+    fix: Option<Fix>,
+    parent: Option<TextSize>,
+    noqa_offset: Option<TextSize>,
+    secondary_code: Option<SecondaryCode>,
 }
 
 struct RenderingSortKey<'a> {
@@ -897,9 +1095,15 @@ impl Span {
     ///
     /// Panics if the file is a [`UnifiedFile::Ty`] instead of a [`UnifiedFile::Ruff`].
     pub fn expect_ruff_file(&self) -> &SourceFile {
+        self.as_ruff_file()
+            .expect("Expected a ruff `SourceFile`, found a ty `File`")
+    }
+
+    /// Returns the [`SourceFile`] attached to this [`Span`].
+    pub fn as_ruff_file(&self) -> Option<&SourceFile> {
         match &self.file {
-            UnifiedFile::Ty(_) => panic!("Expected a ruff `SourceFile`, found a ty `File`"),
-            UnifiedFile::Ruff(file) => file,
+            UnifiedFile::Ty(_) => None,
+            UnifiedFile::Ruff(file) => Some(file),
         }
     }
 }
@@ -1147,41 +1351,52 @@ impl<T: std::fmt::Display> IntoDiagnosticMessage for T {
    }
}

-/// Creates a `Diagnostic` from a parse error.
-///
-/// This should _probably_ be a method on `ruff_python_parser::ParseError`, but
-/// at time of writing, `ruff_db` depends on `ruff_python_parser` instead of
-/// the other way around. And since we want to do this conversion in a couple
-/// places, it makes sense to centralize it _somewhere_. So it's here for now.
-pub fn create_parse_diagnostic(file: File, err: &ruff_python_parser::ParseError) -> Diagnostic {
-    let mut diag = Diagnostic::new(DiagnosticId::InvalidSyntax, Severity::Error, "");
-    let span = Span::from(file).with_range(err.location);
-    diag.annotate(Annotation::primary(span).message(&err.error));
-    diag
-}
-
-/// Creates a `Diagnostic` from an unsupported syntax error.
-///
-/// See [`create_parse_diagnostic`] for more details.
-pub fn create_unsupported_syntax_diagnostic(
-    file: File,
-    err: &ruff_python_parser::UnsupportedSyntaxError,
-) -> Diagnostic {
-    let mut diag = Diagnostic::new(DiagnosticId::InvalidSyntax, Severity::Error, "");
-    let span = Span::from(file).with_range(err.range);
-    diag.annotate(Annotation::primary(span).message(err.to_string()));
-    diag
-}
-
-/// Creates a `Diagnostic` from a semantic syntax error.
-///
-/// See [`create_parse_diagnostic`] for more details.
-pub fn create_semantic_syntax_diagnostic(
-    file: File,
-    err: &ruff_python_parser::semantic_errors::SemanticSyntaxError,
-) -> Diagnostic {
-    let mut diag = Diagnostic::new(DiagnosticId::InvalidSyntax, Severity::Error, "");
-    let span = Span::from(file).with_range(err.range);
-    diag.annotate(Annotation::primary(span).message(err.to_string()));
-    diag
-}
+/// A secondary identifier for a lint diagnostic.
+///
+/// For Ruff rules this means the noqa code.
+#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Default, Hash, get_size2::GetSize)]
+#[cfg_attr(feature = "serde", derive(serde::Serialize), serde(transparent))]
+pub struct SecondaryCode(String);
+
+impl SecondaryCode {
+    pub fn new(code: String) -> Self {
+        Self(code)
+    }
+
+    pub fn as_str(&self) -> &str {
+        &self.0
+    }
+}
+
+impl std::fmt::Display for SecondaryCode {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        f.write_str(&self.0)
+    }
+}
+
+impl std::ops::Deref for SecondaryCode {
+    type Target = str;
+
+    fn deref(&self) -> &Self::Target {
+        &self.0
+    }
+}
+
+impl PartialEq<&str> for SecondaryCode {
+    fn eq(&self, other: &&str) -> bool {
+        self.0 == *other
+    }
+}
+
+impl PartialEq<SecondaryCode> for &str {
+    fn eq(&self, other: &SecondaryCode) -> bool {
+        other.eq(self)
+    }
+}
+
+// for `hashbrown::EntryRef`
+impl From<&SecondaryCode> for SecondaryCode {
+    fn from(value: &SecondaryCode) -> Self {
+        value.clone()
+    }
+}
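The `SecondaryCode` newtype added in this hunk can be exercised in isolation. A minimal sketch, assuming only the pieces visible above (the real type in `ruff_db::diagnostic` also derives `Ord`, `Hash`, `get_size2::GetSize`, and optional serde support):

```rust
// Standalone sketch of the `SecondaryCode` newtype from the hunk above.
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct SecondaryCode(String);

impl SecondaryCode {
    pub fn new(code: String) -> Self {
        Self(code)
    }

    pub fn as_str(&self) -> &str {
        &self.0
    }
}

// Symmetric impls so `==` works against string literals in both directions.
impl PartialEq<&str> for SecondaryCode {
    fn eq(&self, other: &&str) -> bool {
        self.0 == *other
    }
}

impl PartialEq<SecondaryCode> for &str {
    fn eq(&self, other: &SecondaryCode) -> bool {
        other.eq(self)
    }
}

fn main() {
    let code = SecondaryCode::new("E501".to_string());
    assert!(code == "E501");
    assert!("E501" == code);
    assert_eq!(code.as_str(), "E501");
    println!("ok");
}
```

Note how the reversed impl simply delegates to the forward one, the same trick the diff applies to the `NoqaCode` comparisons in a later hunk.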
@@ -16,5 +16,6 @@ doctest = false
[dependencies]
ruff_text_size = { workspace = true }

+get-size2 = { workspace = true }
is-macro = { workspace = true }
serde = { workspace = true, optional = true, features = [] }
@@ -7,7 +7,7 @@ use ruff_text_size::{Ranged, TextRange, TextSize};

/// A text edit to be applied to a source file. Inserts, deletes, or replaces
/// content at a given location.
-#[derive(Clone, Debug, PartialEq, Eq, Hash)]
+#[derive(Clone, Debug, PartialEq, Eq, Hash, get_size2::GetSize)]
#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
pub struct Edit {
    /// The start location of the edit.
@@ -6,7 +6,9 @@ use ruff_text_size::{Ranged, TextSize};
use crate::edit::Edit;

/// Indicates if a fix can be applied.
-#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord, is_macro::Is)]
+#[derive(
+    Copy, Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord, is_macro::Is, get_size2::GetSize,
+)]
#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
#[cfg_attr(feature = "serde", serde(rename_all = "lowercase"))]
pub enum Applicability {
@@ -30,7 +32,7 @@ pub enum Applicability {
}

/// Indicates the level of isolation required to apply a fix.
-#[derive(Default, Copy, Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord)]
+#[derive(Default, Copy, Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord, get_size2::GetSize)]
#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
pub enum IsolationLevel {
    /// The fix should be applied as long as no other fixes in the same group have been applied.
@@ -41,7 +43,7 @@ pub enum IsolationLevel {
}

/// A collection of [`Edit`] elements to be applied to a source file.
-#[derive(Debug, PartialEq, Eq, Clone)]
+#[derive(Debug, PartialEq, Eq, Clone, get_size2::GetSize)]
#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
pub struct Fix {
    /// The [`Edit`] elements to be applied, sorted by [`Edit::start`] in ascending order.
@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
-version = "0.12.1"
+version = "0.12.2"
publish = false
authors = { workspace = true }
edition = { workspace = true }
@@ -15,7 +15,7 @@ license = { workspace = true }
[dependencies]
ruff_annotate_snippets = { workspace = true }
ruff_cache = { workspace = true }
-ruff_db = { workspace = true }
+ruff_db = { workspace = true, features = ["serde"] }
ruff_diagnostics = { workspace = true, features = ["serde"] }
ruff_notebook = { workspace = true }
ruff_macros = { workspace = true }
@@ -1,29 +1,30 @@
-try:
-    pass
-except Exception:
-    continue
+for _ in []:
+    try:
+        pass
+    except Exception:
+        continue

-try:
-    pass
-except:
-    continue
+    try:
+        pass
+    except:
+        continue

-try:
-    pass
-except (Exception,):
-    continue
+    try:
+        pass
+    except (Exception,):
+        continue

-try:
-    pass
-except (Exception, ValueError):
-    continue
+    try:
+        pass
+    except (Exception, ValueError):
+        continue

-try:
-    pass
-except ValueError:
-    continue
+    try:
+        pass
+    except ValueError:
+        continue

-try:
-    pass
-except (ValueError,):
-    continue
+    try:
+        pass
+    except (ValueError,):
+        continue
@@ -185,38 +185,45 @@ for _section, section_items in groupby(items, key=lambda p: p[1]):
    collect_shop_items(shopper, section_items)

# Shouldn't trigger the warning when there is a return statement.
-for _section, section_items in groupby(items, key=lambda p: p[1]):
-    if _section == "greens":
-        collect_shop_items(shopper, section_items)
-        return
-    elif _section == "frozen items":
-        return section_items
-    collect_shop_items(shopper, section_items)
+def foo():
+    for _section, section_items in groupby(items, key=lambda p: p[1]):
+        if _section == "greens":
+            collect_shop_items(shopper, section_items)
+            return
+        elif _section == "frozen items":
+            return section_items
+        collect_shop_items(shopper, section_items)

# Should trigger the warning for duplicate access, even if is a return statement after.
-for _section, section_items in groupby(items, key=lambda p: p[1]):
-    if _section == "greens":
-        collect_shop_items(shopper, section_items)
-        collect_shop_items(shopper, section_items)
-        return
+def foo():
+    from itertools import groupby
+    for _section, section_items in groupby(items, key=lambda p: p[1]):
+        if _section == "greens":
+            collect_shop_items(shopper, section_items)
+            collect_shop_items(shopper, section_items)
+            return

# Should trigger the warning for duplicate access, even if is a return in another branch.
-for _section, section_items in groupby(items, key=lambda p: p[1]):
-    if _section == "greens":
-        collect_shop_items(shopper, section_items)
-        return
-    elif _section == "frozen items":
-        collect_shop_items(shopper, section_items)
-    collect_shop_items(shopper, section_items)
+def foo():
+    from itertools import groupby
+    for _section, section_items in groupby(items, key=lambda p: p[1]):
+        if _section == "greens":
+            collect_shop_items(shopper, section_items)
+            return
+        elif _section == "frozen items":
+            collect_shop_items(shopper, section_items)
+        collect_shop_items(shopper, section_items)

# Should trigger, since only one branch has a return statement.
-for _section, section_items in groupby(items, key=lambda p: p[1]):
-    if _section == "greens":
-        collect_shop_items(shopper, section_items)
-        return
-    elif _section == "frozen items":
-        collect_shop_items(shopper, section_items)
-    collect_shop_items(shopper, section_items)  # B031
+def foo():
+    from itertools import groupby
+    for _section, section_items in groupby(items, key=lambda p: p[1]):
+        if _section == "greens":
+            collect_shop_items(shopper, section_items)
+            return
+        elif _section == "frozen items":
+            collect_shop_items(shopper, section_items)
+        collect_shop_items(shopper, section_items)  # B031

# Let's redefine the `groupby` function to make sure we pick up the correct one.
# NOTE: This should always be at the end of the file.
@@ -26,8 +26,9 @@ abc(**{'a': b}, **{'a': c})  # PIE804
abc(a=1, **{'a': c}, **{'b': c})  # PIE804

# Some values need to be parenthesized.
-abc(foo=1, **{'bar': (bar := 1)})  # PIE804
-abc(foo=1, **{'bar': (yield 1)})  # PIE804
+def foo():
+    abc(foo=1, **{'bar': (bar := 1)})  # PIE804
+    abc(foo=1, **{'bar': (yield 1)})  # PIE804

# https://github.com/astral-sh/ruff/issues/18036
# The autofix for this is unsafe due to the comments inside the dictionary.
@@ -27,8 +27,9 @@ with contextlib.ExitStack() as stack:
    close_files = stack.pop_all().close

# OK
-with contextlib.AsyncExitStack() as exit_stack:
-    f = await exit_stack.enter_async_context(open("filename"))
+async def foo():
+    with contextlib.AsyncExitStack() as exit_stack:
+        f = await exit_stack.enter_async_context(open("filename"))

# OK (false negative)
with contextlib.ExitStack():
@@ -275,9 +276,10 @@ class ExampleClassTests(TestCase):
        cls.enterClassContext(open("filename"))

# OK
-class ExampleAsyncTests(IsolatedAsyncioTestCase):
-    async def test_something(self):
-        await self.enterAsyncContext(open("filename"))
+async def foo():
+    class ExampleAsyncTests(IsolatedAsyncioTestCase):
+        async def test_something(self):
+            await self.enterAsyncContext(open("filename"))

# OK
class ExampleTests(TestCase):
@@ -1,98 +1,99 @@
-# Errors
-a = "hello"
-
-# SIM116
-if a == "foo":
-    return "bar"
-elif a == "bar":
-    return "baz"
-elif a == "boo":
-    return "ooh"
-else:
-    return 42
-
-# SIM116
-if a == 1:
-    return (1, 2, 3)
-elif a == 2:
-    return (4, 5, 6)
-elif a == 3:
-    return (7, 8, 9)
-else:
-    return (10, 11, 12)
-
-# SIM116
-if a == 1:
-    return (1, 2, 3)
-elif a == 2:
-    return (4, 5, 6)
-elif a == 3:
-    return (7, 8, 9)
-
-# SIM116
-if a == "hello 'sir'":
-    return (1, 2, 3)
-elif a == 'goodbye "mam"':
-    return (4, 5, 6)
-elif a == """Fairwell 'mister'""":
-    return (7, 8, 9)
-else:
-    return (10, 11, 12)
-
-# SIM116
-if a == b"one":
-    return 1
-elif a == b"two":
-    return 2
-elif a == b"three":
-    return 3
-
-# SIM116
-if a == "hello 'sir'":
-    return ("hello'", 'hi"', 3)
-elif a == 'goodbye "mam"':
-    return (4, 5, 6)
-elif a == """Fairwell 'mister'""":
-    return (7, 8, 9)
-else:
-    return (10, 11, 12)
-
-# OK
-if a == "foo":
-    return "bar"
-elif a == "bar":
-    return baz()
-elif a == "boo":
-    return "ooh"
-else:
-    return 42
-
-# OK
-if a == b"one":
-    return 1
-elif b == b"two":
-    return 2
-elif a == b"three":
-    return 3
-
-# SIM116
-if func_name == "create":
-    return "A"
-elif func_name == "modify":
-    return "M"
-elif func_name == "remove":
-    return "D"
-elif func_name == "move":
-    return "MV"
-
-# OK
-def no_return_in_else(platform):
-    if platform == "linux":
-        return "auditwheel repair -w {dest_dir} {wheel}"
-    elif platform == "macos":
-        return "delocate-wheel --require-archs {delocate_archs} -w {dest_dir} -v {wheel}"
-    elif platform == "windows":
-        return ""
-    else:
-        msg = f"Unknown platform: {platform!r}"
-        raise ValueError(msg)
+def foo():
+    # Errors
+    a = "hello"
+
+    # SIM116
+    if a == "foo":
+        return "bar"
+    elif a == "bar":
+        return "baz"
+    elif a == "boo":
+        return "ooh"
+    else:
+        return 42
+
+    # SIM116
+    if a == 1:
+        return (1, 2, 3)
+    elif a == 2:
+        return (4, 5, 6)
+    elif a == 3:
+        return (7, 8, 9)
+    else:
+        return (10, 11, 12)
+
+    # SIM116
+    if a == 1:
+        return (1, 2, 3)
+    elif a == 2:
+        return (4, 5, 6)
+    elif a == 3:
+        return (7, 8, 9)
+
+    # SIM116
+    if a == "hello 'sir'":
+        return (1, 2, 3)
+    elif a == 'goodbye "mam"':
+        return (4, 5, 6)
+    elif a == """Fairwell 'mister'""":
+        return (7, 8, 9)
+    else:
+        return (10, 11, 12)
+
+    # SIM116
+    if a == b"one":
+        return 1
+    elif a == b"two":
+        return 2
+    elif a == b"three":
+        return 3
+
+    # SIM116
+    if a == "hello 'sir'":
+        return ("hello'", 'hi"', 3)
+    elif a == 'goodbye "mam"':
+        return (4, 5, 6)
+    elif a == """Fairwell 'mister'""":
+        return (7, 8, 9)
+    else:
+        return (10, 11, 12)
+
+    # OK
+    if a == "foo":
+        return "bar"
+    elif a == "bar":
+        return baz()
+    elif a == "boo":
+        return "ooh"
+    else:
+        return 42
+
+    # OK
+    if a == b"one":
+        return 1
+    elif b == b"two":
+        return 2
+    elif a == b"three":
+        return 3
+
+    # SIM116
+    if func_name == "create":
+        return "A"
+    elif func_name == "modify":
+        return "M"
+    elif func_name == "remove":
+        return "D"
+    elif func_name == "move":
+        return "MV"
+
+    # OK
+    def no_return_in_else(platform):
+        if platform == "linux":
+            return "auditwheel repair -w {dest_dir} {wheel}"
+        elif platform == "macos":
+            return "delocate-wheel --require-archs {delocate_archs} -w {dest_dir} -v {wheel}"
+        elif platform == "windows":
+            return ""
+        else:
+            msg = f"Unknown platform: {platform!r}"
+            raise ValueError(msg)
@@ -81,4 +81,5 @@ match(foo):
    # https://github.com/astral-sh/ruff/issues/12094
    pass;

-yield, x
+def foo():
+    yield, x
@@ -125,3 +125,20 @@ class J:
class K:
    f: F = F()
    g: G = G()
+
+
+# Regression test for https://github.com/astral-sh/ruff/issues/19014
+# These are all valid field calls and should not cause diagnostics.
+@attr.define
+class TestAttrField:
+    attr_field_factory: list[int] = attr.field(factory=list)
+    attr_field_default: list[int] = attr.field(default=attr.Factory(list))
+    attr_factory: list[int] = attr.Factory(list)
+    attr_ib: list[int] = attr.ib(factory=list)
+    attr_attr: list[int] = attr.attr(factory=list)
+    attr_attrib: list[int] = attr.attrib(factory=list)
+
+
+@attr.attributes
+class TestAttrAttributes:
+    x: list[int] = list()  # RUF009
@@ -28,6 +28,7 @@ use itertools::Itertools;
use log::debug;
use rustc_hash::{FxHashMap, FxHashSet};

+use ruff_db::diagnostic::Diagnostic;
use ruff_diagnostics::{Applicability, Fix, IsolationLevel};
use ruff_notebook::{CellOffsets, NotebookIndex};
use ruff_python_ast::helpers::{collect_import_from_member, is_docstring_stmt, to_module_path};
@@ -63,6 +64,7 @@ use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::checkers::ast::annotation::AnnotationContext;
use crate::docstrings::extraction::ExtractionTarget;
use crate::importer::{ImportRequest, Importer, ResolutionError};
+use crate::message::diagnostic_from_violation;
use crate::noqa::NoqaMapping;
use crate::package::PackageRoot;
use crate::preview::is_undefined_export_in_dunder_init_enabled;
@@ -74,7 +76,7 @@ use crate::rules::pylint::rules::{AwaitOutsideAsync, LoadBeforeGlobalDeclaration
use crate::rules::{flake8_pyi, flake8_type_checking, pyflakes, pyupgrade};
use crate::settings::rule_table::RuleTable;
use crate::settings::{LinterSettings, TargetVersion, flags};
-use crate::{Edit, OldDiagnostic, Violation};
+use crate::{Edit, Violation};
use crate::{Locator, docstrings, noqa};

mod analyze;
@@ -388,7 +390,7 @@ impl<'a> Checker<'a> {

    /// Return a [`DiagnosticGuard`] for reporting a diagnostic.
    ///
-    /// The guard derefs to an [`OldDiagnostic`], so it can be used to further modify the diagnostic
+    /// The guard derefs to a [`Diagnostic`], so it can be used to further modify the diagnostic
    /// before it is added to the collection in the checker on `Drop`.
    pub(crate) fn report_diagnostic<'chk, T: Violation>(
        &'chk self,
@@ -401,7 +403,7 @@ impl<'a> Checker<'a> {
    /// Return a [`DiagnosticGuard`] for reporting a diagnostic if the corresponding rule is
    /// enabled.
    ///
-    /// The guard derefs to an [`OldDiagnostic`], so it can be used to further modify the diagnostic
+    /// The guard derefs to a [`Diagnostic`], so it can be used to further modify the diagnostic
    /// before it is added to the collection in the checker on `Drop`.
    pub(crate) fn report_diagnostic_if_enabled<'chk, T: Violation>(
        &'chk self,
@@ -3116,9 +3118,9 @@ pub(crate) fn check_ast(
/// A type for collecting diagnostics in a given file.
///
/// [`LintContext::report_diagnostic`] can be used to obtain a [`DiagnosticGuard`], which will push
-/// a [`Violation`] to the contained [`OldDiagnostic`] collection on `Drop`.
+/// a [`Violation`] to the contained [`Diagnostic`] collection on `Drop`.
pub(crate) struct LintContext<'a> {
-    diagnostics: RefCell<Vec<OldDiagnostic>>,
+    diagnostics: RefCell<Vec<Diagnostic>>,
    source_file: SourceFile,
    rules: RuleTable,
    settings: &'a LinterSettings,
@@ -3126,7 +3128,7 @@ pub(crate) struct LintContext<'a> {

impl<'a> LintContext<'a> {
    /// Create a new collector with the given `source_file` and an empty collection of
-    /// `OldDiagnostic`s.
+    /// `Diagnostic`s.
    pub(crate) fn new(path: &Path, contents: &str, settings: &'a LinterSettings) -> Self {
        let source_file =
            SourceFileBuilder::new(path.to_string_lossy().as_ref(), contents).finish();
@@ -3147,7 +3149,7 @@ impl<'a> LintContext<'a> {

    /// Return a [`DiagnosticGuard`] for reporting a diagnostic.
    ///
-    /// The guard derefs to an [`OldDiagnostic`], so it can be used to further modify the diagnostic
+    /// The guard derefs to a [`Diagnostic`], so it can be used to further modify the diagnostic
    /// before it is added to the collection in the context on `Drop`.
    pub(crate) fn report_diagnostic<'chk, T: Violation>(
        &'chk self,
@@ -3156,7 +3158,7 @@ impl<'a> LintContext<'a> {
    ) -> DiagnosticGuard<'chk, 'a> {
        DiagnosticGuard {
            context: self,
-            diagnostic: Some(OldDiagnostic::new(kind, range, &self.source_file)),
+            diagnostic: Some(diagnostic_from_violation(kind, range, &self.source_file)),
            rule: T::rule(),
        }
    }
@@ -3164,7 +3166,7 @@ impl<'a> LintContext<'a> {
    /// Return a [`DiagnosticGuard`] for reporting a diagnostic if the corresponding rule is
    /// enabled.
    ///
-    /// The guard derefs to an [`OldDiagnostic`], so it can be used to further modify the diagnostic
+    /// The guard derefs to a [`Diagnostic`], so it can be used to further modify the diagnostic
    /// before it is added to the collection in the context on `Drop`.
    pub(crate) fn report_diagnostic_if_enabled<'chk, T: Violation>(
        &'chk self,
@@ -3175,7 +3177,7 @@ impl<'a> LintContext<'a> {
        if self.is_rule_enabled(rule) {
            Some(DiagnosticGuard {
                context: self,
-                diagnostic: Some(OldDiagnostic::new(kind, range, &self.source_file)),
+                diagnostic: Some(diagnostic_from_violation(kind, range, &self.source_file)),
                rule,
            })
        } else {
@@ -3199,17 +3201,17 @@ impl<'a> LintContext<'a> {
    }

    #[inline]
-    pub(crate) fn into_parts(self) -> (Vec<OldDiagnostic>, SourceFile) {
+    pub(crate) fn into_parts(self) -> (Vec<Diagnostic>, SourceFile) {
        (self.diagnostics.into_inner(), self.source_file)
    }

    #[inline]
-    pub(crate) fn as_mut_vec(&mut self) -> &mut Vec<OldDiagnostic> {
+    pub(crate) fn as_mut_vec(&mut self) -> &mut Vec<Diagnostic> {
        self.diagnostics.get_mut()
    }

    #[inline]
-    pub(crate) fn iter(&mut self) -> impl Iterator<Item = &OldDiagnostic> {
+    pub(crate) fn iter(&mut self) -> impl Iterator<Item = &Diagnostic> {
        self.diagnostics.get_mut().iter()
    }
}
@@ -3227,7 +3229,7 @@ pub(crate) struct DiagnosticGuard<'a, 'b> {
    /// The diagnostic that we want to report.
    ///
    /// This is always `Some` until the `Drop` (or `defuse`) call.
-    diagnostic: Option<OldDiagnostic>,
+    diagnostic: Option<Diagnostic>,
    rule: Rule,
}
@@ -3253,11 +3255,14 @@ impl DiagnosticGuard<'_, '_> {
    #[inline]
    pub(crate) fn set_fix(&mut self, fix: Fix) {
        if !self.context.rules.should_fix(self.rule) {
-            self.fix = None;
+            self.diagnostic.as_mut().unwrap().remove_fix();
            return;
        }
        let applicability = self.resolve_applicability(&fix);
-        self.fix = Some(fix.with_applicability(applicability));
+        self.diagnostic
+            .as_mut()
+            .unwrap()
+            .set_fix(fix.with_applicability(applicability));
    }

    /// Set the [`Fix`] used to fix the diagnostic, if the provided function returns `Ok`.
@@ -3286,9 +3291,9 @@ impl DiagnosticGuard<'_, '_> {
}

impl std::ops::Deref for DiagnosticGuard<'_, '_> {
-    type Target = OldDiagnostic;
+    type Target = Diagnostic;

-    fn deref(&self) -> &OldDiagnostic {
+    fn deref(&self) -> &Diagnostic {
        // OK because `self.diagnostic` is only `None` within `Drop`.
        self.diagnostic.as_ref().unwrap()
    }
@@ -3296,7 +3301,7 @@ impl std::ops::Deref for DiagnosticGuard<'_, '_> {

/// Return a mutable borrow of the diagnostic in this guard.
impl std::ops::DerefMut for DiagnosticGuard<'_, '_> {
-    fn deref_mut(&mut self) -> &mut OldDiagnostic {
+    fn deref_mut(&mut self) -> &mut Diagnostic {
        // OK because `self.diagnostic` is only `None` within `Drop`.
        self.diagnostic.as_mut().unwrap()
    }
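The `DiagnosticGuard` changes above keep the guard pattern intact: the guard owns an `Option<Diagnostic>`, derefs to it so callers can attach fixes, and pushes it into the context's collection on `Drop`. A simplified, self-contained sketch of that pattern (the type and field names here are illustrative stand-ins, not the real ruff API):

```rust
use std::cell::RefCell;

// Illustrative stand-ins for the real `Diagnostic` and `LintContext` types.
#[derive(Debug, PartialEq)]
struct Diagnostic(String);

struct Context {
    diagnostics: RefCell<Vec<Diagnostic>>,
}

// Holds the diagnostic until `Drop`, then records it in the context.
struct Guard<'a> {
    context: &'a Context,
    // Always `Some` until `Drop` runs, mirroring the comment in the diff.
    diagnostic: Option<Diagnostic>,
}

impl std::ops::Deref for Guard<'_> {
    type Target = Diagnostic;

    fn deref(&self) -> &Diagnostic {
        // OK because `diagnostic` is only `None` within `Drop`.
        self.diagnostic.as_ref().unwrap()
    }
}

impl std::ops::DerefMut for Guard<'_> {
    fn deref_mut(&mut self) -> &mut Diagnostic {
        self.diagnostic.as_mut().unwrap()
    }
}

impl Drop for Guard<'_> {
    fn drop(&mut self) {
        if let Some(diagnostic) = self.diagnostic.take() {
            self.context.diagnostics.borrow_mut().push(diagnostic);
        }
    }
}

fn main() {
    let context = Context {
        diagnostics: RefCell::new(Vec::new()),
    };
    {
        let mut guard = Guard {
            context: &context,
            diagnostic: Some(Diagnostic("unused-import".to_string())),
        };
        // Modify through `DerefMut`, much like `set_fix` does in the diff.
        guard.0.push_str(" (fixable)");
    } // guard dropped here; the diagnostic lands in the collection
    assert_eq!(context.diagnostics.borrow().len(), 1);
    println!("ok");
}
```

The `take()` in `Drop` is what makes the "only `None` within `Drop`" invariant safe to rely on in `Deref`.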
@@ -66,9 +66,9 @@ pub(crate) fn check_noqa(
        }

        let noqa_offsets = diagnostic
-            .parent
+            .parent()
            .into_iter()
-            .chain(std::iter::once(diagnostic.start()))
+            .chain(std::iter::once(diagnostic.expect_range().start()))
            .map(|position| noqa_line_for.resolve(position))
            .unique();
@@ -4,6 +4,7 @@
/// `--select`. For pylint this is e.g. C0414 and E0118 but also C and E01.
use std::fmt::Formatter;

+use ruff_db::diagnostic::SecondaryCode;
use strum_macros::EnumIter;

use crate::registry::Linter;
@@ -52,6 +53,18 @@ impl PartialEq<NoqaCode> for &str {
    }
}

+impl PartialEq<NoqaCode> for SecondaryCode {
+    fn eq(&self, other: &NoqaCode) -> bool {
+        &self.as_str() == other
+    }
+}
+
+impl PartialEq<SecondaryCode> for NoqaCode {
+    fn eq(&self, other: &SecondaryCode) -> bool {
+        other.eq(self)
+    }
+}
+
impl serde::Serialize for NoqaCode {
    fn serialize<S>(&self, serializer: S) -> std::result::Result<S::Ok, S::Error>
    where
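The pair of `PartialEq` impls added above follows the usual recipe for symmetric cross-type equality: implement one direction, then delegate the reversed direction to it. A generic illustration under simplified assumptions (these `NoqaCode`/`SecondaryCode` stand-ins are hypothetical; the real `NoqaCode` stores a linter prefix plus a numeric suffix and lives alongside the impls shown in the diff):

```rust
// Simplified stand-ins: a (prefix, suffix) rule code vs. a plain string code.
struct NoqaCode(&'static str, &'static str);
struct SecondaryCode(String);

impl PartialEq<NoqaCode> for SecondaryCode {
    fn eq(&self, other: &NoqaCode) -> bool {
        // Compare the string form against prefix + suffix, e.g. "E" + "501".
        self.0 == format!("{}{}", other.0, other.1)
    }
}

// Delegate the reversed direction so `==` works both ways.
impl PartialEq<SecondaryCode> for NoqaCode {
    fn eq(&self, other: &SecondaryCode) -> bool {
        other.eq(self)
    }
}

fn main() {
    let noqa = NoqaCode("E", "501");
    let secondary = SecondaryCode("E501".to_string());
    assert!(secondary == noqa);
    assert!(noqa == secondary);
    println!("ok");
}
```

Delegating keeps the two directions from drifting apart if the comparison logic ever changes.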
@@ -618,7 +618,8 @@ mod tests {
    use crate::fix::edits::{
        add_to_dunder_all, make_redundant_alias, next_stmt_break, trailing_semicolon,
    };
-    use crate::{Edit, Fix, Locator, OldDiagnostic};
+    use crate::message::diagnostic_from_violation;
+    use crate::{Edit, Fix, Locator};

    /// Parse the given source using [`Mode::Module`] and return the first statement.
    fn parse_first_stmt(source: &str) -> Result<Stmt> {
@@ -749,12 +750,12 @@ x = 1 \
        let diag = {
            use crate::rules::pycodestyle::rules::MissingNewlineAtEndOfFile;
            let mut iter = edits.into_iter();
-            let mut diagnostic = OldDiagnostic::new(
+            let mut diagnostic = diagnostic_from_violation(
                MissingNewlineAtEndOfFile, // The choice of rule here is arbitrary.
                TextRange::default(),
                &SourceFileBuilder::new("<filename>", "<code>").finish(),
            );
-            diagnostic.fix = Some(Fix::safe_edits(
+            diagnostic.set_fix(Fix::safe_edits(
                iter.next().ok_or(anyhow!("expected edits nonempty"))?,
                iter,
            ));
@@ -3,12 +3,12 @@ use std::collections::BTreeSet;
use itertools::Itertools;
use rustc_hash::FxHashSet;

+use ruff_db::diagnostic::Diagnostic;
use ruff_diagnostics::{IsolationLevel, SourceMap};
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};

use crate::Locator;
use crate::linter::FixTable;
-use crate::message::OldDiagnostic;
use crate::registry::Rule;
use crate::settings::types::UnsafeFixes;
use crate::{Edit, Fix};
@@ -28,7 +28,7 @@ pub(crate) struct FixResult {

/// Fix errors in a file, and write the fixed source code to disk.
pub(crate) fn fix_file(
-    diagnostics: &[OldDiagnostic],
+    diagnostics: &[Diagnostic],
    locator: &Locator,
    unsafe_fixes: UnsafeFixes,
) -> Option<FixResult> {
@@ -52,7 +52,7 @@ pub(crate) fn fix_file(

/// Apply a series of fixes.
fn apply_fixes<'a>(
-    diagnostics: impl Iterator<Item = &'a OldDiagnostic>,
+    diagnostics: impl Iterator<Item = &'a Diagnostic>,
    locator: &'a Locator<'a>,
) -> FixResult {
    let mut output = String::with_capacity(locator.len());
@@ -173,25 +173,26 @@ mod tests {
    use ruff_text_size::{Ranged, TextSize};

    use crate::Locator;
-    use crate::OldDiagnostic;
    use crate::fix::{FixResult, apply_fixes};
+    use crate::message::diagnostic_from_violation;
    use crate::rules::pycodestyle::rules::MissingNewlineAtEndOfFile;
    use crate::{Edit, Fix};
+    use ruff_db::diagnostic::Diagnostic;

    fn create_diagnostics(
        filename: &str,
        source: &str,
        edit: impl IntoIterator<Item = Edit>,
-    ) -> Vec<OldDiagnostic> {
+    ) -> Vec<Diagnostic> {
        edit.into_iter()
            .map(|edit| {
                // The choice of rule here is arbitrary.
-                let mut diagnostic = OldDiagnostic::new(
+                let mut diagnostic = diagnostic_from_violation(
                    MissingNewlineAtEndOfFile,
                    edit.range(),
                    &SourceFileBuilder::new(filename, source).finish(),
                );
-                diagnostic.fix = Some(Fix::safe_edit(edit));
+                diagnostic.set_fix(Fix::safe_edit(edit));
                diagnostic
            })
            .collect()
@@ -14,7 +14,6 @@ pub use rule_selector::RuleSelector;
pub use rule_selector::clap_completion::RuleSelectorParser;
pub use rules::pycodestyle::rules::IOError;

-pub use message::OldDiagnostic;
pub(crate) use ruff_diagnostics::{Applicability, Edit, Fix};
pub use violation::{AlwaysFixableViolation, FixAvailability, Violation, ViolationMetadata};
@@ -7,15 +7,14 @@ use itertools::Itertools;
use ruff_python_parser::semantic_errors::SemanticSyntaxError;
use rustc_hash::FxBuildHasher;

+use ruff_db::diagnostic::{Diagnostic, SecondaryCode};
use ruff_notebook::Notebook;
use ruff_python_ast::{ModModule, PySourceType, PythonVersion};
use ruff_python_codegen::Stylist;
use ruff_python_index::Indexer;
use ruff_python_parser::{ParseError, ParseOptions, Parsed, UnsupportedSyntaxError};
use ruff_source_file::SourceFile;
use ruff_text_size::Ranged;

-use crate::OldDiagnostic;
use crate::checkers::ast::{LintContext, check_ast};
use crate::checkers::filesystem::check_file_path;
use crate::checkers::imports::check_imports;
@@ -25,7 +24,7 @@ use crate::checkers::tokens::check_tokens;
use crate::directives::Directives;
use crate::doc_lines::{doc_lines_from_ast, doc_lines_from_tokens};
use crate::fix::{FixResult, fix_file};
-use crate::message::SecondaryCode;
+use crate::message::create_syntax_error_diagnostic;
use crate::noqa::add_noqa;
use crate::package::PackageRoot;
use crate::preview::is_py314_support_enabled;
@@ -41,7 +40,7 @@ pub(crate) mod float;

pub struct LinterResult {
    /// A collection of diagnostic messages generated by the linter.
-    pub diagnostics: Vec<OldDiagnostic>,
+    pub diagnostics: Vec<Diagnostic>,
    /// Flag indicating that the parsed source code does not contain any
    /// [`ParseError`]s
    has_valid_syntax: bool,
@@ -145,7 +144,7 @@ pub struct FixerResult<'a> {
    pub fixed: FixTable,
}

-/// Generate [`OldDiagnostic`]s from the source code contents at the given `Path`.
+/// Generate [`Diagnostic`]s from the source code contents at the given `Path`.
#[expect(clippy::too_many_arguments)]
pub fn check_path(
    path: &Path,
@@ -160,7 +159,7 @@ pub fn check_path(
    source_type: PySourceType,
    parsed: &Parsed<ModModule>,
    target_version: TargetVersion,
-) -> Vec<OldDiagnostic> {
+) -> Vec<Diagnostic> {
    // Aggregate all diagnostics.
    let mut context = LintContext::new(path, locator.contents(), settings);

@@ -382,7 +381,7 @@ pub fn check_path(
    if !parsed.has_valid_syntax() {
        // Avoid fixing in case the source code contains syntax errors.
        for diagnostic in &mut diagnostics {
-            diagnostic.fix = None;
+            diagnostic.remove_fix();
        }
    }

@@ -393,7 +392,6 @@ pub fn check_path(
        parsed.errors(),
        syntax_errors,
        &semantic_syntax_errors,
-        locator,
        directives,
        &source_file,
    )
@@ -459,7 +457,7 @@ pub fn add_noqa_to_path(
    )
}

-/// Generate an [`OldDiagnostic`] for each diagnostic triggered by the given source code.
+/// Generate a [`Diagnostic`] for each diagnostic triggered by the given source code.
pub fn lint_only(
    path: &Path,
    package: Option<PackageRoot<'_>>,
@@ -516,7 +514,7 @@ pub fn lint_only(

    LinterResult {
        has_valid_syntax: parsed.has_valid_syntax(),
-        has_no_syntax_errors: !diagnostics.iter().any(OldDiagnostic::is_syntax_error),
+        has_no_syntax_errors: !diagnostics.iter().any(Diagnostic::is_syntax_error),
        diagnostics,
    }
}
@@ -525,30 +523,32 @@ pub fn lint_only(
///
/// Also use `directives` to attach noqa offsets to lint diagnostics.
fn diagnostics_to_messages(
-    diagnostics: Vec<OldDiagnostic>,
+    diagnostics: Vec<Diagnostic>,
    parse_errors: &[ParseError],
    unsupported_syntax_errors: &[UnsupportedSyntaxError],
    semantic_syntax_errors: &[SemanticSyntaxError],
-    locator: &Locator,
    directives: &Directives,
    source_file: &SourceFile,
-) -> Vec<OldDiagnostic> {
+) -> Vec<Diagnostic> {
    parse_errors
        .iter()
        .map(|parse_error| {
-            OldDiagnostic::from_parse_error(parse_error, locator, source_file.clone())
+            create_syntax_error_diagnostic(source_file.clone(), &parse_error.error, parse_error)
        })
        .chain(unsupported_syntax_errors.iter().map(|syntax_error| {
-            OldDiagnostic::from_unsupported_syntax_error(syntax_error, source_file.clone())
+            create_syntax_error_diagnostic(source_file.clone(), syntax_error, syntax_error)
        }))
        .chain(
            semantic_syntax_errors
                .iter()
-                .map(|error| OldDiagnostic::from_semantic_syntax_error(error, source_file.clone())),
+                .map(|error| create_syntax_error_diagnostic(source_file.clone(), error, error)),
        )
-        .chain(diagnostics.into_iter().map(|diagnostic| {
-            let noqa_offset = directives.noqa_line_for.resolve(diagnostic.start());
-            diagnostic.with_noqa_offset(noqa_offset)
+        .chain(diagnostics.into_iter().map(|mut diagnostic| {
+            let noqa_offset = directives
+                .noqa_line_for
+                .resolve(diagnostic.expect_range().start());
+            diagnostic.set_noqa_offset(noqa_offset);
+            diagnostic
        }))
        .collect()
}
@@ -629,7 +629,7 @@ pub fn lint_fix<'a>(
|
||||
|
||||
if iterations == 0 {
|
||||
has_valid_syntax = parsed.has_valid_syntax();
|
||||
has_no_syntax_errors = !diagnostics.iter().any(OldDiagnostic::is_syntax_error);
|
||||
has_no_syntax_errors = !diagnostics.iter().any(Diagnostic::is_syntax_error);
|
||||
} else {
|
||||
// If the source code had no syntax errors on the first pass, but
|
||||
// does on a subsequent pass, then we've introduced a
|
||||
@@ -687,8 +687,8 @@ where
|
||||
}
|
||||
|
||||
#[expect(clippy::print_stderr)]
|
||||
fn report_failed_to_converge_error(path: &Path, transformed: &str, diagnostics: &[OldDiagnostic]) {
|
||||
let codes = collect_rule_codes(diagnostics.iter().filter_map(OldDiagnostic::secondary_code));
|
||||
fn report_failed_to_converge_error(path: &Path, transformed: &str, diagnostics: &[Diagnostic]) {
|
||||
let codes = collect_rule_codes(diagnostics.iter().filter_map(Diagnostic::secondary_code));
|
||||
if cfg!(debug_assertions) {
|
||||
eprintln!(
|
||||
"{}{} Failed to converge after {} iterations in `{}` with rule codes {}:---\n{}\n---",
|
||||
@@ -806,13 +806,12 @@ mod tests {
|
||||
use ruff_python_index::Indexer;
|
||||
use ruff_python_parser::ParseOptions;
|
||||
use ruff_python_trivia::textwrap::dedent;
|
||||
use ruff_text_size::Ranged;
|
||||
use test_case::test_case;
|
||||
|
||||
use ruff_db::diagnostic::Diagnostic;
|
||||
use ruff_notebook::{Notebook, NotebookError};
|
||||
|
||||
use crate::linter::check_path;
|
||||
use crate::message::OldDiagnostic;
|
||||
use crate::registry::Rule;
|
||||
use crate::settings::LinterSettings;
|
||||
use crate::source_kind::SourceKind;
|
||||
@@ -970,7 +969,7 @@ mod tests {
|
||||
|
||||
/// Wrapper around `test_contents_syntax_errors` for testing a snippet of code instead of a
|
||||
/// file.
|
||||
fn test_snippet_syntax_errors(contents: &str, settings: &LinterSettings) -> Vec<OldDiagnostic> {
|
||||
fn test_snippet_syntax_errors(contents: &str, settings: &LinterSettings) -> Vec<Diagnostic> {
|
||||
let contents = dedent(contents);
|
||||
test_contents_syntax_errors(
|
||||
&SourceKind::Python(contents.to_string()),
|
||||
@@ -985,7 +984,7 @@ mod tests {
|
||||
source_kind: &SourceKind,
|
||||
path: &Path,
|
||||
settings: &LinterSettings,
|
||||
) -> Vec<OldDiagnostic> {
|
||||
) -> Vec<Diagnostic> {
|
||||
let source_type = PySourceType::from(path);
|
||||
let target_version = settings.resolve_target_version(path);
|
||||
let options =
|
||||
@@ -1016,7 +1015,7 @@ mod tests {
|
||||
&parsed,
|
||||
target_version,
|
||||
);
|
||||
diagnostics.sort_by_key(Ranged::start);
|
||||
diagnostics.sort_by_key(|diagnostic| diagnostic.expect_range().start());
|
||||
diagnostics
|
||||
}
|
||||
|
||||
|
||||
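The noqa-offset step in the hunk above resolves a diagnostic's byte offset to the line on which a `# noqa` suppression comment would be placed. A toy, self-contained sketch of that mapping; the function name and logic here are ours, not ruff's actual `NoqaMapping`:

```rust
// Toy stand-in for `directives.noqa_line_for.resolve(offset)`: given a byte
// offset into the source, return the byte offset of the start of the line
// that contains it. A `# noqa` comment is attached to that line.
fn noqa_line_start(source: &str, offset: usize) -> usize {
    source[..offset].rfind('\n').map_or(0, |newline| newline + 1)
}

fn main() {
    let src = "x = 1\ny = 2\n";
    // Offset 8 points inside `y = 2`, whose line starts at offset 6.
    println!("{}", noqa_line_start(src, 8));
}
```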
@@ -1,4 +1,4 @@
-use std::fmt::{Display, Formatter, Write};
+use std::fmt::{Display, Formatter};
 use std::path::{Path, PathBuf};
 use std::sync::{LazyLock, Mutex};

@@ -6,7 +6,7 @@ use anyhow::Result;
 use colored::Colorize;
 use fern;
 use log::Level;
-use ruff_python_parser::{ParseError, ParseErrorType};
+use ruff_python_parser::ParseError;
 use rustc_hash::FxHashSet;

 use ruff_source_file::{LineColumn, LineIndex, OneIndexed, SourceCode};

@@ -248,7 +248,7 @@ impl Display for DisplayParseError {
                     row = location.line,
                     column = location.column,
                     colon = ":".cyan(),
-                    inner = &DisplayParseErrorType(&self.error.error)
+                    inner = self.error.error
                 )
             }
             ErrorLocation::Cell(cell, location) => {

@@ -259,27 +259,13 @@ impl Display for DisplayParseError {
                     row = location.line,
                     column = location.column,
                     colon = ":".cyan(),
-                    inner = &DisplayParseErrorType(&self.error.error)
+                    inner = self.error.error
                 )
             }
         }
     }
 }

-pub(crate) struct DisplayParseErrorType<'a>(&'a ParseErrorType);
-
-impl<'a> DisplayParseErrorType<'a> {
-    pub(crate) fn new(error: &'a ParseErrorType) -> Self {
-        Self(error)
-    }
-}
-
-impl Display for DisplayParseErrorType<'_> {
-    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
-        write!(f, "{}", TruncateAtNewline(&self.0))
-    }
-}
-
 #[derive(Debug)]
 enum ErrorLocation {
     /// The error occurred in a Python file.

@@ -288,44 +274,6 @@ enum ErrorLocation {
     Cell(OneIndexed, LineColumn),
 }

-/// Truncates the display text before the first newline character to avoid line breaks.
-struct TruncateAtNewline<'a>(&'a dyn Display);
-
-impl Display for TruncateAtNewline<'_> {
-    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
-        struct TruncateAdapter<'a> {
-            inner: &'a mut dyn Write,
-            after_new_line: bool,
-        }
-
-        impl Write for TruncateAdapter<'_> {
-            fn write_str(&mut self, s: &str) -> std::fmt::Result {
-                if self.after_new_line {
-                    Ok(())
-                } else {
-                    if let Some(end) = s.find(['\n', '\r']) {
-                        self.inner.write_str(&s[..end])?;
-                        self.inner.write_str("\u{23ce}...")?;
-                        self.after_new_line = true;
-                        Ok(())
-                    } else {
-                        self.inner.write_str(s)
-                    }
-                }
-            }
-        }
-
-        write!(
-            TruncateAdapter {
-                inner: f,
-                after_new_line: false,
-            },
-            "{}",
-            self.0
-        )
-    }
-}
-
 #[cfg(test)]
 mod tests {
     use crate::logging::LogLevel;
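The deleted `TruncateAtNewline` adapter above illustrates a reusable trick: a `fmt::Write` wrapper that silently drops everything after the first newline so multi-line errors render on one line. A minimal standalone sketch of the same idea, with our own names and an ASCII ellipsis in place of ruff's `\u{23ce}` marker:

```rust
use std::fmt::{self, Display, Formatter, Write};

/// Wraps any `Display` value and cuts its output at the first newline,
/// appending "..." so readers know text was dropped.
struct TruncateAtNewline<'a>(&'a dyn Display);

impl Display for TruncateAtNewline<'_> {
    fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
        // Adapter that forwards writes to the real formatter until it sees
        // a line break, then swallows everything that follows.
        struct Adapter<'a, 'b> {
            inner: &'a mut Formatter<'b>,
            done: bool,
        }
        impl Write for Adapter<'_, '_> {
            fn write_str(&mut self, s: &str) -> fmt::Result {
                if self.done {
                    return Ok(());
                }
                match s.find(['\n', '\r']) {
                    Some(end) => {
                        self.inner.write_str(&s[..end])?;
                        self.inner.write_str("...")?;
                        self.done = true;
                        Ok(())
                    }
                    None => self.inner.write_str(s),
                }
            }
        }
        write!(Adapter { inner: f, done: false }, "{}", self.0)
    }
}

fn main() {
    let multi = "first line\nsecond line";
    println!("{}", TruncateAtNewline(&multi)); // prints "first line..."
}
```

The same effect could be had by formatting to a `String` and slicing, but the adapter avoids the intermediate allocation.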
@@ -1,8 +1,9 @@
 use std::io::Write;

+use ruff_db::diagnostic::Diagnostic;
 use ruff_source_file::LineColumn;

-use crate::message::{Emitter, EmitterContext, OldDiagnostic};
+use crate::message::{Emitter, EmitterContext};

 /// Generate error logging commands for Azure Pipelines format.
 /// See [documentation](https://learn.microsoft.com/en-us/azure/devops/pipelines/scripts/logging-commands?view=azure-devops&tabs=bash#logissue-log-an-error-or-warning)

@@ -13,23 +14,23 @@ impl Emitter for AzureEmitter {
     fn emit(
         &mut self,
         writer: &mut dyn Write,
-        diagnostics: &[OldDiagnostic],
+        diagnostics: &[Diagnostic],
         context: &EmitterContext,
     ) -> anyhow::Result<()> {
         for diagnostic in diagnostics {
-            let location = if context.is_notebook(&diagnostic.filename()) {
+            let filename = diagnostic.expect_ruff_filename();
+            let location = if context.is_notebook(&filename) {
                 // We can't give a reasonable location for the structured formats,
                 // so we show one that's clearly a fallback
                 LineColumn::default()
             } else {
-                diagnostic.compute_start_location()
+                diagnostic.expect_ruff_start_location()
             };

             writeln!(
                 writer,
                 "##vso[task.logissue type=error\
                 ;sourcepath={filename};linenumber={line};columnnumber={col};{code}]{body}",
-                filename = diagnostic.filename(),
                 line = location.line,
                 col = location.column,
                 code = diagnostic
@@ -2,13 +2,12 @@ use std::fmt::{Display, Formatter};
 use std::num::NonZeroUsize;

 use colored::{Color, ColoredString, Colorize, Styles};

-use ruff_text_size::{Ranged, TextRange, TextSize};
 use similar::{ChangeTag, TextDiff};

+use ruff_db::diagnostic::Diagnostic;
 use ruff_source_file::{OneIndexed, SourceFile};
+use ruff_text_size::{Ranged, TextRange, TextSize};

-use crate::message::OldDiagnostic;
 use crate::text_helpers::ShowNonprinting;
 use crate::{Applicability, Fix};

@@ -26,9 +25,9 @@ pub(super) struct Diff<'a> {
 }

 impl<'a> Diff<'a> {
-    pub(crate) fn from_message(message: &'a OldDiagnostic) -> Option<Diff<'a>> {
+    pub(crate) fn from_message(message: &'a Diagnostic) -> Option<Diff<'a>> {
         message.fix().map(|fix| Diff {
-            source_code: message.source_file(),
+            source_code: message.expect_ruff_source_file(),
             fix,
         })
     }
@@ -1,9 +1,10 @@
 use std::io::Write;

+use ruff_db::diagnostic::Diagnostic;
 use ruff_source_file::LineColumn;

 use crate::fs::relativize_path;
-use crate::message::{Emitter, EmitterContext, OldDiagnostic};
+use crate::message::{Emitter, EmitterContext};

 /// Generate error workflow command in GitHub Actions format.
 /// See: [GitHub documentation](https://docs.github.com/en/actions/reference/workflow-commands-for-github-actions#setting-an-error-message)

@@ -14,12 +15,13 @@ impl Emitter for GithubEmitter {
     fn emit(
         &mut self,
         writer: &mut dyn Write,
-        diagnostics: &[OldDiagnostic],
+        diagnostics: &[Diagnostic],
         context: &EmitterContext,
     ) -> anyhow::Result<()> {
         for diagnostic in diagnostics {
-            let source_location = diagnostic.compute_start_location();
-            let location = if context.is_notebook(&diagnostic.filename()) {
+            let source_location = diagnostic.expect_ruff_start_location();
+            let filename = diagnostic.expect_ruff_filename();
+            let location = if context.is_notebook(&filename) {
                 // We can't give a reasonable location for the structured formats,
                 // so we show one that's clearly a fallback
                 LineColumn::default()

@@ -27,7 +29,7 @@ impl Emitter for GithubEmitter {
                 source_location
             };

-            let end_location = diagnostic.compute_end_location();
+            let end_location = diagnostic.expect_ruff_end_location();

             write!(
                 writer,

@@ -35,7 +37,7 @@ impl Emitter for GithubEmitter {
                 code = diagnostic
                     .secondary_code()
                     .map_or_else(String::new, |code| format!(" ({code})")),
-                file = diagnostic.filename(),
+                file = filename,
                 row = source_location.line,
                 column = source_location.column,
                 end_row = end_location.line,

@@ -45,7 +47,7 @@ impl Emitter for GithubEmitter {
             write!(
                 writer,
                 "{path}:{row}:{column}:",
-                path = relativize_path(&*diagnostic.filename()),
+                path = relativize_path(&filename),
                 row = location.line,
                 column = location.column,
             )?;
@@ -7,8 +7,10 @@ use serde::ser::SerializeSeq;
 use serde::{Serialize, Serializer};
 use serde_json::json;

+use ruff_db::diagnostic::Diagnostic;
+
 use crate::fs::{relativize_path, relativize_path_to};
-use crate::message::{Emitter, EmitterContext, OldDiagnostic};
+use crate::message::{Emitter, EmitterContext};

 /// Generate JSON with violations in GitLab CI format
 // https://docs.gitlab.com/ee/ci/testing/code_quality.html#implement-a-custom-tool

@@ -28,7 +30,7 @@ impl Emitter for GitlabEmitter {
     fn emit(
         &mut self,
         writer: &mut dyn Write,
-        diagnostics: &[OldDiagnostic],
+        diagnostics: &[Diagnostic],
         context: &EmitterContext,
     ) -> anyhow::Result<()> {
         serde_json::to_writer_pretty(

@@ -45,7 +47,7 @@ impl Emitter for GitlabEmitter {
 }

 struct SerializedMessages<'a> {
-    diagnostics: &'a [OldDiagnostic],
+    diagnostics: &'a [Diagnostic],
     context: &'a EmitterContext<'a>,
     project_dir: Option<&'a str>,
 }

@@ -59,10 +61,11 @@ impl Serialize for SerializedMessages<'_> {
         let mut fingerprints = HashSet::<u64>::with_capacity(self.diagnostics.len());

         for diagnostic in self.diagnostics {
-            let start_location = diagnostic.compute_start_location();
-            let end_location = diagnostic.compute_end_location();
+            let start_location = diagnostic.expect_ruff_start_location();
+            let end_location = diagnostic.expect_ruff_end_location();

-            let lines = if self.context.is_notebook(&diagnostic.filename()) {
+            let filename = diagnostic.expect_ruff_filename();
+            let lines = if self.context.is_notebook(&filename) {
                 // We can't give a reasonable location for the structured formats,
                 // so we show one that's clearly a fallback
                 json!({

@@ -77,8 +80,8 @@ impl Serialize for SerializedMessages<'_> {
             };

             let path = self.project_dir.as_ref().map_or_else(
-                || relativize_path(&*diagnostic.filename()),
-                |project_dir| relativize_path_to(&*diagnostic.filename(), project_dir),
+                || relativize_path(&filename),
+                |project_dir| relativize_path_to(&filename, project_dir),
             );

             let mut message_fingerprint = fingerprint(diagnostic, &path, 0);

@@ -120,7 +123,7 @@ impl Serialize for SerializedMessages<'_> {
 }

 /// Generate a unique fingerprint to identify a violation.
-fn fingerprint(message: &OldDiagnostic, project_path: &str, salt: u64) -> u64 {
+fn fingerprint(message: &Diagnostic, project_path: &str, salt: u64) -> u64 {
     let mut hasher = DefaultHasher::new();

     salt.hash(&mut hasher);
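The GitLab emitter above hashes each finding into a fingerprint and mixes in a salt, bumping the salt when two findings would otherwise collide within one report. A standalone sketch of that de-duplication pattern; the fields hashed here are illustrative, not ruff's exact ones:

```rust
use std::collections::HashSet;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash the stable identifying parts of a finding, mixing in a salt so that
// two otherwise-identical findings can still receive distinct fingerprints.
fn fingerprint(code: &str, path: &str, salt: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    salt.hash(&mut hasher);
    code.hash(&mut hasher);
    path.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let mut seen = HashSet::new();
    for (code, path) in [("E501", "a.py"), ("E501", "a.py"), ("F401", "a.py")] {
        // Re-salt until the fingerprint is unique within this report.
        let mut salt = 0;
        let mut fp = fingerprint(code, path, salt);
        while !seen.insert(fp) {
            salt += 1;
            fp = fingerprint(code, path, salt);
        }
        println!("{fp:x}");
    }
}
```

Keeping the salt out of the hashed content (rather than, say, appending a counter to the path) means identical findings across runs keep stable fingerprints, which is what makes them usable as report keys.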
@@ -4,15 +4,14 @@ use std::num::NonZeroUsize;

 use colored::Colorize;

+use ruff_db::diagnostic::Diagnostic;
 use ruff_notebook::NotebookIndex;
 use ruff_source_file::OneIndexed;

 use crate::fs::relativize_path;
 use crate::message::diff::calculate_print_width;
 use crate::message::text::{MessageCodeFrame, RuleCodeAndBody};
-use crate::message::{
-    Emitter, EmitterContext, MessageWithLocation, OldDiagnostic, group_diagnostics_by_filename,
-};
+use crate::message::{Emitter, EmitterContext, MessageWithLocation, group_diagnostics_by_filename};
 use crate::settings::types::UnsafeFixes;

 #[derive(Default)]

@@ -46,7 +45,7 @@ impl Emitter for GroupedEmitter {
     fn emit(
         &mut self,
         writer: &mut dyn Write,
-        diagnostics: &[OldDiagnostic],
+        diagnostics: &[Diagnostic],
         context: &EmitterContext,
     ) -> anyhow::Result<()> {
         for (filename, messages) in group_diagnostics_by_filename(diagnostics) {

@@ -73,7 +72,7 @@ impl Emitter for GroupedEmitter {
                     writer,
                     "{}",
                     DisplayGroupedMessage {
-                        notebook_index: context.notebook_index(&message.filename()),
+                        notebook_index: context.notebook_index(&message.expect_ruff_filename()),
                         message,
                         show_fix_status: self.show_fix_status,
                         unsafe_fixes: self.unsafe_fixes,
@@ -4,12 +4,13 @@ use serde::ser::SerializeSeq;
 use serde::{Serialize, Serializer};
 use serde_json::{Value, json};

+use ruff_db::diagnostic::Diagnostic;
 use ruff_notebook::NotebookIndex;
 use ruff_source_file::{LineColumn, OneIndexed, SourceCode};
 use ruff_text_size::Ranged;

 use crate::Edit;
-use crate::message::{Emitter, EmitterContext, OldDiagnostic};
+use crate::message::{Emitter, EmitterContext};

 #[derive(Default)]
 pub struct JsonEmitter;

@@ -18,7 +19,7 @@ impl Emitter for JsonEmitter {
     fn emit(
         &mut self,
         writer: &mut dyn Write,
-        diagnostics: &[OldDiagnostic],
+        diagnostics: &[Diagnostic],
         context: &EmitterContext,
     ) -> anyhow::Result<()> {
         serde_json::to_writer_pretty(

@@ -34,7 +35,7 @@ impl Emitter for JsonEmitter {
 }

 struct ExpandedMessages<'a> {
-    diagnostics: &'a [OldDiagnostic],
+    diagnostics: &'a [Diagnostic],
     context: &'a EmitterContext<'a>,
 }

@@ -54,10 +55,11 @@ impl Serialize for ExpandedMessages<'_> {
     }
 }

-pub(crate) fn message_to_json_value(message: &OldDiagnostic, context: &EmitterContext) -> Value {
-    let source_file = message.source_file();
+pub(crate) fn message_to_json_value(message: &Diagnostic, context: &EmitterContext) -> Value {
+    let source_file = message.expect_ruff_source_file();
     let source_code = source_file.to_source_code();
-    let notebook_index = context.notebook_index(&message.filename());
+    let filename = message.expect_ruff_filename();
+    let notebook_index = context.notebook_index(&filename);

     let fix = message.fix().map(|fix| {
         json!({

@@ -67,8 +69,8 @@ pub(crate) fn message_to_json_value(message: &OldDiagnostic, context: &EmitterCo
         })
     });

-    let mut start_location = source_code.line_column(message.start());
-    let mut end_location = source_code.line_column(message.end());
+    let mut start_location = source_code.line_column(message.expect_range().start());
+    let mut end_location = source_code.line_column(message.expect_range().end());
     let mut noqa_location = message
         .noqa_offset()
         .map(|offset| source_code.line_column(offset));

@@ -94,7 +96,7 @@ pub(crate) fn message_to_json_value(message: &OldDiagnostic, context: &EmitterCo
         "cell": notebook_cell_index,
         "location": location_to_json(start_location),
         "end_location": location_to_json(end_location),
-        "filename": message.filename(),
+        "filename": filename,
         "noqa_row": noqa_location.map(|location| location.line)
     })
 }
@@ -1,7 +1,9 @@
 use std::io::Write;

+use ruff_db::diagnostic::Diagnostic;
+
 use crate::message::json::message_to_json_value;
-use crate::message::{Emitter, EmitterContext, OldDiagnostic};
+use crate::message::{Emitter, EmitterContext};

 #[derive(Default)]
 pub struct JsonLinesEmitter;

@@ -10,7 +12,7 @@ impl Emitter for JsonLinesEmitter {
     fn emit(
         &mut self,
         writer: &mut dyn Write,
-        diagnostics: &[OldDiagnostic],
+        diagnostics: &[Diagnostic],
         context: &EmitterContext,
     ) -> anyhow::Result<()> {
         for diagnostic in diagnostics {
@@ -3,11 +3,10 @@ use std::path::Path;

 use quick_junit::{NonSuccessKind, Report, TestCase, TestCaseStatus, TestSuite, XmlString};

+use ruff_db::diagnostic::Diagnostic;
 use ruff_source_file::LineColumn;

-use crate::message::{
-    Emitter, EmitterContext, MessageWithLocation, OldDiagnostic, group_diagnostics_by_filename,
-};
+use crate::message::{Emitter, EmitterContext, MessageWithLocation, group_diagnostics_by_filename};

 #[derive(Default)]
 pub struct JunitEmitter;

@@ -16,7 +15,7 @@ impl Emitter for JunitEmitter {
     fn emit(
         &mut self,
         writer: &mut dyn Write,
-        diagnostics: &[OldDiagnostic],
+        diagnostics: &[Diagnostic],
         context: &EmitterContext,
     ) -> anyhow::Result<()> {
         let mut report = Report::new("ruff");

@@ -44,7 +43,7 @@ impl Emitter for JunitEmitter {
             } = message;
             let mut status = TestCaseStatus::non_success(NonSuccessKind::Failure);
             status.set_message(message.body());
-            let location = if context.is_notebook(&message.filename()) {
+            let location = if context.is_notebook(&message.expect_ruff_filename()) {
                 // We can't give a reasonable location for the structured formats,
                 // so we show one that's clearly a fallback
                 LineColumn::default()
@@ -1,11 +1,11 @@
|
||||
use std::cmp::Ordering;
|
||||
use std::collections::BTreeMap;
|
||||
use std::fmt::Display;
|
||||
use std::io::Write;
|
||||
use std::ops::Deref;
|
||||
|
||||
use ruff_db::diagnostic::{self as db, Annotation, DiagnosticId, LintName, Severity, Span};
|
||||
use ruff_python_parser::semantic_errors::SemanticSyntaxError;
|
||||
use ruff_db::diagnostic::{
|
||||
Annotation, Diagnostic, DiagnosticId, LintName, SecondaryCode, Severity, Span,
|
||||
};
|
||||
use rustc_hash::FxHashMap;
|
||||
|
||||
pub use azure::AzureEmitter;
|
||||
@@ -18,17 +18,14 @@ pub use junit::JunitEmitter;
|
||||
pub use pylint::PylintEmitter;
|
||||
pub use rdjson::RdjsonEmitter;
|
||||
use ruff_notebook::NotebookIndex;
|
||||
use ruff_python_parser::{ParseError, UnsupportedSyntaxError};
|
||||
use ruff_source_file::{LineColumn, SourceFile};
|
||||
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
|
||||
use ruff_text_size::{Ranged, TextRange, TextSize};
|
||||
pub use sarif::SarifEmitter;
|
||||
pub use text::TextEmitter;
|
||||
|
||||
use crate::Fix;
|
||||
use crate::codes::NoqaCode;
|
||||
use crate::logging::DisplayParseErrorType;
|
||||
use crate::Violation;
|
||||
use crate::registry::Rule;
|
||||
use crate::{Locator, Violation};
|
||||
|
||||
mod azure;
|
||||
mod diff;
|
||||
@@ -43,292 +40,103 @@ mod rdjson;
|
||||
mod sarif;
|
||||
mod text;
|
||||
|
||||
/// `OldDiagnostic` represents either a diagnostic message corresponding to a rule violation or a
|
||||
/// syntax error message.
|
||||
/// Creates a `Diagnostic` from a syntax error, with the format expected by Ruff.
|
||||
///
|
||||
/// All of the information for syntax errors is captured in the underlying [`db::Diagnostic`], while
|
||||
/// rule violations can have the additional optional fields like fixes, suggestions, and (parent)
|
||||
/// `noqa` offsets.
|
||||
/// This is almost identical to `ruff_db::diagnostic::create_syntax_error_diagnostic`, except the
|
||||
/// `message` is stored as the primary diagnostic message instead of on the primary annotation, and
|
||||
/// `SyntaxError: ` is prepended to the message.
|
||||
///
|
||||
/// For diagnostic messages, the [`db::Diagnostic`]'s primary message contains the
|
||||
/// [`OldDiagnostic::body`], and the primary annotation optionally contains the suggestion
|
||||
/// accompanying a fix. The `db::Diagnostic::id` field contains the kebab-case lint name derived
|
||||
/// from the `Rule`.
|
||||
#[derive(Clone, Debug, PartialEq, Eq)]
|
||||
pub struct OldDiagnostic {
|
||||
pub diagnostic: db::Diagnostic,
|
||||
|
||||
// these fields are specific to rule violations
|
||||
pub fix: Option<Fix>,
|
||||
pub parent: Option<TextSize>,
|
||||
pub(crate) noqa_offset: Option<TextSize>,
|
||||
pub(crate) secondary_code: Option<SecondaryCode>,
|
||||
/// TODO(brent) These should be unified at some point, but we keep them separate for now to avoid a
|
||||
/// ton of snapshot changes while combining ruff's diagnostic type with `Diagnostic`.
|
||||
pub fn create_syntax_error_diagnostic(
|
||||
span: impl Into<Span>,
|
||||
message: impl std::fmt::Display,
|
||||
range: impl Ranged,
|
||||
) -> Diagnostic {
|
||||
let mut diag = Diagnostic::new(
|
||||
DiagnosticId::InvalidSyntax,
|
||||
Severity::Error,
|
||||
format_args!("SyntaxError: {message}"),
|
||||
);
|
||||
let span = span.into().with_range(range.range());
|
||||
diag.annotate(Annotation::primary(span));
|
||||
diag
|
||||
}
|
||||
|
||||
impl OldDiagnostic {
|
||||
pub fn syntax_error(
|
||||
message: impl Display,
|
||||
range: TextRange,
|
||||
file: SourceFile,
|
||||
) -> OldDiagnostic {
|
||||
let mut diag = db::Diagnostic::new(DiagnosticId::InvalidSyntax, Severity::Error, message);
|
||||
let span = Span::from(file).with_range(range);
|
||||
diag.annotate(Annotation::primary(span));
|
||||
Self {
|
||||
diagnostic: diag,
|
||||
fix: None,
|
||||
parent: None,
|
||||
noqa_offset: None,
|
||||
secondary_code: None,
|
||||
}
|
||||
#[expect(clippy::too_many_arguments)]
|
||||
pub fn create_lint_diagnostic<B, S>(
|
||||
body: B,
|
||||
suggestion: Option<S>,
|
||||
range: TextRange,
|
||||
fix: Option<Fix>,
|
||||
parent: Option<TextSize>,
|
||||
file: SourceFile,
|
||||
noqa_offset: Option<TextSize>,
|
||||
rule: Rule,
|
||||
) -> Diagnostic
|
||||
where
|
||||
B: Display,
|
||||
S: Display,
|
||||
{
|
||||
let mut diagnostic = Diagnostic::new(
|
||||
DiagnosticId::Lint(LintName::of(rule.into())),
|
||||
Severity::Error,
|
||||
body,
|
||||
);
|
||||
|
||||
if let Some(fix) = fix {
|
||||
diagnostic.set_fix(fix);
|
||||
}
|
||||
|
||||
#[expect(clippy::too_many_arguments)]
|
||||
pub fn lint<B, S>(
|
||||
body: B,
|
||||
suggestion: Option<S>,
|
||||
range: TextRange,
|
||||
fix: Option<Fix>,
|
||||
parent: Option<TextSize>,
|
||||
file: SourceFile,
|
||||
noqa_offset: Option<TextSize>,
|
||||
rule: Rule,
|
||||
) -> OldDiagnostic
|
||||
where
|
||||
B: Display,
|
||||
S: Display,
|
||||
{
|
||||
let mut diagnostic = db::Diagnostic::new(
|
||||
DiagnosticId::Lint(LintName::of(rule.into())),
|
||||
Severity::Error,
|
||||
body,
|
||||
);
|
||||
let span = Span::from(file).with_range(range);
|
||||
let mut annotation = Annotation::primary(span);
|
||||
if let Some(suggestion) = suggestion {
|
||||
annotation = annotation.message(suggestion);
|
||||
}
|
||||
diagnostic.annotate(annotation);
|
||||
|
||||
OldDiagnostic {
|
||||
diagnostic,
|
||||
fix,
|
||||
parent,
|
||||
noqa_offset,
|
||||
secondary_code: Some(SecondaryCode(rule.noqa_code().to_string())),
|
||||
}
|
||||
if let Some(parent) = parent {
|
||||
diagnostic.set_parent(parent);
|
||||
}
|
||||
|
||||
/// Create an [`OldDiagnostic`] from the given [`ParseError`].
|
||||
pub fn from_parse_error(
|
||||
parse_error: &ParseError,
|
||||
locator: &Locator,
|
||||
file: SourceFile,
|
||||
) -> OldDiagnostic {
|
||||
// Try to create a non-empty range so that the diagnostic can print a caret at the right
|
||||
// position. This requires that we retrieve the next character, if any, and take its length
|
||||
// to maintain char-boundaries.
|
||||
let len = locator
|
||||
.after(parse_error.location.start())
|
||||
.chars()
|
||||
.next()
|
||||
.map_or(TextSize::new(0), TextLen::text_len);
|
||||
|
||||
OldDiagnostic::syntax_error(
|
||||
format_args!(
|
||||
"SyntaxError: {}",
|
||||
DisplayParseErrorType::new(&parse_error.error)
|
||||
),
|
||||
TextRange::at(parse_error.location.start(), len),
|
||||
file,
|
||||
)
|
||||
if let Some(noqa_offset) = noqa_offset {
|
||||
diagnostic.set_noqa_offset(noqa_offset);
|
||||
}
|
||||
|
||||
/// Create an [`OldDiagnostic`] from the given [`UnsupportedSyntaxError`].
|
||||
pub fn from_unsupported_syntax_error(
|
||||
unsupported_syntax_error: &UnsupportedSyntaxError,
|
||||
file: SourceFile,
|
||||
) -> OldDiagnostic {
|
||||
OldDiagnostic::syntax_error(
|
||||
format_args!("SyntaxError: {unsupported_syntax_error}"),
|
||||
unsupported_syntax_error.range,
|
||||
file,
|
||||
)
|
||||
let span = Span::from(file).with_range(range);
|
||||
let mut annotation = Annotation::primary(span);
|
||||
if let Some(suggestion) = suggestion {
|
||||
annotation = annotation.message(suggestion);
|
||||
}
|
||||
diagnostic.annotate(annotation);
|
||||
|
||||
/// Create an [`OldDiagnostic`] from the given [`SemanticSyntaxError`].
|
||||
pub fn from_semantic_syntax_error(
|
||||
semantic_syntax_error: &SemanticSyntaxError,
|
||||
file: SourceFile,
|
||||
) -> OldDiagnostic {
|
||||
OldDiagnostic::syntax_error(
|
||||
format_args!("SyntaxError: {semantic_syntax_error}"),
|
||||
semantic_syntax_error.range,
|
||||
file,
|
||||
)
|
||||
}
|
||||
diagnostic.set_secondary_code(SecondaryCode::new(rule.noqa_code().to_string()));
|
||||
|
||||
// TODO(brent) We temporarily allow this to avoid updating all of the call sites to add
|
||||
// references. I expect this method to go away or change significantly with the rest of the
|
||||
// diagnostic refactor, but if it still exists in this form at the end of the refactor, we
|
||||
// should just update the call sites.
|
||||
#[expect(clippy::needless_pass_by_value)]
|
||||
pub fn new<T: Violation>(kind: T, range: TextRange, file: &SourceFile) -> Self {
|
||||
Self::lint(
|
||||
Violation::message(&kind),
|
||||
Violation::fix_title(&kind),
|
||||
range,
|
||||
None,
|
||||
None,
|
||||
file.clone(),
|
||||
None,
|
||||
T::rule(),
|
||||
)
|
||||
}
|
||||
|
||||
/// Consumes `self` and returns a new `Diagnostic` with the given parent node.
|
||||
#[inline]
|
||||
#[must_use]
|
||||
pub fn with_parent(mut self, parent: TextSize) -> Self {
|
||||
self.set_parent(parent);
|
||||
self
|
||||
}
|
||||
|
||||
/// Set the location of the diagnostic's parent node.
|
||||
#[inline]
|
||||
pub fn set_parent(&mut self, parent: TextSize) {
|
||||
self.parent = Some(parent);
|
||||
}
|
||||
|
||||
/// Consumes `self` and returns a new `Diagnostic` with the given noqa offset.
|
||||
#[inline]
|
||||
#[must_use]
|
||||
pub fn with_noqa_offset(mut self, noqa_offset: TextSize) -> Self {
|
||||
self.noqa_offset = Some(noqa_offset);
|
||||
self
|
||||
}
|
||||
|
||||
/// Returns `true` if `self` is a syntax error message.
|
||||
pub fn is_syntax_error(&self) -> bool {
|
||||
self.diagnostic.id().is_invalid_syntax()
|
||||
}
|
||||
|
    /// Returns the name used to represent the diagnostic.
    pub fn name(&self) -> &'static str {
        if self.is_syntax_error() {
            "syntax-error"
        } else {
            self.diagnostic.id().as_str()
        }
    }

    /// Returns the message body to display to the user.
    pub fn body(&self) -> &str {
        self.diagnostic.primary_message()
    }

    /// Returns the fix suggestion for the violation.
    pub fn suggestion(&self) -> Option<&str> {
        self.diagnostic.primary_annotation()?.get_message()
    }

    /// Returns the offset at which the `noqa` comment will be placed if it's a diagnostic message.
    pub fn noqa_offset(&self) -> Option<TextSize> {
        self.noqa_offset
    }

    /// Returns the [`Fix`] for the diagnostic, if there is any.
    pub fn fix(&self) -> Option<&Fix> {
        self.fix.as_ref()
    }

    /// Returns `true` if the diagnostic contains a [`Fix`].
    pub fn fixable(&self) -> bool {
        self.fix().is_some()
    }

    /// Returns the noqa code for the diagnostic message as a string.
    pub fn secondary_code(&self) -> Option<&SecondaryCode> {
        self.secondary_code.as_ref()
    }

    /// Returns the URL for the rule documentation, if it exists.
    pub fn to_url(&self) -> Option<String> {
        if self.is_syntax_error() {
            None
        } else {
            Some(format!(
                "{}/rules/{}",
                env!("CARGO_PKG_HOMEPAGE"),
                self.name()
            ))
        }
    }

    /// Returns the filename for the message.
    pub fn filename(&self) -> String {
        self.diagnostic
            .expect_primary_span()
            .expect_ruff_file()
            .name()
            .to_string()
    }

    /// Computes the start source location for the message.
    pub fn compute_start_location(&self) -> LineColumn {
        self.diagnostic
            .expect_primary_span()
            .expect_ruff_file()
            .to_source_code()
            .line_column(self.start())
    }

    /// Computes the end source location for the message.
    pub fn compute_end_location(&self) -> LineColumn {
        self.diagnostic
            .expect_primary_span()
            .expect_ruff_file()
            .to_source_code()
            .line_column(self.end())
    }

    /// Returns the [`SourceFile`] which the message belongs to.
    pub fn source_file(&self) -> SourceFile {
        self.diagnostic
            .expect_primary_span()
            .expect_ruff_file()
            .clone()
    }

        diagnostic
    }

impl Ord for OldDiagnostic {
    fn cmp(&self, other: &Self) -> Ordering {
        (self.source_file(), self.start()).cmp(&(other.source_file(), other.start()))
    }
}

impl PartialOrd for OldDiagnostic {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

impl Ranged for OldDiagnostic {
    fn range(&self) -> TextRange {
        self.diagnostic
            .expect_primary_span()
            .range()
            .expect("Expected range for ruff span")
    }
}

// TODO(brent) We temporarily allow this to avoid updating all of the call sites to add
// references. I expect this method to go away or change significantly with the rest of the
// diagnostic refactor, but if it still exists in this form at the end of the refactor, we
// should just update the call sites.
#[expect(clippy::needless_pass_by_value)]
pub fn diagnostic_from_violation<T: Violation>(
    kind: T,
    range: TextRange,
    file: &SourceFile,
) -> Diagnostic {
    create_lint_diagnostic(
        Violation::message(&kind),
        Violation::fix_title(&kind),
        range,
        None,
        None,
        file.clone(),
        None,
        T::rule(),
    )
}

struct MessageWithLocation<'a> {
-    message: &'a OldDiagnostic,
+    message: &'a Diagnostic,
    start_location: LineColumn,
}

impl Deref for MessageWithLocation<'_> {
-    type Target = OldDiagnostic;
+    type Target = Diagnostic;

    fn deref(&self) -> &Self::Target {
        self.message
@@ -336,30 +144,30 @@ impl Deref for MessageWithLocation<'_> {
}

fn group_diagnostics_by_filename(
-    diagnostics: &[OldDiagnostic],
+    diagnostics: &[Diagnostic],
) -> BTreeMap<String, Vec<MessageWithLocation>> {
    let mut grouped_messages = BTreeMap::default();
    for diagnostic in diagnostics {
        grouped_messages
-            .entry(diagnostic.filename().to_string())
+            .entry(diagnostic.expect_ruff_filename())
            .or_insert_with(Vec::new)
            .push(MessageWithLocation {
                message: diagnostic,
-                start_location: diagnostic.compute_start_location(),
+                start_location: diagnostic.expect_ruff_start_location(),
            });
    }
    grouped_messages
}

-/// Display format for [`OldDiagnostic`]s.
+/// Display format for [`Diagnostic`]s.
///
-/// The emitter serializes a slice of [`OldDiagnostic`]s and writes them to a [`Write`].
+/// The emitter serializes a slice of [`Diagnostic`]s and writes them to a [`Write`].
pub trait Emitter {
    /// Serializes the `diagnostics` and writes the output to `writer`.
    fn emit(
        &mut self,
        writer: &mut dyn Write,
-        diagnostics: &[OldDiagnostic],
+        diagnostics: &[Diagnostic],
        context: &EmitterContext,
    ) -> anyhow::Result<()>;
}
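
The grouping helper above keys diagnostics by filename in a `BTreeMap`, so emitters iterate files in a stable, sorted order. A minimal, self-contained sketch of the same pattern, with a `(filename, message)` tuple standing in for the crate's diagnostic type (the real code also caches the computed start location per entry):

```rust
use std::collections::BTreeMap;

// Sketch of the grouping step: a `BTreeMap` keyed by filename keeps the
// output deterministic and sorted, and `entry(..).or_insert_with(Vec::new)`
// appends to an existing group or starts a new one.
fn group_by_filename<'a>(
    diagnostics: &'a [(String, String)], // hypothetical (filename, message) stand-ins
) -> BTreeMap<String, Vec<&'a String>> {
    let mut grouped = BTreeMap::new();
    for (filename, message) in diagnostics {
        grouped
            .entry(filename.clone())
            .or_insert_with(Vec::new)
            .push(message);
    }
    grouped
}
```

Using a `BTreeMap` rather than a hash map is what makes multi-file output reproducible run to run.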
@@ -384,101 +192,40 @@ impl<'a> EmitterContext<'a> {
    }
}

/// A secondary identifier for a lint diagnostic.
///
/// For Ruff rules this means the noqa code.
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Default, Hash, serde::Serialize)]
#[serde(transparent)]
pub struct SecondaryCode(String);

impl SecondaryCode {
    pub fn new(code: String) -> Self {
        Self(code)
    }

    pub fn as_str(&self) -> &str {
        &self.0
    }
}

impl Display for SecondaryCode {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.write_str(&self.0)
    }
}

impl std::ops::Deref for SecondaryCode {
    type Target = str;

    fn deref(&self) -> &Self::Target {
        &self.0
    }
}

impl PartialEq<&str> for SecondaryCode {
    fn eq(&self, other: &&str) -> bool {
        self.0 == *other
    }
}

impl PartialEq<SecondaryCode> for &str {
    fn eq(&self, other: &SecondaryCode) -> bool {
        other.eq(self)
    }
}

impl PartialEq<NoqaCode> for SecondaryCode {
    fn eq(&self, other: &NoqaCode) -> bool {
        &self.as_str() == other
    }
}

impl PartialEq<SecondaryCode> for NoqaCode {
    fn eq(&self, other: &SecondaryCode) -> bool {
        other.eq(self)
    }
}

// for `hashbrown::EntryRef`
impl From<&SecondaryCode> for SecondaryCode {
    fn from(value: &SecondaryCode) -> Self {
        value.clone()
    }
}
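
The mirrored `PartialEq` impls above are what let a noqa code compare against a string from either side of `==`. A trimmed-down, self-contained sketch of the same newtype pattern (only the `&str` comparisons, not the `NoqaCode` ones):

```rust
// Minimal version of the `SecondaryCode` newtype: a wrapper around `String`
// with symmetric equality against `&str`.
#[derive(Clone, Debug, PartialEq, Eq)]
struct SecondaryCode(String);

impl SecondaryCode {
    fn new(code: String) -> Self {
        Self(code)
    }

    fn as_str(&self) -> &str {
        &self.0
    }
}

// `code == "E501"` — newtype on the left.
impl PartialEq<&str> for SecondaryCode {
    fn eq(&self, other: &&str) -> bool {
        self.0 == *other
    }
}

// `"E501" == code` — newtype on the right; delegates to the impl above.
impl PartialEq<SecondaryCode> for &str {
    fn eq(&self, other: &SecondaryCode) -> bool {
        other.eq(self)
    }
}
```

Dropping either impl would make one of the two comparison directions fail to compile, since Rust does not auto-derive symmetric `PartialEq` across types.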

#[cfg(test)]
mod tests {
    use rustc_hash::FxHashMap;

-    use crate::codes::Rule;
-    use crate::{Edit, Fix};
+    use ruff_db::diagnostic::Diagnostic;
    use ruff_notebook::NotebookIndex;
    use ruff_python_parser::{Mode, ParseOptions, parse_unchecked};
    use ruff_source_file::{OneIndexed, SourceFileBuilder};
    use ruff_text_size::{TextRange, TextSize};

    use crate::Locator;
-    use crate::message::{Emitter, EmitterContext, OldDiagnostic};
+    use crate::codes::Rule;
+    use crate::message::{Emitter, EmitterContext, create_lint_diagnostic};
+    use crate::{Edit, Fix};

-    pub(super) fn create_syntax_error_diagnostics() -> Vec<OldDiagnostic> {
+    use super::create_syntax_error_diagnostic;
+
+    pub(super) fn create_syntax_error_diagnostics() -> Vec<Diagnostic> {
        let source = r"from os import

if call(foo
def bar():
    pass
";
        let locator = Locator::new(source);
        let source_file = SourceFileBuilder::new("syntax_errors.py", source).finish();
        parse_unchecked(source, ParseOptions::from(Mode::Module))
            .errors()
            .iter()
            .map(|parse_error| {
-                OldDiagnostic::from_parse_error(parse_error, &locator, source_file.clone())
+                create_syntax_error_diagnostic(source_file.clone(), &parse_error.error, parse_error)
            })
            .collect()
    }

-    pub(super) fn create_diagnostics() -> Vec<OldDiagnostic> {
+    pub(super) fn create_diagnostics() -> Vec<Diagnostic> {
        let fib = r#"import os


@@ -496,7 +243,7 @@ def fibonacci(n):
        let fib_source = SourceFileBuilder::new("fib.py", fib).finish();

        let unused_import_start = TextSize::from(7);
-        let unused_import = OldDiagnostic::lint(
+        let unused_import = create_lint_diagnostic(
            "`os` imported but unused",
            Some("Remove unused import: `os`"),
            TextRange::new(unused_import_start, TextSize::from(9)),
@@ -511,7 +258,7 @@ def fibonacci(n):
        );

        let unused_variable_start = TextSize::from(94);
-        let unused_variable = OldDiagnostic::lint(
+        let unused_variable = create_lint_diagnostic(
            "Local variable `x` is assigned to but never used",
            Some("Remove assignment to unused variable `x`"),
            TextRange::new(unused_variable_start, TextSize::from(95)),
@@ -528,7 +275,7 @@ def fibonacci(n):
        let file_2 = r"if a == 1: pass";

        let undefined_name_start = TextSize::from(3);
-        let undefined_name = OldDiagnostic::lint(
+        let undefined_name = create_lint_diagnostic(
            "Undefined name `a`",
            Option::<&'static str>::None,
            TextRange::new(undefined_name_start, TextSize::from(4)),
@@ -543,7 +290,7 @@ def fibonacci(n):
    }

    pub(super) fn create_notebook_diagnostics()
-    -> (Vec<OldDiagnostic>, FxHashMap<String, NotebookIndex>) {
+    -> (Vec<Diagnostic>, FxHashMap<String, NotebookIndex>) {
        let notebook = r"# cell 1
import os
# cell 2
@@ -559,7 +306,7 @@ def foo():
        let notebook_source = SourceFileBuilder::new("notebook.ipynb", notebook).finish();

        let unused_import_os_start = TextSize::from(16);
-        let unused_import_os = OldDiagnostic::lint(
+        let unused_import_os = create_lint_diagnostic(
            "`os` imported but unused",
            Some("Remove unused import: `os`"),
            TextRange::new(unused_import_os_start, TextSize::from(18)),
@@ -574,7 +321,7 @@ def foo():
        );

        let unused_import_math_start = TextSize::from(35);
-        let unused_import_math = OldDiagnostic::lint(
+        let unused_import_math = create_lint_diagnostic(
            "`math` imported but unused",
            Some("Remove unused import: `math`"),
            TextRange::new(unused_import_math_start, TextSize::from(39)),
@@ -589,7 +336,7 @@ def foo():
        );

        let unused_variable_start = TextSize::from(98);
-        let unused_variable = OldDiagnostic::lint(
+        let unused_variable = create_lint_diagnostic(
            "Local variable `x` is assigned to but never used",
            Some("Remove assignment to unused variable `x`"),
            TextRange::new(unused_variable_start, TextSize::from(99)),
@@ -642,7 +389,7 @@ def foo():

    pub(super) fn capture_emitter_output(
        emitter: &mut dyn Emitter,
-        diagnostics: &[OldDiagnostic],
+        diagnostics: &[Diagnostic],
    ) -> String {
        let notebook_indexes = FxHashMap::default();
        let context = EmitterContext::new(&notebook_indexes);
@@ -654,7 +401,7 @@ def foo():

    pub(super) fn capture_emitter_notebook_output(
        emitter: &mut dyn Emitter,
-        diagnostics: &[OldDiagnostic],
+        diagnostics: &[Diagnostic],
        notebook_indexes: &FxHashMap<String, NotebookIndex>,
    ) -> String {
        let context = EmitterContext::new(notebook_indexes);

@@ -1,9 +1,10 @@
use std::io::Write;

+use ruff_db::diagnostic::Diagnostic;
use ruff_source_file::OneIndexed;

use crate::fs::relativize_path;
-use crate::message::{Emitter, EmitterContext, OldDiagnostic};
+use crate::message::{Emitter, EmitterContext};

/// Generate violations in Pylint format.
/// See: [Flake8 documentation](https://flake8.pycqa.org/en/latest/internal/formatters.html#pylint-formatter)
@@ -14,16 +15,17 @@ impl Emitter for PylintEmitter {
    fn emit(
        &mut self,
        writer: &mut dyn Write,
-        diagnostics: &[OldDiagnostic],
+        diagnostics: &[Diagnostic],
        context: &EmitterContext,
    ) -> anyhow::Result<()> {
        for diagnostic in diagnostics {
-            let row = if context.is_notebook(&diagnostic.filename()) {
+            let filename = diagnostic.expect_ruff_filename();
+            let row = if context.is_notebook(&filename) {
                // We can't give a reasonable location for the structured formats,
                // so we show one that's clearly a fallback
                OneIndexed::from_zero_indexed(0)
            } else {
-                diagnostic.compute_start_location().line
+                diagnostic.expect_ruff_start_location().line
            };

            let body = if let Some(code) = diagnostic.secondary_code() {
@@ -35,7 +37,7 @@ impl Emitter for PylintEmitter {
            writeln!(
                writer,
                "{path}:{row}: {body}",
-                path = relativize_path(&*diagnostic.filename()),
+                path = relativize_path(&filename),
            )?;
        }
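
The row-selection logic in the Pylint emitter above is small but easy to get wrong: notebook cells don't map onto a flat line number, so the emitter deliberately falls back to line 1. A self-contained sketch under stated assumptions; `pylint_row` and `pylint_line` are hypothetical helper names rather than functions from the crate, and the `[code] body` prefix mirrors the flake8 pylint formatter that the doc comment links to:

```rust
// Sketch of the notebook fallback: structured formats can't express cell
// positions, so a notebook diagnostic is pinned to row 1 (the real code uses
// `OneIndexed::from_zero_indexed(0)` for the same effect).
fn pylint_row(is_notebook: bool, computed_line: usize) -> usize {
    if is_notebook { 1 } else { computed_line }
}

// Sketch of the output line: `path:row: [CODE] body` when a secondary code
// (the noqa code) is present, plain `path:row: body` otherwise.
fn pylint_line(path: &str, row: usize, code: Option<&str>, body: &str) -> String {
    match code {
        Some(code) => format!("{path}:{row}: [{code}] {body}"),
        None => format!("{path}:{row}: {body}"),
    }
}
```

The fallback row is intentionally "clearly wrong" rather than a best guess, so downstream tools don't mistake it for a real location.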

@@ -4,11 +4,12 @@ use serde::ser::SerializeSeq;
use serde::{Serialize, Serializer};
use serde_json::{Value, json};

+use ruff_db::diagnostic::Diagnostic;
use ruff_source_file::SourceCode;
use ruff_text_size::Ranged;

use crate::Edit;
-use crate::message::{Emitter, EmitterContext, LineColumn, OldDiagnostic};
+use crate::message::{Emitter, EmitterContext, LineColumn};

#[derive(Default)]
pub struct RdjsonEmitter;
@@ -17,7 +18,7 @@ impl Emitter for RdjsonEmitter {
    fn emit(
        &mut self,
        writer: &mut dyn Write,
-        diagnostics: &[OldDiagnostic],
+        diagnostics: &[Diagnostic],
        _context: &EmitterContext,
    ) -> anyhow::Result<()> {
        serde_json::to_writer_pretty(
@@ -37,7 +38,7 @@ impl Emitter for RdjsonEmitter {
}

struct ExpandedMessages<'a> {
-    diagnostics: &'a [OldDiagnostic],
+    diagnostics: &'a [Diagnostic],
}

impl Serialize for ExpandedMessages<'_> {
@@ -56,18 +57,18 @@ impl Serialize for ExpandedMessages<'_> {
    }
}

-fn message_to_rdjson_value(message: &OldDiagnostic) -> Value {
-    let source_file = message.source_file();
+fn message_to_rdjson_value(message: &Diagnostic) -> Value {
+    let source_file = message.expect_ruff_source_file();
    let source_code = source_file.to_source_code();

-    let start_location = source_code.line_column(message.start());
-    let end_location = source_code.line_column(message.end());
+    let start_location = source_code.line_column(message.expect_range().start());
+    let end_location = source_code.line_column(message.expect_range().end());

    if let Some(fix) = message.fix() {
        json!({
            "message": message.body(),
            "location": {
-                "path": message.filename(),
+                "path": message.expect_ruff_filename(),
                "range": rdjson_range(start_location, end_location),
            },
            "code": {
@@ -80,7 +81,7 @@ fn message_to_rdjson_value(message: &OldDiagnostic) -> Value {
        json!({
            "message": message.body(),
            "location": {
-                "path": message.filename(),
+                "path": message.expect_ruff_filename(),
                "range": rdjson_range(start_location, end_location),
            },
            "code": {

@@ -5,11 +5,12 @@ use anyhow::Result;
use serde::{Serialize, Serializer};
use serde_json::json;

+use ruff_db::diagnostic::{Diagnostic, SecondaryCode};
use ruff_source_file::OneIndexed;

use crate::VERSION;
use crate::fs::normalize_path;
-use crate::message::{Emitter, EmitterContext, OldDiagnostic, SecondaryCode};
+use crate::message::{Emitter, EmitterContext};
use crate::registry::{Linter, RuleNamespace};

pub struct SarifEmitter;
@@ -18,7 +19,7 @@ impl Emitter for SarifEmitter {
    fn emit(
        &mut self,
        writer: &mut dyn Write,
-        diagnostics: &[OldDiagnostic],
+        diagnostics: &[Diagnostic],
        _context: &EmitterContext,
    ) -> Result<()> {
        let results = diagnostics
@@ -122,10 +123,10 @@ struct SarifResult<'a> {

impl<'a> SarifResult<'a> {
    #[cfg(not(target_arch = "wasm32"))]
-    fn from_message(message: &'a OldDiagnostic) -> Result<Self> {
-        let start_location = message.compute_start_location();
-        let end_location = message.compute_end_location();
-        let path = normalize_path(&*message.filename());
+    fn from_message(message: &'a Diagnostic) -> Result<Self> {
+        let start_location = message.expect_ruff_start_location();
+        let end_location = message.expect_ruff_end_location();
+        let path = normalize_path(&*message.expect_ruff_filename());
        Ok(Self {
            code: message.secondary_code(),
            level: "error".to_string(),
@@ -142,10 +143,10 @@ impl<'a> SarifResult<'a> {

    #[cfg(target_arch = "wasm32")]
    #[expect(clippy::unnecessary_wraps)]
-    fn from_message(message: &'a OldDiagnostic) -> Result<Self> {
-        let start_location = message.compute_start_location();
-        let end_location = message.compute_end_location();
-        let path = normalize_path(&*message.filename());
+    fn from_message(message: &'a Diagnostic) -> Result<Self> {
+        let start_location = message.expect_ruff_start_location();
+        let end_location = message.expect_ruff_end_location();
+        let path = normalize_path(&*message.expect_ruff_filename());
        Ok(Self {
            code: message.secondary_code(),
            level: "error".to_string(),

@@ -6,15 +6,16 @@ use bitflags::bitflags;
use colored::Colorize;
use ruff_annotate_snippets::{Level, Renderer, Snippet};

+use ruff_db::diagnostic::{Diagnostic, SecondaryCode};
use ruff_notebook::NotebookIndex;
use ruff_source_file::{LineColumn, OneIndexed};
-use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
+use ruff_text_size::{TextLen, TextRange, TextSize};

use crate::Locator;
use crate::fs::relativize_path;
use crate::line_width::{IndentWidth, LineWidthBuilder};
use crate::message::diff::Diff;
-use crate::message::{Emitter, EmitterContext, OldDiagnostic, SecondaryCode};
+use crate::message::{Emitter, EmitterContext};
use crate::settings::types::UnsafeFixes;

bitflags! {
@@ -66,19 +67,20 @@ impl Emitter for TextEmitter {
    fn emit(
        &mut self,
        writer: &mut dyn Write,
-        diagnostics: &[OldDiagnostic],
+        diagnostics: &[Diagnostic],
        context: &EmitterContext,
    ) -> anyhow::Result<()> {
        for message in diagnostics {
+            let filename = message.expect_ruff_filename();
            write!(
                writer,
                "{path}{sep}",
-                path = relativize_path(&*message.filename()).bold(),
+                path = relativize_path(&filename).bold(),
                sep = ":".cyan(),
            )?;

-            let start_location = message.compute_start_location();
-            let notebook_index = context.notebook_index(&message.filename());
+            let start_location = message.expect_ruff_start_location();
+            let notebook_index = context.notebook_index(&filename);

            // Check if we're working on a jupyter notebook and translate positions with cell accordingly
            let diagnostic_location = if let Some(notebook_index) = notebook_index {
@@ -116,7 +118,7 @@ impl Emitter for TextEmitter {

            if self.flags.intersects(EmitterFlags::SHOW_SOURCE) {
                // The `0..0` range is used to highlight file-level diagnostics.
-                if message.range() != TextRange::default() {
+                if message.expect_range() != TextRange::default() {
                    writeln!(
                        writer,
                        "{}",
@@ -140,7 +142,7 @@ impl Emitter for TextEmitter {
}

pub(super) struct RuleCodeAndBody<'a> {
-    pub(crate) message: &'a OldDiagnostic,
+    pub(crate) message: &'a Diagnostic,
    pub(crate) show_fix_status: bool,
    pub(crate) unsafe_fixes: UnsafeFixes,
}
@@ -178,7 +180,7 @@ impl Display for RuleCodeAndBody<'_> {
}

pub(super) struct MessageCodeFrame<'a> {
-    pub(crate) message: &'a OldDiagnostic,
+    pub(crate) message: &'a Diagnostic,
    pub(crate) notebook_index: Option<&'a NotebookIndex>,
}

@@ -191,10 +193,10 @@ impl Display for MessageCodeFrame<'_> {
            Vec::new()
        };

-        let source_file = self.message.source_file();
+        let source_file = self.message.expect_ruff_source_file();
        let source_code = source_file.to_source_code();

-        let content_start_index = source_code.line_index(self.message.start());
+        let content_start_index = source_code.line_index(self.message.expect_range().start());
        let mut start_index = content_start_index.saturating_sub(2);

        // If we're working with a Jupyter Notebook, skip the lines which are
@@ -217,7 +219,7 @@ impl Display for MessageCodeFrame<'_> {
            start_index = start_index.saturating_add(1);
        }

-        let content_end_index = source_code.line_index(self.message.end());
+        let content_end_index = source_code.line_index(self.message.expect_range().end());
        let mut end_index = content_end_index
            .saturating_add(2)
            .min(OneIndexed::from_zero_indexed(source_code.line_count()));
@@ -248,7 +250,7 @@ impl Display for MessageCodeFrame<'_> {

        let source = replace_whitespace_and_unprintable(
            source_code.slice(TextRange::new(start_offset, end_offset)),
-            self.message.range() - start_offset,
+            self.message.expect_range() - start_offset,
        )
        .fix_up_empty_spans_after_line_terminator();

@@ -9,6 +9,7 @@ use anyhow::Result;
use itertools::Itertools;
use log::warn;

+use ruff_db::diagnostic::{Diagnostic, SecondaryCode};
use ruff_python_trivia::{CommentRanges, Cursor, indentation_at_offset};
use ruff_source_file::{LineEnding, LineRanges};
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
@@ -17,7 +18,6 @@ use rustc_hash::FxHashSet;
use crate::Edit;
use crate::Locator;
use crate::fs::relativize_path;
-use crate::message::{OldDiagnostic, SecondaryCode};
use crate::registry::Rule;
use crate::rule_redirects::get_redirect_target;

@@ -28,7 +28,7 @@ use crate::rule_redirects::get_redirect_target;
/// simultaneously.
pub fn generate_noqa_edits(
    path: &Path,
-    diagnostics: &[OldDiagnostic],
+    diagnostics: &[Diagnostic],
    locator: &Locator,
    comment_ranges: &CommentRanges,
    external: &[String],
@@ -717,7 +717,7 @@ impl Error for LexicalError {}
/// Adds noqa comments to suppress all messages of a file.
pub(crate) fn add_noqa(
    path: &Path,
-    diagnostics: &[OldDiagnostic],
+    diagnostics: &[Diagnostic],
    locator: &Locator,
    comment_ranges: &CommentRanges,
    external: &[String],
@@ -740,7 +740,7 @@ pub(crate) fn add_noqa(

fn add_noqa_inner(
    path: &Path,
-    diagnostics: &[OldDiagnostic],
+    diagnostics: &[Diagnostic],
    locator: &Locator,
    comment_ranges: &CommentRanges,
    external: &[String],
@@ -845,7 +845,7 @@ struct NoqaComment<'a> {
}

fn find_noqa_comments<'a>(
-    diagnostics: &'a [OldDiagnostic],
+    diagnostics: &'a [Diagnostic],
    locator: &'a Locator,
    exemption: &'a FileExemption,
    directives: &'a NoqaDirectives,
@@ -867,7 +867,7 @@ fn find_noqa_comments<'a>(
    }

    // Is the violation ignored by a `noqa` directive on the parent line?
-    if let Some(parent) = message.parent {
+    if let Some(parent) = message.parent() {
        if let Some(directive_line) =
            directives.find_line_with_directive(noqa_line_for.resolve(parent))
        {
@@ -886,7 +886,7 @@ fn find_noqa_comments<'a>(
        }
    }

-    let noqa_offset = noqa_line_for.resolve(message.range().start());
+    let noqa_offset = noqa_line_for.resolve(message.expect_range().start());

    // Or ignored by the directive itself?
    if let Some(directive_line) = directives.find_line_with_directive(noqa_offset) {
@@ -1225,6 +1225,8 @@ mod tests {
    use ruff_source_file::{LineEnding, SourceFileBuilder};
    use ruff_text_size::{TextLen, TextRange, TextSize};

+    use crate::Edit;
+    use crate::message::diagnostic_from_violation;
    use crate::noqa::{
        Directive, LexicalError, NoqaLexerOutput, NoqaMapping, add_noqa_inner, lex_codes,
        lex_file_exemption, lex_inline_noqa,
@@ -1232,7 +1234,6 @@ mod tests {
    use crate::rules::pycodestyle::rules::{AmbiguousVariableName, UselessSemicolon};
    use crate::rules::pyflakes::rules::UnusedVariable;
    use crate::rules::pyupgrade::rules::PrintfStringFormatting;
-    use crate::{Edit, OldDiagnostic};
    use crate::{Locator, generate_noqa_edits};

    fn assert_lexed_ranges_match_slices(
@@ -2831,7 +2832,7 @@ mod tests {
        assert_eq!(output, format!("{contents}"));

        let source_file = SourceFileBuilder::new(path.to_string_lossy(), contents).finish();
-        let messages = [OldDiagnostic::new(
+        let messages = [diagnostic_from_violation(
            UnusedVariable {
                name: "x".to_string(),
            },
@@ -2855,12 +2856,12 @@ mod tests {

        let source_file = SourceFileBuilder::new(path.to_string_lossy(), contents).finish();
        let messages = [
-            OldDiagnostic::new(
+            diagnostic_from_violation(
                AmbiguousVariableName("x".to_string()),
                TextRange::new(TextSize::from(0), TextSize::from(0)),
                &source_file,
            ),
-            OldDiagnostic::new(
+            diagnostic_from_violation(
                UnusedVariable {
                    name: "x".to_string(),
                },
@@ -2886,12 +2887,12 @@ mod tests {

        let source_file = SourceFileBuilder::new(path.to_string_lossy(), contents).finish();
        let messages = [
-            OldDiagnostic::new(
+            diagnostic_from_violation(
                AmbiguousVariableName("x".to_string()),
                TextRange::new(TextSize::from(0), TextSize::from(0)),
                &source_file,
            ),
-            OldDiagnostic::new(
+            diagnostic_from_violation(
                UnusedVariable {
                    name: "x".to_string(),
                },
@@ -2930,7 +2931,7 @@ print(
"#;
        let noqa_line_for = [TextRange::new(8.into(), 68.into())].into_iter().collect();
        let source_file = SourceFileBuilder::new(path.to_string_lossy(), source).finish();
-        let messages = [OldDiagnostic::new(
+        let messages = [diagnostic_from_violation(
            PrintfStringFormatting,
            TextRange::new(12.into(), 79.into()),
            &source_file,
@@ -2963,7 +2964,7 @@ foo;
bar =
";
        let source_file = SourceFileBuilder::new(path.to_string_lossy(), source).finish();
-        let messages = [OldDiagnostic::new(
+        let messages = [diagnostic_from_violation(
            UselessSemicolon,
            TextRange::new(4.into(), 5.into()),
            &source_file,

@@ -3,19 +3,17 @@ use log::warn;
use pyproject_toml::PyProjectToml;
use ruff_text_size::{TextRange, TextSize};

+use ruff_db::diagnostic::Diagnostic;
use ruff_source_file::SourceFile;

use crate::IOError;
-use crate::OldDiagnostic;
+use crate::message::diagnostic_from_violation;
use crate::registry::Rule;
use crate::rules::ruff::rules::InvalidPyprojectToml;
use crate::settings::LinterSettings;

/// RUF200
-pub fn lint_pyproject_toml(
-    source_file: &SourceFile,
-    settings: &LinterSettings,
-) -> Vec<OldDiagnostic> {
+pub fn lint_pyproject_toml(source_file: &SourceFile, settings: &LinterSettings) -> Vec<Diagnostic> {
    let Some(err) = toml::from_str::<PyProjectToml>(source_file.source_text()).err() else {
        return Vec::default();
    };
@@ -32,8 +30,11 @@ pub fn lint_pyproject_toml(
            source_file.name(),
        );
        if settings.rules.enabled(Rule::IOError) {
-            let diagnostic =
-                OldDiagnostic::new(IOError { message }, TextRange::default(), source_file);
+            let diagnostic = diagnostic_from_violation(
+                IOError { message },
+                TextRange::default(),
+                source_file,
+            );
            messages.push(diagnostic);
        } else {
            warn!(
@@ -55,7 +56,7 @@ pub fn lint_pyproject_toml(

    if settings.rules.enabled(Rule::InvalidPyprojectToml) {
        let toml_err = err.message().to_string();
-        let diagnostic = OldDiagnostic::new(
+        let diagnostic = diagnostic_from_violation(
            InvalidPyprojectToml { message: toml_err },
            range,
            source_file,

@@ -1,46 +1,46 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
-S112.py:3:1: S112 `try`-`except`-`continue` detected, consider logging the exception
+S112.py:4:5: S112 `try`-`except`-`continue` detected, consider logging the exception
   |
-1 | try:
-2 | pass
-3 | / except Exception:
-4 | | continue
-  | |____________^ S112
-5 |
-6 | try:
+2 | try:
+3 | pass
+4 | / except Exception:
+5 | | continue
+  | |________________^ S112
+6 |
+7 | try:
   |

-S112.py:8:1: S112 `try`-`except`-`continue` detected, consider logging the exception
+S112.py:9:5: S112 `try`-`except`-`continue` detected, consider logging the exception
   |
-6 | try:
-7 | pass
-8 | / except:
-9 | | continue
-  | |____________^ S112
-10 |
-11 | try:
+7 | try:
+8 | pass
+9 | / except:
+10 | | continue
+  | |________________^ S112
+11 |
+12 | try:
   |

-S112.py:13:1: S112 `try`-`except`-`continue` detected, consider logging the exception
+S112.py:14:5: S112 `try`-`except`-`continue` detected, consider logging the exception
   |
-11 | try:
-12 | pass
-13 | / except (Exception,):
-14 | | continue
-  | |____________^ S112
-15 |
-16 | try:
+12 | try:
+13 | pass
+14 | / except (Exception,):
+15 | | continue
+  | |________________^ S112
+16 |
+17 | try:
   |

-S112.py:18:1: S112 `try`-`except`-`continue` detected, consider logging the exception
+S112.py:19:5: S112 `try`-`except`-`continue` detected, consider logging the exception
   |
-16 | try:
-17 | pass
-18 | / except (Exception, ValueError):
-19 | | continue
-  | |____________^ S112
-20 |
-21 | try:
+17 | try:
+18 | pass
+19 | / except (Exception, ValueError):
+20 | | continue
+  | |________________^ S112
+21 |
+22 | try:
   |

@@ -195,31 +195,31 @@ B031.py:144:33: B031 Using the generator returned from `itertools.groupby()` more than once will do nothing on the second usage
146 | for group in groupby(items, key=lambda p: p[1]):
    |

-B031.py:200:37: B031 Using the generator returned from `itertools.groupby()` more than once will do nothing on the second usage
+B031.py:203:41: B031 Using the generator returned from `itertools.groupby()` more than once will do nothing on the second usage
    |
-198 | if _section == "greens":
-199 | collect_shop_items(shopper, section_items)
-200 | collect_shop_items(shopper, section_items)
-    | ^^^^^^^^^^^^^ B031
-201 | return
+201 | if _section == "greens":
+202 | collect_shop_items(shopper, section_items)
+203 | collect_shop_items(shopper, section_items)
+    | ^^^^^^^^^^^^^ B031
+204 | return
    |

-B031.py:210:37: B031 Using the generator returned from `itertools.groupby()` more than once will do nothing on the second usage
+B031.py:215:41: B031 Using the generator returned from `itertools.groupby()` more than once will do nothing on the second usage
    |
-208 | elif _section == "frozen items":
-209 | collect_shop_items(shopper, section_items)
-210 | collect_shop_items(shopper, section_items)
-    | ^^^^^^^^^^^^^ B031
-211 |
-212 | # Should trigger, since only one branch has a return statement.
+213 | elif _section == "frozen items":
+214 | collect_shop_items(shopper, section_items)
+215 | collect_shop_items(shopper, section_items)
+    | ^^^^^^^^^^^^^ B031
+216 |
+217 | # Should trigger, since only one branch has a return statement.
    |

-B031.py:219:33: B031 Using the generator returned from `itertools.groupby()` more than once will do nothing on the second usage
+B031.py:226:37: B031 Using the generator returned from `itertools.groupby()` more than once will do nothing on the second usage
    |
-217 | elif _section == "frozen items":
-218 | collect_shop_items(shopper, section_items)
-219 | collect_shop_items(shopper, section_items) # B031
-    | ^^^^^^^^^^^^^ B031
-220 |
-221 | # Let's redefine the `groupby` function to make sure we pick up the correct one.
+224 | elif _section == "frozen items":
+225 | collect_shop_items(shopper, section_items)
+226 | collect_shop_items(shopper, section_items) # B031
+    | ^^^^^^^^^^^^^ B031
+227 |
+228 | # Let's redefine the `groupby` function to make sure we pick up the correct one.
    |

@@ -355,7 +355,7 @@ fn check_token(
     if let Some(mut diagnostic) =
         lint_context.report_diagnostic_if_enabled(ProhibitedTrailingComma, prev.range())
     {
-        let range = diagnostic.range();
+        let range = diagnostic.expect_range();
         diagnostic.set_fix(Fix::safe_edit(Edit::range_deletion(range)));
         return;
     }
@@ -6,7 +6,7 @@ COM81_syntax_error.py:3:5: SyntaxError: Starred expression cannot be used here
 1 | # Check for `flake8-commas` violation for a file containing syntax errors.
 2 | (
 3 | *args
-  | ^
+  | ^^^^^
 4 | )
@@ -5,7 +5,7 @@ ISC_syntax_error.py:2:5: SyntaxError: missing closing quote in string literal
   |
 1 | # The lexer doesn't emit a string token if it's unterminated
 2 | "a" "b
-  | ^
+  | ^^
 3 | "a" "b" "c
 4 | "a" """b
   |

@@ -36,7 +36,7 @@ ISC_syntax_error.py:3:9: SyntaxError: missing closing quote in string literal
 1 | # The lexer doesn't emit a string token if it's unterminated
 2 | "a" "b
 3 | "a" "b" "c
-  | ^
+  | ^^
 4 | "a" """b
 5 | c""" "d
   |

@@ -68,7 +68,7 @@ ISC_syntax_error.py:5:6: SyntaxError: missing closing quote in string literal
 3 | "a" "b" "c
 4 | "a" """b
 5 | c""" "d
-  | ^
+  | ^^
 6 |
 7 | # For f-strings, the `FStringRanges` won't contain the range for
   |

@@ -153,19 +153,21 @@ ISC_syntax_error.py:16:5: SyntaxError: missing closing quote in string literal
 14 | (
 15 | "a"
 16 | "b
-   | ^
+   | ^^
 17 | "c"
 18 | "d"
   |

 ISC_syntax_error.py:26:9: SyntaxError: f-string: unterminated triple-quoted string
   |
-24 | (
-25 | """abc"""
-26 | f"""def
-   | ^
-27 | "g" "h"
-28 | "i" "j"
+24 | (
+25 | """abc"""
+26 | f"""def
+   | _________^
+27 | | "g" "h"
+28 | | "i" "j"
+29 | | )
+   | |__^
   |

 ISC_syntax_error.py:30:1: SyntaxError: unexpected EOF while parsing
@@ -5,7 +5,7 @@ ISC_syntax_error.py:2:5: SyntaxError: missing closing quote in string literal
   |
 1 | # The lexer doesn't emit a string token if it's unterminated
 2 | "a" "b
-  | ^
+  | ^^
 3 | "a" "b" "c
 4 | "a" """b
   |

@@ -25,7 +25,7 @@ ISC_syntax_error.py:3:9: SyntaxError: missing closing quote in string literal
 1 | # The lexer doesn't emit a string token if it's unterminated
 2 | "a" "b
 3 | "a" "b" "c
-  | ^
+  | ^^
 4 | "a" """b
 5 | c""" "d
   |

@@ -45,7 +45,7 @@ ISC_syntax_error.py:5:6: SyntaxError: missing closing quote in string literal
 3 | "a" "b" "c
 4 | "a" """b
 5 | c""" "d
-  | ^
+  | ^^
 6 |
 7 | # For f-strings, the `FStringRanges` won't contain the range for
   |

@@ -107,19 +107,21 @@ ISC_syntax_error.py:16:5: SyntaxError: missing closing quote in string literal
 14 | (
 15 | "a"
 16 | "b
-   | ^
+   | ^^
 17 | "c"
 18 | "d"
   |

 ISC_syntax_error.py:26:9: SyntaxError: f-string: unterminated triple-quoted string
   |
-24 | (
-25 | """abc"""
-26 | f"""def
-   | ^
-27 | "g" "h"
-28 | "i" "j"
+24 | (
+25 | """abc"""
+26 | f"""def
+   | _________^
+27 | | "g" "h"
+28 | | "i" "j"
+29 | | )
+   | |__^
   |

 ISC_syntax_error.py:30:1: SyntaxError: unexpected EOF while parsing
@@ -190,72 +190,73 @@ PIE804.py:26:22: PIE804 [*] Unnecessary `dict` kwargs
26 |+abc(a=1, **{'a': c}, b=c) # PIE804
27 27 |
28 28 | # Some values need to be parenthesized.
29 29 | abc(foo=1, **{'bar': (bar := 1)}) # PIE804
29 29 | def foo():

PIE804.py:29:12: PIE804 [*] Unnecessary `dict` kwargs
PIE804.py:30:16: PIE804 [*] Unnecessary `dict` kwargs
  |
28 | # Some values need to be parenthesized.
29 | abc(foo=1, **{'bar': (bar := 1)}) # PIE804
  | ^^^^^^^^^^^^^^^^^^^^^ PIE804
30 | abc(foo=1, **{'bar': (yield 1)}) # PIE804
  |
= help: Remove unnecessary kwargs

ℹ Safe fix
26 26 | abc(a=1, **{'a': c}, **{'b': c}) # PIE804
27 27 |
28 28 | # Some values need to be parenthesized.
29 |-abc(foo=1, **{'bar': (bar := 1)}) # PIE804
29 |+abc(foo=1, bar=(bar := 1)) # PIE804
30 30 | abc(foo=1, **{'bar': (yield 1)}) # PIE804
31 31 |
32 32 | # https://github.com/astral-sh/ruff/issues/18036

PIE804.py:30:12: PIE804 [*] Unnecessary `dict` kwargs
  |
28 | # Some values need to be parenthesized.
29 | abc(foo=1, **{'bar': (bar := 1)}) # PIE804
30 | abc(foo=1, **{'bar': (yield 1)}) # PIE804
  | ^^^^^^^^^^^^^^^^^^^^ PIE804
31 |
32 | # https://github.com/astral-sh/ruff/issues/18036
29 | def foo():
30 | abc(foo=1, **{'bar': (bar := 1)}) # PIE804
  | ^^^^^^^^^^^^^^^^^^^^^ PIE804
31 | abc(foo=1, **{'bar': (yield 1)}) # PIE804
  |
= help: Remove unnecessary kwargs

ℹ Safe fix
27 27 |
28 28 | # Some values need to be parenthesized.
29 29 | abc(foo=1, **{'bar': (bar := 1)}) # PIE804
30 |-abc(foo=1, **{'bar': (yield 1)}) # PIE804
30 |+abc(foo=1, bar=(yield 1)) # PIE804
31 31 |
32 32 | # https://github.com/astral-sh/ruff/issues/18036
33 33 | # The autofix for this is unsafe due to the comments inside the dictionary.
29 29 | def foo():
30 |- abc(foo=1, **{'bar': (bar := 1)}) # PIE804
30 |+ abc(foo=1, bar=(bar := 1)) # PIE804
31 31 | abc(foo=1, **{'bar': (yield 1)}) # PIE804
32 32 |
33 33 | # https://github.com/astral-sh/ruff/issues/18036

PIE804.py:35:5: PIE804 [*] Unnecessary `dict` kwargs
PIE804.py:31:16: PIE804 [*] Unnecessary `dict` kwargs
  |
33 | # The autofix for this is unsafe due to the comments inside the dictionary.
34 | foo(
35 | / **{
36 | | # Comment 1
37 | | "x": 1.0,
38 | | # Comment 2
39 | | "y": 2.0,
40 | | }
29 | def foo():
30 | abc(foo=1, **{'bar': (bar := 1)}) # PIE804
31 | abc(foo=1, **{'bar': (yield 1)}) # PIE804
  | ^^^^^^^^^^^^^^^^^^^^ PIE804
32 |
33 | # https://github.com/astral-sh/ruff/issues/18036
  |
= help: Remove unnecessary kwargs

ℹ Safe fix
28 28 | # Some values need to be parenthesized.
29 29 | def foo():
30 30 | abc(foo=1, **{'bar': (bar := 1)}) # PIE804
31 |- abc(foo=1, **{'bar': (yield 1)}) # PIE804
31 |+ abc(foo=1, bar=(yield 1)) # PIE804
32 32 |
33 33 | # https://github.com/astral-sh/ruff/issues/18036
34 34 | # The autofix for this is unsafe due to the comments inside the dictionary.

PIE804.py:36:5: PIE804 [*] Unnecessary `dict` kwargs
  |
34 | # The autofix for this is unsafe due to the comments inside the dictionary.
35 | foo(
36 | / **{
37 | | # Comment 1
38 | | "x": 1.0,
39 | | # Comment 2
40 | | "y": 2.0,
41 | | }
  | |_____^ PIE804
41 | )
42 | )
  |
= help: Remove unnecessary kwargs

ℹ Unsafe fix
32 32 | # https://github.com/astral-sh/ruff/issues/18036
33 33 | # The autofix for this is unsafe due to the comments inside the dictionary.
34 34 | foo(
35 |- **{
36 |- # Comment 1
37 |- "x": 1.0,
38 |- # Comment 2
39 |- "y": 2.0,
40 |- }
35 |+ x=1.0, y=2.0
41 36 | )
33 33 | # https://github.com/astral-sh/ruff/issues/18036
34 34 | # The autofix for this is unsafe due to the comments inside the dictionary.
35 35 | foo(
36 |- **{
37 |- # Comment 1
38 |- "x": 1.0,
39 |- # Comment 2
40 |- "y": 2.0,
41 |- }
36 |+ x=1.0, y=2.0
42 37 | )
@@ -31,7 +31,7 @@ use crate::rules::flake8_pytest_style::helpers::{Parentheses, get_mark_decorator
 /// import pytest
 ///
 ///
-/// @pytest.mark.foo
+/// @pytest.mark.foo()
 /// def test_something(): ...
 /// ```
 ///

@@ -41,7 +41,7 @@ use crate::rules::flake8_pytest_style::helpers::{Parentheses, get_mark_decorator
 /// import pytest
 ///
 ///
-/// @pytest.mark.foo()
+/// @pytest.mark.foo
 /// def test_something(): ...
 /// ```
 ///

@@ -76,11 +76,11 @@ impl Violation for PytestWarnsWithMultipleStatements {
 ///
 ///
 /// def test_foo():
-///     with pytest.warns(RuntimeWarning):
+///     with pytest.warns(Warning):
 ///         ...
 ///
 ///     # empty string is also an error
-///     with pytest.warns(RuntimeWarning, match=""):
+///     with pytest.warns(Warning, match=""):
 ///         ...
 /// ```
 ///

@@ -90,7 +90,7 @@ impl Violation for PytestWarnsWithMultipleStatements {
 ///
 ///
 /// def test_foo():
-///     with pytest.warns(RuntimeWarning, match="expected message"):
+///     with pytest.warns(Warning, match="expected message"):
 ///         ...
 /// ```
 ///
@@ -19,12 +19,12 @@ use crate::{AlwaysFixableViolation, Edit, Fix};
 ///
 /// ## Example
 /// ```python
-/// foo = 'bar\'s'
+/// foo = "bar\"s"
 /// ```
 ///
 /// Use instead:
 /// ```python
-/// foo = "bar's"
+/// foo = 'bar"s'
 /// ```
 ///
 /// ## Formatter compatibility
@@ -20,6 +20,7 @@ use crate::checkers::ast::Checker;
 /// ## Example
 /// ```python
 /// fruits = ["apple", "banana", "cherry"]
 /// i = 0
 /// for fruit in fruits:
 ///     print(f"{i + 1}. {fruit}")
 ///     i += 1
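The doc-comment hunk above shows the manual-counter loop this rule flags. A minimal runnable sketch of the before/after (illustrative code assembled for this note, not taken from the diff) shows that the `enumerate()` rewrite produces the same output:

```python
# Manual counter, as in the rule's "Example" section.
fruits = ["apple", "banana", "cherry"]
manual = []
i = 0
for fruit in fruits:
    manual.append(f"{i + 1}. {fruit}")
    i += 1

# Equivalent rewrite using enumerate(), which the rule recommends.
rewritten = [f"{i + 1}. {fruit}" for i, fruit in enumerate(fruits)]

assert manual == rewritten
print(rewritten)  # ['1. apple', '2. banana', '3. cherry']
```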
@@ -27,6 +27,7 @@ use crate::{Edit, Fix, FixAvailability, Violation};
 ///
 /// ## Example
 /// ```python
 /// foo = {}
 /// if "bar" in foo:
 ///     value = foo["bar"]
 /// else:

@@ -35,6 +36,7 @@ use crate::{Edit, Fix, FixAvailability, Violation};
 ///
 /// Use instead:
 /// ```python
 /// foo = {}
 /// value = foo.get("bar", 0)
 /// ```
 ///
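The doc hunks above add `foo = {}` so the rule's examples are self-contained. A short runnable sketch of the equivalence the docs describe (illustrative, not from the diff):

```python
# The if/else membership check the rule flags...
foo = {}
if "bar" in foo:
    value = foo["bar"]
else:
    value = 0

# ...is equivalent to the dict.get() form it recommends.
assert value == foo.get("bar", 0)
print(value)  # 0
```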
@@ -23,15 +23,17 @@ use crate::{Edit, Fix, FixAvailability, Violation};
 ///
 /// ## Example
 /// ```python
-/// for item in iterable:
-///     if predicate(item):
-///         return True
-/// return False
+/// def foo():
+///     for item in iterable:
+///         if predicate(item):
+///             return True
+///     return False
 /// ```
 ///
 /// Use instead:
 /// ```python
-/// return any(predicate(item) for item in iterable)
+/// def foo():
+///     return any(predicate(item) for item in iterable)
 /// ```
 ///
 /// ## Fix safety
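The doc change above wraps the example in `def foo():` so its `return` statements are syntactically valid at the point they appear. A minimal runnable sketch of the loop-to-`any()` rewrite the rule documents (names here are illustrative, not from the diff):

```python
iterable = [1, 2, 3]

def predicate(x):
    return x > 2

# The explicit loop the rule flags.
def loop_version():
    for item in iterable:
        if predicate(item):
            return True
    return False

# The any() rewrite the rule recommends.
def any_version():
    return any(predicate(item) for item in iterable)

assert loop_version() == any_version()
```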
@@ -50,285 +50,285 @@ SIM115.py:12:5: SIM115 Use a context manager for opening files
14 | f.close()
  |

SIM115.py:39:9: SIM115 Use a context manager for opening files
SIM115.py:40:9: SIM115 Use a context manager for opening files
  |
37 | # SIM115
38 | with contextlib.ExitStack():
39 | f = open("filename")
38 | # SIM115
39 | with contextlib.ExitStack():
40 | f = open("filename")
  | ^^^^ SIM115
40 |
41 | # OK
  |

SIM115.py:80:5: SIM115 Use a context manager for opening files
  |
78 | import fileinput
79 |
80 | f = tempfile.NamedTemporaryFile()
  | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ SIM115
81 | f = tempfile.TemporaryFile()
82 | f = tempfile.SpooledTemporaryFile()
41 |
42 | # OK
  |

SIM115.py:81:5: SIM115 Use a context manager for opening files
  |
80 | f = tempfile.NamedTemporaryFile()
81 | f = tempfile.TemporaryFile()
  | ^^^^^^^^^^^^^^^^^^^^^^ SIM115
82 | f = tempfile.SpooledTemporaryFile()
83 | f = tarfile.open("foo.tar")
79 | import fileinput
80 |
81 | f = tempfile.NamedTemporaryFile()
  | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ SIM115
82 | f = tempfile.TemporaryFile()
83 | f = tempfile.SpooledTemporaryFile()
  |

SIM115.py:82:5: SIM115 Use a context manager for opening files
  |
80 | f = tempfile.NamedTemporaryFile()
81 | f = tempfile.TemporaryFile()
82 | f = tempfile.SpooledTemporaryFile()
  | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ SIM115
83 | f = tarfile.open("foo.tar")
84 | f = TarFile("foo.tar").open()
81 | f = tempfile.NamedTemporaryFile()
82 | f = tempfile.TemporaryFile()
  | ^^^^^^^^^^^^^^^^^^^^^^ SIM115
83 | f = tempfile.SpooledTemporaryFile()
84 | f = tarfile.open("foo.tar")
  |

SIM115.py:83:5: SIM115 Use a context manager for opening files
  |
81 | f = tempfile.TemporaryFile()
82 | f = tempfile.SpooledTemporaryFile()
83 | f = tarfile.open("foo.tar")
  | ^^^^^^^^^^^^ SIM115
84 | f = TarFile("foo.tar").open()
85 | f = tarfile.TarFile("foo.tar").open()
81 | f = tempfile.NamedTemporaryFile()
82 | f = tempfile.TemporaryFile()
83 | f = tempfile.SpooledTemporaryFile()
  | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ SIM115
84 | f = tarfile.open("foo.tar")
85 | f = TarFile("foo.tar").open()
  |

SIM115.py:84:5: SIM115 Use a context manager for opening files
  |
82 | f = tempfile.SpooledTemporaryFile()
83 | f = tarfile.open("foo.tar")
84 | f = TarFile("foo.tar").open()
  | ^^^^^^^^^^^^^^^^^^^^^^^ SIM115
85 | f = tarfile.TarFile("foo.tar").open()
86 | f = tarfile.TarFile().open()
82 | f = tempfile.TemporaryFile()
83 | f = tempfile.SpooledTemporaryFile()
84 | f = tarfile.open("foo.tar")
  | ^^^^^^^^^^^^ SIM115
85 | f = TarFile("foo.tar").open()
86 | f = tarfile.TarFile("foo.tar").open()
  |

SIM115.py:85:5: SIM115 Use a context manager for opening files
  |
83 | f = tarfile.open("foo.tar")
84 | f = TarFile("foo.tar").open()
85 | f = tarfile.TarFile("foo.tar").open()
  | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ SIM115
86 | f = tarfile.TarFile().open()
87 | f = zipfile.ZipFile("foo.zip").open("foo.txt")
83 | f = tempfile.SpooledTemporaryFile()
84 | f = tarfile.open("foo.tar")
85 | f = TarFile("foo.tar").open()
  | ^^^^^^^^^^^^^^^^^^^^^^^ SIM115
86 | f = tarfile.TarFile("foo.tar").open()
87 | f = tarfile.TarFile().open()
  |

SIM115.py:86:5: SIM115 Use a context manager for opening files
  |
84 | f = TarFile("foo.tar").open()
85 | f = tarfile.TarFile("foo.tar").open()
86 | f = tarfile.TarFile().open()
  | ^^^^^^^^^^^^^^^^^^^^^^ SIM115
87 | f = zipfile.ZipFile("foo.zip").open("foo.txt")
88 | f = io.open("foo.txt")
84 | f = tarfile.open("foo.tar")
85 | f = TarFile("foo.tar").open()
86 | f = tarfile.TarFile("foo.tar").open()
  | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ SIM115
87 | f = tarfile.TarFile().open()
88 | f = zipfile.ZipFile("foo.zip").open("foo.txt")
  |

SIM115.py:87:5: SIM115 Use a context manager for opening files
  |
85 | f = tarfile.TarFile("foo.tar").open()
86 | f = tarfile.TarFile().open()
87 | f = zipfile.ZipFile("foo.zip").open("foo.txt")
  | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ SIM115
88 | f = io.open("foo.txt")
89 | f = io.open_code("foo.txt")
85 | f = TarFile("foo.tar").open()
86 | f = tarfile.TarFile("foo.tar").open()
87 | f = tarfile.TarFile().open()
  | ^^^^^^^^^^^^^^^^^^^^^^ SIM115
88 | f = zipfile.ZipFile("foo.zip").open("foo.txt")
89 | f = io.open("foo.txt")
  |

SIM115.py:88:5: SIM115 Use a context manager for opening files
  |
86 | f = tarfile.TarFile().open()
87 | f = zipfile.ZipFile("foo.zip").open("foo.txt")
88 | f = io.open("foo.txt")
  | ^^^^^^^ SIM115
89 | f = io.open_code("foo.txt")
90 | f = codecs.open("foo.txt")
86 | f = tarfile.TarFile("foo.tar").open()
87 | f = tarfile.TarFile().open()
88 | f = zipfile.ZipFile("foo.zip").open("foo.txt")
  | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ SIM115
89 | f = io.open("foo.txt")
90 | f = io.open_code("foo.txt")
  |

SIM115.py:89:5: SIM115 Use a context manager for opening files
  |
87 | f = zipfile.ZipFile("foo.zip").open("foo.txt")
88 | f = io.open("foo.txt")
89 | f = io.open_code("foo.txt")
  | ^^^^^^^^^^^^ SIM115
90 | f = codecs.open("foo.txt")
91 | f = bz2.open("foo.txt")
87 | f = tarfile.TarFile().open()
88 | f = zipfile.ZipFile("foo.zip").open("foo.txt")
89 | f = io.open("foo.txt")
  | ^^^^^^^ SIM115
90 | f = io.open_code("foo.txt")
91 | f = codecs.open("foo.txt")
  |

SIM115.py:90:5: SIM115 Use a context manager for opening files
  |
88 | f = io.open("foo.txt")
89 | f = io.open_code("foo.txt")
90 | f = codecs.open("foo.txt")
  | ^^^^^^^^^^^ SIM115
91 | f = bz2.open("foo.txt")
92 | f = gzip.open("foo.txt")
88 | f = zipfile.ZipFile("foo.zip").open("foo.txt")
89 | f = io.open("foo.txt")
90 | f = io.open_code("foo.txt")
  | ^^^^^^^^^^^^ SIM115
91 | f = codecs.open("foo.txt")
92 | f = bz2.open("foo.txt")
  |

SIM115.py:91:5: SIM115 Use a context manager for opening files
  |
89 | f = io.open_code("foo.txt")
90 | f = codecs.open("foo.txt")
91 | f = bz2.open("foo.txt")
  | ^^^^^^^^ SIM115
92 | f = gzip.open("foo.txt")
93 | f = dbm.open("foo.db")
89 | f = io.open("foo.txt")
90 | f = io.open_code("foo.txt")
91 | f = codecs.open("foo.txt")
  | ^^^^^^^^^^^ SIM115
92 | f = bz2.open("foo.txt")
93 | f = gzip.open("foo.txt")
  |

SIM115.py:92:5: SIM115 Use a context manager for opening files
  |
90 | f = codecs.open("foo.txt")
91 | f = bz2.open("foo.txt")
92 | f = gzip.open("foo.txt")
  | ^^^^^^^^^ SIM115
93 | f = dbm.open("foo.db")
94 | f = dbm.gnu.open("foo.db")
90 | f = io.open_code("foo.txt")
91 | f = codecs.open("foo.txt")
92 | f = bz2.open("foo.txt")
  | ^^^^^^^^ SIM115
93 | f = gzip.open("foo.txt")
94 | f = dbm.open("foo.db")
  |

SIM115.py:93:5: SIM115 Use a context manager for opening files
  |
91 | f = bz2.open("foo.txt")
92 | f = gzip.open("foo.txt")
93 | f = dbm.open("foo.db")
  | ^^^^^^^^ SIM115
94 | f = dbm.gnu.open("foo.db")
95 | f = dbm.ndbm.open("foo.db")
91 | f = codecs.open("foo.txt")
92 | f = bz2.open("foo.txt")
93 | f = gzip.open("foo.txt")
  | ^^^^^^^^^ SIM115
94 | f = dbm.open("foo.db")
95 | f = dbm.gnu.open("foo.db")
  |

SIM115.py:94:5: SIM115 Use a context manager for opening files
  |
92 | f = gzip.open("foo.txt")
93 | f = dbm.open("foo.db")
94 | f = dbm.gnu.open("foo.db")
  | ^^^^^^^^^^^^ SIM115
95 | f = dbm.ndbm.open("foo.db")
96 | f = dbm.dumb.open("foo.db")
92 | f = bz2.open("foo.txt")
93 | f = gzip.open("foo.txt")
94 | f = dbm.open("foo.db")
  | ^^^^^^^^ SIM115
95 | f = dbm.gnu.open("foo.db")
96 | f = dbm.ndbm.open("foo.db")
  |

SIM115.py:95:5: SIM115 Use a context manager for opening files
  |
93 | f = dbm.open("foo.db")
94 | f = dbm.gnu.open("foo.db")
95 | f = dbm.ndbm.open("foo.db")
  | ^^^^^^^^^^^^^ SIM115
96 | f = dbm.dumb.open("foo.db")
97 | f = lzma.open("foo.xz")
93 | f = gzip.open("foo.txt")
94 | f = dbm.open("foo.db")
95 | f = dbm.gnu.open("foo.db")
  | ^^^^^^^^^^^^ SIM115
96 | f = dbm.ndbm.open("foo.db")
97 | f = dbm.dumb.open("foo.db")
  |

SIM115.py:96:5: SIM115 Use a context manager for opening files
  |
94 | f = dbm.gnu.open("foo.db")
95 | f = dbm.ndbm.open("foo.db")
96 | f = dbm.dumb.open("foo.db")
94 | f = dbm.open("foo.db")
95 | f = dbm.gnu.open("foo.db")
96 | f = dbm.ndbm.open("foo.db")
  | ^^^^^^^^^^^^^ SIM115
97 | f = lzma.open("foo.xz")
98 | f = lzma.LZMAFile("foo.xz")
97 | f = dbm.dumb.open("foo.db")
98 | f = lzma.open("foo.xz")
  |

SIM115.py:97:5: SIM115 Use a context manager for opening files
  |
95 | f = dbm.ndbm.open("foo.db")
96 | f = dbm.dumb.open("foo.db")
97 | f = lzma.open("foo.xz")
  | ^^^^^^^^^ SIM115
98 | f = lzma.LZMAFile("foo.xz")
99 | f = shelve.open("foo.db")
95 | f = dbm.gnu.open("foo.db")
96 | f = dbm.ndbm.open("foo.db")
97 | f = dbm.dumb.open("foo.db")
  | ^^^^^^^^^^^^^ SIM115
98 | f = lzma.open("foo.xz")
99 | f = lzma.LZMAFile("foo.xz")
  |

SIM115.py:98:5: SIM115 Use a context manager for opening files
  |
96 | f = dbm.dumb.open("foo.db")
97 | f = lzma.open("foo.xz")
98 | f = lzma.LZMAFile("foo.xz")
  | ^^^^^^^^^^^^^ SIM115
99 | f = shelve.open("foo.db")
100 | f = tokenize.open("foo.py")
96 | f = dbm.ndbm.open("foo.db")
97 | f = dbm.dumb.open("foo.db")
98 | f = lzma.open("foo.xz")
  | ^^^^^^^^^ SIM115
99 | f = lzma.LZMAFile("foo.xz")
100 | f = shelve.open("foo.db")
  |

SIM115.py:99:5: SIM115 Use a context manager for opening files
  |
97 | f = lzma.open("foo.xz")
98 | f = lzma.LZMAFile("foo.xz")
99 | f = shelve.open("foo.db")
  | ^^^^^^^^^^^ SIM115
100 | f = tokenize.open("foo.py")
101 | f = wave.open("foo.wav")
97 | f = dbm.dumb.open("foo.db")
98 | f = lzma.open("foo.xz")
99 | f = lzma.LZMAFile("foo.xz")
  | ^^^^^^^^^^^^^ SIM115
100 | f = shelve.open("foo.db")
101 | f = tokenize.open("foo.py")
  |

SIM115.py:100:5: SIM115 Use a context manager for opening files
  |
98 | f = lzma.LZMAFile("foo.xz")
99 | f = shelve.open("foo.db")
100 | f = tokenize.open("foo.py")
  | ^^^^^^^^^^^^^ SIM115
101 | f = wave.open("foo.wav")
102 | f = tarfile.TarFile.taropen("foo.tar")
98 | f = lzma.open("foo.xz")
99 | f = lzma.LZMAFile("foo.xz")
100 | f = shelve.open("foo.db")
  | ^^^^^^^^^^^ SIM115
101 | f = tokenize.open("foo.py")
102 | f = wave.open("foo.wav")
  |

SIM115.py:101:5: SIM115 Use a context manager for opening files
  |
99 | f = shelve.open("foo.db")
100 | f = tokenize.open("foo.py")
101 | f = wave.open("foo.wav")
  | ^^^^^^^^^ SIM115
102 | f = tarfile.TarFile.taropen("foo.tar")
103 | f = fileinput.input("foo.txt")
99 | f = lzma.LZMAFile("foo.xz")
100 | f = shelve.open("foo.db")
101 | f = tokenize.open("foo.py")
  | ^^^^^^^^^^^^^ SIM115
102 | f = wave.open("foo.wav")
103 | f = tarfile.TarFile.taropen("foo.tar")
  |

SIM115.py:102:5: SIM115 Use a context manager for opening files
  |
100 | f = tokenize.open("foo.py")
101 | f = wave.open("foo.wav")
102 | f = tarfile.TarFile.taropen("foo.tar")
  | ^^^^^^^^^^^^^^^^^^^^^^^ SIM115
103 | f = fileinput.input("foo.txt")
104 | f = fileinput.FileInput("foo.txt")
100 | f = shelve.open("foo.db")
101 | f = tokenize.open("foo.py")
102 | f = wave.open("foo.wav")
  | ^^^^^^^^^ SIM115
103 | f = tarfile.TarFile.taropen("foo.tar")
104 | f = fileinput.input("foo.txt")
  |

SIM115.py:103:5: SIM115 Use a context manager for opening files
  |
101 | f = wave.open("foo.wav")
102 | f = tarfile.TarFile.taropen("foo.tar")
103 | f = fileinput.input("foo.txt")
  | ^^^^^^^^^^^^^^^ SIM115
104 | f = fileinput.FileInput("foo.txt")
101 | f = tokenize.open("foo.py")
102 | f = wave.open("foo.wav")
103 | f = tarfile.TarFile.taropen("foo.tar")
  | ^^^^^^^^^^^^^^^^^^^^^^^ SIM115
104 | f = fileinput.input("foo.txt")
105 | f = fileinput.FileInput("foo.txt")
  |

SIM115.py:104:5: SIM115 Use a context manager for opening files
  |
102 | f = tarfile.TarFile.taropen("foo.tar")
103 | f = fileinput.input("foo.txt")
104 | f = fileinput.FileInput("foo.txt")
102 | f = wave.open("foo.wav")
103 | f = tarfile.TarFile.taropen("foo.tar")
104 | f = fileinput.input("foo.txt")
  | ^^^^^^^^^^^^^^^ SIM115
105 | f = fileinput.FileInput("foo.txt")
  |

SIM115.py:105:5: SIM115 Use a context manager for opening files
  |
103 | f = tarfile.TarFile.taropen("foo.tar")
104 | f = fileinput.input("foo.txt")
105 | f = fileinput.FileInput("foo.txt")
  | ^^^^^^^^^^^^^^^^^^^ SIM115
105 |
106 | with contextlib.suppress(Exception):
106 |
107 | with contextlib.suppress(Exception):
  |

SIM115.py:240:9: SIM115 Use a context manager for opening files
SIM115.py:241:9: SIM115 Use a context manager for opening files
  |
238 | def aliased():
239 | from shelve import open as open_shelf
240 | x = open_shelf("foo.dbm")
239 | def aliased():
240 | from shelve import open as open_shelf
241 | x = open_shelf("foo.dbm")
  | ^^^^^^^^^^ SIM115
241 | x.close()
242 | x.close()
  |

SIM115.py:244:9: SIM115 Use a context manager for opening files
SIM115.py:245:9: SIM115 Use a context manager for opening files
  |
243 | from tarfile import TarFile as TF
244 | f = TF("foo").open()
244 | from tarfile import TarFile as TF
245 | f = TF("foo").open()
  | ^^^^^^^^^^^^^^ SIM115
245 | f.close()
246 | f.close()
  |

SIM115.py:257:5: SIM115 Use a context manager for opening files
SIM115.py:258:5: SIM115 Use a context manager for opening files
  |
256 | # SIM115
257 | f = dbm.sqlite3.open("foo.db")
257 | # SIM115
258 | f = dbm.sqlite3.open("foo.db")
  | ^^^^^^^^^^^^^^^^ SIM115
258 | f.close()
259 | f.close()
  |
@@ -1,110 +1,110 @@
---
source: crates/ruff_linter/src/rules/flake8_simplify/mod.rs
---
SIM116.py:5:1: SIM116 Use a dictionary instead of consecutive `if` statements
SIM116.py:6:5: SIM116 Use a dictionary instead of consecutive `if` statements
  |
 4 | # SIM116
 5 | / if a == "foo":
 6 | | return "bar"
 7 | | elif a == "bar":
 8 | | return "baz"
 9 | | elif a == "boo":
10 | | return "ooh"
11 | | else:
12 | | return 42
   | |_____________^ SIM116
13 |
14 | # SIM116
 5 | # SIM116
 6 | / if a == "foo":
 7 | | return "bar"
 8 | | elif a == "bar":
 9 | | return "baz"
10 | | elif a == "boo":
11 | | return "ooh"
12 | | else:
13 | | return 42
   | |_________________^ SIM116
14 |
15 | # SIM116
  |

SIM116.py:15:1: SIM116 Use a dictionary instead of consecutive `if` statements
SIM116.py:16:5: SIM116 Use a dictionary instead of consecutive `if` statements
  |
14 | # SIM116
15 | / if a == 1:
16 | | return (1, 2, 3)
17 | | elif a == 2:
18 | | return (4, 5, 6)
19 | | elif a == 3:
20 | | return (7, 8, 9)
21 | | else:
22 | | return (10, 11, 12)
   | |_______________________^ SIM116
23 |
24 | # SIM116
15 | # SIM116
16 | / if a == 1:
17 | | return (1, 2, 3)
18 | | elif a == 2:
19 | | return (4, 5, 6)
20 | | elif a == 3:
21 | | return (7, 8, 9)
22 | | else:
23 | | return (10, 11, 12)
   | |___________________________^ SIM116
24 |
25 | # SIM116
  |

SIM116.py:25:1: SIM116 Use a dictionary instead of consecutive `if` statements
SIM116.py:26:5: SIM116 Use a dictionary instead of consecutive `if` statements
  |
24 | # SIM116
25 | / if a == 1:
26 | | return (1, 2, 3)
27 | | elif a == 2:
28 | | return (4, 5, 6)
29 | | elif a == 3:
30 | | return (7, 8, 9)
   | |____________________^ SIM116
31 |
32 | # SIM116
25 | # SIM116
26 | / if a == 1:
27 | | return (1, 2, 3)
28 | | elif a == 2:
29 | | return (4, 5, 6)
30 | | elif a == 3:
31 | | return (7, 8, 9)
   | |________________________^ SIM116
32 |
33 | # SIM116
  |

SIM116.py:33:1: SIM116 Use a dictionary instead of consecutive `if` statements
SIM116.py:34:5: SIM116 Use a dictionary instead of consecutive `if` statements
  |
32 | # SIM116
33 | / if a == "hello 'sir'":
34 | | return (1, 2, 3)
35 | | elif a == 'goodbye "mam"':
36 | | return (4, 5, 6)
37 | | elif a == """Fairwell 'mister'""":
38 | | return (7, 8, 9)
39 | | else:
40 | | return (10, 11, 12)
   | |_______________________^ SIM116
41 |
42 | # SIM116
33 | # SIM116
34 | / if a == "hello 'sir'":
35 | | return (1, 2, 3)
36 | | elif a == 'goodbye "mam"':
37 | | return (4, 5, 6)
38 | | elif a == """Fairwell 'mister'""":
39 | | return (7, 8, 9)
40 | | else:
41 | | return (10, 11, 12)
   | |___________________________^ SIM116
42 |
43 | # SIM116
  |

SIM116.py:43:1: SIM116 Use a dictionary instead of consecutive `if` statements
SIM116.py:44:5: SIM116 Use a dictionary instead of consecutive `if` statements
  |
42 | # SIM116
43 | / if a == b"one":
44 | | return 1
45 | | elif a == b"two":
46 | | return 2
47 | | elif a == b"three":
48 | | return 3
   | |____________^ SIM116
49 |
50 | # SIM116
43 | # SIM116
44 | / if a == b"one":
45 | | return 1
46 | | elif a == b"two":
47 | | return 2
48 | | elif a == b"three":
49 | | return 3
   | |________________^ SIM116
50 |
51 | # SIM116
  |

SIM116.py:51:1: SIM116 Use a dictionary instead of consecutive `if` statements
SIM116.py:52:5: SIM116 Use a dictionary instead of consecutive `if` statements
  |
50 | # SIM116
51 | / if a == "hello 'sir'":
52 | | return ("hello'", 'hi"', 3)
53 | | elif a == 'goodbye "mam"':
54 | | return (4, 5, 6)
55 | | elif a == """Fairwell 'mister'""":
56 | | return (7, 8, 9)
57 | | else:
58 | | return (10, 11, 12)
   | |_______________________^ SIM116
59 |
60 | # OK
51 | # SIM116
52 | / if a == "hello 'sir'":
53 | | return ("hello'", 'hi"', 3)
54 | | elif a == 'goodbye "mam"':
55 | | return (4, 5, 6)
56 | | elif a == """Fairwell 'mister'""":
57 | | return (7, 8, 9)
58 | | else:
59 | | return (10, 11, 12)
   | |___________________________^ SIM116
60 |
61 | # OK
  |

SIM116.py:79:1: SIM116 Use a dictionary instead of consecutive `if` statements
SIM116.py:80:5: SIM116 Use a dictionary instead of consecutive `if` statements
  |
78 | # SIM116
79 | / if func_name == "create":
80 | | return "A"
81 | | elif func_name == "modify":
82 | | return "M"
83 | | elif func_name == "remove":
84 | | return "D"
85 | | elif func_name == "move":
86 | | return "MV"
   | |_______________^ SIM116
87 |
88 | # OK
79 | # SIM116
80 | / if func_name == "create":
81 | | return "A"
82 | | elif func_name == "modify":
83 | | return "M"
84 | | elif func_name == "remove":
85 | | return "D"
86 | | elif func_name == "move":
87 | | return "MV"
   | |___________________^ SIM116
88 |
89 | # OK
  |
@@ -290,7 +290,6 @@ mod tests {
use test_case::test_case;

use ruff_python_semantic::{MemberNameImport, ModuleNameImport, NameImport};
use ruff_text_size::Ranged;

use crate::assert_diagnostics;
use crate::registry::Rule;
@@ -658,7 +657,7 @@ mod tests {
..LinterSettings::for_rule(Rule::UnsortedImports)
},
)?;
diagnostics.sort_by_key(Ranged::start);
diagnostics.sort_by_key(|diagnostic| diagnostic.expect_range().start());
assert_diagnostics!(snapshot, diagnostics);
Ok(())
}
@@ -686,7 +685,7 @@ mod tests {
..LinterSettings::for_rule(Rule::UnsortedImports)
},
)?;
diagnostics.sort_by_key(Ranged::start);
diagnostics.sort_by_key(|diagnostic| diagnostic.expect_range().start());
assert_diagnostics!(snapshot, diagnostics);
Ok(())
}
@@ -716,7 +715,7 @@ mod tests {
..LinterSettings::for_rule(Rule::UnsortedImports)
},
)?;
diagnostics.sort_by_key(Ranged::start);
diagnostics.sort_by_key(|diagnostic| diagnostic.expect_range().start());
assert_diagnostics!(snapshot, diagnostics);
Ok(())
}
@@ -744,7 +743,7 @@ mod tests {
..LinterSettings::for_rule(Rule::UnsortedImports)
},
)?;
diagnostics.sort_by_key(Ranged::start);
diagnostics.sort_by_key(|diagnostic| diagnostic.expect_range().start());
assert_diagnostics!(snapshot, diagnostics);
Ok(())
}
@@ -766,7 +765,7 @@ mod tests {
..LinterSettings::for_rule(Rule::UnsortedImports)
},
)?;
diagnostics.sort_by_key(Ranged::start);
diagnostics.sort_by_key(|diagnostic| diagnostic.expect_range().start());
assert_diagnostics!(snapshot, diagnostics);
Ok(())
}
@@ -786,7 +785,7 @@ mod tests {
..LinterSettings::for_rule(Rule::UnsortedImports)
},
)?;
diagnostics.sort_by_key(Ranged::start);
diagnostics.sort_by_key(|diagnostic| diagnostic.expect_range().start());
assert_diagnostics!(snapshot, diagnostics);
Ok(())
}
@@ -1130,7 +1129,7 @@ mod tests {
..LinterSettings::for_rule(Rule::UnsortedImports)
},
)?;
diagnostics.sort_by_key(Ranged::start);
diagnostics.sort_by_key(|diagnostic| diagnostic.expect_range().start());
assert_diagnostics!(snapshot, diagnostics);
Ok(())
}
@@ -1155,7 +1154,7 @@ mod tests {
..LinterSettings::for_rule(Rule::UnsortedImports)
},
)?;
diagnostics.sort_by_key(Ranged::start);
diagnostics.sort_by_key(|diagnostic| diagnostic.expect_range().start());
assert_diagnostics!(snapshot, diagnostics);
Ok(())
}
@@ -1177,7 +1176,7 @@ mod tests {
..LinterSettings::for_rule(Rule::UnsortedImports)
},
)?;
diagnostics.sort_by_key(Ranged::start);
diagnostics.sort_by_key(|diagnostic| diagnostic.expect_range().start());
assert_diagnostics!(snapshot, diagnostics);
Ok(())
}
@@ -1198,7 +1197,7 @@ mod tests {
..LinterSettings::for_rule(Rule::UnsortedImports)
},
)?;
diagnostics.sort_by_key(Ranged::start);
diagnostics.sort_by_key(|diagnostic| diagnostic.expect_range().start());
assert_diagnostics!(&*snapshot, diagnostics);
Ok(())
}
@@ -1217,7 +1216,7 @@ mod tests {
..LinterSettings::for_rule(Rule::UnsortedImports)
},
)?;
diagnostics.sort_by_key(Ranged::start);
diagnostics.sort_by_key(|diagnostic| diagnostic.expect_range().start());
assert_diagnostics!(snapshot, diagnostics);
Ok(())
}
@@ -168,7 +168,7 @@ pub(crate) fn extraneous_whitespace(line: &LogicalLine, context: &LintContext) {
WhitespaceAfterOpenBracket { symbol },
TextRange::at(token.end(), trailing_len),
) {
let range = diagnostic.range();
let range = diagnostic.expect_range();
diagnostic.set_fix(Fix::safe_edit(Edit::range_deletion(range)));
}
}
@@ -182,7 +182,7 @@ pub(crate) fn extraneous_whitespace(line: &LogicalLine, context: &LintContext) {
WhitespaceBeforeCloseBracket { symbol },
TextRange::at(token.start() - offset, offset),
) {
let range = diagnostic.range();
let range = diagnostic.expect_range();
diagnostic.set_fix(Fix::safe_edit(Edit::range_deletion(range)));
}
}
@@ -210,7 +210,7 @@ pub(crate) fn extraneous_whitespace(line: &LogicalLine, context: &LintContext) {
TextRange::at(token.start() - offset, offset),
)
{
let range = diagnostic.range();
let range = diagnostic.expect_range();
diagnostic
.set_fix(Fix::safe_edit(Edit::range_deletion(range)));
}
@@ -227,7 +227,7 @@ pub(crate) fn extraneous_whitespace(line: &LogicalLine, context: &LintContext) {
TextRange::at(token.start() - offset, offset),
)
{
let range = diagnostic.range();
let range = diagnostic.expect_range();
diagnostic.set_fix(Fix::safe_edit(
Edit::range_deletion(range),
));
@@ -255,7 +255,7 @@ pub(crate) fn extraneous_whitespace(line: &LogicalLine, context: &LintContext) {
TextRange::at(token.start() - offset, offset),
)
{
let range = diagnostic.range();
let range = diagnostic.expect_range();
diagnostic.set_fix(Fix::safe_edits(
Edit::range_deletion(range),
[Edit::insertion(
@@ -278,7 +278,7 @@ pub(crate) fn extraneous_whitespace(line: &LogicalLine, context: &LintContext) {
TextRange::at(token.start() - offset, offset),
)
{
let range = diagnostic.range();
let range = diagnostic.expect_range();
diagnostic.set_fix(Fix::safe_edit(
Edit::range_deletion(range),
));
@@ -297,7 +297,7 @@ pub(crate) fn extraneous_whitespace(line: &LogicalLine, context: &LintContext) {
WhitespaceBeforePunctuation { symbol },
TextRange::at(token.start() - offset, offset),
) {
let range = diagnostic.range();
let range = diagnostic.expect_range();
diagnostic.set_fix(Fix::safe_edit(Edit::range_deletion(range)));
}
}
@@ -1,6 +1,5 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
snapshot_kind: text
---
E11.py:3:1: E111 Indentation is not a multiple of 4
|
@@ -27,7 +26,7 @@ E11.py:9:1: SyntaxError: Expected an indented block after `if` statement
7 | #: E112
8 | if False:
9 | print()
| ^
| ^^^^^
10 | #: E113
11 | print()
|
@@ -37,7 +36,7 @@ E11.py:12:1: SyntaxError: Unexpected indentation
10 | #: E113
11 | print()
12 | print()
| ^
| ^^^^
13 | #: E114 E116
14 | mimetype = 'application/x-directory'
|
@@ -57,7 +56,7 @@ E11.py:45:1: SyntaxError: Expected an indented block after `if` statement
43 | #: E112
44 | if False: #
45 | print()
| ^
| ^^^^^
46 | #:
47 | if False:
|

@@ -16,7 +16,7 @@ E11.py:9:1: SyntaxError: Expected an indented block after `if` statement
7 | #: E112
8 | if False:
9 | print()
| ^
| ^^^^^
10 | #: E113
11 | print()
|
@@ -26,7 +26,7 @@ E11.py:12:1: SyntaxError: Unexpected indentation
10 | #: E113
11 | print()
12 | print()
| ^
| ^^^^
13 | #: E114 E116
14 | mimetype = 'application/x-directory'
|
@@ -56,7 +56,7 @@ E11.py:45:1: SyntaxError: Expected an indented block after `if` statement
43 | #: E112
44 | if False: #
45 | print()
| ^
| ^^^^^
46 | #:
47 | if False:
|

@@ -1,13 +1,12 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
snapshot_kind: text
---
E11.py:9:1: SyntaxError: Expected an indented block after `if` statement
|
7 | #: E112
8 | if False:
9 | print()
| ^
| ^^^^^
10 | #: E113
11 | print()
|
@@ -27,7 +26,7 @@ E11.py:12:1: SyntaxError: Unexpected indentation
10 | #: E113
11 | print()
12 | print()
| ^
| ^^^^
13 | #: E114 E116
14 | mimetype = 'application/x-directory'
|
@@ -47,7 +46,7 @@ E11.py:45:1: SyntaxError: Expected an indented block after `if` statement
43 | #: E112
44 | if False: #
45 | print()
| ^
| ^^^^^
46 | #:
47 | if False:
|

@@ -1,13 +1,12 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
snapshot_kind: text
---
E11.py:9:1: SyntaxError: Expected an indented block after `if` statement
|
7 | #: E112
8 | if False:
9 | print()
| ^
| ^^^^^
10 | #: E113
11 | print()
|
@@ -17,7 +16,7 @@ E11.py:12:1: SyntaxError: Unexpected indentation
10 | #: E113
11 | print()
12 | print()
| ^
| ^^^^
13 | #: E114 E116
14 | mimetype = 'application/x-directory'
|
@@ -47,7 +46,7 @@ E11.py:45:1: SyntaxError: Expected an indented block after `if` statement
43 | #: E112
44 | if False: #
45 | print()
| ^
| ^^^^^
46 | #:
47 | if False:
|

@@ -6,7 +6,7 @@ E11.py:9:1: SyntaxError: Expected an indented block after `if` statement
7 | #: E112
8 | if False:
9 | print()
| ^
| ^^^^^
10 | #: E113
11 | print()
|
@@ -16,7 +16,7 @@ E11.py:12:1: SyntaxError: Unexpected indentation
10 | #: E113
11 | print()
12 | print()
| ^
| ^^^^
13 | #: E114 E116
14 | mimetype = 'application/x-directory'
|
@@ -96,7 +96,7 @@ E11.py:45:1: SyntaxError: Expected an indented block after `if` statement
43 | #: E112
44 | if False: #
45 | print()
| ^
| ^^^^^
46 | #:
47 | if False:
|

@@ -1,13 +1,12 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
snapshot_kind: text
---
E11.py:9:1: SyntaxError: Expected an indented block after `if` statement
|
7 | #: E112
8 | if False:
9 | print()
| ^
| ^^^^^
10 | #: E113
11 | print()
|
@@ -17,7 +16,7 @@ E11.py:12:1: SyntaxError: Unexpected indentation
10 | #: E113
11 | print()
12 | print()
| ^
| ^^^^
13 | #: E114 E116
14 | mimetype = 'application/x-directory'
|
@@ -77,7 +76,7 @@ E11.py:45:1: SyntaxError: Expected an indented block after `if` statement
43 | #: E112
44 | if False: #
45 | print()
| ^
| ^^^^^
46 | #:
47 | if False:
|

@@ -1,6 +1,5 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
snapshot_kind: text
---
E11.py:6:1: E117 Over-indented
|
@@ -17,7 +16,7 @@ E11.py:9:1: SyntaxError: Expected an indented block after `if` statement
7 | #: E112
8 | if False:
9 | print()
| ^
| ^^^^^
10 | #: E113
11 | print()
|
@@ -27,7 +26,7 @@ E11.py:12:1: SyntaxError: Unexpected indentation
10 | #: E113
11 | print()
12 | print()
| ^
| ^^^^
13 | #: E114 E116
14 | mimetype = 'application/x-directory'
|
@@ -67,7 +66,7 @@ E11.py:45:1: SyntaxError: Expected an indented block after `if` statement
43 | #: E112
44 | if False: #
45 | print()
| ^
| ^^^^^
46 | #:
47 | if False:
|

Binary file not shown.
@@ -11,6 +11,7 @@ mod tests {

use anyhow::Result;
use regex::Regex;
use ruff_db::diagnostic::Diagnostic;
use ruff_python_parser::ParseOptions;
use rustc_hash::FxHashMap;
use test_case::test_case;
@@ -19,7 +20,6 @@ mod tests {
use ruff_python_codegen::Stylist;
use ruff_python_index::Indexer;
use ruff_python_trivia::textwrap::dedent;
use ruff_text_size::Ranged;

use crate::linter::check_path;
use crate::registry::{Linter, Rule};
@@ -29,7 +29,7 @@ mod tests {
use crate::settings::{LinterSettings, flags};
use crate::source_kind::SourceKind;
use crate::test::{test_contents, test_path, test_snippet};
use crate::{Locator, OldDiagnostic, assert_diagnostics, directives};
use crate::{Locator, assert_diagnostics, directives};

#[test_case(Rule::UnusedImport, Path::new("F401_0.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_1.py"))]
@@ -771,11 +771,11 @@ mod tests {
&parsed,
target_version,
);
messages.sort_by_key(Ranged::start);
messages.sort_by_key(|diagnostic| diagnostic.expect_range().start());
let actual = messages
.iter()
.filter(|msg| !msg.is_syntax_error())
.map(OldDiagnostic::name)
.map(Diagnostic::name)
.collect::<Vec<_>>();
let expected: Vec<_> = expected.iter().map(|rule| rule.name().as_str()).collect();
assert_eq!(actual, expected);
@@ -24,13 +24,14 @@ use crate::checkers::ast::Checker;
/// ## Example
/// ```python
/// with open("file", "rwx") as f:
/// return f.read()
/// content = f.read()
/// ```
///
/// Use instead:
///
/// ```python
/// with open("file", "r") as f:
/// return f.read()
/// content = f.read()
/// ```
///
/// ## References
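The doc-comment hunk above swaps the example body to valid module-level code while keeping the invalid `"rwx"` mode that the rule flags. CPython rejects such a mode string at runtime during mode parsing, which is why catching it statically is useful; a quick check (the file name is illustrative):

```python
# "rwx" is not a valid mode string, so CPython raises ValueError while
# parsing the mode, before it even tries to touch the file on disk.
try:
    open("does_not_matter.txt", "rwx")
    raised = False
except ValueError:
    raised = True
assert raised
```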
@@ -44,6 +44,13 @@ use crate::{
/// print(platform.python_version())
/// ```
///
/// ## See also
/// This rule will ignore import statements configured in
/// [`lint.flake8-tidy-imports.banned-module-level-imports`][banned-module-level-imports]
/// if the rule [`banned-module-level-imports`][TID253] is enabled.
///
/// [banned-module-level-imports]: https://docs.astral.sh/ruff/settings/#lint_flake8-tidy-imports_banned-module-level-imports
/// [TID253]: https://docs.astral.sh/ruff/rules/banned-module-level-imports/
/// [PEP 8]: https://peps.python.org/pep-0008/#imports
#[derive(ViolationMetadata)]
pub(crate) struct ImportOutsideTopLevel;
@@ -1,6 +1,5 @@
---
source: crates/ruff_linter/src/rules/pylint/mod.rs
snapshot_kind: text
---
invalid_characters_syntax_error.py:5:6: PLE2510 Invalid unescaped character backspace, use "\b" instead
|
@@ -17,7 +16,7 @@ invalid_characters_syntax_error.py:7:5: SyntaxError: missing closing quote in st
5 | b = '␈'
6 | # Unterminated string
7 | b = '␈
| ^
| ^^
8 | b = '␈'
9 | # Unterminated f-string
|
@@ -99,7 +98,7 @@ invalid_characters_syntax_error.py:13:14: SyntaxError: missing closing quote in
11 | b = f'␈'
12 | # Implicitly concatenated
13 | b = '␈' f'␈' '␈
| ^
| ^^
|

invalid_characters_syntax_error.py:13:16: SyntaxError: Expected a statement
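PLE2510, shown in the snapshot above, asks for the escape sequence `"\b"` in place of a literal backspace control character. Both spell the same one-character string; the escape simply keeps invisible bytes out of the source:

```python
# The escaped form "\b" and the raw control character U+0008 are equal;
# the rule prefers the escape because the raw character is invisible.
escaped = "\b"
raw = "\x08"
assert escaped == raw
assert ord(escaped) == 8  # U+0008, BACKSPACE
```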
@@ -15,21 +15,23 @@ use crate::{Edit, Fix, FixAvailability, Violation};
///
/// ## Example
/// ```python
/// for x in foo:
/// yield x
/// def bar():
/// for x in foo:
/// yield x
///
/// global y
/// for y in foo:
/// yield y
/// global y
/// for y in foo:
/// yield y
/// ```
///
/// Use instead:
/// ```python
/// yield from foo
/// def bar():
/// yield from foo
///
/// for _element in foo:
/// y = _element
/// yield y
/// for _element in foo:
/// y = _element
/// yield y
/// ```
///
/// ## Fix safety
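The updated doc comment wraps the examples in a function, since `yield` is only legal inside one. A runnable sketch of the equivalence the rule relies on (function names are illustrative):

```python
def manual(foo):
    # Flagged form: a for loop that only yields the loop variable.
    for x in foo:
        yield x

def rewritten(foo):
    # Suggested form: delegate to the iterable directly.
    yield from foo

assert list(manual([1, 2, 3])) == list(rewritten([1, 2, 3])) == [1, 2, 3]
```

Beyond brevity, `yield from` also forwards `send()` and `throw()` to a sub-generator, which the manual loop does not.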
@@ -25,14 +25,16 @@ fn is_stdlib_dataclass_field(func: &Expr, semantic: &SemanticModel) -> bool {
.is_some_and(|qualified_name| matches!(qualified_name.segments(), ["dataclasses", "field"]))
}

/// Returns `true` if the given [`Expr`] is a call to `attr.ib()` or `attrs.field()`.
/// Returns `true` if the given [`Expr`] is a call to an `attrs` field function.
fn is_attrs_field(func: &Expr, semantic: &SemanticModel) -> bool {
semantic
.resolve_qualified_name(func)
.is_some_and(|qualified_name| {
matches!(
qualified_name.segments(),
["attrs", "field" | "Factory"] | ["attr", "ib"]
["attrs", "field" | "Factory"]
// See https://github.com/python-attrs/attrs/blob/main/src/attr/__init__.py#L33
| ["attr", "ib" | "attr" | "attrib" | "field" | "Factory"]
)
})
}
@@ -120,7 +122,8 @@ pub(super) fn dataclass_kind<'a>(

match qualified_name.segments() {
["attrs" | "attr", func @ ("define" | "frozen" | "mutable")]
| ["attr", func @ ("s" | "attrs")] => {
// See https://github.com/python-attrs/attrs/blob/main/src/attr/__init__.py#L32
| ["attr", func @ ("s" | "attributes" | "attrs")] => {
// `.define`, `.frozen` and `.mutable` all default `auto_attribs` to `None`,
// whereas `@attr.s` implicitly sets `auto_attribs=False`.
// https://www.attrs.org/en/stable/api.html#attrs.define
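The hunk above widens RUF009's notion of an `attrs` field function to the extra aliases re-exported from `attr/__init__.py`. A plain-Python sketch of the same qualified-name match, with the tuple set mirroring the Rust `matches!` arms:

```python
ATTRS_FIELD_FUNCTIONS = {
    ("attrs", "field"), ("attrs", "Factory"),
    # Aliases exported by attr/__init__.py, per the comment in the diff.
    ("attr", "ib"), ("attr", "attr"), ("attr", "attrib"),
    ("attr", "field"), ("attr", "Factory"),
}

def is_attrs_field(segments: tuple[str, ...]) -> bool:
    """Return True if the qualified name refers to an attrs field function."""
    return segments in ATTRS_FIELD_FUNCTIONS
```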
@@ -128,7 +128,7 @@ pub(crate) fn post_init_default(checker: &Checker, function_def: &ast::StmtFunct
// Need to stop fixes as soon as there is a parameter we cannot fix.
// Otherwise, we risk a syntax error (a parameter without a default
// following parameter with a default).
stopped_fixes |= diagnostic.fix.is_none();
stopped_fixes |= diagnostic.fix().is_none();
}
}
}
@@ -98,3 +98,11 @@ RUF009_attrs.py:127:12: RUF009 Do not perform function call `G` in dataclass def
127 | g: G = G()
| ^^^ RUF009
|

RUF009_attrs.py:144:20: RUF009 Do not perform function call `list` in dataclass defaults
|
142 | @attr.attributes
143 | class TestAttrAttributes:
144 | x: list[int] = list() # RUF009
| ^^^^^^ RUF009
|
@@ -7,9 +7,9 @@ use std::path::Path;
#[cfg(not(fuzzing))]
use anyhow::Result;
use itertools::Itertools;
use ruff_text_size::Ranged;
use rustc_hash::FxHashMap;

use ruff_db::diagnostic::Diagnostic;
use ruff_notebook::Notebook;
#[cfg(not(fuzzing))]
use ruff_notebook::NotebookError;
@@ -23,7 +23,7 @@ use ruff_source_file::SourceFileBuilder;
use crate::codes::Rule;
use crate::fix::{FixResult, fix_file};
use crate::linter::check_path;
use crate::message::{Emitter, EmitterContext, OldDiagnostic, TextEmitter};
use crate::message::{Emitter, EmitterContext, TextEmitter, create_syntax_error_diagnostic};
use crate::package::PackageRoot;
use crate::packaging::detect_package_root;
use crate::settings::types::UnsafeFixes;
@@ -42,7 +42,7 @@ pub(crate) fn test_resource_path(path: impl AsRef<Path>) -> std::path::PathBuf {
pub(crate) fn test_path(
path: impl AsRef<Path>,
settings: &LinterSettings,
) -> Result<Vec<OldDiagnostic>> {
) -> Result<Vec<Diagnostic>> {
let path = test_resource_path("fixtures").join(path);
let source_type = PySourceType::from(&path);
let source_kind = SourceKind::from_path(path.as_ref(), source_type)?.expect("valid source");
@@ -51,7 +51,7 @@ pub(crate) fn test_path(

#[cfg(not(fuzzing))]
pub(crate) struct TestedNotebook {
pub(crate) diagnostics: Vec<OldDiagnostic>,
pub(crate) diagnostics: Vec<Diagnostic>,
pub(crate) source_notebook: Notebook,
pub(crate) linted_notebook: Notebook,
}
@@ -87,7 +87,7 @@ pub(crate) fn assert_notebook_path(
}

/// Run [`check_path`] on a snippet of Python code.
pub fn test_snippet(contents: &str, settings: &LinterSettings) -> Vec<OldDiagnostic> {
pub fn test_snippet(contents: &str, settings: &LinterSettings) -> Vec<Diagnostic> {
let path = Path::new("<filename>");
let contents = dedent(contents);
test_contents(&SourceKind::Python(contents.into_owned()), path, settings).0
@@ -111,7 +111,7 @@ pub(crate) fn test_contents<'a>(
source_kind: &'a SourceKind,
path: &Path,
settings: &LinterSettings,
) -> (Vec<OldDiagnostic>, Cow<'a, SourceKind>) {
) -> (Vec<Diagnostic>, Cow<'a, SourceKind>) {
let source_type = PySourceType::from(path);
let target_version = settings.resolve_target_version(path);
let options =
@@ -211,8 +211,7 @@ pub(crate) fn test_contents<'a>(
if parsed.has_invalid_syntax() && !source_has_errors {
// Previous fix introduced a syntax error, abort
let fixes = print_diagnostics(messages, path, source_kind);
let syntax_errors =
print_syntax_errors(parsed.errors(), path, &locator, &transformed);
let syntax_errors = print_syntax_errors(parsed.errors(), path, &transformed);

panic!(
"Fixed source has a syntax error where the source document does not. This is a bug in one of the generated fixes:
@@ -280,9 +279,9 @@ Either ensure you always emit a fix or change `Violation::FIX_AVAILABILITY` to e

// Not strictly necessary but adds some coverage for this code path by overriding the
// noqa offset and the source file
let range = diagnostic.range();
diagnostic.noqa_offset = Some(directives.noqa_line_for.resolve(range.start()));
if let Some(annotation) = diagnostic.diagnostic.primary_annotation_mut() {
let range = diagnostic.expect_range();
diagnostic.set_noqa_offset(directives.noqa_line_for.resolve(range.start()));
if let Some(annotation) = diagnostic.primary_annotation_mut() {
annotation.set_span(
ruff_db::diagnostic::Span::from(source_code.clone()).with_range(range),
);
@@ -291,26 +290,21 @@ Either ensure you always emit a fix or change `Violation::FIX_AVAILABILITY` to e
diagnostic
})
.chain(parsed.errors().iter().map(|parse_error| {
OldDiagnostic::from_parse_error(parse_error, &locator, source_code.clone())
create_syntax_error_diagnostic(source_code.clone(), &parse_error.error, parse_error)
}))
.sorted()
.collect();
(messages, transformed)
}

fn print_syntax_errors(
errors: &[ParseError],
path: &Path,
locator: &Locator,
source: &SourceKind,
) -> String {
fn print_syntax_errors(errors: &[ParseError], path: &Path, source: &SourceKind) -> String {
let filename = path.file_name().unwrap().to_string_lossy();
let source_file = SourceFileBuilder::new(filename.as_ref(), source.source_code()).finish();

let messages: Vec<_> = errors
.iter()
.map(|parse_error| {
OldDiagnostic::from_parse_error(parse_error, locator, source_file.clone())
create_syntax_error_diagnostic(source_file.clone(), &parse_error.error, parse_error)
})
.collect();

@@ -321,12 +315,8 @@ fn print_syntax_errors(
}
}

/// Print the [`Message::Diagnostic`]s in `messages`.
fn print_diagnostics(
mut diagnostics: Vec<OldDiagnostic>,
path: &Path,
source: &SourceKind,
) -> String {
/// Print the lint diagnostics in `diagnostics`.
fn print_diagnostics(mut diagnostics: Vec<Diagnostic>, path: &Path, source: &SourceKind) -> String {
diagnostics.retain(|msg| !msg.is_syntax_error());

if let Some(notebook) = source.as_ipy_notebook() {
@@ -337,7 +327,7 @@ fn print_diagnostics(
}

pub(crate) fn print_jupyter_messages(
diagnostics: &[OldDiagnostic],
diagnostics: &[Diagnostic],
path: &Path,
notebook: &Notebook,
) -> String {
@@ -361,7 +351,7 @@ pub(crate) fn print_jupyter_messages(
String::from_utf8(output).unwrap()
}

pub(crate) fn print_messages(diagnostics: &[OldDiagnostic]) -> String {
pub(crate) fn print_messages(diagnostics: &[Diagnostic]) -> String {
let mut output = Vec::new();

TextEmitter::default()
@@ -42,6 +42,12 @@ impl From<LexicalError> for ParseError {
}
}

impl Ranged for ParseError {
fn range(&self) -> TextRange {
self.location
}
}

impl ParseError {
pub fn error(self) -> ParseErrorType {
self.error

@@ -981,6 +981,12 @@ impl Display for SemanticSyntaxError {
}
}

impl Ranged for SemanticSyntaxError {
fn range(&self) -> TextRange {
self.range
}
}

#[derive(Debug, Clone, PartialEq, Eq, Hash, get_size2::GetSize)]
pub enum SemanticSyntaxErrorKind {
/// Represents the use of a `__future__` import after the beginning of a file.
@@ -13,6 +13,7 @@ license = { workspace = true }
[lib]

[dependencies]
ruff_db = { workspace = true }
ruff_diagnostics = { workspace = true }
ruff_formatter = { workspace = true }
ruff_linter = { workspace = true }
@@ -9,13 +9,13 @@ use crate::{
resolve::is_document_excluded_for_linting,
session::DocumentQuery,
};
use ruff_db::diagnostic::Diagnostic;
use ruff_diagnostics::{Applicability, Edit, Fix};
use ruff_linter::{
Locator,
directives::{Flags, extract_directives},
generate_noqa_edits,
linter::check_path,
message::OldDiagnostic,
package::PackageRoot,
packaging::detect_package_root,
settings::flags,
@@ -228,13 +228,13 @@ pub(crate) fn fixes_for_diagnostics(
/// Generates an LSP diagnostic with an associated cell index for the diagnostic to go in.
/// If the source kind is a text document, the cell index will always be `0`.
fn to_lsp_diagnostic(
diagnostic: &OldDiagnostic,
diagnostic: &Diagnostic,
noqa_edit: Option<Edit>,
source_kind: &SourceKind,
index: &LineIndex,
encoding: PositionEncoding,
) -> (usize, lsp_types::Diagnostic) {
let diagnostic_range = diagnostic.range();
let diagnostic_range = diagnostic.expect_range();
let name = diagnostic.name();
let body = diagnostic.body().to_string();
let fix = diagnostic.fix();
@@ -1,6 +1,6 @@
[package]
name = "ruff_wasm"
version = "0.12.1"
version = "0.12.2"
publish = false
authors = { workspace = true }
edition = { workspace = true }

@@ -210,8 +210,8 @@ impl Workspace {
.map(|msg| ExpandedMessage {
code: msg.secondary_code().map(ToString::to_string),
message: msg.body().to_string(),
start_location: source_code.line_column(msg.start()).into(),
end_location: source_code.line_column(msg.end()).into(),
start_location: source_code.line_column(msg.expect_range().start()).into(),
end_location: source_code.line_column(msg.expect_range().end()).into(),
fix: msg.fix().map(|fix| ExpandedFix {
message: msg.suggestion().map(ToString::to_string),
edits: fix
@@ -1996,7 +1996,8 @@ pub struct Flake8TidyImportsOptions {

/// List of specific modules that may not be imported at module level, and should instead be
/// imported lazily (e.g., within a function definition, or an `if TYPE_CHECKING:`
/// block, or some other nested context).
/// block, or some other nested context). This also affects the rule `import-outside-top-level`
/// if `banned-module-level-imports` is enabled.
#[option(
default = r#"[]"#,
value_type = r#"list[str]"#,
@@ -3586,7 +3587,7 @@ pub struct FormatOptions {
/// Setting `skip-magic-trailing-comma = true` changes the formatting to:
///
/// ```python
/// # The arguments remain on separate lines because of the trailing comma after `b`
/// # The arguments are collapsed to a single line because the trailing comma is ignored
/// def test(a, b):
/// pass
/// ```
@@ -5,7 +5,7 @@ use ruff_db::parsed::{ParsedModuleRef, parsed_module};
|
||||
use ruff_python_ast as ast;
|
||||
use ruff_python_parser::{Token, TokenAt, TokenKind};
|
||||
use ruff_text_size::{Ranged, TextRange, TextSize};
|
||||
use ty_python_semantic::{Completion, SemanticModel};
|
||||
use ty_python_semantic::{Completion, NameKind, SemanticModel};
|
||||
|
||||
use crate::Db;
|
||||
use crate::find_node::covering_node;
|
||||
@@ -325,38 +325,7 @@ fn import_from_tokens(tokens: &[Token]) -> Option<&Token> {
|
||||
/// This has the effect of putting all dunder attributes after "normal"
|
||||
/// attributes, and all single-underscore attributes after dunder attributes.
|
||||
fn compare_suggestions(c1: &Completion, c2: &Completion) -> Ordering {
|
||||
/// A helper type for sorting completions based only on name.
|
||||
///
|
||||
/// This sorts "normal" names first, then dunder names and finally
|
||||
/// single-underscore names. This matches the order of the variants defined for
|
||||
/// this enum, which is in turn picked up by the derived trait implementation
|
||||
/// for `Ord`.
|
||||
#[derive(Clone, Copy, Eq, PartialEq, PartialOrd, Ord)]
|
||||
enum Kind {
|
||||
Normal,
|
||||
Dunder,
|
||||
Sunder,
|
||||
}
|
||||
|
||||
impl Kind {
|
||||
fn classify(c: &Completion) -> Kind {
|
||||
// Dunder needs a prefix and suffix double underscore.
|
||||
// When there's only a prefix double underscore, this
|
||||
// results in explicit name mangling. We let that be
|
||||
// classified as-if they were single underscore names.
|
||||
//
|
||||
// Ref: <https://docs.python.org/3/reference/lexical_analysis.html#reserved-classes-of-identifiers>
|
||||
if c.name.starts_with("__") && c.name.ends_with("__") {
|
||||
Kind::Dunder
|
||||
} else if c.name.starts_with('_') {
|
||||
Kind::Sunder
|
||||
} else {
|
||||
Kind::Normal
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
let (kind1, kind2) = (Kind::classify(c1), Kind::classify(c2));
|
||||
let (kind1, kind2) = (NameKind::classify(&c1.name), NameKind::classify(&c2.name));
|
||||
kind1.cmp(&kind2).then_with(|| c1.name.cmp(&c2.name))
|
||||
}
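The ordering this comparison implements can be sketched in Python. The `name_kind` helper below is an illustrative stand-in for the classification, not the actual ty API:

```python
# "Normal" names sort first, then dunders (`__x__`), then
# single-underscore and name-mangled names (`_x`, `__x`).
def name_kind(name: str) -> int:
    if name.startswith("__") and name.endswith("__"):
        return 1  # dunder
    if name.startswith("_"):
        return 2  # sunder (includes mangled `__x` without a trailing `__`)
    return 0  # normal


names = ["_T", "__init__", "filter", "__mangled", "map"]
# Sort by kind first, then alphabetically within each kind.
names.sort(key=lambda n: (name_kind(n), n))
print(names)  # → ['filter', 'map', '__init__', '_T', '__mangled']
```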

@@ -472,6 +441,11 @@ mod tests {
        ",
        );
        test.assert_completions_include("filter");
        // Sunder items should be filtered out
        test.assert_completions_do_not_include("_T");
        // Dunder attributes should not be stripped
        test.assert_completions_include("__annotations__");
        // See `private_symbols_in_stub` for more comprehensive testing of private symbol filtering.
    }

    #[test]
@@ -536,6 +510,112 @@ re.<CURSOR>
        test.assert_completions_include("findall");
    }

    #[test]
    fn private_symbols_in_stub() {
        let test = CursorTest::builder()
            .source(
                "package/__init__.pyi",
                r#"\
from typing import TypeAlias, Literal, TypeVar, ParamSpec, TypeVarTuple, Protocol

public_name = 1
_private_name = 1
__mangled_name = 1
__dunder_name__ = 1

public_type_var = TypeVar("public_type_var")
_private_type_var = TypeVar("_private_type_var")
__mangled_type_var = TypeVar("__mangled_type_var")

public_param_spec = ParamSpec("public_param_spec")
_private_param_spec = ParamSpec("_private_param_spec")

public_type_var_tuple = TypeVarTuple("public_type_var_tuple")
_private_type_var_tuple = TypeVarTuple("_private_type_var_tuple")

public_explicit_type_alias: TypeAlias = Literal[1]
_private_explicit_type_alias: TypeAlias = Literal[1]

class PublicProtocol(Protocol):
    def method(self) -> None: ...

class _PrivateProtocol(Protocol):
    def method(self) -> None: ...
"#,
            )
            .source("main.py", "import package; package.<CURSOR>")
            .build();
        test.assert_completions_include("public_name");
        test.assert_completions_include("_private_name");
        test.assert_completions_include("__mangled_name");
        test.assert_completions_include("__dunder_name__");
        test.assert_completions_include("public_type_var");
        test.assert_completions_do_not_include("_private_type_var");
        test.assert_completions_do_not_include("__mangled_type_var");
        test.assert_completions_include("public_param_spec");
        test.assert_completions_do_not_include("_private_param_spec");
        test.assert_completions_include("public_type_var_tuple");
        test.assert_completions_do_not_include("_private_type_var_tuple");
        test.assert_completions_include("public_explicit_type_alias");
        test.assert_completions_include("_private_explicit_type_alias");
        test.assert_completions_include("PublicProtocol");
        test.assert_completions_do_not_include("_PrivateProtocol");
    }

    /// Unlike [`private_symbols_in_stub`], this test doesn't use a `.pyi` file, so all of the names
    /// are visible.
    #[test]
    fn private_symbols_in_module() {
        let test = CursorTest::builder()
            .source(
                "package/__init__.py",
                r#"\
from typing import TypeAlias, Literal, TypeVar, ParamSpec, TypeVarTuple, Protocol

public_name = 1
_private_name = 1
__mangled_name = 1
__dunder_name__ = 1

public_type_var = TypeVar("public_type_var")
_private_type_var = TypeVar("_private_type_var")
__mangled_type_var = TypeVar("__mangled_type_var")

public_param_spec = ParamSpec("public_param_spec")
_private_param_spec = ParamSpec("_private_param_spec")

public_type_var_tuple = TypeVarTuple("public_type_var_tuple")
_private_type_var_tuple = TypeVarTuple("_private_type_var_tuple")

public_explicit_type_alias: TypeAlias = Literal[1]
_private_explicit_type_alias: TypeAlias = Literal[1]

class PublicProtocol(Protocol):
    def method(self) -> None: ...

class _PrivateProtocol(Protocol):
    def method(self) -> None: ...
"#,
            )
            .source("main.py", "import package; package.<CURSOR>")
            .build();
        test.assert_completions_include("public_name");
        test.assert_completions_include("_private_name");
        test.assert_completions_include("__mangled_name");
        test.assert_completions_include("__dunder_name__");
        test.assert_completions_include("public_type_var");
        test.assert_completions_include("_private_type_var");
        test.assert_completions_include("__mangled_type_var");
        test.assert_completions_include("public_param_spec");
        test.assert_completions_include("_private_param_spec");
        test.assert_completions_include("public_type_var_tuple");
        test.assert_completions_include("_private_type_var_tuple");
        test.assert_completions_include("public_explicit_type_alias");
        test.assert_completions_include("_private_explicit_type_alias");
        test.assert_completions_include("PublicProtocol");
        test.assert_completions_include("_PrivateProtocol");
    }

    #[test]
    fn one_function_prefix() {
        let test = cursor_test(

@@ -5,10 +5,7 @@ pub use db::{CheckMode, Db, ProjectDatabase, SalsaMemoryDump};
use files::{Index, Indexed, IndexedFiles};
use metadata::settings::Settings;
pub use metadata::{ProjectMetadata, ProjectMetadataError};
use ruff_db::diagnostic::{
    Annotation, Diagnostic, DiagnosticId, Severity, Span, SubDiagnostic, create_parse_diagnostic,
    create_unsupported_syntax_diagnostic,
};
use ruff_db::diagnostic::{Annotation, Diagnostic, DiagnosticId, Severity, Span, SubDiagnostic};
use ruff_db::files::File;
use ruff_db::parsed::parsed_module;
use ruff_db::source::{SourceTextError, source_text};
@@ -503,11 +500,11 @@ impl Project {
    parsed_ref
        .errors()
        .iter()
        .map(|error| create_parse_diagnostic(file, error)),
        .map(|error| Diagnostic::syntax_error(file, &error.error, error)),
);

diagnostics.extend(parsed_ref.unsupported_syntax_errors().iter().map(|error| {
    let mut error = create_unsupported_syntax_diagnostic(file, error);
    let mut error = Diagnostic::syntax_error(file, error, error);
    add_inferred_python_version_hint_to_diagnostic(db, &mut error, "parsing syntax");
    error
}));

@@ -343,7 +343,7 @@ def _(c: Callable[[int, Unpack[Ts]], int]):
from typing import Callable

def _(c: Callable[[int], int]):
    reveal_type(c.__init__)  # revealed: def __init__(self) -> None
    reveal_type(c.__init__)  # revealed: bound method object.__init__() -> None
    reveal_type(c.__class__)  # revealed: type
    reveal_type(c.__call__)  # revealed: (int, /) -> int
```

@@ -1574,7 +1574,7 @@ class B(Any): ...
class C(B, A): ...

reveal_type(C.__mro__)  # revealed: tuple[<class 'C'>, <class 'B'>, Any, <class 'A'>, <class 'object'>]
reveal_type(C.x)  # revealed: Literal[1] & Any
reveal_type(C.x)  # revealed: @Todo(Type::Intersection.call())
```

## Classes with custom `__getattr__` methods

@@ -201,6 +201,36 @@ type IntOrStr = int | str
reveal_type(IntOrStr.__or__)  # revealed: bound method typing.TypeAliasType.__or__(right: Any) -> _SpecialForm
```

## Method calls on types not disjoint from `None`

Very few methods are defined on `object`, `None`, and other types not disjoint from `None`. However,
descriptor-binding behaviour works on these types in exactly the same way as descriptor binding on
other types. This is despite the fact that `None` is used as a sentinel internally by the descriptor
protocol to indicate that a method was accessed on the class itself rather than an instance of the
class:

```py
from typing import Protocol, Literal
from ty_extensions import AlwaysFalsy

class Foo: ...

class SupportsStr(Protocol):
    def __str__(self) -> str: ...

class Falsy(Protocol):
    def __bool__(self) -> Literal[False]: ...

def _(a: object, b: SupportsStr, c: Falsy, d: AlwaysFalsy, e: None, f: Foo | None):
    a.__str__()
    b.__str__()
    c.__str__()
    d.__str__()
    # TODO: these should not error
    e.__str__()  # error: [missing-argument]
    f.__str__()  # error: [missing-argument]
```
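The `None`-as-sentinel behaviour described above is observable at runtime. A minimal sketch (the `Greeter` class is hypothetical):

```python
# When a function attribute is looked up on the class itself, the
# descriptor protocol passes `None` as the instance argument to
# `__get__`; on an instance, it passes the instance itself.
class Greeter:
    def greet(self) -> str:
        return "hi"


# Class access: instance is None, so `__get__` returns the plain function.
plain = Greeter.__dict__["greet"].__get__(None, Greeter)
# Instance access: binding produces a bound method.
bound = Greeter.__dict__["greet"].__get__(Greeter(), Greeter)

print(plain(Greeter()))  # → hi
print(bound())  # → hi
```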

## Error cases: Calling `__get__` for methods

The `__get__` method on `types.FunctionType` has the following overloaded signature in typeshed:
@@ -234,16 +264,18 @@ method_wrapper(C())
method_wrapper(C(), None)
method_wrapper(None, C)

# Passing `None` without an `owner` argument is an
# error: [invalid-argument-type] "Argument to method wrapper `__get__` of function `f` is incorrect: Expected `~None`, found `None`"
reveal_type(object.__str__.__get__(object(), None)())  # revealed: str

# TODO: passing `None` without an `owner` argument fails at runtime.
# Ideally we would emit a diagnostic here:
method_wrapper(None)

# Passing something that is not assignable to `type` as the `owner` argument is an
# error: [no-matching-overload] "No overload of method wrapper `__get__` of function `f` matches arguments"
method_wrapper(None, 1)

# Passing `None` as the `owner` argument when `instance` is `None` is an
# error: [no-matching-overload] "No overload of method wrapper `__get__` of function `f` matches arguments"
# TODO: passing `None` as the `owner` argument when `instance` is `None` fails at runtime.
# Ideally we would emit a diagnostic here.
method_wrapper(None, None)

# Calling `__get__` without any arguments is an

@@ -619,8 +619,9 @@ wrapper_descriptor()
# error: [no-matching-overload] "No overload of wrapper descriptor `FunctionType.__get__` matches arguments"
wrapper_descriptor(f)

# Calling it without the `owner` argument if `instance` is not `None` is an
# error: [invalid-argument-type] "Argument to wrapper descriptor `FunctionType.__get__` is incorrect: Expected `~None`, found `None`"
# TODO: Calling it without the `owner` argument if `instance` is not `None` fails at runtime.
# Ideally we would emit a diagnostic here,
# but this is hard to model without introducing false positives elsewhere
wrapper_descriptor(f, None)

# But calling it with an instance is fine (in this case, the `owner` argument is optional):

@@ -2,27 +2,53 @@

## Basic functionality

<!-- snapshot-diagnostics -->
`assert_never` makes sure that the type of the argument is `Never`.

`assert_never` makes sure that the type of the argument is `Never`. If it is not, a
`type-assertion-failure` diagnostic is emitted.
### Correct usage

```py
from typing_extensions import assert_never, Never, Any
from ty_extensions import Unknown

def _(never: Never, any_: Any, unknown: Unknown, flag: bool):
def _(never: Never):
    assert_never(never)  # fine
```

### Diagnostics

<!-- snapshot-diagnostics -->

If it is not, a `type-assertion-failure` diagnostic is emitted.

```py
from typing_extensions import assert_never, Never, Any
from ty_extensions import Unknown

def _():
    assert_never(0)  # error: [type-assertion-failure]

def _():
    assert_never("")  # error: [type-assertion-failure]

def _():
    assert_never(None)  # error: [type-assertion-failure]

def _():
    assert_never([])  # error: [type-assertion-failure]

def _():
    assert_never({})  # error: [type-assertion-failure]

def _():
    assert_never(())  # error: [type-assertion-failure]

def _(flag: bool, never: Never):
    assert_never(1 if flag else never)  # error: [type-assertion-failure]

def _(any_: Any):
    assert_never(any_)  # error: [type-assertion-failure]

def _(unknown: Unknown):
    assert_never(unknown)  # error: [type-assertion-failure]
```
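At runtime, `assert_never` raises when it is actually reached, which is why a type checker insists its argument be `Never`. Below is a minimal stand-in mirroring `typing.assert_never` (added to the standard library in Python 3.11); the `describe` function is a hypothetical example:

```python
from typing import NoReturn


def assert_never(value: object) -> NoReturn:
    # Mirrors the runtime behaviour of `typing.assert_never`:
    # reaching this call is a bug, so it always raises.
    raise AssertionError(f"Expected code to be unreachable, got: {value!r}")


def describe(x: "int | str") -> str:
    if isinstance(x, int):
        return "int"
    if isinstance(x, str):
        return "str"
    # Statically, `x` is narrowed to `Never` here, so a type checker
    # accepts this call; at runtime it raises if ever reached.
    assert_never(x)


print(describe(3))  # → int
print(describe("hi"))  # → str
```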

@@ -205,3 +205,18 @@ python-version = "3.13"
import aifc  # error: [unresolved-import]
from distutils import sysconfig  # error: [unresolved-import]
```

## Cannot shadow core standard library modules

`types.py`:

```py
x: int
```

```py
# error: [unresolved-import]
from types import x

from types import FunctionType
```
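For context on why `types` is treated as a core module here: the real standard-library `types` module supplies classes that the interpreter and tooling rely on, so imports from it resolve to the stdlib rather than a local `types.py`. A small illustration:

```python
# The standard-library `types` module provides the classes behind
# ordinary runtime objects, e.g. functions and modules.
from types import FunctionType, ModuleType

import math


def f() -> None: ...


print(isinstance(f, FunctionType))  # → True
print(isinstance(math, ModuleType))  # → True
```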

@@ -158,15 +158,13 @@ from nonexistent_module import UnknownClass  # error: [unresolved-import]

class C(UnknownClass): ...

# TODO: should be `type[type] & Unknown`
reveal_type(C.__class__)  # revealed: <class 'type'>
reveal_type(C.__class__)  # revealed: type[type] & Unknown

class M(type): ...
class A(metaclass=M): ...
class B(A, UnknownClass): ...

# TODO: should be `type[M] & Unknown`
reveal_type(B.__class__)  # revealed: <class 'M'>
reveal_type(B.__class__)  # revealed: type[M] & Unknown
```

## Duplicate
@@ -176,7 +174,7 @@ class M(type): ...
class A(metaclass=M): ...
class B(A, A): ...  # error: [duplicate-base] "Duplicate base class `A`"

reveal_type(B.__class__)  # revealed: <class 'M'>
reveal_type(B.__class__)  # revealed: type[M] & Unknown
```

## Non-class

@@ -639,16 +639,16 @@ python-version = "3.13"

```pyi
class C(C.a): ...
reveal_type(C.__class__)  # revealed: <class 'type'>
reveal_type(C.__class__)  # revealed: type[type] & Unknown
reveal_type(C.__mro__)  # revealed: tuple[<class 'C'>, Unknown, <class 'object'>]

class D(D.a):
    a: D
reveal_type(D.__class__)  # revealed: <class 'type'>
reveal_type(D.__class__)  # revealed: type[type] & Unknown
reveal_type(D.__mro__)  # revealed: tuple[<class 'D'>, Unknown, <class 'object'>]

class E[T](E.a): ...
reveal_type(E.__class__)  # revealed: <class 'type'>
reveal_type(E.__class__)  # revealed: type[type] & Unknown
reveal_type(E.__mro__)  # revealed: tuple[<class 'E[Unknown]'>, Unknown, typing.Generic, <class 'object'>]

class F[T](F(), F): ...  # error: [cyclic-class-definition]

@@ -1862,6 +1862,21 @@ class Bar(Protocol):
static_assert(is_equivalent_to(Foo, Bar))
```

### Disjointness of recursive protocol and recursive final type

```py
from typing import Protocol
from ty_extensions import is_disjoint_from, static_assert

class Proto(Protocol):
    x: "Proto"

class Nominal:
    x: "Nominal"

static_assert(not is_disjoint_from(Proto, Nominal))
```
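The structural-compatibility idea behind the `static_assert` above can be sketched with the standard `typing.Protocol`. This sketch uses `runtime_checkable`, which only checks attribute presence at runtime, and simplifies the recursive member to `int`:

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class Proto(Protocol):
    x: int


class Nominal:
    def __init__(self) -> None:
        self.x = 1


# `Nominal` matches `Proto`'s shape, so the two types are not disjoint:
# an instance of `Nominal` satisfies the protocol.
print(isinstance(Nominal(), Proto))  # → True
```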

### Regression test: narrowing with self-referential protocols

This snippet caused a panic in an early version of the protocol implementation.

@@ -3,7 +3,7 @@ source: crates/ty_test/src/lib.rs
expression: snapshot
---
---
mdtest name: assert_never.md - `assert_never` - Basic functionality
mdtest name: assert_never.md - `assert_never` - Basic functionality - Diagnostics
mdtest path: crates/ty_python_semantic/resources/mdtest/directives/assert_never.md
---

@@ -15,35 +15,47 @@ mdtest path: crates/ty_python_semantic/resources/mdtest/directives/assert_never.
 1 | from typing_extensions import assert_never, Never, Any
 2 | from ty_extensions import Unknown
 3 |
 4 | def _(never: Never, any_: Any, unknown: Unknown, flag: bool):
 5 |     assert_never(never)  # fine
 4 | def _():
 5 |     assert_never(0)  # error: [type-assertion-failure]
 6 |
 7 |     assert_never(0)  # error: [type-assertion-failure]
 7 | def _():
 8 |     assert_never("")  # error: [type-assertion-failure]
 9 |     assert_never(None)  # error: [type-assertion-failure]
10 |     assert_never([])  # error: [type-assertion-failure]
11 |     assert_never({})  # error: [type-assertion-failure]
12 |     assert_never(())  # error: [type-assertion-failure]
13 |     assert_never(1 if flag else never)  # error: [type-assertion-failure]
14 |
15 |     assert_never(any_)  # error: [type-assertion-failure]
16 |     assert_never(unknown)  # error: [type-assertion-failure]
 9 |
10 | def _():
11 |     assert_never(None)  # error: [type-assertion-failure]
12 |
13 | def _():
14 |     assert_never([])  # error: [type-assertion-failure]
15 |
16 | def _():
17 |     assert_never({})  # error: [type-assertion-failure]
18 |
19 | def _():
20 |     assert_never(())  # error: [type-assertion-failure]
21 |
22 | def _(flag: bool, never: Never):
23 |     assert_never(1 if flag else never)  # error: [type-assertion-failure]
24 |
25 | def _(any_: Any):
26 |     assert_never(any_)  # error: [type-assertion-failure]
27 |
28 | def _(unknown: Unknown):
29 |     assert_never(unknown)  # error: [type-assertion-failure]
```

# Diagnostics

```
error[type-assertion-failure]: Argument does not have asserted type `Never`
--> src/mdtest_snippet.py:7:5
--> src/mdtest_snippet.py:5:5
 |
5 |     assert_never(never)  # fine
6 |
7 |     assert_never(0)  # error: [type-assertion-failure]
4 | def _():
5 |     assert_never(0)  # error: [type-assertion-failure]
 |     ^^^^^^^^^^^^^-^
 |     |
 |     Inferred type of argument is `Literal[0]`
8 |     assert_never("")  # error: [type-assertion-failure]
9 |     assert_never(None)  # error: [type-assertion-failure]
6 |
7 | def _():
 |
info: `Never` and `Literal[0]` are not equivalent types
info: rule `type-assertion-failure` is enabled by default
@@ -54,13 +66,13 @@ info: rule `type-assertion-failure` is enabled by default
error[type-assertion-failure]: Argument does not have asserted type `Never`
--> src/mdtest_snippet.py:8:5
 |
 7 |     assert_never(0)  # error: [type-assertion-failure]
 7 | def _():
 8 |     assert_never("")  # error: [type-assertion-failure]
   |     ^^^^^^^^^^^^^--^
   |     |
   |     Inferred type of argument is `Literal[""]`
 9 |     assert_never(None)  # error: [type-assertion-failure]
10 |     assert_never([])  # error: [type-assertion-failure]
 9 |
10 | def _():
   |
info: `Never` and `Literal[""]` are not equivalent types
info: rule `type-assertion-failure` is enabled by default
@@ -69,16 +81,15 @@ info: rule `type-assertion-failure` is enabled by default

```
error[type-assertion-failure]: Argument does not have asserted type `Never`
--> src/mdtest_snippet.py:9:5
--> src/mdtest_snippet.py:11:5
   |
 7 |     assert_never(0)  # error: [type-assertion-failure]
 8 |     assert_never("")  # error: [type-assertion-failure]
 9 |     assert_never(None)  # error: [type-assertion-failure]
10 | def _():
11 |     assert_never(None)  # error: [type-assertion-failure]
   |     ^^^^^^^^^^^^^----^
   |     |
   |     Inferred type of argument is `None`
10 |     assert_never([])  # error: [type-assertion-failure]
11 |     assert_never({})  # error: [type-assertion-failure]
12 |
13 | def _():
   |
info: `Never` and `None` are not equivalent types
info: rule `type-assertion-failure` is enabled by default
@@ -87,16 +98,15 @@ info: rule `type-assertion-failure` is enabled by default

```
error[type-assertion-failure]: Argument does not have asserted type `Never`
--> src/mdtest_snippet.py:10:5
--> src/mdtest_snippet.py:14:5
   |
 8 |     assert_never("")  # error: [type-assertion-failure]
 9 |     assert_never(None)  # error: [type-assertion-failure]
10 |     assert_never([])  # error: [type-assertion-failure]
13 | def _():
14 |     assert_never([])  # error: [type-assertion-failure]
   |     ^^^^^^^^^^^^^--^
   |     |
   |     Inferred type of argument is `list[Unknown]`
11 |     assert_never({})  # error: [type-assertion-failure]
12 |     assert_never(())  # error: [type-assertion-failure]
15 |
16 | def _():
   |
info: `Never` and `list[Unknown]` are not equivalent types
info: rule `type-assertion-failure` is enabled by default
@@ -105,16 +115,15 @@ info: rule `type-assertion-failure` is enabled by default

```
error[type-assertion-failure]: Argument does not have asserted type `Never`
--> src/mdtest_snippet.py:11:5
--> src/mdtest_snippet.py:17:5
   |
 9 |     assert_never(None)  # error: [type-assertion-failure]
10 |     assert_never([])  # error: [type-assertion-failure]
11 |     assert_never({})  # error: [type-assertion-failure]
16 | def _():
17 |     assert_never({})  # error: [type-assertion-failure]
   |     ^^^^^^^^^^^^^--^
   |     |
   |     Inferred type of argument is `dict[Unknown, Unknown]`
12 |     assert_never(())  # error: [type-assertion-failure]
13 |     assert_never(1 if flag else never)  # error: [type-assertion-failure]
18 |
19 | def _():
   |
info: `Never` and `dict[Unknown, Unknown]` are not equivalent types
info: rule `type-assertion-failure` is enabled by default
@@ -123,15 +132,15 @@ info: rule `type-assertion-failure` is enabled by default

```
error[type-assertion-failure]: Argument does not have asserted type `Never`
--> src/mdtest_snippet.py:12:5
--> src/mdtest_snippet.py:20:5
   |
10 |     assert_never([])  # error: [type-assertion-failure]
11 |     assert_never({})  # error: [type-assertion-failure]
12 |     assert_never(())  # error: [type-assertion-failure]
19 | def _():
20 |     assert_never(())  # error: [type-assertion-failure]
   |     ^^^^^^^^^^^^^--^
   |     |
   |     Inferred type of argument is `tuple[()]`
13 |     assert_never(1 if flag else never)  # error: [type-assertion-failure]
21 |
22 | def _(flag: bool, never: Never):
   |
info: `Never` and `tuple[()]` are not equivalent types
info: rule `type-assertion-failure` is enabled by default
@@ -140,16 +149,15 @@ info: rule `type-assertion-failure` is enabled by default

```
error[type-assertion-failure]: Argument does not have asserted type `Never`
--> src/mdtest_snippet.py:13:5
--> src/mdtest_snippet.py:23:5
   |
11 |     assert_never({})  # error: [type-assertion-failure]
12 |     assert_never(())  # error: [type-assertion-failure]
13 |     assert_never(1 if flag else never)  # error: [type-assertion-failure]
22 | def _(flag: bool, never: Never):
23 |     assert_never(1 if flag else never)  # error: [type-assertion-failure]
   |     ^^^^^^^^^^^^^--------------------^
   |     |
   |     Inferred type of argument is `Literal[1]`
14 |
15 |     assert_never(any_)  # error: [type-assertion-failure]
24 |
25 | def _(any_: Any):
   |
info: `Never` and `Literal[1]` are not equivalent types
info: rule `type-assertion-failure` is enabled by default
@@ -158,15 +166,15 @@ info: rule `type-assertion-failure` is enabled by default

```
error[type-assertion-failure]: Argument does not have asserted type `Never`
--> src/mdtest_snippet.py:15:5
--> src/mdtest_snippet.py:26:5
   |
13 |     assert_never(1 if flag else never)  # error: [type-assertion-failure]
14 |
15 |     assert_never(any_)  # error: [type-assertion-failure]
25 | def _(any_: Any):
26 |     assert_never(any_)  # error: [type-assertion-failure]
   |     ^^^^^^^^^^^^^----^
   |     |
   |     Inferred type of argument is `Any`
16 |     assert_never(unknown)  # error: [type-assertion-failure]
27 |
28 | def _(unknown: Unknown):
   |
info: `Never` and `Any` are not equivalent types
info: rule `type-assertion-failure` is enabled by default
@@ -175,10 +183,10 @@ info: rule `type-assertion-failure` is enabled by default

```
error[type-assertion-failure]: Argument does not have asserted type `Never`
--> src/mdtest_snippet.py:16:5
--> src/mdtest_snippet.py:29:5
   |
15 |     assert_never(any_)  # error: [type-assertion-failure]
16 |     assert_never(unknown)  # error: [type-assertion-failure]
28 | def _(unknown: Unknown):
29 |     assert_never(unknown)  # error: [type-assertion-failure]
   |     ^^^^^^^^^^^^^-------^
   |     |
   |     Inferred type of argument is `Unknown`
Some files were not shown because too many files have changed in this diff.