Compare commits

..

4 Commits

Author          SHA1        Message                                             Date
Charlie Marsh   9ae4fb3b9f  Add benches                                         2024-02-09 15:39:15 -05:00
Charlie Marsh   c67d68271d  Use shared finder                                   2024-02-09 15:39:15 -05:00
Charlie Marsh   56b148bb43  Box other strings                                   2024-02-09 15:39:15 -05:00
Charlie Marsh   0a5a4f6d92  Remove unnecessary string cloning from the parser   2024-02-09 15:39:15 -05:00
182 changed files with 2823 additions and 6838 deletions

View File

@@ -1,8 +0,0 @@
[profile.ci]
# Print out output for failing tests as soon as they fail, and also at the end
# of the run (for easy scrollability).
failure-output = "immediate-final"
# Do not cancel the test run on the first failure.
fail-fast = false
status-level = "skip"

View File

@@ -111,23 +111,13 @@ jobs:
- uses: actions/checkout@v4
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
uses: rui314/setup-mold@v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@v2
with:
tool: cargo-nextest
- name: "Install cargo insta"
uses: taiki-e/install-action@v2
with:
tool: cargo-insta
- uses: Swatinem/rust-cache@v2
- name: "Run tests"
shell: bash
env:
NEXTEST_PROFILE: "ci"
run: cargo insta test --all-features --unreferenced reject --test-runner nextest
run: cargo insta test --all --all-features --unreferenced reject
# Check for broken links in the documentation.
- run: cargo doc --all --no-deps
env:
@@ -148,16 +138,15 @@ jobs:
- uses: actions/checkout@v4
- name: "Install Rust toolchain"
run: rustup show
- name: "Install cargo nextest"
- name: "Install cargo insta"
uses: taiki-e/install-action@v2
with:
tool: cargo-nextest
tool: cargo-insta
- uses: Swatinem/rust-cache@v2
- name: "Run tests"
shell: bash
run: |
cargo nextest run --all-features --profile ci
cargo test --all-features --doc
# We can't reject unreferenced snapshots on windows because flake8_executable can't run on windows
run: cargo insta test --all --exclude ruff_dev --all-features
cargo-test-wasm:
name: "cargo test (wasm)"
@@ -418,7 +407,7 @@ jobs:
- uses: actions/setup-python@v5
- name: "Add SSH key"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
uses: webfactory/ssh-agent@v0.9.0
uses: webfactory/ssh-agent@v0.8.0
with:
ssh-private-key: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY }}
- name: "Install Rust toolchain"

View File

@@ -23,7 +23,7 @@ jobs:
- uses: actions/setup-python@v5
- name: "Add SSH key"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
uses: webfactory/ssh-agent@v0.9.0
uses: webfactory/ssh-agent@v0.8.0
with:
ssh-private-key: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY }}
- name: "Install Rust toolchain"

View File

@@ -26,10 +26,6 @@ Welcome! We're happy to have you here. Thank you in advance for your contributio
- [`cargo dev`](#cargo-dev)
- [Subsystems](#subsystems)
- [Compilation Pipeline](#compilation-pipeline)
- [Import Categorization](#import-categorization)
- [Project root](#project-root)
- [Package root](#package-root)
- [Import categorization](#import-categorization-1)
## The Basics
@@ -67,7 +63,7 @@ You'll also need [Insta](https://insta.rs/docs/) to update snapshot tests:
cargo install cargo-insta
```
And you'll need pre-commit to run some validation checks:
and pre-commit to run some validation checks:
```shell
pipx install pre-commit # or `pip install pre-commit` if you have a virtualenv
@@ -80,16 +76,6 @@ when making a commit:
pre-commit install
```
We recommend [nextest](https://nexte.st/) to run Ruff's test suite (via `cargo nextest run`),
though it's not strictly necessary:
```shell
cargo install cargo-nextest --locked
```
Throughout this guide, any usages of `cargo test` can be replaced with `cargo nextest run`,
if you choose to install `nextest`.
### Development
After cloning the repository, run Ruff locally from the repository root with:
@@ -387,11 +373,6 @@ We have several ways of benchmarking and profiling Ruff:
- Microbenchmarks which run the linter or the formatter on individual files. These run on pull requests.
- Profiling the linter on either the microbenchmarks or entire projects
> \[!NOTE\]
> When running benchmarks, ensure that your CPU is otherwise idle (e.g., close any background
> applications, like web browsers). You may also want to switch your CPU to a "performance"
> mode, if it exists, especially when benchmarking short-lived processes.
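For context, a microbenchmark of that shape could look roughly like the following sketch, assuming the `criterion` crate and a `[[bench]]` target with `harness = false`; `parse_source` here is a hypothetical stand-in for the real linter or parser entry point, not the bench added in this compare:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Hypothetical stand-in for the real work being measured (e.g. parsing or
// linting a single file); replace with the actual entry point under test.
fn parse_source(source: &str) -> usize {
    source.split_whitespace().count()
}

fn bench_single_file(c: &mut Criterion) {
    let source = "def foo():\n    return 1\n";
    c.bench_function("parse_single_file", |b| {
        // `black_box` keeps the compiler from optimizing the input away.
        b.iter(|| parse_source(black_box(source)))
    });
}

criterion_group!(benches, bench_single_file);
criterion_main!(benches);
```

Such a bench runs with `cargo bench` (or `cargo bench -p <crate>` to limit it to a single crate).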
### CPython Benchmark
First, clone [CPython](https://github.com/python/cpython). It's a large and diverse Python codebase,

Cargo.lock (generated): 659 changed lines

File diff suppressed because it is too large.

View File

@@ -21,7 +21,7 @@ bincode = { version = "1.3.3" }
bitflags = { version = "2.4.1" }
bstr = { version = "1.9.0" }
cachedir = { version = "0.3.1" }
chrono = { version = "0.4.34", default-features = false, features = ["clock"] }
chrono = { version = "0.4.33", default-features = false, features = ["clock"] }
clap = { version = "4.4.18", features = ["derive"] }
clap_complete_command = { version = "0.5.1" }
clearscreen = { version = "2.0.0" }
@@ -44,7 +44,7 @@ hexf-parse = { version ="0.2.1"}
ignore = { version = "0.4.22" }
imara-diff ={ version = "0.1.5"}
imperative = { version = "1.0.4" }
indicatif ={ version = "0.17.8"}
indicatif ={ version = "0.17.7"}
indoc ={ version = "2.0.4"}
insta = { version = "1.34.0", feature = ["filters", "glob"] }
insta-cmd = { version = "0.4.0" }
@@ -92,7 +92,7 @@ strum_macros = { version = "0.25.3" }
syn = { version = "2.0.40" }
tempfile = { version ="3.9.0"}
test-case = { version = "3.3.1" }
thiserror = { version = "1.0.57" }
thiserror = { version = "1.0.51" }
tikv-jemallocator = { version ="0.5.0"}
toml = { version = "0.8.9" }
tracing = { version = "0.1.40" }

View File

@@ -48,9 +48,7 @@ serde = { workspace = true }
serde_json = { workspace = true }
shellexpand = { workspace = true }
strum = { workspace = true, features = [] }
tempfile = { workspace = true }
thiserror = { workspace = true }
toml = { workspace = true }
tracing = { workspace = true, features = ["log"] }
walkdir = { workspace = true }
wild = { workspace = true }

View File

@@ -1,18 +1,12 @@
use std::cmp::Ordering;
use std::fmt::Formatter;
use std::ops::Deref;
use std::path::{Path, PathBuf};
use std::path::PathBuf;
use std::str::FromStr;
use std::sync::Arc;
use anyhow::bail;
use clap::builder::{TypedValueParser, ValueParserFactory};
use clap::{command, Parser};
use colored::Colorize;
use path_absolutize::path_dedot;
use regex::Regex;
use rustc_hash::FxHashMap;
use toml;
use ruff_linter::line_width::LineLength;
use ruff_linter::logging::LogLevel;
@@ -25,7 +19,7 @@ use ruff_linter::{warn_user, RuleParser, RuleSelector, RuleSelectorParser};
use ruff_source_file::{LineIndex, OneIndexed};
use ruff_text_size::TextRange;
use ruff_workspace::configuration::{Configuration, RuleSelection};
use ruff_workspace::options::{Options, PycodestyleOptions};
use ruff_workspace::options::PycodestyleOptions;
use ruff_workspace::resolver::ConfigurationTransformer;
#[derive(Debug, Parser)]
@@ -161,20 +155,10 @@ pub struct CheckCommand {
preview: bool,
#[clap(long, overrides_with("preview"), hide = true)]
no_preview: bool,
/// Either a path to a TOML configuration file (`pyproject.toml` or `ruff.toml`),
/// or a TOML `<KEY> = <VALUE>` pair
/// (such as you might find in a `ruff.toml` configuration file)
/// overriding a specific configuration option.
/// Overrides of individual settings using this option always take precedence
/// over all configuration files, including configuration files that were also
/// specified using `--config`.
#[arg(
long,
action = clap::ArgAction::Append,
value_name = "CONFIG_OPTION",
value_parser = ConfigArgumentParser,
)]
pub config: Vec<SingleConfigArgument>,
/// Path to the `pyproject.toml` or `ruff.toml` file to use for
/// configuration.
#[arg(long, conflicts_with = "isolated")]
pub config: Option<PathBuf>,
/// Comma-separated list of rule codes to enable (or ALL, to enable all rules).
#[arg(
long,
@@ -307,15 +291,7 @@ pub struct CheckCommand {
#[arg(short, long, env = "RUFF_NO_CACHE", help_heading = "Miscellaneous")]
pub no_cache: bool,
/// Ignore all configuration files.
//
// Note: We can't mark this as conflicting with `--config` here
// as `--config` can be used for specifying configuration overrides
// as well as configuration files.
// Specifying a configuration file conflicts with `--isolated`;
// specifying a configuration override does not.
// If a user specifies `ruff check --isolated --config=ruff.toml`,
// we emit an error later on, after the initial parsing by clap.
#[arg(long, help_heading = "Miscellaneous")]
#[arg(long, conflicts_with = "config", help_heading = "Miscellaneous")]
pub isolated: bool,
/// Path to the cache directory.
#[arg(long, env = "RUFF_CACHE_DIR", help_heading = "Miscellaneous")]
@@ -408,20 +384,9 @@ pub struct FormatCommand {
/// difference between the current file and how the formatted file would look like.
#[arg(long)]
pub diff: bool,
/// Either a path to a TOML configuration file (`pyproject.toml` or `ruff.toml`),
/// or a TOML `<KEY> = <VALUE>` pair
/// (such as you might find in a `ruff.toml` configuration file)
/// overriding a specific configuration option.
/// Overrides of individual settings using this option always take precedence
/// over all configuration files, including configuration files that were also
/// specified using `--config`.
#[arg(
long,
action = clap::ArgAction::Append,
value_name = "CONFIG_OPTION",
value_parser = ConfigArgumentParser,
)]
pub config: Vec<SingleConfigArgument>,
/// Path to the `pyproject.toml` or `ruff.toml` file to use for configuration.
#[arg(long, conflicts_with = "isolated")]
pub config: Option<PathBuf>,
/// Disable cache reads.
#[arg(short, long, env = "RUFF_NO_CACHE", help_heading = "Miscellaneous")]
@@ -463,15 +428,7 @@ pub struct FormatCommand {
#[arg(long, help_heading = "Format configuration")]
pub line_length: Option<LineLength>,
/// Ignore all configuration files.
//
// Note: We can't mark this as conflicting with `--config` here
// as `--config` can be used for specifying configuration overrides
// as well as configuration files.
// Specifying a configuration file conflicts with `--isolated`;
// specifying a configuration override does not.
// If a user specifies `ruff check --isolated --config=ruff.toml`,
// we emit an error later on, after the initial parsing by clap.
#[arg(long, help_heading = "Miscellaneous")]
#[arg(long, conflicts_with = "config", help_heading = "Miscellaneous")]
pub isolated: bool,
/// The name of the file when passing it through stdin.
#[arg(long, help_heading = "Miscellaneous")]
@@ -558,181 +515,101 @@ impl From<&LogLevelArgs> for LogLevel {
}
}
/// Configuration-related arguments passed via the CLI.
#[derive(Default)]
pub struct ConfigArguments {
/// Path to a pyproject.toml or ruff.toml configuration file (etc.).
/// Either 0 or 1 configuration file paths may be provided on the command line.
config_file: Option<PathBuf>,
/// Overrides provided via the `--config "KEY=VALUE"` option.
/// An arbitrary number of these overrides may be provided on the command line.
/// These overrides take precedence over all configuration files,
/// even configuration files that were also specified using `--config`.
overrides: Configuration,
/// Overrides provided via dedicated flags such as `--line-length` etc.
/// These overrides take precedence over all configuration files,
/// and also over all overrides specified using any `--config "KEY=VALUE"` flags.
per_flag_overrides: ExplicitConfigOverrides,
}
impl ConfigArguments {
pub fn config_file(&self) -> Option<&Path> {
self.config_file.as_deref()
}
fn from_cli_arguments(
config_options: Vec<SingleConfigArgument>,
per_flag_overrides: ExplicitConfigOverrides,
isolated: bool,
) -> anyhow::Result<Self> {
let mut new = Self {
per_flag_overrides,
..Self::default()
};
for option in config_options {
match option {
SingleConfigArgument::SettingsOverride(overridden_option) => {
let overridden_option = Arc::try_unwrap(overridden_option)
.unwrap_or_else(|option| option.deref().clone());
new.overrides = new.overrides.combine(Configuration::from_options(
overridden_option,
None,
&path_dedot::CWD,
)?);
}
SingleConfigArgument::FilePath(path) => {
if isolated {
bail!(
"\
The argument `--config={}` cannot be used with `--isolated`
tip: You cannot specify a configuration file and also specify `--isolated`,
as `--isolated` causes ruff to ignore all configuration files.
For more information, try `--help`.
",
path.display()
);
}
if let Some(ref config_file) = new.config_file {
let (first, second) = (config_file.display(), path.display());
bail!(
"\
You cannot specify more than one configuration file on the command line.
tip: remove either `--config={first}` or `--config={second}`.
For more information, try `--help`.
"
);
}
new.config_file = Some(path);
}
}
}
Ok(new)
}
}
impl ConfigurationTransformer for ConfigArguments {
fn transform(&self, config: Configuration) -> Configuration {
let with_config_overrides = self.overrides.clone().combine(config);
self.per_flag_overrides.transform(with_config_overrides)
}
}
impl CheckCommand {
/// Partition the CLI into command-line arguments and configuration
/// overrides.
pub fn partition(self) -> anyhow::Result<(CheckArguments, ConfigArguments)> {
let check_arguments = CheckArguments {
add_noqa: self.add_noqa,
diff: self.diff,
ecosystem_ci: self.ecosystem_ci,
exit_non_zero_on_fix: self.exit_non_zero_on_fix,
exit_zero: self.exit_zero,
files: self.files,
ignore_noqa: self.ignore_noqa,
isolated: self.isolated,
no_cache: self.no_cache,
output_file: self.output_file,
show_files: self.show_files,
show_settings: self.show_settings,
statistics: self.statistics,
stdin_filename: self.stdin_filename,
watch: self.watch,
};
let cli_overrides = ExplicitConfigOverrides {
dummy_variable_rgx: self.dummy_variable_rgx,
exclude: self.exclude,
extend_exclude: self.extend_exclude,
extend_fixable: self.extend_fixable,
extend_ignore: self.extend_ignore,
extend_per_file_ignores: self.extend_per_file_ignores,
extend_select: self.extend_select,
extend_unfixable: self.extend_unfixable,
fixable: self.fixable,
ignore: self.ignore,
line_length: self.line_length,
per_file_ignores: self.per_file_ignores,
preview: resolve_bool_arg(self.preview, self.no_preview).map(PreviewMode::from),
respect_gitignore: resolve_bool_arg(self.respect_gitignore, self.no_respect_gitignore),
select: self.select,
target_version: self.target_version,
unfixable: self.unfixable,
// TODO(charlie): Included in `pyproject.toml`, but not inherited.
cache_dir: self.cache_dir,
fix: resolve_bool_arg(self.fix, self.no_fix),
fix_only: resolve_bool_arg(self.fix_only, self.no_fix_only),
unsafe_fixes: resolve_bool_arg(self.unsafe_fixes, self.no_unsafe_fixes)
.map(UnsafeFixes::from),
force_exclude: resolve_bool_arg(self.force_exclude, self.no_force_exclude),
output_format: resolve_output_format(
self.output_format,
resolve_bool_arg(self.show_source, self.no_show_source),
resolve_bool_arg(self.preview, self.no_preview).unwrap_or_default(),
),
show_fixes: resolve_bool_arg(self.show_fixes, self.no_show_fixes),
extension: self.extension,
};
let config_args =
ConfigArguments::from_cli_arguments(self.config, cli_overrides, self.isolated)?;
Ok((check_arguments, config_args))
pub fn partition(self) -> (CheckArguments, CliOverrides) {
(
CheckArguments {
add_noqa: self.add_noqa,
config: self.config,
diff: self.diff,
ecosystem_ci: self.ecosystem_ci,
exit_non_zero_on_fix: self.exit_non_zero_on_fix,
exit_zero: self.exit_zero,
files: self.files,
ignore_noqa: self.ignore_noqa,
isolated: self.isolated,
no_cache: self.no_cache,
output_file: self.output_file,
show_files: self.show_files,
show_settings: self.show_settings,
statistics: self.statistics,
stdin_filename: self.stdin_filename,
watch: self.watch,
},
CliOverrides {
dummy_variable_rgx: self.dummy_variable_rgx,
exclude: self.exclude,
extend_exclude: self.extend_exclude,
extend_fixable: self.extend_fixable,
extend_ignore: self.extend_ignore,
extend_per_file_ignores: self.extend_per_file_ignores,
extend_select: self.extend_select,
extend_unfixable: self.extend_unfixable,
fixable: self.fixable,
ignore: self.ignore,
line_length: self.line_length,
per_file_ignores: self.per_file_ignores,
preview: resolve_bool_arg(self.preview, self.no_preview).map(PreviewMode::from),
respect_gitignore: resolve_bool_arg(
self.respect_gitignore,
self.no_respect_gitignore,
),
select: self.select,
target_version: self.target_version,
unfixable: self.unfixable,
// TODO(charlie): Included in `pyproject.toml`, but not inherited.
cache_dir: self.cache_dir,
fix: resolve_bool_arg(self.fix, self.no_fix),
fix_only: resolve_bool_arg(self.fix_only, self.no_fix_only),
unsafe_fixes: resolve_bool_arg(self.unsafe_fixes, self.no_unsafe_fixes)
.map(UnsafeFixes::from),
force_exclude: resolve_bool_arg(self.force_exclude, self.no_force_exclude),
output_format: resolve_output_format(
self.output_format,
resolve_bool_arg(self.show_source, self.no_show_source),
resolve_bool_arg(self.preview, self.no_preview).unwrap_or_default(),
),
show_fixes: resolve_bool_arg(self.show_fixes, self.no_show_fixes),
extension: self.extension,
},
)
}
}
impl FormatCommand {
/// Partition the CLI into command-line arguments and configuration
/// overrides.
pub fn partition(self) -> anyhow::Result<(FormatArguments, ConfigArguments)> {
let format_arguments = FormatArguments {
check: self.check,
diff: self.diff,
files: self.files,
isolated: self.isolated,
no_cache: self.no_cache,
stdin_filename: self.stdin_filename,
range: self.range,
};
pub fn partition(self) -> (FormatArguments, CliOverrides) {
(
FormatArguments {
check: self.check,
diff: self.diff,
config: self.config,
files: self.files,
isolated: self.isolated,
no_cache: self.no_cache,
stdin_filename: self.stdin_filename,
range: self.range,
},
CliOverrides {
line_length: self.line_length,
respect_gitignore: resolve_bool_arg(
self.respect_gitignore,
self.no_respect_gitignore,
),
exclude: self.exclude,
preview: resolve_bool_arg(self.preview, self.no_preview).map(PreviewMode::from),
force_exclude: resolve_bool_arg(self.force_exclude, self.no_force_exclude),
target_version: self.target_version,
cache_dir: self.cache_dir,
extension: self.extension,
let cli_overrides = ExplicitConfigOverrides {
line_length: self.line_length,
respect_gitignore: resolve_bool_arg(self.respect_gitignore, self.no_respect_gitignore),
exclude: self.exclude,
preview: resolve_bool_arg(self.preview, self.no_preview).map(PreviewMode::from),
force_exclude: resolve_bool_arg(self.force_exclude, self.no_force_exclude),
target_version: self.target_version,
cache_dir: self.cache_dir,
extension: self.extension,
// Unsupported on the formatter CLI, but required on `Overrides`.
..ExplicitConfigOverrides::default()
};
let config_args =
ConfigArguments::from_cli_arguments(self.config, cli_overrides, self.isolated)?;
Ok((format_arguments, config_args))
// Unsupported on the formatter CLI, but required on `Overrides`.
..CliOverrides::default()
},
)
}
}
@@ -745,154 +622,6 @@ fn resolve_bool_arg(yes: bool, no: bool) -> Option<bool> {
}
}
#[derive(Debug)]
enum TomlParseFailureKind {
SyntaxError,
UnknownOption,
}
impl std::fmt::Display for TomlParseFailureKind {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
let display = match self {
Self::SyntaxError => "The supplied argument is not valid TOML",
Self::UnknownOption => {
"Could not parse the supplied argument as a `ruff.toml` configuration option"
}
};
write!(f, "{display}")
}
}
#[derive(Debug)]
struct TomlParseFailure {
kind: TomlParseFailureKind,
underlying_error: toml::de::Error,
}
impl std::fmt::Display for TomlParseFailure {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
let TomlParseFailure {
kind,
underlying_error,
} = self;
let display = format!("{kind}:\n\n{underlying_error}");
write!(f, "{}", display.trim_end())
}
}
/// Enumeration to represent a single `--config` argument
/// passed via the CLI.
///
/// Using the `--config` flag, users may pass 0 or 1 paths
/// to configuration files and an arbitrary number of
/// "inline TOML" overrides for specific settings.
///
/// For example:
///
/// ```sh
/// ruff check --config "path/to/ruff.toml" --config "extend-select=['E501', 'F841']" --config "lint.per-file-ignores = {'some_file.py' = ['F841']}"
/// ```
#[derive(Clone, Debug)]
pub enum SingleConfigArgument {
FilePath(PathBuf),
SettingsOverride(Arc<Options>),
}
#[derive(Clone)]
pub struct ConfigArgumentParser;
impl ValueParserFactory for SingleConfigArgument {
type Parser = ConfigArgumentParser;
fn value_parser() -> Self::Parser {
ConfigArgumentParser
}
}
impl TypedValueParser for ConfigArgumentParser {
type Value = SingleConfigArgument;
fn parse_ref(
&self,
cmd: &clap::Command,
arg: Option<&clap::Arg>,
value: &std::ffi::OsStr,
) -> Result<Self::Value, clap::Error> {
let path_to_config_file = PathBuf::from(value);
if path_to_config_file.exists() {
return Ok(SingleConfigArgument::FilePath(path_to_config_file));
}
let value = value
.to_str()
.ok_or_else(|| clap::Error::new(clap::error::ErrorKind::InvalidUtf8))?;
let toml_parse_error = match toml::Table::from_str(value) {
Ok(table) => match table.try_into() {
Ok(option) => return Ok(SingleConfigArgument::SettingsOverride(Arc::new(option))),
Err(underlying_error) => TomlParseFailure {
kind: TomlParseFailureKind::UnknownOption,
underlying_error,
},
},
Err(underlying_error) => TomlParseFailure {
kind: TomlParseFailureKind::SyntaxError,
underlying_error,
},
};
let mut new_error = clap::Error::new(clap::error::ErrorKind::ValueValidation).with_cmd(cmd);
if let Some(arg) = arg {
new_error.insert(
clap::error::ContextKind::InvalidArg,
clap::error::ContextValue::String(arg.to_string()),
);
}
new_error.insert(
clap::error::ContextKind::InvalidValue,
clap::error::ContextValue::String(value.to_string()),
);
// small hack so that multiline tips
// have the same indent on the left-hand side:
let tip_indent = " ".repeat(" tip: ".len());
let mut tip = format!(
"\
A `--config` flag must either be a path to a `.toml` configuration file
{tip_indent}or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
{tip_indent}option"
);
// Here we do some heuristics to try to figure out whether
// the user was trying to pass in a path to a configuration file
// or some inline TOML.
// We want to display the most helpful error to the user as possible.
if std::path::Path::new(value)
.extension()
.map_or(false, |ext| ext.eq_ignore_ascii_case("toml"))
{
if !value.contains('=') {
tip.push_str(&format!(
"
It looks like you were trying to pass a path to a configuration file.
The path `{value}` does not exist"
));
}
} else if value.contains('=') {
tip.push_str(&format!("\n\n{toml_parse_error}"));
}
new_error.insert(
clap::error::ContextKind::Suggested,
clap::error::ContextValue::StyledStrs(vec![tip.into()]),
);
Err(new_error)
}
}
fn resolve_output_format(
output_format: Option<SerializationFormat>,
show_sources: Option<bool>,
@@ -935,6 +664,7 @@ fn resolve_output_format(
#[allow(clippy::struct_excessive_bools)]
pub struct CheckArguments {
pub add_noqa: bool,
pub config: Option<PathBuf>,
pub diff: bool,
pub ecosystem_ci: bool,
pub exit_non_zero_on_fix: bool,
@@ -958,6 +688,7 @@ pub struct FormatArguments {
pub check: bool,
pub no_cache: bool,
pub diff: bool,
pub config: Option<PathBuf>,
pub files: Vec<PathBuf>,
pub isolated: bool,
pub stdin_filename: Option<PathBuf>,
@@ -1153,40 +884,39 @@ impl LineColumnParseError {
}
}
/// Configuration overrides provided via dedicated CLI flags:
/// `--line-length`, `--respect-gitignore`, etc.
/// CLI settings that function as configuration overrides.
#[derive(Clone, Default)]
#[allow(clippy::struct_excessive_bools)]
struct ExplicitConfigOverrides {
dummy_variable_rgx: Option<Regex>,
exclude: Option<Vec<FilePattern>>,
extend_exclude: Option<Vec<FilePattern>>,
extend_fixable: Option<Vec<RuleSelector>>,
extend_ignore: Option<Vec<RuleSelector>>,
extend_select: Option<Vec<RuleSelector>>,
extend_unfixable: Option<Vec<RuleSelector>>,
fixable: Option<Vec<RuleSelector>>,
ignore: Option<Vec<RuleSelector>>,
line_length: Option<LineLength>,
per_file_ignores: Option<Vec<PatternPrefixPair>>,
extend_per_file_ignores: Option<Vec<PatternPrefixPair>>,
preview: Option<PreviewMode>,
respect_gitignore: Option<bool>,
select: Option<Vec<RuleSelector>>,
target_version: Option<PythonVersion>,
unfixable: Option<Vec<RuleSelector>>,
pub struct CliOverrides {
pub dummy_variable_rgx: Option<Regex>,
pub exclude: Option<Vec<FilePattern>>,
pub extend_exclude: Option<Vec<FilePattern>>,
pub extend_fixable: Option<Vec<RuleSelector>>,
pub extend_ignore: Option<Vec<RuleSelector>>,
pub extend_select: Option<Vec<RuleSelector>>,
pub extend_unfixable: Option<Vec<RuleSelector>>,
pub fixable: Option<Vec<RuleSelector>>,
pub ignore: Option<Vec<RuleSelector>>,
pub line_length: Option<LineLength>,
pub per_file_ignores: Option<Vec<PatternPrefixPair>>,
pub extend_per_file_ignores: Option<Vec<PatternPrefixPair>>,
pub preview: Option<PreviewMode>,
pub respect_gitignore: Option<bool>,
pub select: Option<Vec<RuleSelector>>,
pub target_version: Option<PythonVersion>,
pub unfixable: Option<Vec<RuleSelector>>,
// TODO(charlie): Captured in pyproject.toml as a default, but not part of `Settings`.
cache_dir: Option<PathBuf>,
fix: Option<bool>,
fix_only: Option<bool>,
unsafe_fixes: Option<UnsafeFixes>,
force_exclude: Option<bool>,
output_format: Option<SerializationFormat>,
show_fixes: Option<bool>,
extension: Option<Vec<ExtensionPair>>,
pub cache_dir: Option<PathBuf>,
pub fix: Option<bool>,
pub fix_only: Option<bool>,
pub unsafe_fixes: Option<UnsafeFixes>,
pub force_exclude: Option<bool>,
pub output_format: Option<SerializationFormat>,
pub show_fixes: Option<bool>,
pub extension: Option<Vec<ExtensionPair>>,
}
impl ConfigurationTransformer for ExplicitConfigOverrides {
impl ConfigurationTransformer for CliOverrides {
fn transform(&self, mut config: Configuration) -> Configuration {
if let Some(cache_dir) = &self.cache_dir {
config.cache_dir = Some(cache_dir.clone());

View File

@@ -1,7 +1,7 @@
use std::fmt::Debug;
use std::fs::{self, File};
use std::hash::Hasher;
use std::io::{self, BufReader, Write};
use std::io::{self, BufReader, BufWriter, Write};
use std::path::{Path, PathBuf};
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Mutex;
@@ -15,7 +15,6 @@ use rayon::iter::ParallelIterator;
use rayon::iter::{IntoParallelIterator, ParallelBridge};
use rustc_hash::FxHashMap;
use serde::{Deserialize, Serialize};
use tempfile::NamedTempFile;
use ruff_cache::{CacheKey, CacheKeyHasher};
use ruff_diagnostics::{DiagnosticKind, Fix};
@@ -166,29 +165,15 @@ impl Cache {
return Ok(());
}
// Write the cache to a temporary file first and then rename it for an "atomic" write.
// Protects against data loss if the process is killed during the write and races between different ruff
// processes, resulting in a corrupted cache file. https://github.com/astral-sh/ruff/issues/8147#issuecomment-1943345964
let mut temp_file =
NamedTempFile::new_in(self.path.parent().expect("Write path must have a parent"))
.context("Failed to create temporary file")?;
// Serialize to in-memory buffer because hyperfine benchmark showed that it's faster than
// using a `BufWriter` and our cache files are small enough that streaming isn't necessary.
let serialized =
bincode::serialize(&self.package).context("Failed to serialize cache data")?;
temp_file
.write_all(&serialized)
.context("Failed to write serialized cache to temporary file.")?;
temp_file.persist(&self.path).with_context(|| {
let file = File::create(&self.path)
.with_context(|| format!("Failed to create cache file '{}'", self.path.display()))?;
let writer = BufWriter::new(file);
bincode::serialize_into(writer, &self.package).with_context(|| {
format!(
"Failed to rename temporary cache file to {}",
"Failed to serialise cache to file '{}'",
self.path.display()
)
})?;
Ok(())
})
}
/// Applies the pending changes without storing the cache to disk.
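The comments in this hunk describe a write-to-temporary-file-then-rename strategy for the cache; a minimal sketch of that pattern, assuming the `tempfile` crate (an illustration only, not Ruff's exact implementation):

```rust
use std::io::Write;
use std::path::Path;

use tempfile::NamedTempFile;

// Write `bytes` to `path` "atomically": write into a temporary file in the
// same directory, then rename it over the destination. Readers never observe
// a partially written file, even if the process is killed mid-write.
fn write_atomically(path: &Path, bytes: &[u8]) -> std::io::Result<()> {
    let dir = path.parent().expect("write path must have a parent");
    let mut temp = NamedTempFile::new_in(dir)?;
    temp.write_all(bytes)?;
    // Rename over the destination; this is an atomic replace on most platforms.
    temp.persist(path).map_err(|err| err.error)?;
    Ok(())
}
```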

View File

@@ -12,17 +12,17 @@ use ruff_linter::warn_user_once;
use ruff_python_ast::{PySourceType, SourceType};
use ruff_workspace::resolver::{python_files_in_path, PyprojectConfig, ResolvedFile};
use crate::args::ConfigArguments;
use crate::args::CliOverrides;
/// Add `noqa` directives to a collection of files.
pub(crate) fn add_noqa(
files: &[PathBuf],
pyproject_config: &PyprojectConfig,
config_arguments: &ConfigArguments,
overrides: &CliOverrides,
) -> Result<usize> {
// Collect all the files to check.
let start = Instant::now();
let (paths, resolver) = python_files_in_path(files, pyproject_config, config_arguments)?;
let (paths, resolver) = python_files_in_path(files, pyproject_config, overrides)?;
let duration = start.elapsed();
debug!("Identified files to lint in: {:?}", duration);

View File

@@ -24,7 +24,7 @@ use ruff_workspace::resolver::{
match_exclusion, python_files_in_path, PyprojectConfig, ResolvedFile,
};
use crate::args::ConfigArguments;
use crate::args::CliOverrides;
use crate::cache::{Cache, PackageCacheMap, PackageCaches};
use crate::diagnostics::Diagnostics;
use crate::panic::catch_unwind;
@@ -34,7 +34,7 @@ use crate::panic::catch_unwind;
pub(crate) fn check(
files: &[PathBuf],
pyproject_config: &PyprojectConfig,
config_arguments: &ConfigArguments,
overrides: &CliOverrides,
cache: flags::Cache,
noqa: flags::Noqa,
fix_mode: flags::FixMode,
@@ -42,7 +42,7 @@ pub(crate) fn check(
) -> Result<Diagnostics> {
// Collect all the Python files to check.
let start = Instant::now();
let (paths, resolver) = python_files_in_path(files, pyproject_config, config_arguments)?;
let (paths, resolver) = python_files_in_path(files, pyproject_config, overrides)?;
debug!("Identified files to lint in: {:?}", start.elapsed());
if paths.is_empty() {
@@ -233,7 +233,7 @@ mod test {
use ruff_workspace::resolver::{PyprojectConfig, PyprojectDiscoveryStrategy};
use ruff_workspace::Settings;
use crate::args::ConfigArguments;
use crate::args::CliOverrides;
use super::check;
@@ -272,7 +272,7 @@ mod test {
// Notebooks are not included by default
&[tempdir.path().to_path_buf(), notebook],
&pyproject_config,
&ConfigArguments::default(),
&CliOverrides::default(),
flags::Cache::Disabled,
flags::Noqa::Disabled,
flags::FixMode::Generate,

View File

@@ -6,7 +6,7 @@ use ruff_linter::packaging;
use ruff_linter::settings::flags;
use ruff_workspace::resolver::{match_exclusion, python_file_at_path, PyprojectConfig, Resolver};
use crate::args::ConfigArguments;
use crate::args::CliOverrides;
use crate::diagnostics::{lint_stdin, Diagnostics};
use crate::stdin::{parrot_stdin, read_from_stdin};
@@ -14,7 +14,7 @@ use crate::stdin::{parrot_stdin, read_from_stdin};
pub(crate) fn check_stdin(
filename: Option<&Path>,
pyproject_config: &PyprojectConfig,
overrides: &ConfigArguments,
overrides: &CliOverrides,
noqa: flags::Noqa,
fix_mode: flags::FixMode,
) -> Result<Diagnostics> {

View File

@@ -29,7 +29,7 @@ use ruff_text_size::{TextLen, TextRange, TextSize};
use ruff_workspace::resolver::{match_exclusion, python_files_in_path, ResolvedFile, Resolver};
use ruff_workspace::FormatterSettings;
use crate::args::{ConfigArguments, FormatArguments, FormatRange};
use crate::args::{CliOverrides, FormatArguments, FormatRange};
use crate::cache::{Cache, FileCacheKey, PackageCacheMap, PackageCaches};
use crate::panic::{catch_unwind, PanicError};
use crate::resolve::resolve;
@@ -60,17 +60,18 @@ impl FormatMode {
/// Format a set of files, and return the exit status.
pub(crate) fn format(
cli: FormatArguments,
config_arguments: &ConfigArguments,
overrides: &CliOverrides,
log_level: LogLevel,
) -> Result<ExitStatus> {
let pyproject_config = resolve(
cli.isolated,
config_arguments,
cli.config.as_deref(),
overrides,
cli.stdin_filename.as_deref(),
)?;
let mode = FormatMode::from_cli(&cli);
let files = resolve_default_files(cli.files, false);
let (paths, resolver) = python_files_in_path(&files, &pyproject_config, config_arguments)?;
let (paths, resolver) = python_files_in_path(&files, &pyproject_config, overrides)?;
if paths.is_empty() {
warn_user_once!("No Python files found under the given path(s)");

View File

@@ -9,7 +9,7 @@ use ruff_python_ast::{PySourceType, SourceType};
use ruff_workspace::resolver::{match_exclusion, python_file_at_path, Resolver};
use ruff_workspace::FormatterSettings;
use crate::args::{ConfigArguments, FormatArguments, FormatRange};
use crate::args::{CliOverrides, FormatArguments, FormatRange};
use crate::commands::format::{
format_source, warn_incompatible_formatter_settings, FormatCommandError, FormatMode,
FormatResult, FormattedSource,
@@ -19,13 +19,11 @@ use crate::stdin::{parrot_stdin, read_from_stdin};
use crate::ExitStatus;
/// Run the formatter over a single file, read from `stdin`.
pub(crate) fn format_stdin(
cli: &FormatArguments,
config_arguments: &ConfigArguments,
) -> Result<ExitStatus> {
pub(crate) fn format_stdin(cli: &FormatArguments, overrides: &CliOverrides) -> Result<ExitStatus> {
let pyproject_config = resolve(
cli.isolated,
config_arguments,
cli.config.as_deref(),
overrides,
cli.stdin_filename.as_deref(),
)?;
@@ -36,7 +34,7 @@ pub(crate) fn format_stdin(
if resolver.force_exclude() {
if let Some(filename) = cli.stdin_filename.as_deref() {
if !python_file_at_path(filename, &mut resolver, config_arguments)? {
if !python_file_at_path(filename, &mut resolver, overrides)? {
if mode.is_write() {
parrot_stdin()?;
}

View File

@@ -7,17 +7,17 @@ use itertools::Itertools;
use ruff_linter::warn_user_once;
use ruff_workspace::resolver::{python_files_in_path, PyprojectConfig, ResolvedFile};
use crate::args::ConfigArguments;
use crate::args::CliOverrides;
/// Show the list of files to be checked based on current settings.
pub(crate) fn show_files(
files: &[PathBuf],
pyproject_config: &PyprojectConfig,
config_arguments: &ConfigArguments,
overrides: &CliOverrides,
writer: &mut impl Write,
) -> Result<()> {
// Collect all files in the hierarchy.
let (paths, _resolver) = python_files_in_path(files, pyproject_config, config_arguments)?;
let (paths, _resolver) = python_files_in_path(files, pyproject_config, overrides)?;
if paths.is_empty() {
warn_user_once!("No Python files found under the given path(s)");

View File

@@ -6,17 +6,17 @@ use itertools::Itertools;
use ruff_workspace::resolver::{python_files_in_path, PyprojectConfig, ResolvedFile};
use crate::args::ConfigArguments;
use crate::args::CliOverrides;
/// Print the user-facing configuration settings.
pub(crate) fn show_settings(
files: &[PathBuf],
pyproject_config: &PyprojectConfig,
config_arguments: &ConfigArguments,
overrides: &CliOverrides,
writer: &mut impl Write,
) -> Result<()> {
// Collect all files in the hierarchy.
let (paths, resolver) = python_files_in_path(files, pyproject_config, config_arguments)?;
let (paths, resolver) = python_files_in_path(files, pyproject_config, overrides)?;
// Print the list of files.
let Some(path) = paths

View File

@@ -204,23 +204,24 @@ pub fn run(
}
fn format(args: FormatCommand, log_level: LogLevel) -> Result<ExitStatus> {
let (cli, config_arguments) = args.partition()?;
let (cli, overrides) = args.partition();
if is_stdin(&cli.files, cli.stdin_filename.as_deref()) {
commands::format_stdin::format_stdin(&cli, &config_arguments)
commands::format_stdin::format_stdin(&cli, &overrides)
} else {
commands::format::format(cli, &config_arguments, log_level)
commands::format::format(cli, &overrides, log_level)
}
}
pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
let (cli, config_arguments) = args.partition()?;
let (cli, overrides) = args.partition();
// Construct the "default" settings. These are used when no `pyproject.toml`
// files are present, or files are injected from outside of the hierarchy.
let pyproject_config = resolve::resolve(
cli.isolated,
&config_arguments,
cli.config.as_deref(),
&overrides,
cli.stdin_filename.as_deref(),
)?;
@@ -238,21 +239,11 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
let files = resolve_default_files(cli.files, is_stdin);
if cli.show_settings {
commands::show_settings::show_settings(
&files,
&pyproject_config,
&config_arguments,
&mut writer,
)?;
commands::show_settings::show_settings(&files, &pyproject_config, &overrides, &mut writer)?;
return Ok(ExitStatus::Success);
}
if cli.show_files {
commands::show_files::show_files(
&files,
&pyproject_config,
&config_arguments,
&mut writer,
)?;
commands::show_files::show_files(&files, &pyproject_config, &overrides, &mut writer)?;
return Ok(ExitStatus::Success);
}
@@ -311,8 +302,7 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
if !fix_mode.is_generate() {
warn_user!("--fix is incompatible with --add-noqa.");
}
let modifications =
commands::add_noqa::add_noqa(&files, &pyproject_config, &config_arguments)?;
let modifications = commands::add_noqa::add_noqa(&files, &pyproject_config, &overrides)?;
if modifications > 0 && log_level >= LogLevel::Default {
let s = if modifications == 1 { "" } else { "s" };
#[allow(clippy::print_stderr)]
@@ -362,7 +352,7 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
let messages = commands::check::check(
&files,
&pyproject_config,
&config_arguments,
&overrides,
cache.into(),
noqa.into(),
fix_mode,
@@ -384,7 +374,8 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
if matches!(change_kind, ChangeKind::Configuration) {
pyproject_config = resolve::resolve(
cli.isolated,
&config_arguments,
cli.config.as_deref(),
&overrides,
cli.stdin_filename.as_deref(),
)?;
}
@@ -394,7 +385,7 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
let messages = commands::check::check(
&files,
&pyproject_config,
&config_arguments,
&overrides,
cache.into(),
noqa.into(),
fix_mode,
@@ -411,7 +402,7 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
commands::check_stdin::check_stdin(
cli.stdin_filename.map(fs::normalize_path).as_deref(),
&pyproject_config,
&config_arguments,
&overrides,
noqa.into(),
fix_mode,
)?
@@ -419,7 +410,7 @@ pub fn check(args: CheckCommand, log_level: LogLevel) -> Result<ExitStatus> {
commands::check::check(
&files,
&pyproject_config,
&config_arguments,
&overrides,
cache.into(),
noqa.into(),
fix_mode,

View File

@@ -11,18 +11,19 @@ use ruff_workspace::resolver::{
Relativity,
};
use crate::args::ConfigArguments;
use crate::args::CliOverrides;
/// Resolve the relevant settings strategy and defaults for the current
/// invocation.
pub fn resolve(
isolated: bool,
config_arguments: &ConfigArguments,
config: Option<&Path>,
overrides: &CliOverrides,
stdin_filename: Option<&Path>,
) -> Result<PyprojectConfig> {
// First priority: if we're running in isolated mode, use the default settings.
if isolated {
let config = config_arguments.transform(Configuration::default());
let config = overrides.transform(Configuration::default());
let settings = config.into_settings(&path_dedot::CWD)?;
debug!("Isolated mode, not reading any pyproject.toml");
return Ok(PyprojectConfig::new(
@@ -35,13 +36,12 @@ pub fn resolve(
// Second priority: the user specified a `pyproject.toml` file. Use that
// `pyproject.toml` for _all_ configuration, and resolve paths relative to the
// current working directory. (This matches ESLint's behavior.)
if let Some(pyproject) = config_arguments
.config_file()
if let Some(pyproject) = config
.map(|config| config.display().to_string())
.map(|config| shellexpand::full(&config).map(|config| PathBuf::from(config.as_ref())))
.transpose()?
{
let settings = resolve_root_settings(&pyproject, Relativity::Cwd, config_arguments)?;
let settings = resolve_root_settings(&pyproject, Relativity::Cwd, overrides)?;
debug!(
"Using user-specified configuration file at: {}",
pyproject.display()
@@ -67,7 +67,7 @@ pub fn resolve(
"Using configuration file (via parent) at: {}",
pyproject.display()
);
let settings = resolve_root_settings(&pyproject, Relativity::Parent, config_arguments)?;
let settings = resolve_root_settings(&pyproject, Relativity::Parent, overrides)?;
return Ok(PyprojectConfig::new(
PyprojectDiscoveryStrategy::Hierarchical,
settings,
@@ -84,7 +84,7 @@ pub fn resolve(
"Using configuration file (via cwd) at: {}",
pyproject.display()
);
let settings = resolve_root_settings(&pyproject, Relativity::Cwd, config_arguments)?;
let settings = resolve_root_settings(&pyproject, Relativity::Cwd, overrides)?;
return Ok(PyprojectConfig::new(
PyprojectDiscoveryStrategy::Hierarchical,
settings,
@@ -97,7 +97,7 @@ pub fn resolve(
// "closest" `pyproject.toml` file for every Python file later on, so these act
// as the "default" settings.)
debug!("Using Ruff default settings");
let config = config_arguments.transform(Configuration::default());
let config = overrides.transform(Configuration::default());
let settings = config.into_settings(&path_dedot::CWD)?;
Ok(PyprojectConfig::new(
PyprojectDiscoveryStrategy::Hierarchical,

View File

@@ -90,179 +90,6 @@ fn format_warn_stdin_filename_with_files() {
"###);
}
#[test]
fn nonexistent_config_file() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--config", "foo.toml", "."]), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'foo.toml' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
It looks like you were trying to pass a path to a configuration file.
The path `foo.toml` does not exist
For more information, try '--help'.
"###);
}
#[test]
fn config_override_rejected_if_invalid_toml() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--config", "foo = bar", "."]), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'foo = bar' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
The supplied argument is not valid TOML:
TOML parse error at line 1, column 7
|
1 | foo = bar
| ^
invalid string
expected `"`, `'`
For more information, try '--help'.
"###);
}
#[test]
fn too_many_config_files() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_dot_toml = tempdir.path().join("ruff.toml");
let ruff2_dot_toml = tempdir.path().join("ruff2.toml");
fs::File::create(&ruff_dot_toml)?;
fs::File::create(&ruff2_dot_toml)?;
let expected_stderr = format!(
"\
ruff failed
Cause: You cannot specify more than one configuration file on the command line.
tip: remove either `--config={}` or `--config={}`.
For more information, try `--help`.
",
ruff_dot_toml.display(),
ruff2_dot_toml.display(),
);
let cmd = Command::new(get_cargo_bin(BIN_NAME))
.arg("format")
.arg("--config")
.arg(&ruff_dot_toml)
.arg("--config")
.arg(&ruff2_dot_toml)
.arg(".")
.output()?;
let stderr = std::str::from_utf8(&cmd.stderr)?;
assert_eq!(stderr, expected_stderr);
Ok(())
}
#[test]
fn config_file_and_isolated() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_dot_toml = tempdir.path().join("ruff.toml");
fs::File::create(&ruff_dot_toml)?;
let expected_stderr = format!(
"\
ruff failed
Cause: The argument `--config={}` cannot be used with `--isolated`
tip: You cannot specify a configuration file and also specify `--isolated`,
as `--isolated` causes ruff to ignore all configuration files.
For more information, try `--help`.
",
ruff_dot_toml.display(),
);
let cmd = Command::new(get_cargo_bin(BIN_NAME))
.arg("format")
.arg("--config")
.arg(&ruff_dot_toml)
.arg("--isolated")
.arg(".")
.output()?;
let stderr = std::str::from_utf8(&cmd.stderr)?;
assert_eq!(stderr, expected_stderr);
Ok(())
}
#[test]
fn config_override_via_cli() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(&ruff_toml, "line-length = 100")?;
let fixture = r#"
def foo():
print("looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong string")
"#;
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.arg("format")
.arg("--config")
.arg(&ruff_toml)
// This overrides the long line length set in the config file
.args(["--config", "line-length=80"])
.arg("-")
.pass_stdin(fixture), @r###"
success: true
exit_code: 0
----- stdout -----
def foo():
print(
"looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong string"
)
----- stderr -----
"###);
Ok(())
}
#[test]
fn config_doubly_overridden_via_cli() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(&ruff_toml, "line-length = 70")?;
let fixture = r#"
def foo():
print("looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong string")
"#;
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.arg("format")
.arg("--config")
.arg(&ruff_toml)
// This overrides the long line length set in the config file...
.args(["--config", "line-length=80"])
// ...but this overrides them both:
.args(["--line-length", "100"])
.arg("-")
.pass_stdin(fixture), @r###"
success: true
exit_code: 0
----- stdout -----
def foo():
print("looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong string")
----- stderr -----
"###);
Ok(())
}
#[test]
fn format_options() -> Result<()> {
let tempdir = TempDir::new()?;

View File

@@ -510,341 +510,6 @@ ignore = ["D203", "D212"]
Ok(())
}
#[test]
fn nonexistent_config_file() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args(["--config", "foo.toml", "."]), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'foo.toml' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
It looks like you were trying to pass a path to a configuration file.
The path `foo.toml` does not exist
For more information, try '--help'.
"###);
}
#[test]
fn config_override_rejected_if_invalid_toml() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args(["--config", "foo = bar", "."]), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'foo = bar' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
The supplied argument is not valid TOML:
TOML parse error at line 1, column 7
|
1 | foo = bar
| ^
invalid string
expected `"`, `'`
For more information, try '--help'.
"###);
}
#[test]
fn too_many_config_files() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_dot_toml = tempdir.path().join("ruff.toml");
let ruff2_dot_toml = tempdir.path().join("ruff2.toml");
fs::File::create(&ruff_dot_toml)?;
fs::File::create(&ruff2_dot_toml)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_dot_toml)
.arg("--config")
.arg(&ruff2_dot_toml)
.arg("."), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: You cannot specify more than one configuration file on the command line.
tip: remove either `--config=[TMP]/ruff.toml` or `--config=[TMP]/ruff2.toml`.
For more information, try `--help`.
"###);
});
Ok(())
}
#[test]
fn config_file_and_isolated() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_dot_toml = tempdir.path().join("ruff.toml");
fs::File::create(&ruff_dot_toml)?;
insta::with_settings!({
filters => vec![(tempdir_filter(&tempdir).as_str(), "[TMP]/")]
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_dot_toml)
.arg("--isolated")
.arg("."), @r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: The argument `--config=[TMP]/ruff.toml` cannot be used with `--isolated`
tip: You cannot specify a configuration file and also specify `--isolated`,
as `--isolated` causes ruff to ignore all configuration files.
For more information, try `--help`.
"###);
});
Ok(())
}
#[test]
fn config_override_via_cli() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(
&ruff_toml,
r#"
line-length = 100
[lint]
select = ["I"]
[lint.isort]
combine-as-imports = true
"#,
)?;
let fixture = r#"
from foo import (
aaaaaaaaaaaaaaaaaaa,
bbbbbbbbbbb as bbbbbbbbbbbbbbbb,
cccccccccccccccc,
ddddddddddd as ddddddddddddd,
eeeeeeeeeeeeeee,
ffffffffffff as ffffffffffffff,
ggggggggggggg,
hhhhhhh as hhhhhhhhhhh,
iiiiiiiiiiiiii,
jjjjjjjjjjjjj as jjjjjj,
)
x = "longer_than_90_charactersssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss"
"#;
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_toml)
.args(["--config", "line-length=90"])
.args(["--config", "lint.extend-select=['E501', 'F841']"])
.args(["--config", "lint.isort.combine-as-imports = false"])
.arg("-")
.pass_stdin(fixture), @r###"
success: false
exit_code: 1
----- stdout -----
-:2:1: I001 [*] Import block is un-sorted or un-formatted
-:15:91: E501 Line too long (97 > 90)
Found 2 errors.
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
Ok(())
}
#[test]
fn valid_toml_but_nonexistent_option_provided_via_config_argument() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args([".", "--config", "extend-select=['F481']"]), // No such code as F481!
@r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'extend-select=['F481']' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
Could not parse the supplied argument as a `ruff.toml` configuration option:
Unknown rule selector: `F481`
For more information, try '--help'.
"###);
}
#[test]
fn each_toml_option_requires_a_new_flag_1() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
// commas can't be used to delimit different config overrides;
// you need a new --config flag for each override
.args([".", "--config", "extend-select=['F841'], line-length=90"]),
@r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'extend-select=['F841'], line-length=90' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
The supplied argument is not valid TOML:
TOML parse error at line 1, column 23
|
1 | extend-select=['F841'], line-length=90
| ^
expected newline, `#`
For more information, try '--help'.
"###);
}
#[test]
fn each_toml_option_requires_a_new_flag_2() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
// spaces *also* can't be used to delimit different config overrides;
// you need a new --config flag for each override
.args([".", "--config", "extend-select=['F841'] line-length=90"]),
@r###"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: invalid value 'extend-select=['F841'] line-length=90' for '--config <CONFIG_OPTION>'
tip: A `--config` flag must either be a path to a `.toml` configuration file
or a TOML `<KEY> = <VALUE>` pair overriding a specific configuration
option
The supplied argument is not valid TOML:
TOML parse error at line 1, column 24
|
1 | extend-select=['F841'] line-length=90
| ^
expected newline, `#`
For more information, try '--help'.
"###);
}
#[test]
fn config_doubly_overridden_via_cli() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(
&ruff_toml,
r#"
line-length = 100
[lint]
select=["E501"]
"#,
)?;
let fixture = "x = 'longer_than_90_charactersssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss'";
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
// The --line-length flag takes priority over both the config file
// and the `--config="line-length=110"` flag,
// despite them both being specified after this flag on the command line:
.args(["--line-length", "90"])
.arg("--config")
.arg(&ruff_toml)
.args(["--config", "line-length=110"])
.arg("-")
.pass_stdin(fixture), @r###"
success: false
exit_code: 1
----- stdout -----
-:1:91: E501 Line too long (97 > 90)
Found 1 error.
----- stderr -----
"###);
Ok(())
}
#[test]
fn complex_config_setting_overridden_via_cli() -> Result<()> {
let tempdir = TempDir::new()?;
let ruff_toml = tempdir.path().join("ruff.toml");
fs::write(&ruff_toml, "lint.select = ['N801']")?;
let fixture = "class violates_n801: pass";
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.arg("--config")
.arg(&ruff_toml)
.args(["--config", "lint.per-file-ignores = {'generated.py' = ['N801']}"])
.args(["--stdin-filename", "generated.py"])
.arg("-")
.pass_stdin(fixture), @r###"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
"###);
Ok(())
}
#[test]
fn deprecated_config_option_overridden_via_cli() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args(["--config", "select=['N801']", "-"])
.pass_stdin("class lowercase: ..."),
@r###"
success: false
exit_code: 1
----- stdout -----
-:1:7: N801 Class name `lowercase` should use CapWords convention
Found 1 error.
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your `--config` CLI arguments:
- 'select' -> 'lint.select'
"###);
}
#[test]
fn extension() -> Result<()> {
let tempdir = TempDir::new()?;

View File

@@ -25,15 +25,6 @@ import cycles. They also increase the cognitive load of reading the code.
If an import statement is used to check for the availability or existence
of a module, consider using `importlib.util.find_spec` instead.
If an import statement is used to re-export a symbol as part of a module's
public interface, consider using a "redundant" import alias, which
instructs Ruff (and other tools) to respect the re-export, and avoid
marking it as unused, as in:
```python
from module import member as member
```
## Example
```python
import numpy as np # unused import
@@ -60,12 +51,11 @@ else:
```
## Options
- `lint.ignore-init-module-imports`
- `lint.pyflakes.extend-generics`
## References
- [Python documentation: `import`](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement)
- [Python documentation: `importlib.util.find_spec`](https://docs.python.org/3/library/importlib.html#importlib.util.find_spec)
- [Typing documentation: interface conventions](https://typing.readthedocs.io/en/latest/source/libraries.html#library-interface-public-and-private-symbols)
----- stderr -----

View File

@@ -13,7 +13,6 @@ license = { workspace = true }
[lib]
bench = false
doctest = false
[[bench]]
name = "linter"

View File

@@ -27,7 +27,7 @@ use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;
use tracing_subscriber::EnvFilter;
use ruff::args::{ConfigArguments, FormatArguments, FormatCommand, LogLevelArgs};
use ruff::args::{CliOverrides, FormatArguments, FormatCommand, LogLevelArgs};
use ruff::resolve::resolve;
use ruff_formatter::{FormatError, LineWidth, PrintError};
use ruff_linter::logging::LogLevel;
@@ -38,23 +38,24 @@ use ruff_python_formatter::{
use ruff_python_parser::ParseError;
use ruff_workspace::resolver::{python_files_in_path, PyprojectConfig, ResolvedFile, Resolver};
fn parse_cli(dirs: &[PathBuf]) -> anyhow::Result<(FormatArguments, ConfigArguments)> {
fn parse_cli(dirs: &[PathBuf]) -> anyhow::Result<(FormatArguments, CliOverrides)> {
let args_matches = FormatCommand::command()
.no_binary_name(true)
.get_matches_from(dirs);
let arguments: FormatCommand = FormatCommand::from_arg_matches(&args_matches)?;
let (cli, config_arguments) = arguments.partition()?;
Ok((cli, config_arguments))
let (cli, overrides) = arguments.partition();
Ok((cli, overrides))
}
/// Find the [`PyprojectConfig`] to use for formatting.
fn find_pyproject_config(
cli: &FormatArguments,
config_arguments: &ConfigArguments,
overrides: &CliOverrides,
) -> anyhow::Result<PyprojectConfig> {
let mut pyproject_config = resolve(
cli.isolated,
config_arguments,
cli.config.as_deref(),
overrides,
cli.stdin_filename.as_deref(),
)?;
// We don't want to format pyproject.toml
@@ -71,9 +72,9 @@ fn find_pyproject_config(
fn ruff_check_paths<'a>(
pyproject_config: &'a PyprojectConfig,
cli: &FormatArguments,
config_arguments: &ConfigArguments,
overrides: &CliOverrides,
) -> anyhow::Result<(Vec<Result<ResolvedFile, ignore::Error>>, Resolver<'a>)> {
let (paths, resolver) = python_files_in_path(&cli.files, pyproject_config, config_arguments)?;
let (paths, resolver) = python_files_in_path(&cli.files, pyproject_config, overrides)?;
Ok((paths, resolver))
}

View File

@@ -11,7 +11,6 @@ repository = { workspace = true }
license = { workspace = true }
[lib]
doctest = false
[dependencies]
ruff_text_size = { path = "../ruff_text_size" }

View File

@@ -1,8 +1,7 @@
use std::cell::Cell;
use std::num::NonZeroU8;
use crate::format_element::PrintMode;
use crate::{GroupId, TextSize};
use std::cell::Cell;
use std::num::NonZeroU8;
/// A Tag marking the start and end of some content to which some special formatting should be applied.
///
@@ -100,10 +99,6 @@ pub enum Tag {
}
impl Tag {
pub const fn align(count: NonZeroU8) -> Tag {
Tag::StartAlign(Align(count))
}
/// Returns `true` if `self` is any start tag.
pub const fn is_start(&self) -> bool {
matches!(

View File

@@ -11,7 +11,6 @@ repository = { workspace = true }
license = { workspace = true }
[lib]
doctest = false
[dependencies]
ruff_macros = { path = "../ruff_macros" }

View File

@@ -11,25 +11,13 @@ class _UnusedTypedDict2(typing.TypedDict):
class _UsedTypedDict(TypedDict):
foo: bytes
foo: bytes
class _CustomClass(_UsedTypedDict):
bar: list[int]
_UnusedTypedDict3 = TypedDict("_UnusedTypedDict3", {"foo": int})
_UsedTypedDict3 = TypedDict("_UsedTypedDict3", {"bar": bytes})
def uses_UsedTypedDict3(arg: _UsedTypedDict3) -> None: ...
# In `.py` files, we don't flag unused definitions in class scopes (unlike in `.pyi`
# files).
class _CustomClass3:
class _UnusedTypeDict4(TypedDict):
pass
def method(self) -> None:
_CustomClass3._UnusedTypeDict4()

View File

@@ -35,13 +35,3 @@ _UnusedTypedDict3 = TypedDict("_UnusedTypedDict3", {"foo": int})
_UsedTypedDict3 = TypedDict("_UsedTypedDict3", {"bar": bytes})
def uses_UsedTypedDict3(arg: _UsedTypedDict3) -> None: ...
# In `.pyi` files, we flag unused definitions in class scopes as well as in the global
# scope (unlike in `.py` files).
class _CustomClass3:
class _UnusedTypeDict4(TypedDict):
pass
def method(self) -> None:
_CustomClass3._UnusedTypeDict4()

View File

@@ -36,47 +36,35 @@ for i in list( # Comment
): # PERF101
pass
for i in list(foo_dict): # OK
for i in list(foo_dict): # Ok
pass
for i in list(1): # OK
for i in list(1): # Ok
pass
for i in list(foo_int): # OK
for i in list(foo_int): # Ok
pass
import itertools
for i in itertools.product(foo_int): # OK
for i in itertools.product(foo_int): # Ok
pass
for i in list(foo_list): # OK
for i in list(foo_list): # Ok
foo_list.append(i + 1)
for i in list(foo_list): # PERF101
# Make sure we match the correct list
other_list.append(i + 1)
for i in list(foo_tuple): # OK
for i in list(foo_tuple): # Ok
foo_tuple.append(i + 1)
for i in list(foo_set): # OK
for i in list(foo_set): # Ok
foo_set.append(i + 1)
x, y, nested_tuple = (1, 2, (3, 4, 5))
for i in list(nested_tuple): # PERF101
pass
for i in list(foo_list): # OK
if True:
foo_list.append(i + 1)
for i in list(foo_list): # OK
if True:
foo_list[i] = i + 1
for i in list(foo_list): # OK
if True:
del foo_list[i + 1]

View File

@@ -4,12 +4,7 @@
1 in (
1, 2, 3
)
fruits = ["cherry", "grapes"]
"cherry" in fruits
_ = {key: value for key, value in {"a": 1, "b": 2}.items() if key in ("a", "b")}
# OK
fruits in [[1, 2, 3], [4, 5, 6]]
fruits in [1, 2, 3]
1 in [[1, 2, 3], [4, 5, 6]]
_ = {key: value for key, value in {"a": 1, "b": 2}.items() if key in (["a", "b"], ["c", "d"])}
fruits = ["cherry", "grapes"]
"cherry" in fruits

View File

@@ -35,15 +35,6 @@ if argc != 0: # correct
if argc != 1: # correct
pass
if argc != -1.0: # correct
pass
if argc != 0.0: # correct
pass
if argc != 1.0: # correct
pass
if argc != 2: # [magic-value-comparison]
pass
@@ -53,12 +44,6 @@ if argc != -2: # [magic-value-comparison]
if argc != +2: # [magic-value-comparison]
pass
if argc != -2.0: # [magic-value-comparison]
pass
if argc != +2.0: # [magic-value-comparison]
pass
if __name__ == "__main__": # correct
pass

View File

@@ -1,75 +0,0 @@
import codecs
import io
from pathlib import Path
# Errors
with open("FURB129.py") as f:
for _line in f.readlines():
pass
a = [line.lower() for line in f.readlines()]
b = {line.upper() for line in f.readlines()}
c = {line.lower(): line.upper() for line in f.readlines()}
with Path("FURB129.py").open() as f:
for _line in f.readlines():
pass
for _line in open("FURB129.py").readlines():
pass
for _line in Path("FURB129.py").open().readlines():
pass
def func():
f = Path("FURB129.py").open()
for _line in f.readlines():
pass
f.close()
def func(f: io.BytesIO):
for _line in f.readlines():
pass
def func():
with (open("FURB129.py") as f, foo as bar):
for _line in f.readlines():
pass
for _line in bar.readlines():
pass
# False positives
def func(f):
for _line in f.readlines():
pass
def func(f: codecs.StreamReader):
for _line in f.readlines():
pass
def func():
class A:
def readlines(self) -> list[str]:
return ["a", "b", "c"]
return A()
for _line in func().readlines():
pass
# OK
for _line in ["a", "b", "c"]:
pass
with open("FURB129.py") as f:
for _line in f:
pass
for _line in f.readlines(10):
pass
for _not_line in f.readline():
pass

View File

@@ -162,26 +162,3 @@ async def f(x: bool):
T = asyncio.create_task(asyncio.sleep(1))
else:
T = None
# Error
def f():
loop = asyncio.new_event_loop()
loop.create_task(main()) # Error
# Error
def f():
loop = asyncio.get_event_loop()
loop.create_task(main()) # Error
# OK
def f():
global task
loop = asyncio.new_event_loop()
task = loop.create_task(main()) # Error
# OK
def f():
global task
loop = asyncio.get_event_loop()
task = loop.create_task(main()) # Error

View File

@@ -250,23 +250,6 @@ __all__ = (
,
)
__all__ = ( # comment about the opening paren
# multiline strange comment 0a
# multiline strange comment 0b
"foo" # inline comment about foo
# multiline strange comment 1a
# multiline strange comment 1b
, # comment about the comma??
# comment about bar part a
# comment about bar part b
"bar" # inline comment about bar
# strange multiline comment comment 2a
# strange multiline comment 2b
,
# strange multiline comment 3a
# strange multiline comment 3b
) # comment about the closing paren
###################################
# These should all not get flagged:
###################################

View File

@@ -188,10 +188,6 @@ class BezierBuilder4:
,
)
__slots__ = {"foo", "bar",
"baz", "bingo"
}
###################################
# These should all not get flagged:
###################################

View File

@@ -2,14 +2,11 @@ use ruff_python_ast::Comprehension;
use crate::checkers::ast::Checker;
use crate::codes::Rule;
use crate::rules::{flake8_simplify, refurb};
use crate::rules::flake8_simplify;
/// Run lint rules over a [`Comprehension`] syntax nodes.
pub(crate) fn comprehension(comprehension: &Comprehension, checker: &mut Checker) {
if checker.enabled(Rule::InDictKeys) {
flake8_simplify::rules::key_in_dict_comprehension(checker, comprehension);
}
if checker.enabled(Rule::ReadlinesInFor) {
refurb::rules::readlines_in_comprehension(checker, comprehension);
}
}

View File

@@ -281,21 +281,17 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
}
}
if checker.source_type.is_stub()
|| matches!(scope.kind, ScopeKind::Module | ScopeKind::Function(_))
{
if checker.enabled(Rule::UnusedPrivateTypeVar) {
flake8_pyi::rules::unused_private_type_var(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateProtocol) {
flake8_pyi::rules::unused_private_protocol(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateTypeAlias) {
flake8_pyi::rules::unused_private_type_alias(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateTypedDict) {
flake8_pyi::rules::unused_private_typed_dict(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateTypeVar) {
flake8_pyi::rules::unused_private_type_var(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateProtocol) {
flake8_pyi::rules::unused_private_protocol(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateTypeAlias) {
flake8_pyi::rules::unused_private_type_alias(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::UnusedPrivateTypedDict) {
flake8_pyi::rules::unused_private_typed_dict(checker, scope, &mut diagnostics);
}
if checker.enabled(Rule::AsyncioDanglingTask) {

View File

@@ -1317,9 +1317,6 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::UnnecessaryDictIndexLookup) {
pylint::rules::unnecessary_dict_index_lookup(checker, for_stmt);
}
if checker.enabled(Rule::ReadlinesInFor) {
refurb::rules::readlines_in_for(checker, for_stmt);
}
if !is_async {
if checker.enabled(Rule::ReimplementedBuiltin) {
flake8_simplify::rules::convert_for_loop_to_any_all(checker, stmt);

View File

@@ -40,7 +40,7 @@ use ruff_diagnostics::{Diagnostic, IsolationLevel};
use ruff_notebook::{CellOffsets, NotebookIndex};
use ruff_python_ast::all::{extract_all_names, DunderAllFlags};
use ruff_python_ast::helpers::{
collect_import_from_member, extract_handled_exceptions, is_docstring_stmt, to_module_path,
collect_import_from_member, extract_handled_exceptions, to_module_path,
};
use ruff_python_ast::identifier::Identifier;
use ruff_python_ast::str::trailing_quote;
@@ -71,38 +71,6 @@ mod analyze;
mod annotation;
mod deferred;
/// State representing whether a docstring is expected or not for the next statement.
#[derive(Default, Debug, Copy, Clone, PartialEq)]
enum DocstringState {
/// The next statement is expected to be a docstring, but not necessarily so.
///
/// For example, in the following code:
///
/// ```python
/// class Foo:
/// pass
///
///
/// def bar(x, y):
/// """Docstring."""
/// return x + y
/// ```
///
/// For `Foo`, the state is expected when the checker is visiting the class
/// body but isn't going to be present. While, for `bar` function, the docstring
/// is expected and present.
#[default]
Expected,
Other,
}
impl DocstringState {
/// Returns `true` if the next statement is expected to be a docstring.
const fn is_expected(self) -> bool {
matches!(self, DocstringState::Expected)
}
}
pub(crate) struct Checker<'a> {
/// The [`Path`] to the file under analysis.
path: &'a Path,
@@ -146,8 +114,6 @@ pub(crate) struct Checker<'a> {
pub(crate) flake8_bugbear_seen: Vec<TextRange>,
/// The end offset of the last visited statement.
last_stmt_end: TextSize,
/// A state describing if a docstring is expected or not.
docstring_state: DocstringState,
}
impl<'a> Checker<'a> {
@@ -187,7 +153,6 @@ impl<'a> Checker<'a> {
cell_offsets,
notebook_index,
last_stmt_end: TextSize::default(),
docstring_state: DocstringState::default(),
}
}
}
@@ -340,16 +305,19 @@ where
self.semantic.flags -= SemanticModelFlags::IMPORT_BOUNDARY;
}
// Track whether we've seen module docstrings, non-imports, etc.
// Track whether we've seen docstrings, non-imports, etc.
match stmt {
Stmt::Expr(ast::StmtExpr { value, .. })
if !self.semantic.seen_module_docstring_boundary()
if !self
.semantic
.flags
.intersects(SemanticModelFlags::MODULE_DOCSTRING)
&& value.is_string_literal_expr() =>
{
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING_BOUNDARY;
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING;
}
Stmt::ImportFrom(ast::StmtImportFrom { module, names, .. }) => {
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING_BOUNDARY;
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING;
// Allow __future__ imports until we see a non-__future__ import.
if let Some("__future__") = module.as_deref() {
@@ -364,11 +332,11 @@ where
}
}
Stmt::Import(_) => {
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING_BOUNDARY;
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING;
self.semantic.flags |= SemanticModelFlags::FUTURES_BOUNDARY;
}
_ => {
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING_BOUNDARY;
self.semantic.flags |= SemanticModelFlags::MODULE_DOCSTRING;
self.semantic.flags |= SemanticModelFlags::FUTURES_BOUNDARY;
if !(self.semantic.seen_import_boundary()
|| helpers::is_assignment_to_a_dunder(stmt)
@@ -385,16 +353,6 @@ where
// the node.
let flags_snapshot = self.semantic.flags;
// Update the semantic model if it is in a docstring. This should be done after the
// flags snapshot to ensure that it gets reset once the statement is analyzed.
if self.docstring_state.is_expected() {
if is_docstring_stmt(stmt) {
self.semantic.flags |= SemanticModelFlags::DOCSTRING;
}
// Reset the state irrespective of whether the statement is a docstring or not.
self.docstring_state = DocstringState::Other;
}
// Step 1: Binding
match stmt {
Stmt::AugAssign(ast::StmtAugAssign {
@@ -696,8 +654,6 @@ where
self.semantic.set_globals(globals);
}
// Set the docstring state before visiting the class body.
self.docstring_state = DocstringState::Expected;
self.visit_body(body);
}
Stmt::TypeAlias(ast::StmtTypeAlias {
@@ -1332,16 +1288,6 @@ where
self.semantic.flags |= SemanticModelFlags::F_STRING;
visitor::walk_expr(self, expr);
}
Expr::NamedExpr(ast::ExprNamedExpr {
target,
value,
range: _,
}) => {
self.visit_expr(value);
self.semantic.flags |= SemanticModelFlags::NAMED_EXPRESSION_ASSIGNMENT;
self.visit_expr(target);
}
_ => visitor::walk_expr(self, expr),
}
@@ -1558,8 +1504,6 @@ impl<'a> Checker<'a> {
unreachable!("Generator expression must contain at least one generator");
};
let flags = self.semantic.flags;
// Generators are compiled as nested functions. (This may change with PEP 709.)
// As such, the `iter` of the first generator is evaluated in the outer scope, while all
// subsequent nodes are evaluated in the inner scope.
@@ -1589,22 +1533,14 @@ impl<'a> Checker<'a> {
// `x` is local to `foo`, and the `T` in `y=T` skips the class scope when resolving.
self.visit_expr(&generator.iter);
self.semantic.push_scope(ScopeKind::Generator);
self.semantic.flags = flags | SemanticModelFlags::COMPREHENSION_ASSIGNMENT;
self.visit_expr(&generator.target);
self.semantic.flags = flags;
for expr in &generator.ifs {
self.visit_boolean_test(expr);
}
for generator in iterator {
self.visit_expr(&generator.iter);
self.semantic.flags = flags | SemanticModelFlags::COMPREHENSION_ASSIGNMENT;
self.visit_expr(&generator.target);
self.semantic.flags = flags;
for expr in &generator.ifs {
self.visit_boolean_test(expr);
}
@@ -1803,21 +1739,11 @@ impl<'a> Checker<'a> {
return;
}
// A binding within a `for` must be a loop variable, as in:
// ```python
// for x in range(10):
// ...
// ```
if parent.is_for_stmt() {
self.add_binding(id, expr.range(), BindingKind::LoopVar, flags);
return;
}
// A binding within a `with` must be an item, as in:
// ```python
// with open("file.txt") as fp:
// ...
// ```
if parent.is_with_stmt() {
self.add_binding(id, expr.range(), BindingKind::WithItemVar, flags);
return;
@@ -1873,26 +1799,17 @@ impl<'a> Checker<'a> {
}
// If the expression is the left-hand side of a walrus operator, then it's a named
// expression assignment, as in:
// ```python
// if (x := 10) > 5:
// ...
// ```
if self.semantic.in_named_expression_assignment() {
// expression assignment.
if self
.semantic
.current_expressions()
.filter_map(Expr::as_named_expr_expr)
.any(|parent| parent.target.as_ref() == expr)
{
self.add_binding(id, expr.range(), BindingKind::NamedExprAssignment, flags);
return;
}
// If the expression is part of a comprehension target, then it's a comprehension variable
// assignment, as in:
// ```python
// [x for x in range(10)]
// ```
if self.semantic.in_comprehension_assignment() {
self.add_binding(id, expr.range(), BindingKind::ComprehensionVar, flags);
return;
}
self.add_binding(id, expr.range(), BindingKind::Assignment, flags);
}
@@ -2008,8 +1925,6 @@ impl<'a> Checker<'a> {
};
self.visit_parameters(parameters);
// Set the docstring state before visiting the function body.
self.docstring_state = DocstringState::Expected;
self.visit_body(body);
}
}

View File

@@ -1025,7 +1025,6 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
#[allow(deprecated)]
(Refurb, "113") => (RuleGroup::Nursery, rules::refurb::rules::RepeatedAppend),
(Refurb, "118") => (RuleGroup::Preview, rules::refurb::rules::ReimplementedOperator),
(Refurb, "129") => (RuleGroup::Preview, rules::refurb::rules::ReadlinesInFor),
#[allow(deprecated)]
(Refurb, "131") => (RuleGroup::Nursery, rules::refurb::rules::DeleteFullSlice),
#[allow(deprecated)]

View File

@@ -248,7 +248,6 @@ impl Renamer {
| BindingKind::Assignment
| BindingKind::BoundException
| BindingKind::LoopVar
| BindingKind::ComprehensionVar
| BindingKind::WithItemVar
| BindingKind::Global
| BindingKind::Nonlocal(_)

View File

@@ -118,7 +118,8 @@ fn is_open_call_from_pathlib(func: &Expr, semantic: &SemanticModel) -> bool {
let binding = semantic.binding(binding_id);
let Some(Expr::Call(call)) = analyze::typing::find_binding_value(binding, semantic) else {
let Some(Expr::Call(call)) = analyze::typing::find_binding_value(&name.id, binding, semantic)
else {
return false;
};

View File

@@ -15,11 +15,13 @@ PYI049.py:9:7: PYI049 Private TypedDict `_UnusedTypedDict2` is never used
10 | bar: int
|
PYI049.py:21:1: PYI049 Private TypedDict `_UnusedTypedDict3` is never used
PYI049.py:20:1: PYI049 Private TypedDict `_UnusedTypedDict3` is never used
|
21 | _UnusedTypedDict3 = TypedDict("_UnusedTypedDict3", {"foo": int})
18 | bar: list[int]
19 |
20 | _UnusedTypedDict3 = TypedDict("_UnusedTypedDict3", {"foo": int})
| ^^^^^^^^^^^^^^^^^ PYI049
22 | _UsedTypedDict3 = TypedDict("_UsedTypedDict3", {"bar": bytes})
21 | _UsedTypedDict3 = TypedDict("_UsedTypedDict3", {"bar": bytes})
|

View File

@@ -24,13 +24,4 @@ PYI049.pyi:34:1: PYI049 Private TypedDict `_UnusedTypedDict3` is never used
35 | _UsedTypedDict3 = TypedDict("_UsedTypedDict3", {"bar": bytes})
|
PYI049.pyi:43:11: PYI049 Private TypedDict `_UnusedTypeDict4` is never used
|
41 | # scope (unlike in `.py` files).
42 | class _CustomClass3:
43 | class _UnusedTypeDict4(TypedDict):
| ^^^^^^^^^^^^^^^^ PYI049
44 | pass
|

View File

@@ -76,7 +76,8 @@ pub(crate) fn enumerate_for_loop(checker: &mut Checker, for_stmt: &ast::StmtFor)
}
// Ensure that the index variable was initialized to 0.
let Some(value) = typing::find_binding_value(binding, checker.semantic()) else {
let Some(value) = typing::find_binding_value(&index.id, binding, checker.semantic())
else {
continue;
};
if !matches!(

View File

@@ -10,7 +10,7 @@ use ruff_macros::{derive_message_formats, violation};
///
/// When possible, using `Path` object methods such as `Path.stat()` can
/// improve readability over the `os` module's counterparts (e.g.,
/// `os.path.getatime()`).
/// `os.path.getsize()`).
///
/// Note that `os` functions may be preferable if performance is a concern,
/// e.g., in hot loops.
@@ -19,19 +19,19 @@ use ruff_macros::{derive_message_formats, violation};
/// ```python
/// import os
///
/// os.path.getatime(__file__)
/// os.path.getsize(__file__)
/// ```
///
/// Use instead:
/// ```python
/// from pathlib import Path
///
/// Path(__file__).stat().st_atime
/// Path(__file__).stat().st_size
/// ```
///
/// ## References
/// - [Python documentation: `Path.stat`](https://docs.python.org/3/library/pathlib.html#pathlib.Path.stat)
/// - [Python documentation: `os.path.getatime`](https://docs.python.org/3/library/os.path.html#os.path.getatime)
/// - [Python documentation: `os.path.getsize`](https://docs.python.org/3/library/os.path.html#os.path.getsize)
/// - [PEP 428](https://peps.python.org/pep-0428/)
/// - [Correspondence between `os` and `pathlib`](https://docs.python.org/3/library/pathlib.html#correspondence-to-tools-in-the-os-module)
/// - [Why you should be using pathlib](https://treyhunner.com/2018/12/why-you-should-be-using-pathlib/)

View File

@@ -2,7 +2,7 @@ use ruff_diagnostics::Violation;
use ruff_macros::{derive_message_formats, violation};
/// ## What it does
/// Checks for uses of `os.path.getctime`.
/// Checks for uses of `os.path.getatime`.
///
/// ## Why is this bad?
/// `pathlib` offers a high-level API for path manipulation, as compared to
@@ -10,7 +10,7 @@ use ruff_macros::{derive_message_formats, violation};
///
/// When possible, using `Path` object methods such as `Path.stat()` can
/// improve readability over the `os` module's counterparts (e.g.,
/// `os.path.getctime()`).
/// `os.path.getsize()`).
///
/// Note that `os` functions may be preferable if performance is a concern,
/// e.g., in hot loops.
@@ -19,19 +19,19 @@ use ruff_macros::{derive_message_formats, violation};
/// ```python
/// import os
///
/// os.path.getctime(__file__)
/// os.path.getsize(__file__)
/// ```
///
/// Use instead:
/// ```python
/// from pathlib import Path
///
/// Path(__file__).stat().st_ctime
/// Path(__file__).stat().st_size
/// ```
///
/// ## References
/// - [Python documentation: `Path.stat`](https://docs.python.org/3/library/pathlib.html#pathlib.Path.stat)
/// - [Python documentation: `os.path.getctime`](https://docs.python.org/3/library/os.path.html#os.path.getctime)
/// - [Python documentation: `os.path.getsize`](https://docs.python.org/3/library/os.path.html#os.path.getsize)
/// - [PEP 428](https://peps.python.org/pep-0428/)
/// - [Correspondence between `os` and `pathlib`](https://docs.python.org/3/library/pathlib.html#correspondence-to-tools-in-the-os-module)
/// - [Why you should be using pathlib](https://treyhunner.com/2018/12/why-you-should-be-using-pathlib/)

View File

@@ -2,7 +2,7 @@ use ruff_diagnostics::Violation;
use ruff_macros::{derive_message_formats, violation};
/// ## What it does
/// Checks for uses of `os.path.getmtime`.
/// Checks for uses of `os.path.getatime`.
///
/// ## Why is this bad?
/// `pathlib` offers a high-level API for path manipulation, as compared to
@@ -10,7 +10,7 @@ use ruff_macros::{derive_message_formats, violation};
///
/// When possible, using `Path` object methods such as `Path.stat()` can
/// improve readability over the `os` module's counterparts (e.g.,
/// `os.path.getmtime()`).
/// `os.path.getsize()`).
///
/// Note that `os` functions may be preferable if performance is a concern,
/// e.g., in hot loops.
@@ -19,19 +19,19 @@ use ruff_macros::{derive_message_formats, violation};
/// ```python
/// import os
///
/// os.path.getmtime(__file__)
/// os.path.getsize(__file__)
/// ```
///
/// Use instead:
/// ```python
/// from pathlib import Path
///
/// Path(__file__).stat().st_mtime
/// Path(__file__).stat().st_size
/// ```
///
/// ## References
/// - [Python documentation: `Path.stat`](https://docs.python.org/3/library/pathlib.html#pathlib.Path.stat)
/// - [Python documentation: `os.path.getmtime`](https://docs.python.org/3/library/os.path.html#os.path.getmtime)
/// - [Python documentation: `os.path.getsize`](https://docs.python.org/3/library/os.path.html#os.path.getsize)
/// - [PEP 428](https://peps.python.org/pep-0428/)
/// - [Correspondence between `os` and `pathlib`](https://docs.python.org/3/library/pathlib.html#correspondence-to-tools-in-the-os-module)
/// - [Why you should be using pathlib](https://treyhunner.com/2018/12/why-you-should-be-using-pathlib/)

View File

@@ -419,20 +419,23 @@ mod tests {
Ok(())
}
#[test_case(Path::new("line_ending_crlf.py"))]
#[test_case(Path::new("line_ending_lf.py"))]
fn source_code_style(path: &Path) -> Result<()> {
let snapshot = format!("{}", path.to_string_lossy());
let diagnostics = test_path(
Path::new("isort").join(path).as_path(),
&LinterSettings {
src: vec![test_resource_path("fixtures/isort")],
..LinterSettings::for_rule(Rule::UnsortedImports)
},
)?;
crate::assert_messages!(snapshot, diagnostics);
Ok(())
}
// Test currently disabled as line endings are automatically converted to
// platform-appropriate ones in CI/CD #[test_case(Path::new("
// line_ending_crlf.py"))] #[test_case(Path::new("line_ending_lf.py"))]
// fn source_code_style(path: &Path) -> Result<()> {
// let snapshot = format!("{}", path.to_string_lossy());
// let diagnostics = test_path(
// Path::new("isort")
// .join(path)
// .as_path(),
// &LinterSettings {
// src: vec![test_resource_path("fixtures/isort")],
// ..LinterSettings::for_rule(Rule::UnsortedImports)
// },
// )?;
// crate::assert_messages!(snapshot, diagnostics);
// Ok(())
// }
#[test_case(Path::new("separate_local_folder_imports.py"))]
fn known_local_folder(path: &Path) -> Result<()> {

View File

@@ -1,23 +0,0 @@
---
source: crates/ruff_linter/src/rules/isort/mod.rs
---
line_ending_crlf.py:1:1: I001 [*] Import block is un-sorted or un-formatted
|
1 | / from long_module_name import member_one, member_two, member_three, member_four, member_five
2 | |
| |_^ I001
|
= help: Organize imports
Safe fix
1 |-from long_module_name import member_one, member_two, member_three, member_four, member_five
1 |+from long_module_name import (
2 |+ member_five,
3 |+ member_four,
4 |+ member_one,
5 |+ member_three,
6 |+ member_two,
7 |+)
2 8 |

View File

@@ -1,23 +0,0 @@
---
source: crates/ruff_linter/src/rules/isort/mod.rs
---
line_ending_lf.py:1:1: I001 [*] Import block is un-sorted or un-formatted
|
1 | / from long_module_name import member_one, member_two, member_three, member_four, member_five
2 | |
| |_^ I001
|
= help: Organize imports
Safe fix
1 |-from long_module_name import member_one, member_two, member_three, member_four, member_five
1 |+from long_module_name import (
2 |+ member_five,
3 |+ member_four,
4 |+ member_one,
5 |+ member_three,
6 |+ member_two,
7 |+)
2 8 |

View File

@@ -47,7 +47,6 @@ pub(super) fn test_expression(expr: &Expr, semantic: &SemanticModel) -> Resoluti
| BindingKind::Assignment
| BindingKind::NamedExprAssignment
| BindingKind::LoopVar
| BindingKind::ComprehensionVar
| BindingKind::Global
| BindingKind::Nonlocal(_) => Resolution::RelevantLocal,
BindingKind::Import(import) if matches!(import.call_path(), ["pandas"]) => {

View File

@@ -1,6 +1,5 @@
use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::statement_visitor::{walk_stmt, StatementVisitor};
use ruff_python_ast::{self as ast, Arguments, Expr, Stmt};
use ruff_python_semantic::analyze::typing::find_assigned_value;
use ruff_text_size::TextRange;
@@ -99,25 +98,22 @@ pub(crate) fn unnecessary_list_cast(checker: &mut Checker, iter: &Expr, body: &[
range: iterable_range,
..
}) => {
// If the variable is being appended to, don't suggest removing the cast:
//
// ```python
// items = ["foo", "bar"]
// for item in list(items):
// items.append("baz")
// ```
//
// Here, removing the `list()` cast would change the behavior of the code.
if body.iter().any(|stmt| match_append(stmt, id)) {
return;
}
let Some(value) = find_assigned_value(id, checker.semantic()) else {
return;
};
if matches!(value, Expr::Tuple(_) | Expr::List(_) | Expr::Set(_)) {
// If the variable is being modified to, don't suggest removing the cast:
//
// ```python
// items = ["foo", "bar"]
// for item in list(items):
// items.append("baz")
// ```
//
// Here, removing the `list()` cast would change the behavior of the code.
let mut visitor = MutationVisitor::new(id);
visitor.visit_body(body);
if visitor.is_mutated {
return;
}
let mut diagnostic = Diagnostic::new(UnnecessaryListCast, *list_range);
diagnostic.set_fix(remove_cast(*list_range, *iterable_range));
checker.diagnostics.push(diagnostic);
@@ -127,6 +123,28 @@ pub(crate) fn unnecessary_list_cast(checker: &mut Checker, iter: &Expr, body: &[
}
}
/// Check if a statement is an `append` call to a given identifier.
///
/// For example, `foo.append(bar)` would return `true` if `id` is `foo`.
fn match_append(stmt: &Stmt, id: &str) -> bool {
let Some(ast::StmtExpr { value, .. }) = stmt.as_expr_stmt() else {
return false;
};
let Some(ast::ExprCall { func, .. }) = value.as_call_expr() else {
return false;
};
let Some(ast::ExprAttribute { value, attr, .. }) = func.as_attribute_expr() else {
return false;
};
if attr != "append" {
return false;
}
let Some(ast::ExprName { id: target_id, .. }) = value.as_name_expr() else {
return false;
};
target_id == id
}
/// Generate a [`Fix`] to remove a `list` cast from an expression.
fn remove_cast(list_range: TextRange, iterable_range: TextRange) -> Fix {
Fix::safe_edits(
@@ -134,95 +152,3 @@ fn remove_cast(list_range: TextRange, iterable_range: TextRange) -> Fix {
[Edit::deletion(iterable_range.end(), list_range.end())],
)
}
/// A [`StatementVisitor`] that (conservatively) identifies mutations to a variable.
#[derive(Default)]
pub(crate) struct MutationVisitor<'a> {
pub(crate) target: &'a str,
pub(crate) is_mutated: bool,
}
impl<'a> MutationVisitor<'a> {
pub(crate) fn new(target: &'a str) -> Self {
Self {
target,
is_mutated: false,
}
}
}
impl<'a, 'b> StatementVisitor<'b> for MutationVisitor<'a>
where
'b: 'a,
{
fn visit_stmt(&mut self, stmt: &'b Stmt) {
if match_mutation(stmt, self.target) {
self.is_mutated = true;
} else {
walk_stmt(self, stmt);
}
}
}
/// Check if a statement is (probably) a modification to the list assigned to the given identifier.
///
/// For example, `foo.append(bar)` would return `true` if `id` is `foo`.
fn match_mutation(stmt: &Stmt, id: &str) -> bool {
match stmt {
// Ex) `foo.append(bar)`
Stmt::Expr(ast::StmtExpr { value, .. }) => {
let Some(ast::ExprCall { func, .. }) = value.as_call_expr() else {
return false;
};
let Some(ast::ExprAttribute { value, attr, .. }) = func.as_attribute_expr() else {
return false;
};
if !matches!(
attr.as_str(),
"append" | "insert" | "extend" | "remove" | "pop" | "clear" | "reverse" | "sort"
) {
return false;
}
let Some(ast::ExprName { id: target_id, .. }) = value.as_name_expr() else {
return false;
};
target_id == id
}
// Ex) `foo[0] = bar`
Stmt::Assign(ast::StmtAssign { targets, .. }) => targets.iter().any(|target| {
if let Some(ast::ExprSubscript { value: target, .. }) = target.as_subscript_expr() {
if let Some(ast::ExprName { id: target_id, .. }) = target.as_name_expr() {
return target_id == id;
}
}
false
}),
// Ex) `foo += bar`
Stmt::AugAssign(ast::StmtAugAssign { target, .. }) => {
if let Some(ast::ExprName { id: target_id, .. }) = target.as_name_expr() {
target_id == id
} else {
false
}
}
// Ex) `foo[0]: int = bar`
Stmt::AnnAssign(ast::StmtAnnAssign { target, .. }) => {
if let Some(ast::ExprSubscript { value: target, .. }) = target.as_subscript_expr() {
if let Some(ast::ExprName { id: target_id, .. }) = target.as_name_expr() {
return target_id == id;
}
}
false
}
// Ex) `del foo[0]`
Stmt::Delete(ast::StmtDelete { targets, .. }) => targets.iter().any(|target| {
if let Some(ast::ExprSubscript { value: target, .. }) = target.as_subscript_expr() {
if let Some(ast::ExprName { id: target_id, .. }) = target.as_name_expr() {
return target_id == id;
}
}
false
}),
_ => false,
}
}
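The comments in the hunk above gate the PERF101 fix on whether the loop body mutates the iterated list. A minimal Python sketch (illustrative only, not taken from the diff) of why dropping the `list()` cast is not behavior-preserving when the body appends to the same list:

```python
items = ["foo", "bar"]

# With the cast, iteration runs over a two-element snapshot, so the loop
# body executes exactly twice even though `items` keeps growing.
for item in list(items):
    items.append("baz")
assert len(items) == 4

# Without the cast, every appended element would also be visited and the
# loop would never terminate, which is why the fix must be skipped here.
# for item in items:
#     items.append("baz")
```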

View File

@@ -178,7 +178,7 @@ PERF101.py:34:10: PERF101 [*] Do not cast an iterable to `list` before iterating
34 |+for i in {1, 2, 3}: # PERF101
37 35 | pass
38 36 |
39 37 | for i in list(foo_dict): # OK
39 37 | for i in list(foo_dict): # Ok
PERF101.py:57:10: PERF101 [*] Do not cast an iterable to `list` before iterating over it
|
@@ -192,7 +192,7 @@ PERF101.py:57:10: PERF101 [*] Do not cast an iterable to `list` before iterating
= help: Remove `list()` cast
Safe fix
54 54 | for i in list(foo_list): # OK
54 54 | for i in list(foo_list): # Ok
55 55 | foo_list.append(i + 1)
56 56 |
57 |-for i in list(foo_list): # PERF101
@@ -218,7 +218,5 @@ PERF101.py:69:10: PERF101 [*] Do not cast an iterable to `list` before iterating
69 |-for i in list(nested_tuple): # PERF101
69 |+for i in nested_tuple: # PERF101
70 70 | pass
71 71 |
72 72 | for i in list(foo_list): # OK

View File

@@ -28,15 +28,6 @@ enum UnusedImportContext {
/// If an import statement is used to check for the availability or existence
/// of a module, consider using `importlib.util.find_spec` instead.
///
/// If an import statement is used to re-export a symbol as part of a module's
/// public interface, consider using a "redundant" import alias, which
/// instructs Ruff (and other tools) to respect the re-export, and avoid
/// marking it as unused, as in:
///
/// ```python
/// from module import member as member
/// ```
///
/// ## Example
/// ```python
/// import numpy as np # unused import
@@ -63,12 +54,11 @@ enum UnusedImportContext {
/// ```
///
/// ## Options
/// - `lint.ignore-init-module-imports`
/// - `lint.pyflakes.extend-generics`
///
/// ## References
/// - [Python documentation: `import`](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement)
/// - [Python documentation: `importlib.util.find_spec`](https://docs.python.org/3/library/importlib.html#importlib.util.find_spec)
/// - [Typing documentation: interface conventions](https://typing.readthedocs.io/en/latest/source/libraries.html#library-interface-public-and-private-symbols)
#[violation]
pub struct UnusedImport {
name: String,
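The F401 documentation above points at two alternatives for imports that only appear unused. A hedged sketch of both, using `numpy` purely as a stand-in for any optional dependency, with the re-export left commented out since no real `module` is assumed here:

```python
import importlib.util

# Availability check that does not leave an unused import behind.
if importlib.util.find_spec("numpy") is not None:
    print("numpy is available")

# Re-export pattern: the "redundant" alias signals to Ruff (and other
# tools) that the symbol is intentionally part of the public interface.
# from module import member as member
```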

View File

@@ -1,7 +1,6 @@
use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast, CmpOp, Expr};
use ruff_python_semantic::analyze::typing;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
@@ -26,8 +25,7 @@ use crate::checkers::ast::Checker;
/// ## Fix safety
/// This rule's fix is marked as unsafe, as the use of a `set` literal will
/// error at runtime if the sequence contains unhashable elements (like lists
/// or dictionaries). While Ruff will attempt to infer the hashability of the
/// elements, it may not always be able to do so.
/// or dictionaries).
///
/// ## References
/// - [What's New In Python 3.2](https://docs.python.org/3/whatsnew/3.2.html#optimizations)
@@ -59,40 +57,7 @@ pub(crate) fn literal_membership(checker: &mut Checker, compare: &ast::ExprCompa
return;
};
let elts = match right {
Expr::List(ast::ExprList { elts, .. }) => elts,
Expr::Tuple(ast::ExprTuple { elts, .. }) => elts,
_ => return,
};
// If `left`, or any of the elements in `right`, are known to _not_ be hashable, return.
if std::iter::once(compare.left.as_ref())
.chain(elts)
.any(|expr| match expr {
// Expressions that are known _not_ to be hashable.
Expr::List(_)
| Expr::Set(_)
| Expr::Dict(_)
| Expr::ListComp(_)
| Expr::SetComp(_)
| Expr::DictComp(_)
| Expr::GeneratorExp(_)
| Expr::Await(_)
| Expr::Yield(_)
| Expr::YieldFrom(_) => true,
// Expressions that can be _inferred_ not to be hashable.
Expr::Name(name) => {
let Some(id) = checker.semantic().resolve_name(name) else {
return false;
};
let binding = checker.semantic().binding(id);
typing::is_list(binding, checker.semantic())
|| typing::is_dict(binding, checker.semantic())
|| typing::is_set(binding, checker.semantic())
}
_ => false,
})
{
if !matches!(right, Expr::List(_) | Expr::Tuple(_)) {
return;
}
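The fix-safety note in the hunk above hinges on hashability. A short Python illustration (not part of the diff) of why rewriting a list or tuple as a set literal can fail at runtime:

```python
# Hashable elements: the set-literal rewrite is equivalent and fast.
print(1 in {1, 2, 3})

# Unhashable elements: building the set literal itself raises, which is
# why the autofix is marked unsafe when hashability cannot be inferred.
try:
    print([1, 2] in {[1, 2], [3, 4]})
except TypeError as exc:
    print(exc)  # unhashable type: 'list'
```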

View File

@@ -86,10 +86,8 @@ fn is_magic_value(literal_expr: LiteralExpressionRef, allowed_types: &[ConstantT
!matches!(value.to_str(), "" | "__main__")
}
LiteralExpressionRef::NumberLiteral(ast::ExprNumberLiteral { value, .. }) => match value {
#[allow(clippy::float_cmp)]
ast::Number::Float(value) => !(*value == 0.0 || *value == 1.0),
ast::Number::Int(value) => !matches!(*value, Int::ZERO | Int::ONE),
ast::Number::Complex { .. } => true,
_ => true,
},
LiteralExpressionRef::BytesLiteral(_) => true,
}

View File

@@ -52,7 +52,6 @@ pub(crate) fn non_ascii_name(binding: &Binding, locator: &Locator) -> Option<Dia
BindingKind::Assignment => Kind::Assignment,
BindingKind::TypeParam => Kind::TypeParam,
BindingKind::LoopVar => Kind::LoopVar,
BindingKind::ComprehensionVar => Kind::ComprenhensionVar,
BindingKind::WithItemVar => Kind::WithItemVar,
BindingKind::Global => Kind::Global,
BindingKind::Nonlocal(_) => Kind::Nonlocal,
@@ -89,7 +88,6 @@ enum Kind {
Assignment,
TypeParam,
LoopVar,
ComprenhensionVar,
WithItemVar,
Global,
Nonlocal,
@@ -107,7 +105,6 @@ impl fmt::Display for Kind {
Kind::Assignment => f.write_str("Variable"),
Kind::TypeParam => f.write_str("Type parameter"),
Kind::LoopVar => f.write_str("Variable"),
Kind::ComprenhensionVar => f.write_str("Variable"),
Kind::WithItemVar => f.write_str("Variable"),
Kind::Global => f.write_str("Global"),
Kind::Nonlocal => f.write_str("Nonlocal"),

View File

@@ -10,67 +10,49 @@ magic_value_comparison.py:5:4: PLR2004 Magic value used in comparison, consider
6 | pass
|
magic_value_comparison.py:47:12: PLR2004 Magic value used in comparison, consider replacing `2` with a constant variable
magic_value_comparison.py:38:12: PLR2004 Magic value used in comparison, consider replacing `2` with a constant variable
|
45 | pass
46 |
47 | if argc != 2: # [magic-value-comparison]
36 | pass
37 |
38 | if argc != 2: # [magic-value-comparison]
| ^ PLR2004
48 | pass
39 | pass
|
magic_value_comparison.py:50:12: PLR2004 Magic value used in comparison, consider replacing `-2` with a constant variable
magic_value_comparison.py:41:12: PLR2004 Magic value used in comparison, consider replacing `-2` with a constant variable
|
48 | pass
49 |
50 | if argc != -2: # [magic-value-comparison]
39 | pass
40 |
41 | if argc != -2: # [magic-value-comparison]
| ^^ PLR2004
51 | pass
42 | pass
|
magic_value_comparison.py:53:12: PLR2004 Magic value used in comparison, consider replacing `+2` with a constant variable
magic_value_comparison.py:44:12: PLR2004 Magic value used in comparison, consider replacing `+2` with a constant variable
|
51 | pass
52 |
53 | if argc != +2: # [magic-value-comparison]
42 | pass
43 |
44 | if argc != +2: # [magic-value-comparison]
| ^^ PLR2004
54 | pass
45 | pass
|
magic_value_comparison.py:56:12: PLR2004 Magic value used in comparison, consider replacing `-2.0` with a constant variable
magic_value_comparison.py:65:21: PLR2004 Magic value used in comparison, consider replacing `3.141592653589793238` with a constant variable
|
54 | pass
55 |
56 | if argc != -2.0: # [magic-value-comparison]
| ^^^^ PLR2004
57 | pass
|
magic_value_comparison.py:59:12: PLR2004 Magic value used in comparison, consider replacing `+2.0` with a constant variable
|
57 | pass
58 |
59 | if argc != +2.0: # [magic-value-comparison]
| ^^^^ PLR2004
60 | pass
|
magic_value_comparison.py:80:21: PLR2004 Magic value used in comparison, consider replacing `3.141592653589793238` with a constant variable
|
78 | pi_estimation = 3.14
79 |
80 | if pi_estimation == 3.141592653589793238: # [magic-value-comparison]
63 | pi_estimation = 3.14
64 |
65 | if pi_estimation == 3.141592653589793238: # [magic-value-comparison]
| ^^^^^^^^^^^^^^^^^^^^ PLR2004
81 | pass
66 | pass
|
magic_value_comparison.py:86:21: PLR2004 Magic value used in comparison, consider replacing `0x3` with a constant variable
magic_value_comparison.py:71:21: PLR2004 Magic value used in comparison, consider replacing `0x3` with a constant variable
|
84 | pass
85 |
86 | if pi_estimation == 0x3: # [magic-value-comparison]
69 | pass
70 |
71 | if pi_estimation == 0x3: # [magic-value-comparison]
| ^^^ PLR2004
87 | pass
72 | pass
|

View File

@@ -48,8 +48,8 @@ literal_membership.py:4:6: PLR6201 [*] Use a `set` literal when testing for memb
5 | | 1, 2, 3
6 | | )
| |_^ PLR6201
7 | fruits = ["cherry", "grapes"]
8 | "cherry" in fruits
7 |
8 | # OK
|
= help: Convert to `set`
@@ -62,29 +62,8 @@ literal_membership.py:4:6: PLR6201 [*] Use a `set` literal when testing for memb
5 5 | 1, 2, 3
6 |-)
6 |+}
7 7 | fruits = ["cherry", "grapes"]
8 8 | "cherry" in fruits
9 9 | _ = {key: value for key, value in {"a": 1, "b": 2}.items() if key in ("a", "b")}
literal_membership.py:9:70: PLR6201 [*] Use a `set` literal when testing for membership
|
7 | fruits = ["cherry", "grapes"]
8 | "cherry" in fruits
9 | _ = {key: value for key, value in {"a": 1, "b": 2}.items() if key in ("a", "b")}
| ^^^^^^^^^^ PLR6201
10 |
11 | # OK
|
= help: Convert to `set`
Unsafe fix
6 6 | )
7 7 | fruits = ["cherry", "grapes"]
8 8 | "cherry" in fruits
9 |-_ = {key: value for key, value in {"a": 1, "b": 2}.items() if key in ("a", "b")}
9 |+_ = {key: value for key, value in {"a": 1, "b": 2}.items() if key in {"a", "b"}}
10 10 |
11 11 | # OK
12 12 | fruits in [[1, 2, 3], [4, 5, 6]]
7 7 |
8 8 | # OK
9 9 | fruits = ["cherry", "grapes"]

View File

@@ -1,49 +1,31 @@
---
source: crates/ruff_linter/src/rules/pylint/mod.rs
---
magic_value_comparison.py:56:12: PLR2004 Magic value used in comparison, consider replacing `-2.0` with a constant variable
|
54 | pass
55 |
56 | if argc != -2.0: # [magic-value-comparison]
| ^^^^ PLR2004
57 | pass
|
magic_value_comparison.py:59:12: PLR2004 Magic value used in comparison, consider replacing `+2.0` with a constant variable
magic_value_comparison.py:59:22: PLR2004 Magic value used in comparison, consider replacing `"Hunter2"` with a constant variable
|
57 | pass
58 |
59 | if argc != +2.0: # [magic-value-comparison]
| ^^^^ PLR2004
59 | if input_password == "Hunter2": # correct
| ^^^^^^^^^ PLR2004
60 | pass
|
magic_value_comparison.py:74:22: PLR2004 Magic value used in comparison, consider replacing `"Hunter2"` with a constant variable
magic_value_comparison.py:65:21: PLR2004 Magic value used in comparison, consider replacing `3.141592653589793238` with a constant variable
|
72 | pass
73 |
74 | if input_password == "Hunter2": # correct
| ^^^^^^^^^ PLR2004
75 | pass
|
magic_value_comparison.py:80:21: PLR2004 Magic value used in comparison, consider replacing `3.141592653589793238` with a constant variable
|
78 | pi_estimation = 3.14
79 |
80 | if pi_estimation == 3.141592653589793238: # [magic-value-comparison]
63 | pi_estimation = 3.14
64 |
65 | if pi_estimation == 3.141592653589793238: # [magic-value-comparison]
| ^^^^^^^^^^^^^^^^^^^^ PLR2004
81 | pass
66 | pass
|
magic_value_comparison.py:92:18: PLR2004 Magic value used in comparison, consider replacing `b"something"` with a constant variable
magic_value_comparison.py:77:18: PLR2004 Magic value used in comparison, consider replacing `b"something"` with a constant variable
|
90 | user_input = b"Hello, There!"
91 |
92 | if user_input == b"something": # correct
75 | user_input = b"Hello, There!"
76 |
77 | if user_input == b"something": # correct
| ^^^^^^^^^^^^ PLR2004
93 | pass
78 | pass
|

View File

@@ -17,7 +17,6 @@ mod tests {
#[test_case(Rule::ReadWholeFile, Path::new("FURB101.py"))]
#[test_case(Rule::RepeatedAppend, Path::new("FURB113.py"))]
#[test_case(Rule::ReimplementedOperator, Path::new("FURB118.py"))]
#[test_case(Rule::ReadlinesInFor, Path::new("FURB129.py"))]
#[test_case(Rule::DeleteFullSlice, Path::new("FURB131.py"))]
#[test_case(Rule::CheckAndRemoveFromSet, Path::new("FURB132.py"))]
#[test_case(Rule::IfExprMinMax, Path::new("FURB136.py"))]

View File

@@ -9,7 +9,6 @@ pub(crate) use math_constant::*;
pub(crate) use metaclass_abcmeta::*;
pub(crate) use print_empty_string::*;
pub(crate) use read_whole_file::*;
pub(crate) use readlines_in_for::*;
pub(crate) use redundant_log_base::*;
pub(crate) use regex_flag_alias::*;
pub(crate) use reimplemented_operator::*;
@@ -31,7 +30,6 @@ mod math_constant;
mod metaclass_abcmeta;
mod print_empty_string;
mod read_whole_file;
mod readlines_in_for;
mod redundant_log_base;
mod regex_flag_alias;
mod reimplemented_operator;

View File

@@ -1,92 +0,0 @@
use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{Comprehension, Expr, StmtFor};
use ruff_python_semantic::analyze::typing;
use ruff_python_semantic::analyze::typing::is_io_base_expr;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for uses of `readlines()` when iterating over a file line-by-line.
///
/// ## Why is this bad?
/// Rather than iterating over all lines in a file by calling `readlines()`,
/// it's more convenient and performant to iterate over the file object
/// directly.
///
/// ## Example
/// ```python
/// with open("file.txt") as fp:
/// for line in fp.readlines():
/// ...
/// ```
///
/// Use instead:
/// ```python
/// with open("file.txt") as fp:
/// for line in fp:
/// ...
/// ```
///
/// ## References
/// - [Python documentation: `io.IOBase.readlines`](https://docs.python.org/3/library/io.html#io.IOBase.readlines)
#[violation]
pub(crate) struct ReadlinesInFor;
impl AlwaysFixableViolation for ReadlinesInFor {
#[derive_message_formats]
fn message(&self) -> String {
format!("Instead of calling `readlines()`, iterate over file object directly")
}
fn fix_title(&self) -> String {
"Remove `readlines()`".into()
}
}
/// FURB129
pub(crate) fn readlines_in_for(checker: &mut Checker, for_stmt: &StmtFor) {
readlines_in_iter(checker, for_stmt.iter.as_ref());
}
/// FURB129
pub(crate) fn readlines_in_comprehension(checker: &mut Checker, comprehension: &Comprehension) {
readlines_in_iter(checker, &comprehension.iter);
}
fn readlines_in_iter(checker: &mut Checker, iter_expr: &Expr) {
let Expr::Call(expr_call) = iter_expr else {
return;
};
let Expr::Attribute(expr_attr) = expr_call.func.as_ref() else {
return;
};
if expr_attr.attr.as_str() != "readlines" || !expr_call.arguments.is_empty() {
return;
}
// Determine whether `fp` in `fp.readlines()` was bound to a file object.
if let Expr::Name(name) = expr_attr.value.as_ref() {
if !checker
.semantic()
.resolve_name(name)
.map(|id| checker.semantic().binding(id))
.is_some_and(|binding| typing::is_io_base(binding, checker.semantic()))
{
return;
}
} else {
if !is_io_base_expr(expr_attr.value.as_ref(), checker.semantic()) {
return;
}
}
let mut diagnostic = Diagnostic::new(ReadlinesInFor, expr_call.range());
diagnostic.set_fix(Fix::unsafe_edit(Edit::range_deletion(
expr_call.range().add_start(expr_attr.value.range().len()),
)));
checker.diagnostics.push(diagnostic);
}

View File

@@ -1,207 +0,0 @@
---
source: crates/ruff_linter/src/rules/refurb/mod.rs
---
FURB129.py:7:18: FURB129 [*] Instead of calling `readlines()`, iterate over file object directly
|
5 | # Errors
6 | with open("FURB129.py") as f:
7 | for _line in f.readlines():
| ^^^^^^^^^^^^^ FURB129
8 | pass
9 | a = [line.lower() for line in f.readlines()]
|
= help: Remove `readlines()`
Unsafe fix
4 4 |
5 5 | # Errors
6 6 | with open("FURB129.py") as f:
7 |- for _line in f.readlines():
7 |+ for _line in f:
8 8 | pass
9 9 | a = [line.lower() for line in f.readlines()]
10 10 | b = {line.upper() for line in f.readlines()}
FURB129.py:9:35: FURB129 [*] Instead of calling `readlines()`, iterate over file object directly
|
7 | for _line in f.readlines():
8 | pass
9 | a = [line.lower() for line in f.readlines()]
| ^^^^^^^^^^^^^ FURB129
10 | b = {line.upper() for line in f.readlines()}
11 | c = {line.lower(): line.upper() for line in f.readlines()}
|
= help: Remove `readlines()`
Unsafe fix
6 6 | with open("FURB129.py") as f:
7 7 | for _line in f.readlines():
8 8 | pass
9 |- a = [line.lower() for line in f.readlines()]
9 |+ a = [line.lower() for line in f]
10 10 | b = {line.upper() for line in f.readlines()}
11 11 | c = {line.lower(): line.upper() for line in f.readlines()}
12 12 |
FURB129.py:10:35: FURB129 [*] Instead of calling `readlines()`, iterate over file object directly
|
8 | pass
9 | a = [line.lower() for line in f.readlines()]
10 | b = {line.upper() for line in f.readlines()}
| ^^^^^^^^^^^^^ FURB129
11 | c = {line.lower(): line.upper() for line in f.readlines()}
|
= help: Remove `readlines()`
Unsafe fix
7 7 | for _line in f.readlines():
8 8 | pass
9 9 | a = [line.lower() for line in f.readlines()]
10 |- b = {line.upper() for line in f.readlines()}
10 |+ b = {line.upper() for line in f}
11 11 | c = {line.lower(): line.upper() for line in f.readlines()}
12 12 |
13 13 | with Path("FURB129.py").open() as f:
FURB129.py:11:49: FURB129 [*] Instead of calling `readlines()`, iterate over file object directly
|
9 | a = [line.lower() for line in f.readlines()]
10 | b = {line.upper() for line in f.readlines()}
11 | c = {line.lower(): line.upper() for line in f.readlines()}
| ^^^^^^^^^^^^^ FURB129
12 |
13 | with Path("FURB129.py").open() as f:
|
= help: Remove `readlines()`
Unsafe fix
8 8 | pass
9 9 | a = [line.lower() for line in f.readlines()]
10 10 | b = {line.upper() for line in f.readlines()}
11 |- c = {line.lower(): line.upper() for line in f.readlines()}
11 |+ c = {line.lower(): line.upper() for line in f}
12 12 |
13 13 | with Path("FURB129.py").open() as f:
14 14 | for _line in f.readlines():
FURB129.py:14:18: FURB129 [*] Instead of calling `readlines()`, iterate over file object directly
|
13 | with Path("FURB129.py").open() as f:
14 | for _line in f.readlines():
| ^^^^^^^^^^^^^ FURB129
15 | pass
|
= help: Remove `readlines()`
Unsafe fix
11 11 | c = {line.lower(): line.upper() for line in f.readlines()}
12 12 |
13 13 | with Path("FURB129.py").open() as f:
14 |- for _line in f.readlines():
14 |+ for _line in f:
15 15 | pass
16 16 |
17 17 | for _line in open("FURB129.py").readlines():
FURB129.py:17:14: FURB129 [*] Instead of calling `readlines()`, iterate over file object directly
|
15 | pass
16 |
17 | for _line in open("FURB129.py").readlines():
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FURB129
18 | pass
|
= help: Remove `readlines()`
Unsafe fix
14 14 | for _line in f.readlines():
15 15 | pass
16 16 |
17 |-for _line in open("FURB129.py").readlines():
17 |+for _line in open("FURB129.py"):
18 18 | pass
19 19 |
20 20 | for _line in Path("FURB129.py").open().readlines():
FURB129.py:20:14: FURB129 [*] Instead of calling `readlines()`, iterate over file object directly
|
18 | pass
19 |
20 | for _line in Path("FURB129.py").open().readlines():
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FURB129
21 | pass
|
= help: Remove `readlines()`
Unsafe fix
17 17 | for _line in open("FURB129.py").readlines():
18 18 | pass
19 19 |
20 |-for _line in Path("FURB129.py").open().readlines():
20 |+for _line in Path("FURB129.py").open():
21 21 | pass
22 22 |
23 23 |
FURB129.py:26:18: FURB129 [*] Instead of calling `readlines()`, iterate over file object directly
|
24 | def func():
25 | f = Path("FURB129.py").open()
26 | for _line in f.readlines():
| ^^^^^^^^^^^^^ FURB129
27 | pass
28 | f.close()
|
= help: Remove `readlines()`
Unsafe fix
23 23 |
24 24 | def func():
25 25 | f = Path("FURB129.py").open()
26 |- for _line in f.readlines():
26 |+ for _line in f:
27 27 | pass
28 28 | f.close()
29 29 |
FURB129.py:32:18: FURB129 [*] Instead of calling `readlines()`, iterate over file object directly
|
31 | def func(f: io.BytesIO):
32 | for _line in f.readlines():
| ^^^^^^^^^^^^^ FURB129
33 | pass
|
= help: Remove `readlines()`
Unsafe fix
29 29 |
30 30 |
31 31 | def func(f: io.BytesIO):
32 |- for _line in f.readlines():
32 |+ for _line in f:
33 33 | pass
34 34 |
35 35 |
FURB129.py:38:22: FURB129 [*] Instead of calling `readlines()`, iterate over file object directly
|
36 | def func():
37 | with (open("FURB129.py") as f, foo as bar):
38 | for _line in f.readlines():
| ^^^^^^^^^^^^^ FURB129
39 | pass
40 | for _line in bar.readlines():
|
= help: Remove `readlines()`
Unsafe fix
35 35 |
36 36 | def func():
37 37 | with (open("FURB129.py") as f, foo as bar):
38 |- for _line in f.readlines():
38 |+ for _line in f:
39 39 | pass
40 40 | for _line in bar.readlines():
41 41 | pass

View File

@@ -52,15 +52,14 @@ use ruff_text_size::Ranged;
/// - [The Python Standard Library](https://docs.python.org/3/library/asyncio-task.html#asyncio.create_task)
#[violation]
pub struct AsyncioDanglingTask {
expr: String,
method: Method,
}
impl Violation for AsyncioDanglingTask {
#[derive_message_formats]
fn message(&self) -> String {
let AsyncioDanglingTask { expr, method } = self;
format!("Store a reference to the return value of `{expr}.{method}`")
let AsyncioDanglingTask { method } = self;
format!("Store a reference to the return value of `asyncio.{method}`")
}
}
@@ -81,35 +80,23 @@ pub(crate) fn asyncio_dangling_task(expr: &Expr, semantic: &SemanticModel) -> Op
})
{
return Some(Diagnostic::new(
AsyncioDanglingTask {
expr: "asyncio".to_string(),
method,
},
AsyncioDanglingTask { method },
expr.range(),
));
}
// Ex) `loop = ...; loop.create_task(...)`
// Ex) `loop = asyncio.get_running_loop(); loop.create_task(...)`
if let Expr::Attribute(ast::ExprAttribute { attr, value, .. }) = func.as_ref() {
if attr == "create_task" {
if let Expr::Name(name) = value.as_ref() {
if typing::resolve_assignment(value, semantic).is_some_and(|call_path| {
matches!(
call_path.as_slice(),
[
"asyncio",
"get_event_loop" | "get_running_loop" | "new_event_loop"
]
)
}) {
return Some(Diagnostic::new(
AsyncioDanglingTask {
expr: name.id.to_string(),
method: Method::CreateTask,
},
expr.range(),
));
}
if typing::resolve_assignment(value, semantic).is_some_and(|call_path| {
matches!(call_path.as_slice(), ["asyncio", "get_running_loop"])
}) {
return Some(Diagnostic::new(
AsyncioDanglingTask {
method: Method::CreateTask,
},
expr.range(),
));
}
}
}
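For context on the RUF006 message above ("Store a reference to the return value"), a minimal sketch of the pattern the rule steers toward, following the recommendation in the `asyncio` documentation:

```python
import asyncio

background_tasks = set()

async def main() -> None:
    # Keeping a strong reference prevents the task from being
    # garbage-collected mid-flight; the done callback drops it afterwards.
    task = asyncio.create_task(asyncio.sleep(0.1))
    background_tasks.add(task)
    task.add_done_callback(background_tasks.discard)
    await task

asyncio.run(main())
```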

View File

@@ -895,27 +895,6 @@ fn multiline_string_sequence_postlude<'a>(
};
let postlude = locator.slice(TextRange::new(postlude_start, dunder_all_range_end));
// If the postlude consists solely of a closing parenthesis
// (not preceded by any whitespace/newlines),
// plus possibly a single trailing comma prior to the parenthesis,
// fixup the postlude so that the parenthesis appears on its own line,
// and so that the final item has a trailing comma.
// This produces formatting more similar
// to that which the formatter would produce.
if postlude.len() <= 2 {
let mut reversed_postlude_chars = postlude.chars().rev();
if let Some(closing_paren @ (')' | '}' | ']')) = reversed_postlude_chars.next() {
if reversed_postlude_chars.next().map_or(true, |c| c == ',') {
return Cow::Owned(format!(",{newline}{leading_indent}{closing_paren}"));
}
}
}
let newline_chars = ['\r', '\n'];
if !postlude.starts_with(newline_chars) {
return Cow::Borrowed(postlude);
}
// The rest of this function uses heuristics to
// avoid very long indents for the closing paren
// that don't match the style for the rest of the
@@ -941,6 +920,10 @@ fn multiline_string_sequence_postlude<'a>(
// "y",
// ]
// ```
let newline_chars = ['\r', '\n'];
if !postlude.starts_with(newline_chars) {
return Cow::Borrowed(postlude);
}
if TextSize::of(leading_indentation(
postlude.trim_start_matches(newline_chars),
)) <= TextSize::of(item_indent)
@@ -948,7 +931,7 @@ fn multiline_string_sequence_postlude<'a>(
return Cow::Borrowed(postlude);
}
let trimmed_postlude = postlude.trim_start();
if trimmed_postlude.starts_with([']', ')', '}']) {
if trimmed_postlude.starts_with([']', ')']) {
return Cow::Owned(format!("{newline}{leading_indent}{trimmed_postlude}"));
}
Cow::Borrowed(postlude)

View File

@@ -25,7 +25,7 @@ RUF006.py:68:12: RUF006 Store a reference to the return value of `asyncio.create
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RUF006
|
RUF006.py:74:26: RUF006 Store a reference to the return value of `loop.create_task`
RUF006.py:74:26: RUF006 Store a reference to the return value of `asyncio.create_task`
|
72 | def f():
73 | loop = asyncio.get_running_loop()
@@ -33,7 +33,7 @@ RUF006.py:74:26: RUF006 Store a reference to the return value of `loop.create_ta
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RUF006
|
RUF006.py:97:5: RUF006 Store a reference to the return value of `loop.create_task`
RUF006.py:97:5: RUF006 Store a reference to the return value of `asyncio.create_task`
|
95 | def f():
96 | loop = asyncio.get_running_loop()
@@ -41,24 +41,4 @@ RUF006.py:97:5: RUF006 Store a reference to the return value of `loop.create_tas
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RUF006
|
RUF006.py:170:5: RUF006 Store a reference to the return value of `loop.create_task`
|
168 | def f():
169 | loop = asyncio.new_event_loop()
170 | loop.create_task(main()) # Error
| ^^^^^^^^^^^^^^^^^^^^^^^^ RUF006
171 |
172 | # Error
|
RUF006.py:175:5: RUF006 Store a reference to the return value of `loop.create_task`
|
173 | def f():
174 | loop = asyncio.get_event_loop()
175 | loop.create_task(main()) # Error
| ^^^^^^^^^^^^^^^^^^^^^^^^ RUF006
176 |
177 | # OK
|

View File

@@ -386,24 +386,18 @@ RUF022.py:54:11: RUF022 [*] `__all__` is not sorted
76 70 | "SUNDAY",
77 71 | "THURSDAY",
78 72 | "TUESDAY",
79 |- "TextCalendar",
80 73 | "WEDNESDAY",
73 |+ "WEDNESDAY",
74 |+ "Calendar",
75 |+ "Day",
76 |+ "HTMLCalendar",
77 |+ "IllegalMonthError",
78 |+ "LocaleHTMLCalendar",
79 |+ "Month",
80 |+ "TextCalendar",
79 80 | "TextCalendar",
80 |- "WEDNESDAY",
81 81 | "calendar",
82 82 | "timegm",
83 83 | "weekday",
84 |- "weekheader"]
84 |+ "weekheader",
85 |+]
85 86 |
86 87 | ##########################################
87 88 | # Messier multiline __all__ definitions...
RUF022.py:91:11: RUF022 [*] `__all__` is not sorted
|
@@ -565,11 +559,10 @@ RUF022.py:110:11: RUF022 [*] `__all__` is not sorted
151 |+ "register_error",
152 |+ "replace_errors",
153 |+ "strict_errors",
154 |+ "xmlcharrefreplace_errors",
155 |+]
124 156 |
125 157 | __all__: tuple[str, ...] = ( # a comment about the opening paren
126 158 | # multiline comment about "bbb" part 1
154 |+ "xmlcharrefreplace_errors"]
124 155 |
125 156 | __all__: tuple[str, ...] = ( # a comment about the opening paren
126 157 | # multiline comment about "bbb" part 1
RUF022.py:125:28: RUF022 [*] `__all__` is not sorted
|
@@ -925,13 +918,13 @@ RUF022.py:225:11: RUF022 [*] `__all__` is not sorted
223 223 | ############################################################
224 224 |
225 225 | __all__ = (
226 |+ "dumps",
226 227 | "loads",
226 |- "loads",
227 |- "dumps",)
228 |+)
228 229 |
229 230 | __all__ = [
230 231 | "loads",
226 |+ "dumps",
227 |+ "loads",)
228 228 |
229 229 | __all__ = [
230 230 | "loads",
RUF022.py:229:11: RUF022 [*] `__all__` is not sorted
|
@@ -1009,7 +1002,7 @@ RUF022.py:243:11: RUF022 [*] `__all__` is not sorted
251 | | )
| |_^ RUF022
252 |
253 | __all__ = ( # comment about the opening paren
253 | ###################################
|
= help: Apply an isort-style sorting to `__all__`
@@ -1028,53 +1021,4 @@ RUF022.py:243:11: RUF022 [*] `__all__` is not sorted
250 249 | ,
251 250 | )
RUF022.py:253:11: RUF022 [*] `__all__` is not sorted
|
251 | )
252 |
253 | __all__ = ( # comment about the opening paren
| ___________^
254 | | # multiline strange comment 0a
255 | | # multiline strange comment 0b
256 | | "foo" # inline comment about foo
257 | | # multiline strange comment 1a
258 | | # multiline strange comment 1b
259 | | , # comment about the comma??
260 | | # comment about bar part a
261 | | # comment about bar part b
262 | | "bar" # inline comment about bar
263 | | # strange multiline comment comment 2a
264 | | # strange multiline comment 2b
265 | | ,
266 | | # strange multiline comment 3a
267 | | # strange multiline comment 3b
268 | | ) # comment about the closing paren
| |_^ RUF022
269 |
270 | ###################################
|
= help: Apply an isort-style sorting to `__all__`
Safe fix
251 251 | )
252 252 |
253 253 | __all__ = ( # comment about the opening paren
254 |- # multiline strange comment 0a
255 |- # multiline strange comment 0b
256 |- "foo" # inline comment about foo
257 254 | # multiline strange comment 1a
258 255 | # multiline strange comment 1b
259 |- , # comment about the comma??
256 |+ # comment about the comma??
260 257 | # comment about bar part a
261 258 | # comment about bar part b
262 |- "bar" # inline comment about bar
259 |+ "bar", # inline comment about bar
260 |+ # multiline strange comment 0a
261 |+ # multiline strange comment 0b
262 |+ "foo" # inline comment about foo
263 263 | # strange multiline comment comment 2a
264 264 | # strange multiline comment 2b
265 265 | ,

View File

@@ -564,11 +564,10 @@ RUF023.py:162:17: RUF023 [*] `BezierBuilder.__slots__` is not sorted
162 |+ __slots__ = (
163 |+ 'canvas',
164 |+ 'xp',
165 |+ 'yp',
166 |+ )
164 167 |
165 168 | class BezierBuilder2:
166 169 | __slots__ = {'xp', 'yp',
165 |+ 'yp',)
164 166 |
165 167 | class BezierBuilder2:
166 168 | __slots__ = {'xp', 'yp',
RUF023.py:166:17: RUF023 [*] `BezierBuilder2.__slots__` is not sorted
|
@@ -644,7 +643,7 @@ RUF023.py:181:17: RUF023 [*] `BezierBuilder4.__slots__` is not sorted
189 | | )
| |_____^ RUF023
190 |
191 | __slots__ = {"foo", "bar",
191 | ###################################
|
= help: Apply a natural sort to `BezierBuilder4.__slots__`
@@ -663,35 +662,4 @@ RUF023.py:181:17: RUF023 [*] `BezierBuilder4.__slots__` is not sorted
188 187 | ,
189 188 | )
RUF023.py:191:17: RUF023 [*] `BezierBuilder4.__slots__` is not sorted
|
189 | )
190 |
191 | __slots__ = {"foo", "bar",
| _________________^
192 | | "baz", "bingo"
193 | | }
| |__________________^ RUF023
194 |
195 | ###################################
|
= help: Apply a natural sort to `BezierBuilder4.__slots__`
Safe fix
188 188 | ,
189 189 | )
190 190 |
191 |- __slots__ = {"foo", "bar",
192 |- "baz", "bingo"
193 |- }
191 |+ __slots__ = {
192 |+ "bar",
193 |+ "baz",
194 |+ "bingo",
195 |+ "foo"
196 |+ }
194 197 |
195 198 | ###################################
196 199 | # These should all not get flagged:

View File

@@ -534,7 +534,7 @@ impl SerializationFormat {
}
}
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize, Hash)]
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize, Hash)]
#[serde(try_from = "String")]
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
pub struct Version(String);

View File

@@ -92,9 +92,7 @@ pub fn derive_message_formats(_attr: TokenStream, item: TokenStream) -> TokenStr
///
/// Good:
///
/// ```ignore
/// use ruff_macros::newtype_index;
///
/// ```rust
/// #[newtype_index]
/// #[derive(Ord, PartialOrd)]
/// struct MyIndex;

View File

@@ -11,7 +11,6 @@ repository = { workspace = true }
license = { workspace = true }
[lib]
doctest = false
[dependencies]
ruff_diagnostics = { path = "../ruff_diagnostics" }

View File

@@ -935,7 +935,7 @@ where
}
}
/// A [`Visitor`] that collects all `return` statements in a function or method.
/// A [`StatementVisitor`] that collects all `return` statements in a function or method.
#[derive(Default)]
pub struct ReturnStatementVisitor<'a> {
pub returns: Vec<&'a ast::StmtReturn>,

View File

@@ -11,7 +11,6 @@ repository = { workspace = true }
license = { workspace = true }
[lib]
doctest = false
[dependencies]
ruff_python_ast = { path = "../ruff_python_ast" }

View File

@@ -1705,7 +1705,6 @@ class Foo:
assert_round_trip!(r#"f"{ chr(65) = !s}""#);
assert_round_trip!(r#"f"{ chr(65) = !r}""#);
assert_round_trip!(r#"f"{ chr(65) = :#x}""#);
assert_round_trip!(r#"f"{ ( chr(65) ) = }""#);
assert_round_trip!(r#"f"{a=!r:0.05f}""#);
}

View File

@@ -10,9 +10,6 @@ documentation = { workspace = true }
repository = { workspace = true }
license = { workspace = true }
[lib]
doctest = false
[dependencies]
ruff_cache = { path = "../ruff_cache" }
ruff_formatter = { path = "../ruff_formatter" }

View File

@@ -4,8 +4,4 @@ ij_formatter_enabled = false
["range_formatting/*.py"]
generated_code = true
ij_formatter_enabled = false
[docstring_tab_indentation.py]
generated_code = true
ij_formatter_enabled = false

View File

@@ -1,10 +0,0 @@
[
{
"indent_style": "tab",
"indent_width": 4
},
{
"indent_style": "tab",
"indent_width": 8
}
]

View File

@@ -1,72 +0,0 @@
# Tests the behavior of the formatter when it comes to tabs inside docstrings
# when using `indent_style="tab"`
# The example below uses tabs exclusively. The formatter should preserve the tab indentation
# of `arg1`.
def tab_argument(arg1: str) -> None:
"""
Arguments:
arg1: super duper arg with 2 tabs in front
"""
# The `arg1` is indented with spaces. The formatter should not change the spaces to a tab
# because it must assume that the spaces are used for alignment and not indentation.
def space_argument(arg1: str) -> None:
"""
Arguments:
arg1: super duper arg with a tab and a space in front
"""
def under_indented(arg1: str) -> None:
"""
Arguments:
arg1: super duper arg with a tab and a space in front
arg2: Not properly indented
"""
def under_indented_tabs(arg1: str) -> None:
"""
Arguments:
arg1: super duper arg with a tab and a space in front
arg2: Not properly indented
"""
def spaces_tabs_over_indent(arg1: str) -> None:
"""
Arguments:
arg1: super duper arg with a tab and a space in front
"""
# The docstring itself is indented with spaces but the argument is indented by a tab.
# Keep the tab indentation of the argument, convert the docstring indent to tabs.
def space_indented_docstring_containing_tabs(arg1: str) -> None:
"""
Arguments:
arg1: super duper arg
"""
# The docstring uses tabs, spaces, tabs indentation.
# Fallback to use space indentation
def mixed_indentation(arg1: str) -> None:
"""
Arguments:
arg1: super duper arg with a tab and a space in front
"""
# The example shows an ascii art. The formatter should not change the spaces
# to tabs because it breaks the ASCII art when inspecting the docstring with `inspect.cleandoc(ascii_art.__doc__)`
# when using an indent width other than 8.
def ascii_art():
r"""
Look at this beautiful tree.
a
/ \
b c
/ \
d e
"""

View File

@@ -1,8 +0,0 @@
[
{
"source_type": "Ipynb"
},
{
"source_type": "Python"
}
]

View File

@@ -1,6 +0,0 @@
"""
This looks like a docstring, but in a notebook it is not one because notebooks can't be imported as a module.
Ruff should leave it as is
""";
"another normal string"

View File

@@ -13,11 +13,10 @@ use ruff_text_size::{Ranged, TextRange};
use crate::comments::{leading_comments, trailing_comments, Comments, SourceComment};
use crate::expression::parentheses::{
in_parentheses_only_group, in_parentheses_only_if_group_breaks, in_parentheses_only_indent_end,
in_parentheses_only_indent_start, in_parentheses_only_soft_line_break,
in_parentheses_only_soft_line_break_or_space, is_expression_parenthesized,
write_in_parentheses_only_group_end_tag, write_in_parentheses_only_group_start_tag,
Parentheses,
in_parentheses_only_group, in_parentheses_only_if_group_breaks,
in_parentheses_only_soft_line_break, in_parentheses_only_soft_line_break_or_space,
is_expression_parenthesized, write_in_parentheses_only_group_end_tag,
write_in_parentheses_only_group_start_tag, Parentheses,
};
use crate::expression::OperatorPrecedence;
use crate::prelude::*;
@@ -288,7 +287,7 @@ impl Format<PyFormatContext<'_>> for BinaryLike<'_> {
let flat_binary = self.flatten(&comments, f.context().source());
if self.is_bool_op() {
return in_parentheses_only_group(&flat_binary).fmt(f);
return in_parentheses_only_group(&&*flat_binary).fmt(f);
}
let source = f.context().source();
@@ -482,7 +481,7 @@ impl Format<PyFormatContext<'_>> for BinaryLike<'_> {
// Finish the group that wraps all implicit concatenated strings
write_in_parentheses_only_group_end_tag(f);
} else {
in_parentheses_only_group(&flat_binary).fmt(f)?;
in_parentheses_only_group(&&*flat_binary).fmt(f)?;
}
Ok(())
@@ -528,12 +527,6 @@ impl<'a> Deref for FlatBinaryExpression<'a> {
}
}
impl Format<PyFormatContext<'_>> for FlatBinaryExpression<'_> {
fn fmt(&self, f: &mut Formatter<PyFormatContext<'_>>) -> FormatResult<()> {
Format::fmt(&**self, f)
}
}
/// Binary chain represented as a flat vector where operands are stored at even indices and operators
/// at odd indices.
///
@@ -649,7 +642,7 @@ impl<'a> FlatBinaryExpressionSlice<'a> {
}
/// Formats a binary chain slice by inserting soft line breaks before the lowest-precedence operators.
/// In other words: It splits the line before the lowest precedence operators (and it either splits
/// In other words: It splits the line before by the lowest precedence operators (and it either splits
/// all of them or none). For example, the lowest precedence operator for `a + b * c + d` is the `+` operator.
/// The expression either gets formatted as `a + b * c + d` if it fits on the line or as
/// ```python
@@ -685,64 +678,59 @@ impl Format<PyFormatContext<'_>> for FlatBinaryExpressionSlice<'_> {
let mut last_operator: Option<OperatorIndex> = None;
let lowest_precedence = self.lowest_precedence();
let lowest_precedence_operators = self
.operators()
.filter(|(_, operator)| operator.precedence() == lowest_precedence);
for (index, operator_part) in lowest_precedence_operators {
let left = self.between_operators(last_operator, index);
let right = self.after_operator(index);
for (index, operator_part) in self.operators() {
if operator_part.precedence() == lowest_precedence {
let left = self.between_operators(last_operator, index);
let right = self.after_operator(index);
let is_pow = operator_part.symbol.is_pow()
&& is_simple_power_expression(
left.last_operand().expression(),
right.first_operand().expression(),
f.context().comments().ranges(),
f.context().source(),
);
let is_pow = operator_part.symbol.is_pow()
&& is_simple_power_expression(
left.last_operand().expression(),
right.first_operand().expression(),
f.context().comments().ranges(),
f.context().source(),
);
if let Some(leading) = left.first_operand().leading_binary_comments() {
leading_comments(leading).fmt(f)?;
}
match &left.0 {
[OperandOrOperator::Operand(operand)] => operand.fmt(f)?,
_ => in_parentheses_only_group(&left).fmt(f)?,
}
if last_operator.is_none() {
in_parentheses_only_indent_start().fmt(f)?;
}
if let Some(trailing) = left.last_operand().trailing_binary_comments() {
trailing_comments(trailing).fmt(f)?;
}
if is_pow {
in_parentheses_only_soft_line_break().fmt(f)?;
} else {
in_parentheses_only_soft_line_break_or_space().fmt(f)?;
}
operator_part.fmt(f)?;
// Format the operator on its own line if the right side has any leading comments.
if operator_part.has_trailing_comments()
|| right.first_operand().has_unparenthesized_leading_comments(
f.context().comments(),
f.context().source(),
)
{
hard_line_break().fmt(f)?;
} else if is_pow {
if is_fix_power_op_line_length_enabled(f.context()) {
in_parentheses_only_if_group_breaks(&space()).fmt(f)?;
if let Some(leading) = left.first_operand().leading_binary_comments() {
leading_comments(leading).fmt(f)?;
}
} else {
space().fmt(f)?;
}
last_operator = Some(index);
match &left.0 {
[OperandOrOperator::Operand(operand)] => operand.fmt(f)?,
_ => in_parentheses_only_group(&left).fmt(f)?,
}
if let Some(trailing) = left.last_operand().trailing_binary_comments() {
trailing_comments(trailing).fmt(f)?;
}
if is_pow {
in_parentheses_only_soft_line_break().fmt(f)?;
} else {
in_parentheses_only_soft_line_break_or_space().fmt(f)?;
}
operator_part.fmt(f)?;
// Format the operator on its own line if the right side has any leading comments.
if operator_part.has_trailing_comments()
|| right.first_operand().has_unparenthesized_leading_comments(
f.context().comments(),
f.context().source(),
)
{
hard_line_break().fmt(f)?;
} else if is_pow {
if is_fix_power_op_line_length_enabled(f.context()) {
in_parentheses_only_if_group_breaks(&space()).fmt(f)?;
}
} else {
space().fmt(f)?;
}
last_operator = Some(index);
}
}
// Format the last right side
@@ -757,11 +745,9 @@ impl Format<PyFormatContext<'_>> for FlatBinaryExpressionSlice<'_> {
}
match &right.0 {
[OperandOrOperator::Operand(operand)] => operand.fmt(f)?,
_ => in_parentheses_only_group(&right).fmt(f)?,
[OperandOrOperator::Operand(operand)] => operand.fmt(f),
_ => in_parentheses_only_group(&right).fmt(f),
}
in_parentheses_only_indent_end().fmt(f)
}
}
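
The doc comment above describes the splitting rule independently of the formatter IR: find the lowest precedence among the chain's operators and break only before operators at that precedence, all of them or none. A toy sketch of that grouping, using stand-in types rather than the crate's `FlatBinaryExpressionSlice`:

```rust
/// Stand-in operator type with just enough precedence information for the sketch.
#[derive(Copy, Clone)]
enum Op {
    Add,
    Mul,
}

impl Op {
    fn precedence(self) -> u8 {
        match self {
            Op::Add => 1,
            Op::Mul => 2,
        }
    }

    fn symbol(self) -> &'static str {
        match self {
            Op::Add => "+",
            Op::Mul => "*",
        }
    }
}

/// `operands.len() == operators.len() + 1`, mirroring the flat layout where
/// operands sit at even indices and operators at odd indices.
fn split_groups(operands: &[&str], operators: &[Op]) -> Vec<String> {
    // A single operand has nothing to split.
    let Some(lowest) = operators.iter().map(|op| op.precedence()).min() else {
        return vec![operands[0].to_string()];
    };
    let mut groups = vec![operands[0].to_string()];
    for (op, operand) in operators.iter().zip(&operands[1..]) {
        if op.precedence() == lowest {
            // A soft line break would be emitted here; start a new group.
            groups.push(format!("{} {operand}", op.symbol()));
        } else {
            // Higher-precedence operators stay glued to the current group.
            let last = groups.last_mut().unwrap();
            last.push_str(&format!(" {} {operand}", op.symbol()));
        }
    }
    groups
}

fn main() {
    // `a + b * c + d` splits before the two `+` operators only.
    let groups = split_groups(&["a", "b", "c", "d"], &[Op::Add, Op::Mul, Op::Add]);
    assert_eq!(groups, vec!["a", "+ b * c", "+ d"]);
}
```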

View File

@@ -379,42 +379,6 @@ where
})
}
pub(super) fn in_parentheses_only_indent_start<'a>() -> impl Format<PyFormatContext<'a>> {
format_with(|f: &mut PyFormatter| {
match f.context().node_level() {
NodeLevel::TopLevel(_) | NodeLevel::CompoundStatement | NodeLevel::Expression(None) => {
// no-op, not parenthesized
}
NodeLevel::Expression(Some(parentheses_id)) => f.write_element(FormatElement::Tag(
Tag::StartIndentIfGroupBreaks(parentheses_id),
)),
NodeLevel::ParenthesizedExpression => {
f.write_element(FormatElement::Tag(Tag::StartIndent))
}
}
Ok(())
})
}
pub(super) fn in_parentheses_only_indent_end<'a>() -> impl Format<PyFormatContext<'a>> {
format_with(|f: &mut PyFormatter| {
match f.context().node_level() {
NodeLevel::TopLevel(_) | NodeLevel::CompoundStatement | NodeLevel::Expression(None) => {
// no-op, not parenthesized
}
NodeLevel::Expression(Some(_)) => {
f.write_element(FormatElement::Tag(Tag::EndIndentIfGroupBreaks))
}
NodeLevel::ParenthesizedExpression => {
f.write_element(FormatElement::Tag(Tag::EndIndent))
}
}
Ok(())
})
}
/// Format comments inside empty parentheses, brackets or curly braces.
///
/// Empty `()`, `[]` and `{}` are special because there can be dangling comments, and they can be in

View File

@@ -248,12 +248,6 @@ pub enum QuoteStyle {
Preserve,
}
impl QuoteStyle {
pub const fn is_preserve(self) -> bool {
matches!(self, QuoteStyle::Preserve)
}
}
impl fmt::Display for QuoteStyle {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {

View File

@@ -2,7 +2,8 @@ use ruff_python_ast::BytesLiteral;
use ruff_text_size::Ranged;
use crate::prelude::*;
use crate::string::{StringNormalizer, StringPart};
use crate::preview::is_hex_codes_in_unicode_sequences_enabled;
use crate::string::{Quoting, StringPart};
#[derive(Default)]
pub struct FormatBytesLiteral;
@@ -11,9 +12,14 @@ impl FormatNodeRule<BytesLiteral> for FormatBytesLiteral {
fn fmt_fields(&self, item: &BytesLiteral, f: &mut PyFormatter) -> FormatResult<()> {
let locator = f.context().locator();
StringNormalizer::from_context(f.context())
.with_preferred_quote_style(f.options().quote_style())
.normalize(&StringPart::from_source(item.range(), &locator), &locator)
StringPart::from_source(item.range(), &locator)
.normalize(
Quoting::CanChange,
&locator,
f.options().quote_style(),
f.context().docstring(),
is_hex_codes_in_unicode_sequences_enabled(f.context()),
)
.fmt(f)
}
}

View File

@@ -2,7 +2,8 @@ use ruff_python_ast::FString;
use ruff_text_size::Ranged;
use crate::prelude::*;
use crate::string::{Quoting, StringNormalizer, StringPart};
use crate::preview::is_hex_codes_in_unicode_sequences_enabled;
use crate::string::{Quoting, StringPart};
/// Formats an f-string which is part of a larger f-string expression.
///
@@ -25,12 +26,13 @@ impl Format<PyFormatContext<'_>> for FormatFString<'_> {
fn fmt(&self, f: &mut PyFormatter) -> FormatResult<()> {
let locator = f.context().locator();
let result = StringNormalizer::from_context(f.context())
.with_quoting(self.quoting)
.with_preferred_quote_style(f.options().quote_style())
let result = StringPart::from_source(self.value.range(), &locator)
.normalize(
&StringPart::from_source(self.value.range(), &locator),
self.quoting,
&locator,
f.options().quote_style(),
f.context().docstring(),
is_hex_codes_in_unicode_sequences_enabled(f.context()),
)
.fmt(f);

View File

@@ -2,7 +2,8 @@ use ruff_python_ast::StringLiteral;
use ruff_text_size::Ranged;
use crate::prelude::*;
use crate::string::{docstring, Quoting, StringNormalizer, StringPart};
use crate::preview::is_hex_codes_in_unicode_sequences_enabled;
use crate::string::{docstring, Quoting, StringPart};
use crate::QuoteStyle;
pub(crate) struct FormatStringLiteral<'a> {
@@ -49,22 +50,20 @@ impl Format<PyFormatContext<'_>> for FormatStringLiteral<'_> {
fn fmt(&self, f: &mut PyFormatter) -> FormatResult<()> {
let locator = f.context().locator();
let quote_style = f.options().quote_style();
let quote_style = if self.layout.is_docstring() && !quote_style.is_preserve() {
// Per PEP 8 and PEP 257, always prefer double quotes for docstrings,
// except when using quote-style=preserve
let quote_style = if self.layout.is_docstring() {
// Per PEP 8 and PEP 257, always prefer double quotes for docstrings
QuoteStyle::Double
} else {
quote_style
f.options().quote_style()
};
let normalized = StringNormalizer::from_context(f.context())
.with_quoting(self.layout.quoting())
.with_preferred_quote_style(quote_style)
.normalize(
&StringPart::from_source(self.value.range(), &locator),
&locator,
);
let normalized = StringPart::from_source(self.value.range(), &locator).normalize(
self.layout.quoting(),
&locator,
quote_style,
f.context().docstring(),
is_hex_codes_in_unicode_sequences_enabled(f.context()),
);
if self.layout.is_docstring() {
docstring::format(&normalized, f)

View File

@@ -214,9 +214,9 @@ impl<'ast> PreorderVisitor<'ast> for FindEnclosingNode<'_, 'ast> {
// Don't pick potential docstrings as the closest enclosing node because `suite.rs` then fails to identify them as
// docstrings and docstring formatting won't kick in.
// Format the enclosing node instead and slice the formatted docstring from the result.
let is_maybe_docstring = node.as_stmt_expr().is_some_and(|stmt| {
DocstringStmt::is_docstring_statement(stmt, self.context.options().source_type())
});
let is_maybe_docstring = node
.as_stmt_expr()
.is_some_and(|stmt| DocstringStmt::is_docstring_statement(stmt));
if is_maybe_docstring {
return TraversalSignal::Skip;

View File

@@ -103,9 +103,7 @@ impl FormatRule<Suite, PyFormatContext<'_>> for FormatSuite {
}
SuiteKind::Function => {
if let Some(docstring) =
DocstringStmt::try_from_statement(first, self.kind, source_type)
{
if let Some(docstring) = DocstringStmt::try_from_statement(first, self.kind) {
SuiteChildStatement::Docstring(docstring)
} else {
SuiteChildStatement::Other(first)
@@ -113,9 +111,7 @@ impl FormatRule<Suite, PyFormatContext<'_>> for FormatSuite {
}
SuiteKind::Class => {
if let Some(docstring) =
DocstringStmt::try_from_statement(first, self.kind, source_type)
{
if let Some(docstring) = DocstringStmt::try_from_statement(first, self.kind) {
if !comments.has_leading(first)
&& lines_before(first.start(), source) > 1
&& !source_type.is_stub()
@@ -147,9 +143,7 @@ impl FormatRule<Suite, PyFormatContext<'_>> for FormatSuite {
}
SuiteKind::TopLevel => {
if is_format_module_docstring_enabled(f.context()) {
if let Some(docstring) =
DocstringStmt::try_from_statement(first, self.kind, source_type)
{
if let Some(docstring) = DocstringStmt::try_from_statement(first, self.kind) {
SuiteChildStatement::Docstring(docstring)
} else {
SuiteChildStatement::Other(first)
@@ -190,8 +184,7 @@ impl FormatRule<Suite, PyFormatContext<'_>> for FormatSuite {
true
} else if is_module_docstring_newlines_enabled(f.context())
&& self.kind == SuiteKind::TopLevel
&& DocstringStmt::try_from_statement(first.statement(), self.kind, source_type)
.is_some()
&& DocstringStmt::try_from_statement(first.statement(), self.kind).is_some()
{
// Only in preview mode, insert a newline after a module level docstring, but treat
// it as a docstring otherwise. See: https://github.com/psf/black/pull/3932.
@@ -741,16 +734,7 @@ pub(crate) struct DocstringStmt<'a> {
impl<'a> DocstringStmt<'a> {
/// Checks if the statement is a simple string that can be formatted as a docstring
fn try_from_statement(
stmt: &'a Stmt,
suite_kind: SuiteKind,
source_type: PySourceType,
) -> Option<DocstringStmt<'a>> {
// Notebooks don't have a concept of modules, therefore, don't recognise the first string as the module docstring.
if source_type.is_ipynb() && suite_kind == SuiteKind::TopLevel {
return None;
}
fn try_from_statement(stmt: &'a Stmt, suite_kind: SuiteKind) -> Option<DocstringStmt<'a>> {
let Stmt::Expr(ast::StmtExpr { value, .. }) = stmt else {
return None;
};
@@ -768,11 +752,7 @@ impl<'a> DocstringStmt<'a> {
}
}
pub(crate) fn is_docstring_statement(stmt: &StmtExpr, source_type: PySourceType) -> bool {
if source_type.is_ipynb() {
return false;
}
pub(crate) fn is_docstring_statement(stmt: &StmtExpr) -> bool {
if let Expr::StringLiteral(ast::ExprStringLiteral { value, .. }) = stmt.value.as_ref() {
!value.is_implicit_concatenated()
} else {
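
One rule spelled out in the hunk above: notebooks have no module scope, so the first string expression of an `.ipynb` cell is never recognised as a module docstring. A toy sketch of that gate, with stand-in types rather than the crate's `PySourceType` and `SuiteKind`:

```rust
#[derive(Copy, Clone, PartialEq, Eq)]
enum PySourceType {
    Python,
    Ipynb,
}

/// Returns whether a leading string expression may be treated as a module docstring.
fn recognizes_module_docstring(source_type: PySourceType, is_top_level: bool) -> bool {
    // Notebooks can't be imported as modules, so skip detection at the top level.
    !(source_type == PySourceType::Ipynb && is_top_level)
}

fn main() {
    assert!(recognizes_module_docstring(PySourceType::Python, true));
    assert!(!recognizes_module_docstring(PySourceType::Ipynb, true));
    // Nested suites (functions, classes) still detect docstrings in notebooks.
    assert!(recognizes_module_docstring(PySourceType::Ipynb, false));
}
```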

View File

@@ -2,13 +2,11 @@
// "reStructuredText."
#![allow(clippy::doc_markdown)]
use std::cmp::Ordering;
use std::{borrow::Cow, collections::VecDeque};
use itertools::Itertools;
use ruff_formatter::printer::SourceMapGeneration;
use ruff_python_parser::ParseError;
use {once_cell::sync::Lazy, regex::Regex};
use {
ruff_formatter::{write, FormatOptions, IndentStyle, LineWidth, Printed},
@@ -82,7 +80,9 @@ use super::{NormalizedString, QuoteChar};
/// ```
///
/// Tabs are counted by padding them to the next multiple of 8 according to
/// [`str.expandtabs`](https://docs.python.org/3/library/stdtypes.html#str.expandtabs).
/// [`str.expandtabs`](https://docs.python.org/3/library/stdtypes.html#str.expandtabs). When
/// we see indentation that contains a tab or any whitespace other than an ASCII space, we rewrite the
/// string.
///
/// Additionally, if any line in the docstring has less indentation than the docstring
/// (effectively a negative indentation wrt. to the current level), we pad all lines to the
@@ -104,10 +104,6 @@ use super::{NormalizedString, QuoteChar};
/// line c
/// """
/// ```
/// The indentation is rewritten to all-spaces when using [`IndentStyle::Space`].
/// The formatter preserves tab-indentations when using [`IndentStyle::Tab`], but doesn't convert
/// `indent-width * spaces` to tabs because doing so could break ASCII art and other docstrings
/// that use spaces for alignment.
pub(crate) fn format(normalized: &NormalizedString, f: &mut PyFormatter) -> FormatResult<()> {
let docstring = &normalized.text;
@@ -180,19 +176,19 @@ pub(crate) fn format(normalized: &NormalizedString, f: &mut PyFormatter) -> Form
// align it with the docstring statement. Conversely, if all lines are over-indented, we strip
// the extra indentation. We call this stripped indentation since it's relative to the block
// indent printer-made indentation.
let stripped_indentation = lines
let stripped_indentation_length = lines
.clone()
// We don't want to count whitespace-only lines as mis-indented
.filter(|line| !line.trim().is_empty())
.map(Indentation::from_str)
.min_by_key(|indentation| indentation.width())
.map(indentation_length)
.min()
.unwrap_or_default();
DocstringLinePrinter {
f,
action_queue: VecDeque::new(),
offset,
stripped_indentation,
stripped_indentation_length,
already_normalized,
quote_char: normalized.quotes.quote_char,
code_example: CodeExample::default(),
@@ -246,7 +242,7 @@ struct DocstringLinePrinter<'ast, 'buf, 'fmt, 'src> {
/// Indentation alignment based on the least indented line in the
/// docstring.
stripped_indentation: Indentation,
stripped_indentation_length: TextSize,
/// Whether the docstring is overall already considered normalized. When it
/// is, the formatter can take a fast path.
@@ -349,7 +345,7 @@ impl<'ast, 'buf, 'fmt, 'src> DocstringLinePrinter<'ast, 'buf, 'fmt, 'src> {
};
// This looks suspicious, but it's consistent with the whitespace
// normalization that will occur anyway.
let indent = " ".repeat(min_indent.width());
let indent = " ".repeat(min_indent.to_usize());
for docline in formatted_lines {
self.print_one(
&docline.map(|line| std::format!("{indent}{line}")),
@@ -359,7 +355,7 @@ impl<'ast, 'buf, 'fmt, 'src> DocstringLinePrinter<'ast, 'buf, 'fmt, 'src> {
CodeExampleKind::Markdown(fenced) => {
// This looks suspicious, but it's consistent with the whitespace
// normalization that will occur anyway.
let indent = " ".repeat(fenced.opening_fence_indent.width());
let indent = " ".repeat(fenced.opening_fence_indent.to_usize());
for docline in formatted_lines {
self.print_one(
&docline.map(|line| std::format!("{indent}{line}")),
@@ -391,58 +387,12 @@ impl<'ast, 'buf, 'fmt, 'src> DocstringLinePrinter<'ast, 'buf, 'fmt, 'src> {
};
}
let indent_offset = match self.f.options().indent_style() {
// Normalize all indent to spaces.
IndentStyle::Space => {
let tab_or_non_ascii_space = trim_end
.chars()
.take_while(|c| c.is_whitespace())
.any(|c| c != ' ');
let tab_or_non_ascii_space = trim_end
.chars()
.take_while(|c| c.is_whitespace())
.any(|c| c != ' ');
if tab_or_non_ascii_space {
None
} else {
// It's guaranteed that the `indent` is all spaces because `tab_or_non_ascii_space` is
// `false` (indent contains neither tabs nor non-space whitespace).
let stripped_indentation_len = self.stripped_indentation.text_len();
// Take the string with the trailing whitespace removed, then also
// skip the leading whitespace.
Some(stripped_indentation_len)
}
}
IndentStyle::Tab => {
let line_indent = Indentation::from_str(trim_end);
let non_ascii_whitespace = trim_end
.chars()
.take_while(|c| c.is_whitespace())
.any(|c| !matches!(c, ' ' | '\t'));
let trimmed = line_indent.trim_start(self.stripped_indentation);
// Preserve tabs that are used for indentation, but only if the indent isn't
// * a mix of tabs and spaces
// * the `stripped_indentation` is a prefix of the line's indent
// * the trimmed indent isn't spaces followed by tabs because that would result in a
// mixed tab, spaces, tab indentation, resulting in instabilities.
let preserve_indent = !non_ascii_whitespace
&& trimmed.is_some_and(|trimmed| !trimmed.is_spaces_tabs());
preserve_indent.then_some(self.stripped_indentation.text_len())
}
};
if let Some(indent_offset) = indent_offset {
// Take the string with the trailing whitespace removed, then also
// skip the leading whitespace.
if self.already_normalized {
let trimmed_line_range =
TextRange::at(line.offset, trim_end.text_len()).add_start(indent_offset);
source_text_slice(trimmed_line_range).fmt(self.f)?;
} else {
text(&trim_end[indent_offset.to_usize()..]).fmt(self.f)?;
}
} else {
if tab_or_non_ascii_space {
// We strip the indentation that is shared with the docstring
// statement, unless a line was indented less than the docstring
// statement, in which case we strip only this much indentation to
@@ -450,11 +400,21 @@ impl<'ast, 'buf, 'fmt, 'src> DocstringLinePrinter<'ast, 'buf, 'fmt, 'src> {
// overindented, in which case we strip the additional whitespace
// (see example in [`format_docstring`] doc comment). We then
// prepend the in-docstring indentation to the string.
let indent_len =
Indentation::from_str(trim_end).width() - self.stripped_indentation.width();
let in_docstring_indent = " ".repeat(indent_len) + trim_end.trim_start();
let indent_len = indentation_length(trim_end) - self.stripped_indentation_length;
let in_docstring_indent = " ".repeat(usize::from(indent_len)) + trim_end.trim_start();
text(&in_docstring_indent).fmt(self.f)?;
};
} else {
// Take the string with the trailing whitespace removed, then also
// skip the leading whitespace.
let trimmed_line_range = TextRange::at(line.offset, trim_end.text_len())
.add_start(self.stripped_indentation_length);
if self.already_normalized {
source_text_slice(trimmed_line_range).fmt(self.f)?;
} else {
// All indents are ascii spaces, so the slicing is correct.
text(&trim_end[usize::from(self.stripped_indentation_length)..]).fmt(self.f)?;
}
}
// We handled the case that the closing quotes are on their own line
// above (the last line is empty except for whitespace). If they are on
@@ -935,7 +895,8 @@ struct CodeExampleRst<'src> {
/// The lines that have been seen so far that make up the block.
lines: Vec<CodeExampleLine<'src>>,
/// The indent of the line "opening" this block in columns.
/// The indent of the line "opening" this block measured via
/// `indentation_length`.
///
/// It can either be the indent of a line ending with `::` (for a literal
/// block) or the indent of a line starting with `.. ` (a directive).
@@ -943,9 +904,9 @@ struct CodeExampleRst<'src> {
/// The content body of a block needs to be indented more than the line
/// opening the block, so we use this indentation to look for indentation
/// that is "more than" it.
opening_indent: Indentation,
opening_indent: TextSize,
/// The minimum indent of the block in columns.
/// The minimum indent of the block measured via `indentation_length`.
///
/// This is `None` until the first such line is seen. If no such line is
/// found, then we consider it an invalid block and bail out of trying to
@@ -962,7 +923,7 @@ struct CodeExampleRst<'src> {
/// When the code snippet has been extracted, it is re-built before being
/// reformatted. The minimum indent is stripped from each line when it is
/// re-built.
min_indent: Option<Indentation>,
min_indent: Option<TextSize>,
/// Whether this is a directive block or not. When not a directive, this is
/// a literal block. The main difference between them is that they start
@@ -1011,7 +972,7 @@ impl<'src> CodeExampleRst<'src> {
}
Some(CodeExampleRst {
lines: vec![],
opening_indent: Indentation::from_str(opening_indent),
opening_indent: indentation_length(opening_indent),
min_indent: None,
is_directive: false,
})
@@ -1049,7 +1010,7 @@ impl<'src> CodeExampleRst<'src> {
}
Some(CodeExampleRst {
lines: vec![],
opening_indent: Indentation::from_str(original.line),
opening_indent: indentation_length(original.line),
min_indent: None,
is_directive: true,
})
@@ -1069,7 +1030,7 @@ impl<'src> CodeExampleRst<'src> {
line.code = if line.original.line.trim().is_empty() {
""
} else {
min_indent.trim_start_str(line.original.line)
indentation_trim(min_indent, line.original.line)
};
}
&self.lines
@@ -1106,9 +1067,7 @@ impl<'src> CodeExampleRst<'src> {
// an empty line followed by an unindented non-empty line.
if let Some(next) = original.next {
let (next_indent, next_rest) = indent_with_suffix(next);
if !next_rest.is_empty()
&& Indentation::from_str(next_indent) <= self.opening_indent
{
if !next_rest.is_empty() && indentation_length(next_indent) <= self.opening_indent {
self.push_format_action(queue);
return None;
}
@@ -1120,7 +1079,7 @@ impl<'src> CodeExampleRst<'src> {
queue.push_back(CodeExampleAddAction::Kept);
return Some(self);
}
let indent_len = Indentation::from_str(indent);
let indent_len = indentation_length(indent);
if indent_len <= self.opening_indent {
// If we find an unindented non-empty line at the same (or less)
// indentation of the opening line at this point, then we know it
@@ -1182,7 +1141,7 @@ impl<'src> CodeExampleRst<'src> {
queue.push_back(CodeExampleAddAction::Print { original });
return Some(self);
}
let min_indent = Indentation::from_str(indent);
let min_indent = indentation_length(indent);
// At this point, we found a non-empty line. The only thing we require
// is that its indentation is strictly greater than the indentation of
// the line containing the `::`. Otherwise, we treat this as an invalid
@@ -1256,11 +1215,12 @@ struct CodeExampleMarkdown<'src> {
/// The lines that have been seen so far that make up the block.
lines: Vec<CodeExampleLine<'src>>,
/// The indent of the line "opening" fence of this block in columns.
/// The indent of the line "opening" fence of this block measured via
/// `indentation_length`.
///
/// This indentation is trimmed from the indentation of every line in the
/// body of the code block.
opening_fence_indent: Indentation,
opening_fence_indent: TextSize,
/// The kind of fence, backticks or tildes, used for this block. We need to
/// keep track of which kind was used to open the block in order to look
@@ -1329,7 +1289,7 @@ impl<'src> CodeExampleMarkdown<'src> {
};
Some(CodeExampleMarkdown {
lines: vec![],
opening_fence_indent: Indentation::from_str(opening_fence_indent),
opening_fence_indent: indentation_length(opening_fence_indent),
fence_kind,
fence_len,
})
@@ -1362,7 +1322,7 @@ impl<'src> CodeExampleMarkdown<'src> {
// its indent normalized. And, at the time of writing, a subsequent
// formatting run undoes this indentation, thus violating idempotency.
if !original.line.trim_whitespace().is_empty()
&& Indentation::from_str(original.line) < self.opening_fence_indent
&& indentation_length(original.line) < self.opening_fence_indent
{
queue.push_back(self.into_reset_action());
queue.push_back(CodeExampleAddAction::Print { original });
@@ -1408,7 +1368,7 @@ impl<'src> CodeExampleMarkdown<'src> {
// Unlike reStructuredText blocks, for Markdown fenced code blocks, the
// indentation that we want to strip from each line is known when the
// block is opened. So we can strip it as we collect lines.
let code = self.opening_fence_indent.trim_start_str(original.line);
let code = indentation_trim(self.opening_fence_indent, original.line);
self.lines.push(CodeExampleLine { original, code });
}
@@ -1523,6 +1483,7 @@ enum CodeExampleAddAction<'src> {
/// results in that code example becoming invalid. In this case,
/// we don't want to treat it as a code example, but instead write
/// back the lines to the docstring unchanged.
#[allow(dead_code)] // FIXME: remove when reStructuredText support is added
Reset {
/// The lines of code that we collected but should be printed back to
/// the docstring as-is and not formatted.
@@ -1573,241 +1534,51 @@ fn needs_chaperone_space(normalized: &NormalizedString, trim_end: &str) -> bool
|| trim_end.chars().rev().take_while(|c| *c == '\\').count() % 2 == 1
}
#[derive(Copy, Clone, Debug)]
enum Indentation {
/// Space only indentation or an empty indentation.
///
/// The value is the number of spaces.
Spaces(usize),
/// Tabs only indentation.
Tabs(usize),
/// Indentation that uses tabs followed by spaces.
/// Also known as smart tabs where tabs are used for indents, and spaces for alignment.
TabSpaces { tabs: usize, spaces: usize },
/// Indentation that uses spaces followed by tabs.
SpacesTabs { spaces: usize, tabs: usize },
/// Mixed indentation of tabs and spaces.
Mixed {
/// The visual width of the indentation in columns.
width: usize,
/// The length of the indentation in bytes
len: TextSize,
},
}
impl Indentation {
const TAB_INDENT_WIDTH: usize = 8;
fn from_str(s: &str) -> Self {
let mut iter = s.chars().peekable();
let spaces = iter.peeking_take_while(|c| *c == ' ').count();
let tabs = iter.peeking_take_while(|c| *c == '\t').count();
if tabs == 0 {
// No indent, or spaces only indent
return Self::Spaces(spaces);
}
let align_spaces = iter.peeking_take_while(|c| *c == ' ').count();
if spaces == 0 {
if align_spaces == 0 {
return Self::Tabs(tabs);
}
// At this point it's either a smart tab (tabs followed by spaces) or a wild mix of tabs and spaces.
if iter.peek().copied() != Some('\t') {
return Self::TabSpaces {
tabs,
spaces: align_spaces,
};
}
} else if align_spaces == 0 {
return Self::SpacesTabs { spaces, tabs };
}
// Sequence of spaces.. tabs, spaces, tabs...
let mut width = spaces + tabs * Self::TAB_INDENT_WIDTH + align_spaces;
// SAFETY: Safe because Ruff doesn't support files larger than 4GB.
let mut len = TextSize::try_from(spaces + tabs + align_spaces).unwrap();
for char in iter {
if char == '\t' {
// Pad to the next multiple of tab_width
width += Self::TAB_INDENT_WIDTH - (width.rem_euclid(Self::TAB_INDENT_WIDTH));
len += '\t'.text_len();
} else if char.is_whitespace() {
width += char.len_utf8();
len += char.text_len();
} else {
break;
}
}
// Mixed tabs and spaces
Self::Mixed { width, len }
}
/// Returns the indentation's visual width in columns/spaces.
///
/// For docstring indentation, black counts spaces as 1 and tabs by increasing the indentation up
/// to the next multiple of 8. This is effectively a port of
/// [`str.expandtabs`](https://docs.python.org/3/library/stdtypes.html#str.expandtabs),
/// which black [calls with the default tab width of 8](https://github.com/psf/black/blob/c36e468794f9256d5e922c399240d49782ba04f1/src/black/strings.py#L61).
const fn width(self) -> usize {
match self {
Self::Spaces(count) => count,
Self::Tabs(count) => count * Self::TAB_INDENT_WIDTH,
Self::TabSpaces { tabs, spaces } => tabs * Self::TAB_INDENT_WIDTH + spaces,
Self::SpacesTabs { spaces, tabs } => {
let mut indent = spaces;
indent += Self::TAB_INDENT_WIDTH - indent.rem_euclid(Self::TAB_INDENT_WIDTH);
indent + (tabs - 1) * Self::TAB_INDENT_WIDTH
}
Self::Mixed { width, .. } => width,
}
}
/// Returns the length of the indentation in bytes.
///
/// # Panics
/// If the indentation is longer than 4GB.
fn text_len(self) -> TextSize {
let len = match self {
Self::Spaces(count) => count,
Self::Tabs(count) => count,
Self::TabSpaces { tabs, spaces } => tabs + spaces,
Self::SpacesTabs { spaces, tabs } => spaces + tabs,
Self::Mixed { len, .. } => return len,
};
TextSize::try_from(len).unwrap()
}
/// Trims the indent of `rhs` by `self`.
///
/// Returns `None` if `self` is not a prefix of `rhs` or either `self` or `rhs` use mixed indentation.
fn trim_start(self, rhs: Self) -> Option<Self> {
let (left_tabs, left_spaces) = match self {
Self::Spaces(spaces) => (0usize, spaces),
Self::Tabs(tabs) => (tabs, 0usize),
Self::TabSpaces { tabs, spaces } => (tabs, spaces),
// Handle spaces here because it is the only indent where the spaces come before the tabs.
Self::SpacesTabs {
spaces: left_spaces,
tabs: left_tabs,
} => {
return match rhs {
Self::Spaces(right_spaces) => {
left_spaces.checked_sub(right_spaces).map(|spaces| {
if spaces == 0 {
Self::Tabs(left_tabs)
} else {
Self::SpacesTabs {
tabs: left_tabs,
spaces,
}
}
})
}
Self::SpacesTabs {
spaces: right_spaces,
tabs: right_tabs,
} => left_spaces.checked_sub(right_spaces).and_then(|spaces| {
let tabs = left_tabs.checked_sub(right_tabs)?;
Some(if spaces == 0 {
if tabs == 0 {
Self::Spaces(0)
} else {
Self::Tabs(tabs)
}
} else {
Self::SpacesTabs { spaces, tabs }
})
}),
_ => None,
}
}
Self::Mixed { .. } => return None,
};
let (right_tabs, right_spaces) = match rhs {
Self::Spaces(spaces) => (0usize, spaces),
Self::Tabs(tabs) => (tabs, 0usize),
Self::TabSpaces { tabs, spaces } => (tabs, spaces),
Self::SpacesTabs { .. } | Self::Mixed { .. } => return None,
};
let tabs = left_tabs.checked_sub(right_tabs)?;
let spaces = left_spaces.checked_sub(right_spaces)?;
Some(if tabs == 0 {
Self::Spaces(spaces)
} else if spaces == 0 {
Self::Tabs(tabs)
/// For docstring indentation, black counts spaces as 1 and tabs by increasing the indentation up
/// to the next multiple of 8. This is effectively a port of
/// [`str.expandtabs`](https://docs.python.org/3/library/stdtypes.html#str.expandtabs),
/// which black [calls with the default tab width of 8](https://github.com/psf/black/blob/c36e468794f9256d5e922c399240d49782ba04f1/src/black/strings.py#L61).
fn indentation_length(line: &str) -> TextSize {
let mut indentation = 0u32;
for char in line.chars() {
if char == '\t' {
// Pad to the next multiple of tab_width
indentation += 8 - (indentation.rem_euclid(8));
} else if char.is_whitespace() {
indentation += u32::from(char.text_len());
} else {
Self::TabSpaces { tabs, spaces }
})
}
/// Trims at most `indent_len` indentation from the beginning of `line`.
///
/// This is useful when one needs to trim some minimum
/// level of indentation from a code snippet collected from a docstring before
/// attempting to reformat it.
fn trim_start_str(self, line: &str) -> &str {
let mut seen_indent_len = 0;
let mut trimmed = line;
let indent_len = self.width();
for char in line.chars() {
if seen_indent_len >= indent_len {
return trimmed;
}
if char == '\t' {
// Pad to the next multiple of tab_width
seen_indent_len +=
Self::TAB_INDENT_WIDTH - (seen_indent_len.rem_euclid(Self::TAB_INDENT_WIDTH));
trimmed = &trimmed[1..];
} else if char.is_whitespace() {
seen_indent_len += char.len_utf8();
trimmed = &trimmed[char.len_utf8()..];
} else {
break;
}
break;
}
trimmed
}
const fn is_spaces_tabs(self) -> bool {
matches!(self, Self::SpacesTabs { .. })
}
TextSize::new(indentation)
}
impl PartialOrd for Indentation {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.width().cmp(&other.width()))
}
}
impl PartialEq for Indentation {
fn eq(&self, other: &Self) -> bool {
self.width() == other.width()
}
}
impl Default for Indentation {
fn default() -> Self {
Self::Spaces(0)
/// Trims at most `indent_len` indentation from the beginning of `line`.
///
/// This treats indentation in precisely the same way as `indentation_length`.
/// As such, it is expected that `indent_len` is computed from
/// `indentation_length`. This is useful when one needs to trim some minimum
/// level of indentation from a code snippet collected from a docstring before
/// attempting to reformat it.
fn indentation_trim(indent_len: TextSize, line: &str) -> &str {
let mut seen_indent_len = 0u32;
let mut trimmed = line;
for char in line.chars() {
if seen_indent_len >= indent_len.to_u32() {
return trimmed;
}
if char == '\t' {
// Pad to the next multiple of tab_width
seen_indent_len += 8 - (seen_indent_len.rem_euclid(8));
trimmed = &trimmed[1..];
} else if char.is_whitespace() {
seen_indent_len += u32::from(char.text_len());
trimmed = &trimmed[char.len_utf8()..];
} else {
break;
}
}
trimmed
}
/// Returns the indentation of the given line and everything following it.
@@ -1837,13 +1608,15 @@ fn is_rst_option(line: &str) -> bool {
#[cfg(test)]
mod tests {
use crate::string::docstring::Indentation;
use ruff_text_size::TextSize;
use super::indentation_length;
#[test]
fn test_indentation_like_black() {
assert_eq!(Indentation::from_str("\t \t \t").width(), 24);
assert_eq!(Indentation::from_str("\t \t").width(), 24);
assert_eq!(Indentation::from_str("\t\t\t").width(), 24);
assert_eq!(Indentation::from_str(" ").width(), 4);
assert_eq!(indentation_length("\t \t \t"), TextSize::new(24));
assert_eq!(indentation_length("\t \t"), TextSize::new(24));
assert_eq!(indentation_length("\t\t\t"), TextSize::new(24));
assert_eq!(indentation_length(" "), TextSize::new(4));
}
}
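
Both sides of this diff measure docstring indentation the way `str.expandtabs()` does with a tab size of 8: a space is one column and a tab pads to the next multiple of 8. A minimal sketch of that width computation (the function name is illustrative, and non-tab whitespace is simplified to one column each):

```rust
/// Expandtabs-style indentation width: spaces count as one column and each
/// tab pads to the next multiple of 8, mirroring `str.expandtabs()`.
fn indentation_width(line: &str) -> u32 {
    let mut width = 0u32;
    for c in line.chars() {
        if c == '\t' {
            // Pad to the next multiple of the tab width.
            width += 8 - (width % 8);
        } else if c.is_whitespace() {
            width += 1;
        } else {
            break;
        }
    }
    width
}

fn main() {
    // Matches the `test_indentation_like_black` expectations above.
    assert_eq!(indentation_width("\t \t \t"), 24);
    assert_eq!(indentation_width("\t\t\t"), 24);
    assert_eq!(indentation_width("    "), 4);
}
```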

View File

@@ -18,7 +18,6 @@ use crate::expression::parentheses::in_parentheses_only_soft_line_break_or_space
use crate::other::f_string::FormatFString;
use crate::other::string_literal::{FormatStringLiteral, StringLiteralKind};
use crate::prelude::*;
use crate::preview::is_hex_codes_in_unicode_sequences_enabled;
use crate::QuoteStyle;
pub(crate) mod docstring;
@@ -292,54 +291,22 @@ impl StringPart {
}
}
/// Returns the prefix of the string part.
pub(crate) const fn prefix(&self) -> StringPrefix {
self.prefix
}
/// Returns the surrounding quotes of the string part.
pub(crate) const fn quotes(&self) -> StringQuotes {
self.quotes
}
/// Returns the range of the string's content in the source (minus prefix and quotes).
pub(crate) const fn content_range(&self) -> TextRange {
self.content_range
}
}
pub(crate) struct StringNormalizer {
quoting: Quoting,
preferred_quote_style: QuoteStyle,
parent_docstring_quote_char: Option<QuoteChar>,
normalize_hex: bool,
}
impl StringNormalizer {
pub(crate) fn from_context(context: &PyFormatContext<'_>) -> Self {
Self {
quoting: Quoting::default(),
preferred_quote_style: QuoteStyle::default(),
parent_docstring_quote_char: context.docstring(),
normalize_hex: is_hex_codes_in_unicode_sequences_enabled(context),
}
}
pub(crate) fn with_preferred_quote_style(mut self, quote_style: QuoteStyle) -> Self {
self.preferred_quote_style = quote_style;
self
}
pub(crate) fn with_quoting(mut self, quoting: Quoting) -> Self {
self.quoting = quoting;
self
}
/// Computes the string's preferred quotes.
pub(crate) fn choose_quotes(&self, string: &StringPart, locator: &Locator) -> StringQuotes {
/// Computes the string's preferred quotes and normalizes its content.
///
/// The parent docstring quote style should be set when formatting a code
/// snippet within the docstring. The quote style should correspond to the
/// style of quotes used by said docstring. Normalization will ensure the
/// quoting styles don't conflict.
pub(crate) fn normalize<'a>(
self,
quoting: Quoting,
locator: &'a Locator,
configured_style: QuoteStyle,
parent_docstring_quote_char: Option<QuoteChar>,
normalize_hex: bool,
) -> NormalizedString<'a> {
// Per PEP 8, always prefer double quotes for triple-quoted strings.
// Except when using quote-style-preserve.
let preferred_style = if string.quotes().triple {
let preferred_style = if self.quotes.triple {
// ... unless we're formatting a code snippet inside a docstring,
// then we specifically want to invert our quote style to avoid
// writing out invalid Python.
@@ -385,49 +352,37 @@ impl StringNormalizer {
// Overall this is a bit of a corner case and just inverting the
// style from what the parent ultimately decided upon works, even
// if it doesn't have perfect alignment with PEP8.
if let Some(quote) = self.parent_docstring_quote_char {
if let Some(quote) = parent_docstring_quote_char {
QuoteStyle::from(quote.invert())
} else if self.preferred_quote_style.is_preserve() {
QuoteStyle::Preserve
} else {
QuoteStyle::Double
}
} else {
self.preferred_quote_style
configured_style
};
match self.quoting {
Quoting::Preserve => string.quotes(),
let raw_content = &locator.slice(self.content_range);
let quotes = match quoting {
Quoting::Preserve => self.quotes,
Quoting::CanChange => {
if let Some(preferred_quote) = QuoteChar::from_style(preferred_style) {
let raw_content = locator.slice(string.content_range());
if string.prefix().is_raw_string() {
choose_quotes_for_raw_string(raw_content, string.quotes(), preferred_quote)
if self.prefix.is_raw_string() {
choose_quotes_raw(raw_content, self.quotes, preferred_quote)
} else {
choose_quotes_impl(raw_content, string.quotes(), preferred_quote)
choose_quotes(raw_content, self.quotes, preferred_quote)
}
} else {
string.quotes()
self.quotes
}
}
}
}
};
/// Computes the string's preferred quotes and normalizes its content.
pub(crate) fn normalize<'a>(
&self,
string: &StringPart,
locator: &'a Locator,
) -> NormalizedString<'a> {
let raw_content = locator.slice(string.content_range());
let quotes = self.choose_quotes(string, locator);
let normalized = normalize_string(raw_content, quotes, string.prefix(), self.normalize_hex);
let normalized = normalize_string(raw_content, quotes, self.prefix, normalize_hex);
NormalizedString {
prefix: string.prefix(),
content_range: string.content_range(),
prefix: self.prefix,
content_range: self.content_range,
text: normalized,
quotes,
}
@@ -554,7 +509,7 @@ impl Format<PyFormatContext<'_>> for StringPrefix {
/// The preferred quote style is chosen unless the string contains unescaped quotes of the
/// preferred style. For example, `r"foo"` is chosen over `r'foo'` if the preferred quote
/// style is double quotes.
fn choose_quotes_for_raw_string(
fn choose_quotes_raw(
input: &str,
quotes: StringQuotes,
preferred_quote: QuoteChar,
@@ -613,11 +568,7 @@ fn choose_quotes_for_raw_string(
/// For triple quoted strings, the preferred quote style is always used, unless the string contains
/// a triplet of the quote character (e.g., if double quotes are preferred, double quotes will be
/// used unless the string contains `"""`).
fn choose_quotes_impl(
input: &str,
quotes: StringQuotes,
preferred_quote: QuoteChar,
) -> StringQuotes {
fn choose_quotes(input: &str, quotes: StringQuotes, preferred_quote: QuoteChar) -> StringQuotes {
let quote = if quotes.triple {
// True if the string contains a triple quote sequence of the configured quote style.
let mut uses_triple_quotes = false;
@@ -826,7 +777,7 @@ impl TryFrom<char> for QuoteChar {
/// with the provided [`StringQuotes`] style.
///
/// Returns the normalized string and whether it contains new lines.
pub(crate) fn normalize_string(
fn normalize_string(
input: &str,
quotes: StringQuotes,
prefix: StringPrefix,
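
The hunks above encode one quote-selection rule worth calling out: a code snippet formatted inside a docstring inverts the preferred quote character so that its quotes cannot terminate the enclosing docstring, while top-level strings keep the configured style. A toy sketch of just that rule, with a stand-in `QuoteChar` rather than the crate's full normalizer:

```rust
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
enum QuoteChar {
    Single,
    Double,
}

impl QuoteChar {
    fn invert(self) -> QuoteChar {
        match self {
            QuoteChar::Single => QuoteChar::Double,
            QuoteChar::Double => QuoteChar::Single,
        }
    }
}

/// Picks the preferred quote for a (triple-quoted) string.
fn preferred_quote(
    configured: QuoteChar,
    parent_docstring_quote: Option<QuoteChar>,
) -> QuoteChar {
    match parent_docstring_quote {
        // Formatting a snippet inside a docstring: invert the docstring's quote
        // so the snippet can't close the docstring early.
        Some(parent) => parent.invert(),
        // Otherwise, per PEP 8, prefer the configured style (double by default).
        None => configured,
    }
}

fn main() {
    // A snippet inside a `"""`-quoted docstring gets single quotes...
    assert_eq!(
        preferred_quote(QuoteChar::Double, Some(QuoteChar::Double)),
        QuoteChar::Single
    );
    // ...while top-level strings keep the configured double quotes.
    assert_eq!(preferred_quote(QuoteChar::Double, None), QuoteChar::Double);
}
```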

View File

@@ -1,314 +0,0 @@
---
source: crates/ruff_python_formatter/tests/fixtures.rs
input_file: crates/ruff_python_formatter/resources/test/fixtures/black/cases/collections.py
---
## Input
```python
import core, time, a
from . import A, B, C
# keeps existing trailing comma
from foo import (
bar,
)
# also keeps existing structure
from foo import (
baz,
qux,
)
# `as` works as well
from foo import (
xyzzy as magic,
)
a = {1,2,3,}
b = {
1,2,
3}
c = {
1,
2,
3,
}
x = 1,
y = narf(),
nested = {(1,2,3),(4,5,6),}
nested_no_trailing_comma = {(1,2,3),(4,5,6)}
nested_long_lines = ["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb", "cccccccccccccccccccccccccccccccccccccccc", (1, 2, 3), "dddddddddddddddddddddddddddddddddddddddd"]
{"oneple": (1,),}
{"oneple": (1,)}
['ls', 'lsoneple/%s' % (foo,)]
x = {"oneple": (1,)}
y = {"oneple": (1,),}
assert False, ("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa wraps %s" % bar)
# looping over a 1-tuple should also not get wrapped
for x in (1,):
pass
for (x,) in (1,), (2,), (3,):
pass
[1, 2, 3,]
division_result_tuple = (6/2,)
print("foo %r", (foo.bar,))
if True:
IGNORED_TYPES_FOR_ATTRIBUTE_CHECKING = (
Config.IGNORED_TYPES_FOR_ATTRIBUTE_CHECKING
| {pylons.controllers.WSGIController}
)
if True:
ec2client.get_waiter('instance_stopped').wait(
InstanceIds=[instance.id],
WaiterConfig={
'Delay': 5,
})
ec2client.get_waiter("instance_stopped").wait(
InstanceIds=[instance.id],
WaiterConfig={"Delay": 5,},
)
ec2client.get_waiter("instance_stopped").wait(
InstanceIds=[instance.id], WaiterConfig={"Delay": 5,},
)
```
## Black Differences
```diff
--- Black
+++ Ruff
@@ -54,7 +54,7 @@
}
assert False, (
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa wraps %s"
- % bar
+ % bar
)
# looping over a 1-tuple should also not get wrapped
@@ -75,7 +75,7 @@
if True:
IGNORED_TYPES_FOR_ATTRIBUTE_CHECKING = (
Config.IGNORED_TYPES_FOR_ATTRIBUTE_CHECKING
- | {pylons.controllers.WSGIController}
+ | {pylons.controllers.WSGIController}
)
if True:
```
## Ruff Output
```python
import core, time, a
from . import A, B, C
# keeps existing trailing comma
from foo import (
bar,
)
# also keeps existing structure
from foo import (
baz,
qux,
)
# `as` works as well
from foo import (
xyzzy as magic,
)
a = {
1,
2,
3,
}
b = {1, 2, 3}
c = {
1,
2,
3,
}
x = (1,)
y = (narf(),)
nested = {
(1, 2, 3),
(4, 5, 6),
}
nested_no_trailing_comma = {(1, 2, 3), (4, 5, 6)}
nested_long_lines = [
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
"cccccccccccccccccccccccccccccccccccccccc",
(1, 2, 3),
"dddddddddddddddddddddddddddddddddddddddd",
]
{
"oneple": (1,),
}
{"oneple": (1,)}
["ls", "lsoneple/%s" % (foo,)]
x = {"oneple": (1,)}
y = {
"oneple": (1,),
}
assert False, (
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa wraps %s"
% bar
)
# looping over a 1-tuple should also not get wrapped
for x in (1,):
pass
for (x,) in (1,), (2,), (3,):
pass
[
1,
2,
3,
]
division_result_tuple = (6 / 2,)
print("foo %r", (foo.bar,))
if True:
IGNORED_TYPES_FOR_ATTRIBUTE_CHECKING = (
Config.IGNORED_TYPES_FOR_ATTRIBUTE_CHECKING
| {pylons.controllers.WSGIController}
)
if True:
ec2client.get_waiter("instance_stopped").wait(
InstanceIds=[instance.id],
WaiterConfig={
"Delay": 5,
},
)
ec2client.get_waiter("instance_stopped").wait(
InstanceIds=[instance.id],
WaiterConfig={
"Delay": 5,
},
)
ec2client.get_waiter("instance_stopped").wait(
InstanceIds=[instance.id],
WaiterConfig={
"Delay": 5,
},
)
```
## Black Output
```python
import core, time, a
from . import A, B, C
# keeps existing trailing comma
from foo import (
bar,
)
# also keeps existing structure
from foo import (
baz,
qux,
)
# `as` works as well
from foo import (
xyzzy as magic,
)
a = {
1,
2,
3,
}
b = {1, 2, 3}
c = {
1,
2,
3,
}
x = (1,)
y = (narf(),)
nested = {
(1, 2, 3),
(4, 5, 6),
}
nested_no_trailing_comma = {(1, 2, 3), (4, 5, 6)}
nested_long_lines = [
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
"cccccccccccccccccccccccccccccccccccccccc",
(1, 2, 3),
"dddddddddddddddddddddddddddddddddddddddd",
]
{
"oneple": (1,),
}
{"oneple": (1,)}
["ls", "lsoneple/%s" % (foo,)]
x = {"oneple": (1,)}
y = {
"oneple": (1,),
}
assert False, (
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa wraps %s"
% bar
)
# looping over a 1-tuple should also not get wrapped
for x in (1,):
pass
for (x,) in (1,), (2,), (3,):
pass
[
1,
2,
3,
]
division_result_tuple = (6 / 2,)
print("foo %r", (foo.bar,))
if True:
IGNORED_TYPES_FOR_ATTRIBUTE_CHECKING = (
Config.IGNORED_TYPES_FOR_ATTRIBUTE_CHECKING
| {pylons.controllers.WSGIController}
)
if True:
ec2client.get_waiter("instance_stopped").wait(
InstanceIds=[instance.id],
WaiterConfig={
"Delay": 5,
},
)
ec2client.get_waiter("instance_stopped").wait(
InstanceIds=[instance.id],
WaiterConfig={
"Delay": 5,
},
)
ec2client.get_waiter("instance_stopped").wait(
InstanceIds=[instance.id],
WaiterConfig={
"Delay": 5,
},
)
```

View File

@@ -192,7 +192,7 @@ instruction()#comment with bad spacing
children[0],
body,
children[-1], # type: ignore
@@ -72,14 +76,18 @@
@@ -72,7 +76,11 @@
body,
parameters.children[-1], # )2
]
@@ -204,19 +204,7 @@ instruction()#comment with bad spacing
+ ] # type: ignore
if (
self._proc is not None
- # has the child process finished?
- and self._returncode is None
- # the child process has finished, but the
- # transport hasn't been notified yet?
- and self._proc.poll() is None
+ # has the child process finished?
+ and self._returncode is None
+ # the child process has finished, but the
+ # transport hasn't been notified yet?
+ and self._proc.poll() is None
):
pass
# no newline before or after
# has the child process finished?
@@ -115,7 +123,9 @@
arg3=True,
)
@@ -240,23 +228,14 @@ instruction()#comment with bad spacing
)
@@ -151,14 +164,17 @@
[
CONFIG_FILE,
]
- + SHARED_CONFIG_FILES
- + USER_CONFIG_FILES
+ + SHARED_CONFIG_FILES
+ + USER_CONFIG_FILES
) # type: Final
@@ -158,7 +171,10 @@
class Test:
def _init_host(self, parsed) -> None:
- if parsed.hostname is None or not parsed.hostname.strip(): # type: ignore
+ if (
+ parsed.hostname is None # type: ignore
+ or not parsed.hostname.strip()
+ or not parsed.hostname.strip()
+ ):
pass
@@ -351,11 +330,11 @@ def inline_comments_in_brackets_ruin_everything():
] # type: ignore
if (
self._proc is not None
# has the child process finished?
and self._returncode is None
# the child process has finished, but the
# transport hasn't been notified yet?
and self._proc.poll() is None
# has the child process finished?
and self._returncode is None
# the child process has finished, but the
# transport hasn't been notified yet?
and self._proc.poll() is None
):
pass
# no newline before or after
@@ -432,8 +411,8 @@ CONFIG_FILES = (
[
CONFIG_FILE,
]
+ SHARED_CONFIG_FILES
+ USER_CONFIG_FILES
+ SHARED_CONFIG_FILES
+ USER_CONFIG_FILES
) # type: Final
@@ -441,7 +420,7 @@ class Test:
def _init_host(self, parsed) -> None:
if (
parsed.hostname is None # type: ignore
or not parsed.hostname.strip()
or not parsed.hostname.strip()
):
pass

View File

@@ -141,23 +141,6 @@ aaaaaaaaaaaaa, bbbbbbbbb = map(list, map(itertools.chain.from_iterable, zip(*ite
an_element_with_a_long_value = calls() or more_calls() and more() # type: bool
tup = (
@@ -61,11 +59,11 @@
a = (
element
- + another_element
- + another_element_with_long_name
- + element
- + another_element
- + another_element_with_long_name
+ + another_element
+ + another_element_with_long_name
+ + element
+ + another_element
+ + another_element_with_long_name
) # type: int
@@ -100,7 +98,13 @@
)
@@ -260,11 +243,11 @@ def f(
a = (
element
+ another_element
+ another_element_with_long_name
+ element
+ another_element
+ another_element_with_long_name
+ another_element
+ another_element_with_long_name
+ element
+ another_element
+ another_element_with_long_name
) # type: int

View File

@@ -146,40 +146,6 @@ if (
# b comment
None
)
@@ -92,20 +91,20 @@
# comment1
a
# comment2
- or (
- # comment3
- (
- # comment4
- b
- )
- # comment5
- and
- # comment6
- c
or (
- # comment7
- d
+ # comment3
+ (
+ # comment4
+ b
+ )
+ # comment5
+ and
+ # comment6
+ c
+ or (
+ # comment7
+ d
+ )
)
- )
):
print("Foo")
```
## Ruff Output
@@ -278,21 +244,21 @@ if (
# comment1
a
# comment2
or (
# comment3
(
# comment4
b
)
# comment5
and
# comment6
c
or (
# comment7
d
)
or (
# comment3
(
# comment4
b
)
# comment5
and
# comment6
c
or (
# comment7
d
)
)
):
print("Foo")
```

View File

@@ -193,26 +193,6 @@ class C:
```diff
--- Black
+++ Ruff
@@ -22,8 +22,8 @@
if (
# Rule 1
i % 2 == 0
- # Rule 2
- and i % 3 == 0
+ # Rule 2
+ and i % 3 == 0
):
while (
# Just a comment
@@ -41,7 +41,7 @@
)
return (
'Utterly failed doctest test for %s\n File "%s", line %s, in %s\n\n%s'
- % (test.name, test.filename, lineno, lname, err)
+ % (test.name, test.filename, lineno, lname, err)
)
def omitting_trailers(self) -> None:
@@ -110,19 +110,20 @@
value, is_going_to_be="too long to fit in a single line", srsly=True
), "Not what we expected"
@@ -242,12 +222,12 @@ class C:
+ key8: value8,
+ key9: value9,
+ }
+ == expected
+ == expected
+ ), "Not what we expected and the message is too long to fit in one line"
assert expected(
value, is_going_to_be="too long to fit in a single line", srsly=True
@@ -161,21 +162,19 @@
@@ -161,9 +162,7 @@
8 STORE_ATTR 0 (x)
10 LOAD_CONST 0 (None)
12 RETURN_VALUE
@@ -258,29 +238,6 @@ class C:
assert (
expectedexpectedexpectedexpectedexpectedexpectedexpectedexpectedexpect
- == {
- key1: value1,
- key2: value2,
- key3: value3,
- key4: value4,
- key5: value5,
- key6: value6,
- key7: value7,
- key8: value8,
- key9: value9,
- }
+ == {
+ key1: value1,
+ key2: value2,
+ key3: value3,
+ key4: value4,
+ key5: value5,
+ key6: value6,
+ key7: value7,
+ key8: value8,
+ key9: value9,
+ }
)
```
## Ruff Output
@@ -310,8 +267,8 @@ class C:
if (
# Rule 1
i % 2 == 0
# Rule 2
and i % 3 == 0
# Rule 2
and i % 3 == 0
):
while (
# Just a comment
@@ -329,7 +286,7 @@ class C:
)
return (
'Utterly failed doctest test for %s\n File "%s", line %s, in %s\n\n%s'
% (test.name, test.filename, lineno, lname, err)
% (test.name, test.filename, lineno, lname, err)
)
def omitting_trailers(self) -> None:
@@ -410,7 +367,7 @@ class C:
key8: value8,
key9: value9,
}
== expected
== expected
), "Not what we expected and the message is too long to fit in one line"
assert expected(
@@ -454,17 +411,17 @@ class C:
assert (
expectedexpectedexpectedexpectedexpectedexpectedexpectedexpectedexpect
== {
key1: value1,
key2: value2,
key3: value3,
key4: value4,
key5: value5,
key6: value6,
key7: value7,
key8: value8,
key9: value9,
}
== {
key1: value1,
key2: value2,
key3: value3,
key4: value4,
key5: value5,
key6: value6,
key7: value7,
key8: value8,
key9: value9,
}
)
```

View File

@@ -193,26 +193,6 @@ class C:
```diff
--- Black
+++ Ruff
@@ -22,8 +22,8 @@
if (
# Rule 1
i % 2 == 0
- # Rule 2
- and i % 3 == 0
+ # Rule 2
+ and i % 3 == 0
):
while (
# Just a comment
@@ -41,7 +41,7 @@
)
return (
'Utterly failed doctest test for %s\n File "%s", line %s, in %s\n\n%s'
- % (test.name, test.filename, lineno, lname, err)
+ % (test.name, test.filename, lineno, lname, err)
)
def omitting_trailers(self) -> None:
@@ -110,19 +110,20 @@
value, is_going_to_be="too long to fit in a single line", srsly=True
), "Not what we expected"
@@ -242,12 +222,12 @@ class C:
+ key8: value8,
+ key9: value9,
+ }
+ == expected
+ == expected
+ ), "Not what we expected and the message is too long to fit in one line"
assert expected(
value, is_going_to_be="too long to fit in a single line", srsly=True
@@ -161,21 +162,19 @@
@@ -161,9 +162,7 @@
8 STORE_ATTR 0 (x)
10 LOAD_CONST 0 (None)
12 RETURN_VALUE
@@ -258,29 +238,6 @@ class C:
assert (
expectedexpectedexpectedexpectedexpectedexpectedexpectedexpectedexpect
- == {
- key1: value1,
- key2: value2,
- key3: value3,
- key4: value4,
- key5: value5,
- key6: value6,
- key7: value7,
- key8: value8,
- key9: value9,
- }
+ == {
+ key1: value1,
+ key2: value2,
+ key3: value3,
+ key4: value4,
+ key5: value5,
+ key6: value6,
+ key7: value7,
+ key8: value8,
+ key9: value9,
+ }
)
```
## Ruff Output
@@ -310,8 +267,8 @@ class C:
if (
# Rule 1
i % 2 == 0
# Rule 2
and i % 3 == 0
# Rule 2
and i % 3 == 0
):
while (
# Just a comment
@@ -329,7 +286,7 @@ class C:
)
return (
'Utterly failed doctest test for %s\n File "%s", line %s, in %s\n\n%s'
% (test.name, test.filename, lineno, lname, err)
% (test.name, test.filename, lineno, lname, err)
)
def omitting_trailers(self) -> None:
@@ -410,7 +367,7 @@ class C:
key8: value8,
key9: value9,
}
== expected
== expected
), "Not what we expected and the message is too long to fit in one line"
assert expected(
@@ -454,17 +411,17 @@ class C:
assert (
expectedexpectedexpectedexpectedexpectedexpectedexpectedexpectedexpect
== {
key1: value1,
key2: value2,
key3: value3,
key4: value4,
key5: value5,
key6: value6,
key7: value7,
key8: value8,
key9: value9,
}
== {
key1: value1,
key2: value2,
key3: value3,
key4: value4,
key5: value5,
key6: value6,
key7: value7,
key8: value8,
key9: value9,
}
)
```

View File

@@ -275,131 +275,6 @@ last_call()
) # note: no trailing comma pre-3.6
call(*gidgets[:2])
call(a, *gidgets[:2])
@@ -277,95 +277,95 @@
pass
a = (
aaaa.bbbb.cccc.dddd.eeee.ffff.gggg.hhhh.iiii.jjjj.kkkk.llll.mmmm.nnnn.oooo.pppp
- in qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
+ in qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
)
a = (
aaaa.bbbb.cccc.dddd.eeee.ffff.gggg.hhhh.iiii.jjjj.kkkk.llll.mmmm.nnnn.oooo.pppp
- not in qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
+ not in qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
)
a = (
aaaa.bbbb.cccc.dddd.eeee.ffff.gggg.hhhh.iiii.jjjj.kkkk.llll.mmmm.nnnn.oooo.pppp
- is qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
+ is qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
)
a = (
aaaa.bbbb.cccc.dddd.eeee.ffff.gggg.hhhh.iiii.jjjj.kkkk.llll.mmmm.nnnn.oooo.pppp
- is not qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
+ is not qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
)
if (
threading.current_thread() != threading.main_thread()
- and threading.current_thread() != threading.main_thread()
- or signal.getsignal(signal.SIGINT) != signal.default_int_handler
+ and threading.current_thread() != threading.main_thread()
+ or signal.getsignal(signal.SIGINT) != signal.default_int_handler
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
- | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
- & aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ & aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
- + aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ + aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
- - aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ - aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
- * aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ * aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
- / aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ / aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
~aaaa.a + aaaa.b - aaaa.c * aaaa.d / aaaa.e
- | aaaa.f & aaaa.g % aaaa.h ^ aaaa.i << aaaa.k >> aaaa.l**aaaa.m // aaaa.n
+ | aaaa.f & aaaa.g % aaaa.h ^ aaaa.i << aaaa.k >> aaaa.l**aaaa.m // aaaa.n
):
return True
if (
~aaaaaaaa.a + aaaaaaaa.b - aaaaaaaa.c @ aaaaaaaa.d / aaaaaaaa.e
- | aaaaaaaa.f & aaaaaaaa.g % aaaaaaaa.h
- ^ aaaaaaaa.i << aaaaaaaa.k >> aaaaaaaa.l**aaaaaaaa.m // aaaaaaaa.n
+ | aaaaaaaa.f & aaaaaaaa.g % aaaaaaaa.h
+ ^ aaaaaaaa.i << aaaaaaaa.k >> aaaaaaaa.l**aaaaaaaa.m // aaaaaaaa.n
):
return True
if (
~aaaaaaaaaaaaaaaa.a
- + aaaaaaaaaaaaaaaa.b
- - aaaaaaaaaaaaaaaa.c * aaaaaaaaaaaaaaaa.d @ aaaaaaaaaaaaaaaa.e
- | aaaaaaaaaaaaaaaa.f & aaaaaaaaaaaaaaaa.g % aaaaaaaaaaaaaaaa.h
- ^ aaaaaaaaaaaaaaaa.i
- << aaaaaaaaaaaaaaaa.k
- >> aaaaaaaaaaaaaaaa.l**aaaaaaaaaaaaaaaa.m // aaaaaaaaaaaaaaaa.n
+ + aaaaaaaaaaaaaaaa.b
+ - aaaaaaaaaaaaaaaa.c * aaaaaaaaaaaaaaaa.d @ aaaaaaaaaaaaaaaa.e
+ | aaaaaaaaaaaaaaaa.f & aaaaaaaaaaaaaaaa.g % aaaaaaaaaaaaaaaa.h
+ ^ aaaaaaaaaaaaaaaa.i
+ << aaaaaaaaaaaaaaaa.k
+ >> aaaaaaaaaaaaaaaa.l**aaaaaaaaaaaaaaaa.m // aaaaaaaaaaaaaaaa.n
):
return True
(
aaaaaaaaaaaaaaaa
- + aaaaaaaaaaaaaaaa
- - aaaaaaaaaaaaaaaa
- * (aaaaaaaaaaaaaaaa + aaaaaaaaaaaaaaaa)
- / (aaaaaaaaaaaaaaaa + aaaaaaaaaaaaaaaa + aaaaaaaaaaaaaaaa)
+ + aaaaaaaaaaaaaaaa
+ - aaaaaaaaaaaaaaaa
+ * (aaaaaaaaaaaaaaaa + aaaaaaaaaaaaaaaa)
+ / (aaaaaaaaaaaaaaaa + aaaaaaaaaaaaaaaa + aaaaaaaaaaaaaaaa)
)
aaaaaaaaaaaaaaaa + aaaaaaaaaaaaaaaa
(
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
- >> aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
- << aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ >> aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ << aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
)
bbbb >> bbbb * bbbb
(
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
- ^ bbbb.a & aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
- ^ aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ ^ bbbb.a & aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ ^ aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
)
last_call()
# standalone comment at ENDMARKER
```
## Ruff Output
@@ -684,95 +559,95 @@ for (
pass
a = (
aaaa.bbbb.cccc.dddd.eeee.ffff.gggg.hhhh.iiii.jjjj.kkkk.llll.mmmm.nnnn.oooo.pppp
in qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
in qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
)
a = (
aaaa.bbbb.cccc.dddd.eeee.ffff.gggg.hhhh.iiii.jjjj.kkkk.llll.mmmm.nnnn.oooo.pppp
not in qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
not in qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
)
a = (
aaaa.bbbb.cccc.dddd.eeee.ffff.gggg.hhhh.iiii.jjjj.kkkk.llll.mmmm.nnnn.oooo.pppp
is qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
is qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
)
a = (
aaaa.bbbb.cccc.dddd.eeee.ffff.gggg.hhhh.iiii.jjjj.kkkk.llll.mmmm.nnnn.oooo.pppp
is not qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
is not qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
)
if (
threading.current_thread() != threading.main_thread()
and threading.current_thread() != threading.main_thread()
or signal.getsignal(signal.SIGINT) != signal.default_int_handler
and threading.current_thread() != threading.main_thread()
or signal.getsignal(signal.SIGINT) != signal.default_int_handler
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
| aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
| aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
& aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
& aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
- aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
- aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
* aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
* aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
/ aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
/ aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
~aaaa.a + aaaa.b - aaaa.c * aaaa.d / aaaa.e
| aaaa.f & aaaa.g % aaaa.h ^ aaaa.i << aaaa.k >> aaaa.l**aaaa.m // aaaa.n
| aaaa.f & aaaa.g % aaaa.h ^ aaaa.i << aaaa.k >> aaaa.l**aaaa.m // aaaa.n
):
return True
if (
~aaaaaaaa.a + aaaaaaaa.b - aaaaaaaa.c @ aaaaaaaa.d / aaaaaaaa.e
| aaaaaaaa.f & aaaaaaaa.g % aaaaaaaa.h
^ aaaaaaaa.i << aaaaaaaa.k >> aaaaaaaa.l**aaaaaaaa.m // aaaaaaaa.n
| aaaaaaaa.f & aaaaaaaa.g % aaaaaaaa.h
^ aaaaaaaa.i << aaaaaaaa.k >> aaaaaaaa.l**aaaaaaaa.m // aaaaaaaa.n
):
return True
if (
~aaaaaaaaaaaaaaaa.a
+ aaaaaaaaaaaaaaaa.b
- aaaaaaaaaaaaaaaa.c * aaaaaaaaaaaaaaaa.d @ aaaaaaaaaaaaaaaa.e
| aaaaaaaaaaaaaaaa.f & aaaaaaaaaaaaaaaa.g % aaaaaaaaaaaaaaaa.h
^ aaaaaaaaaaaaaaaa.i
<< aaaaaaaaaaaaaaaa.k
>> aaaaaaaaaaaaaaaa.l**aaaaaaaaaaaaaaaa.m // aaaaaaaaaaaaaaaa.n
+ aaaaaaaaaaaaaaaa.b
- aaaaaaaaaaaaaaaa.c * aaaaaaaaaaaaaaaa.d @ aaaaaaaaaaaaaaaa.e
| aaaaaaaaaaaaaaaa.f & aaaaaaaaaaaaaaaa.g % aaaaaaaaaaaaaaaa.h
^ aaaaaaaaaaaaaaaa.i
<< aaaaaaaaaaaaaaaa.k
>> aaaaaaaaaaaaaaaa.l**aaaaaaaaaaaaaaaa.m // aaaaaaaaaaaaaaaa.n
):
return True
(
aaaaaaaaaaaaaaaa
+ aaaaaaaaaaaaaaaa
- aaaaaaaaaaaaaaaa
* (aaaaaaaaaaaaaaaa + aaaaaaaaaaaaaaaa)
/ (aaaaaaaaaaaaaaaa + aaaaaaaaaaaaaaaa + aaaaaaaaaaaaaaaa)
+ aaaaaaaaaaaaaaaa
- aaaaaaaaaaaaaaaa
* (aaaaaaaaaaaaaaaa + aaaaaaaaaaaaaaaa)
/ (aaaaaaaaaaaaaaaa + aaaaaaaaaaaaaaaa + aaaaaaaaaaaaaaaa)
)
aaaaaaaaaaaaaaaa + aaaaaaaaaaaaaaaa
(
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
>> aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
<< aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
>> aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
<< aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
)
bbbb >> bbbb * bbbb
(
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
^ bbbb.a & aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
^ aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
^ bbbb.a & aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
^ aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
)
last_call()
# standalone comment at ENDMARKER

View File

@@ -110,15 +110,6 @@ elif unformatted:
},
)
@@ -19,7 +18,7 @@
"-la",
]
# fmt: on
- + path,
+ + path,
check=True,
)
@@ -82,6 +81,6 @@
if x:
return x
@@ -152,7 +143,7 @@ run(
"-la",
]
# fmt: on
+ path,
+ path,
check=True,
)

Some files were not shown because too many files have changed in this diff.