Compare commits

...

19 Commits

Author SHA1 Message Date
Charlie Marsh
e7a2779402 Bump version to v0.0.289 (#7308) 2023-09-12 12:00:11 -04:00
Zanie Blue
008da95b29 Add preview documentation section (#7281)
Adds a basic documentation section for preview mode based on the FAQ
entry and versioning RFC.
2023-09-12 15:43:31 +00:00
Zanie Blue
5d4dd3e38e Set the target deployment to main during dispatched documentation deployments (#7304)
Closes #7276 by deploying to production when not triggered by a pull
request.
2023-09-12 10:34:05 -05:00
Micha Reiser
e561f5783b Fix(vscode): Respect line length ruff.toml configuration (#7306) 2023-09-12 15:31:47 +00:00
Dhruv Manilawala
ee0f1270cf Add NotebookIndex to the cache (#6863)
## Summary

This PR updates the `FileCache` to include an optional `NotebookIndex`
to support caching for Jupyter Notebooks.

We only require the index to compute the diagnostics, so we don't really
need to store the entire `Notebook` on the `Diagnostics` struct. This
means only the index needs to be stored in the cache to reconstruct the
`Diagnostics`.

## Test Plan

Update an existing test case to run over the fixtures under the
`ruff_notebook` crate, where there are multiple Jupyter Notebooks.

Locally, the following commands were run in order:
1. Remove the cache: `rm -rf .ruff_cache`
2. Run without cache: `cargo run --bin ruff -- check --isolated
crates/ruff_notebook/resources/test/fixtures/jupyter/unused_variable.ipynb
--no-cache`
3. Run with cache: `cargo run --bin ruff -- check --isolated
crates/ruff_notebook/resources/test/fixtures/jupyter/unused_variable.ipynb`
4. Check whether the `.ruff_cache` directory was created or not
5. Run with cache again and verify: `cargo run --bin ruff -- check
--isolated
crates/ruff_notebook/resources/test/fixtures/jupyter/unused_variable.ipynb`

## Benchmarks

https://github.com/astral-sh/ruff/pull/6863#issuecomment-1715675186

fixes: #6671
2023-09-12 18:29:03 +05:30
Tom Kuson
e7b7e4a18d Add documentation to duplicate-union-member (#7225)
## Summary

Add documentation to `duplicate-union-member` (`PYI016`) rule. Related
to #2646.

## Test Plan

`python scripts/check_docs_formatted.py`
2023-09-12 08:56:33 -04:00
Brendon Happ
b4419c34ea Ignore @override method when enforcing bad-dunder-name rule (#7224)
## Summary

Closes #6958.

If a method has the `override` decorator, there is nothing you can do
about incorrect dunder methods, so they should be ignored.
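
As a rough sketch of the intended behavior (the class and method names below
are made up for illustration; on Python < 3.12, `override` would come from
`typing_extensions` instead):

```python
from typing import override  # Python 3.12+; use `typing_extensions.override` on older versions


class Base:
    def _process_(self):  # PLW3201 (bad-dunder-name): misspelled dunder-style name
        ...


class Child(Base):
    @override
    def _process_(self):  # ignored: the name is dictated by the base class
        ...
```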

## Test Plan

An overridden incorrect dunder method was added to the tests to verify that
Ruff doesn't catch it when evaluating the file. Snapshot changes are all just
line number changes.
2023-09-12 11:54:40 +00:00
Micha Reiser
08f19226b9 Fix panic when formatting binary expression with two implicit concatenated string operands (#7287) 2023-09-12 09:49:51 +02:00
Micha Reiser
1e6df19a35 Bool expression comment placement (#7269) 2023-09-12 06:39:57 +00:00
Zanie Blue
c21b960fc7 Display the --preview option in the CLI help menu (#7274)
If we're going to warn on use of NURSERY in #7210, we probably ought to
show the `--preview` option in our help menus.
2023-09-11 18:09:58 -05:00
Zanie Blue
73ad2affa1 Update preview and fix documentation symbols (#7207)
I don't love the sunrise emoji and 🧪 seems nice :)

Requires #7195

---------

Co-authored-by: konsti <konstin@mailbox.org>
2023-09-11 18:08:00 -05:00
Zanie Blue
40c936922e Add "Preview" section to auto-generated release notes (#7280) 2023-09-11 18:07:47 -05:00
Charlie Marsh
874db4fb86 Invert condition for < and <= in outdated version block (#7284)
Closes https://github.com/astral-sh/ruff/issues/7258.
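
As a rough sketch of the corrected behavior (the snippet and target version
are illustrative, assuming a minimum supported version of Python 3.8, e.g.
`target-version = "py38"`):

```python
import sys

if sys.version_info < (3, 8):   # UP036: can never be true on Python >= 3.8, so the block is flagged for removal
    print("legacy path")

if sys.version_info <= (3, 8):  # not flagged: this branch still runs on Python 3.8 itself
    print("3.8-compatible path")
```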
2023-09-11 23:02:23 +00:00
Dhruv Manilawala
a41bb2733f Add range to lexer test snapshots (#7265)
## Summary

This PR updates the lexer test snapshots to include the range value as
well. This is mainly a mechanical refactor.

### Motivation

The main motivation is so that we can verify that the ranges are valid
and do not overlap.

## Test Plan

`cargo test`
2023-09-11 19:12:46 +00:00
Zanie Blue
24b848a4ea Enable preview mode during benchmarks (#7208)
Split out of https://github.com/astral-sh/ruff/pull/7195 so benchmark
changes from enabling additional rules can be reviewed separately.
2023-09-11 14:09:33 -05:00
Zanie Blue
773ba5f816 Update the docs workflow to allow publishing a specific ref (#7278)
Related https://github.com/astral-sh/ruff/issues/7276

Our docs publishing action does not allow targeting a specific commit
when run manually, which means we cannot update the documentation to
anything but the latest commit on `main`. This change allows a ref to be
provided.

---------

Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
2023-09-11 18:51:31 +00:00
Dhruv Manilawala
f5701fcc63 Use snapshots for remaining lexer tests (#7264)
## Summary

This PR updates the remaining lexer test cases to use the snapshots.
This is mainly a mechanical refactor.

## Motivation

The main motivation is so that when we add the token range values to the
test case output, it's easier to update the test cases.

The reason they were not using snapshots before was the usage of the
`test_case` macro. The macro is mainly used for the different EOL test cases. If we
just generated the snapshots directly, the snapshot names would be suffixed
with `-1`, `-2`, etc., as the test function is still the same. So, we create
the snapshots ourselves, with the platform name, for the respective EOL
test cases.

## Test Plan

`cargo test`
2023-09-12 00:16:38 +05:30
Zanie Blue
ff0feb191c Use pages deploy instead of the deprecated pages publish command to deploy the docs website (#7277)
See https://github.com/cloudflare/workers-sdk/issues/3067

Related #7276
2023-09-11 13:23:47 -05:00
Zanie Blue
6566d00295 Update rule selection to respect preview mode (#7195)
## Summary

<!-- What's the purpose of the change? What does it do, and why? -->

Extends work in #7046 (some relevant discussion there)

Changes:

- All nursery rules are now referred to as preview rules
- Documentation for the nursery is updated to describe preview
- Adds a "PREVIEW" selector for preview rules
- This is primarily to allow `--preview --ignore PREVIEW --extend-select
FOO001,BAR200`
- Using `--preview` enables preview rules that match selectors

Notable decisions:

- Preview rules are not selectable by their rule code without enabling
preview
- Retains the "NURSERY" selector for backwards compatibility
- Nursery rules are selectable by their rule code for backwards
compatibility

Additional work:

- Selection of preview rules without the "--preview" flag should display
a warning
- Use of deprecated nursery selection behavior should display a warning
- Nursery selection should be removed after some time

## Test Plan

<!-- How was it tested? -->

Manual confirmation (i.e., we don't have any preview rules yet, just
nursery rules, so I added a preview rule for manual testing)

New unit tests

---------

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-09-11 12:28:39 -05:00
106 changed files with 4454 additions and 1319 deletions

3
.github/release.yml vendored
View File

@@ -20,6 +20,9 @@ changelog:
- title: Bug Fixes
labels:
- bug
- title: Preview
labels:
- preview
- title: Other Changes
labels:
- "*"

View File

@@ -2,6 +2,11 @@ name: mkdocs
on:
workflow_dispatch:
inputs:
ref:
description: "The commit SHA, tag, or branch to publish. Uses the default branch if not specified."
default: ""
type: string
release:
types: [published]
@@ -13,6 +18,8 @@ jobs:
MKDOCS_INSIDERS_SSH_KEY_EXISTS: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY != '' }}
steps:
- uses: actions/checkout@v4
with:
ref: ${{ inputs.ref }}
- uses: actions/setup-python@v4
- name: "Add SSH key"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
@@ -44,4 +51,5 @@ jobs:
with:
apiToken: ${{ secrets.CF_API_TOKEN }}
accountId: ${{ secrets.CF_ACCOUNT_ID }}
command: pages publish site --project-name=ruff-docs --branch ${GITHUB_HEAD_REF} --commit-hash ${GITHUB_SHA}
# `github.head_ref` is only set during pull requests and for manual runs or tags we use `main` to deploy to production
command: pages deploy site --project-name=ruff-docs --branch ${{ github.head_ref || 'main' }} --commit-hash ${GITHUB_SHA}

8
Cargo.lock generated
View File

@@ -821,7 +821,7 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8-to-ruff"
version = "0.0.288"
version = "0.0.289"
dependencies = [
"anyhow",
"clap",
@@ -2037,7 +2037,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.0.288"
version = "0.0.289"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
@@ -2135,7 +2135,7 @@ dependencies = [
[[package]]
name = "ruff_cli"
version = "0.0.288"
version = "0.0.289"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
@@ -2183,6 +2183,7 @@ dependencies = [
"similar",
"strum",
"tempfile",
"test-case",
"thiserror",
"tikv-jemallocator",
"tracing",
@@ -2400,7 +2401,6 @@ dependencies = [
"ruff_text_size",
"rustc-hash",
"static_assertions",
"test-case",
"tiny-keccak",
"unicode-ident",
"unicode_names2",

View File

@@ -140,7 +140,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com) hook:
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.0.288
rev: v0.0.289
hooks:
- id: ruff
```

View File

@@ -1,6 +1,6 @@
[package]
name = "flake8-to-ruff"
version = "0.0.288"
version = "0.0.289"
description = """
Convert Flake8 configuration files to Ruff configuration files.
"""

View File

@@ -4,6 +4,7 @@ use std::str::FromStr;
use anyhow::anyhow;
use ruff::registry::Linter;
use ruff::settings::types::PreviewMode;
use ruff::RuleSelector;
#[derive(Copy, Clone, Ord, PartialOrd, Eq, PartialEq)]
@@ -331,7 +332,7 @@ pub(crate) fn infer_plugins_from_codes(selectors: &HashSet<RuleSelector>) -> Vec
.filter(|plugin| {
for selector in selectors {
if selector
.into_iter()
.rules(PreviewMode::Disabled)
.any(|rule| Linter::from(plugin).rules().any(|r| r == rule))
{
return true;

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.0.288"
version = "0.0.289"
publish = false
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -1,3 +1,6 @@
from typing import override
class Apples:
def _init_(self): # [bad-dunder-name]
pass
@@ -21,6 +24,11 @@ class Apples:
# author likely meant to call the invert dunder method
pass
@override
def _ignore__(self): # [bad-dunder-name]
# overridden dunder methods should be ignored
pass
def hello(self):
print("hello")

View File

@@ -178,3 +178,9 @@ if True:
if True:
if sys.version_info > (3, 0): \
expected_error = []
if sys.version_info < (3,12):
print("py3")
if sys.version_info <= (3,12):
print("py3")

View File

@@ -9,6 +9,7 @@ use strum_macros::{AsRefStr, EnumIter};
use ruff_diagnostics::Violation;
use crate::registry::{AsRule, Linter};
use crate::rule_selector::is_single_rule_selector;
use crate::rules;
#[derive(PartialEq, Eq, PartialOrd, Ord)]
@@ -51,7 +52,10 @@ impl PartialEq<&str> for NoqaCode {
pub enum RuleGroup {
/// The rule has not been assigned to any specific group.
Unspecified,
/// The rule is still under development, and must be enabled explicitly.
/// The rule is unstable, and preview mode must be enabled for usage.
Preview,
/// Legacy category for unstable rules, supports backwards compatible selection.
#[deprecated(note = "Use `RuleGroup::Preview` for new rules instead")]
Nursery,
}
@@ -64,38 +68,71 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
Some(match (linter, code) {
// pycodestyle errors
(Pycodestyle, "E101") => (RuleGroup::Unspecified, rules::pycodestyle::rules::MixedSpacesAndTabs),
#[allow(deprecated)]
(Pycodestyle, "E111") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::IndentationWithInvalidMultiple),
#[allow(deprecated)]
(Pycodestyle, "E112") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::NoIndentedBlock),
#[allow(deprecated)]
(Pycodestyle, "E113") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::UnexpectedIndentation),
#[allow(deprecated)]
(Pycodestyle, "E114") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::IndentationWithInvalidMultipleComment),
#[allow(deprecated)]
(Pycodestyle, "E115") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::NoIndentedBlockComment),
#[allow(deprecated)]
(Pycodestyle, "E116") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::UnexpectedIndentationComment),
#[allow(deprecated)]
(Pycodestyle, "E117") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::OverIndented),
#[allow(deprecated)]
(Pycodestyle, "E201") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::WhitespaceAfterOpenBracket),
#[allow(deprecated)]
(Pycodestyle, "E202") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::WhitespaceBeforeCloseBracket),
#[allow(deprecated)]
(Pycodestyle, "E203") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::WhitespaceBeforePunctuation),
#[allow(deprecated)]
(Pycodestyle, "E211") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::WhitespaceBeforeParameters),
#[allow(deprecated)]
(Pycodestyle, "E221") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MultipleSpacesBeforeOperator),
#[allow(deprecated)]
(Pycodestyle, "E222") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MultipleSpacesAfterOperator),
#[allow(deprecated)]
(Pycodestyle, "E223") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::TabBeforeOperator),
#[allow(deprecated)]
(Pycodestyle, "E224") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::TabAfterOperator),
#[allow(deprecated)]
(Pycodestyle, "E225") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MissingWhitespaceAroundOperator),
#[allow(deprecated)]
(Pycodestyle, "E226") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MissingWhitespaceAroundArithmeticOperator),
#[allow(deprecated)]
(Pycodestyle, "E227") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MissingWhitespaceAroundBitwiseOrShiftOperator),
#[allow(deprecated)]
(Pycodestyle, "E228") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MissingWhitespaceAroundModuloOperator),
#[allow(deprecated)]
(Pycodestyle, "E231") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MissingWhitespace),
#[allow(deprecated)]
(Pycodestyle, "E241") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MultipleSpacesAfterComma),
#[allow(deprecated)]
(Pycodestyle, "E242") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::TabAfterComma),
#[allow(deprecated)]
(Pycodestyle, "E251") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::UnexpectedSpacesAroundKeywordParameterEquals),
#[allow(deprecated)]
(Pycodestyle, "E252") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MissingWhitespaceAroundParameterEquals),
#[allow(deprecated)]
(Pycodestyle, "E261") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::TooFewSpacesBeforeInlineComment),
#[allow(deprecated)]
(Pycodestyle, "E262") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::NoSpaceAfterInlineComment),
#[allow(deprecated)]
(Pycodestyle, "E265") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::NoSpaceAfterBlockComment),
#[allow(deprecated)]
(Pycodestyle, "E266") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MultipleLeadingHashesForBlockComment),
#[allow(deprecated)]
(Pycodestyle, "E271") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MultipleSpacesAfterKeyword),
#[allow(deprecated)]
(Pycodestyle, "E272") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MultipleSpacesBeforeKeyword),
#[allow(deprecated)]
(Pycodestyle, "E273") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::TabAfterKeyword),
#[allow(deprecated)]
(Pycodestyle, "E274") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::TabBeforeKeyword),
#[allow(deprecated)]
(Pycodestyle, "E275") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MissingWhitespaceAfterKeyword),
(Pycodestyle, "E401") => (RuleGroup::Unspecified, rules::pycodestyle::rules::MultipleImportsOnOneLine),
(Pycodestyle, "E402") => (RuleGroup::Unspecified, rules::pycodestyle::rules::ModuleImportNotAtTopOfFile),
@@ -176,6 +213,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pylint, "C0205") => (RuleGroup::Unspecified, rules::pylint::rules::SingleStringSlots),
(Pylint, "C0208") => (RuleGroup::Unspecified, rules::pylint::rules::IterationOverSet),
(Pylint, "C0414") => (RuleGroup::Unspecified, rules::pylint::rules::UselessImportAlias),
#[allow(deprecated)]
(Pylint, "C1901") => (RuleGroup::Nursery, rules::pylint::rules::CompareToEmptyString),
(Pylint, "C3002") => (RuleGroup::Unspecified, rules::pylint::rules::UnnecessaryDirectLambdaCall),
(Pylint, "E0100") => (RuleGroup::Unspecified, rules::pylint::rules::YieldInInit),
@@ -216,6 +254,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pylint, "R1722") => (RuleGroup::Unspecified, rules::pylint::rules::SysExitAlias),
(Pylint, "R2004") => (RuleGroup::Unspecified, rules::pylint::rules::MagicValueComparison),
(Pylint, "R5501") => (RuleGroup::Unspecified, rules::pylint::rules::CollapsibleElseIf),
#[allow(deprecated)]
(Pylint, "R6301") => (RuleGroup::Nursery, rules::pylint::rules::NoSelfUse),
(Pylint, "W0120") => (RuleGroup::Unspecified, rules::pylint::rules::UselessElseOnLoop),
(Pylint, "W0127") => (RuleGroup::Unspecified, rules::pylint::rules::SelfAssigningVariable),
@@ -228,8 +267,10 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pylint, "W1508") => (RuleGroup::Unspecified, rules::pylint::rules::InvalidEnvvarDefault),
(Pylint, "W1509") => (RuleGroup::Unspecified, rules::pylint::rules::SubprocessPopenPreexecFn),
(Pylint, "W1510") => (RuleGroup::Unspecified, rules::pylint::rules::SubprocessRunWithoutCheck),
#[allow(deprecated)]
(Pylint, "W1641") => (RuleGroup::Nursery, rules::pylint::rules::EqWithoutHash),
(Pylint, "W2901") => (RuleGroup::Unspecified, rules::pylint::rules::RedefinedLoopName),
#[allow(deprecated)]
(Pylint, "W3201") => (RuleGroup::Nursery, rules::pylint::rules::BadDunderMethodName),
(Pylint, "W3301") => (RuleGroup::Unspecified, rules::pylint::rules::NestedMinMax),
@@ -403,6 +444,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Simplify, "910") => (RuleGroup::Unspecified, rules::flake8_simplify::rules::DictGetWithNoneDefault),
// flake8-copyright
#[allow(deprecated)]
(Flake8Copyright, "001") => (RuleGroup::Nursery, rules::flake8_copyright::rules::MissingCopyrightNotice),
// pyupgrade
@@ -815,9 +857,11 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Ruff, "012") => (RuleGroup::Unspecified, rules::ruff::rules::MutableClassDefault),
(Ruff, "013") => (RuleGroup::Unspecified, rules::ruff::rules::ImplicitOptional),
#[cfg(feature = "unreachable-code")] // When removing this feature gate, also update rules_selector.rs
#[allow(deprecated)]
(Ruff, "014") => (RuleGroup::Nursery, rules::ruff::rules::UnreachableCode),
(Ruff, "015") => (RuleGroup::Unspecified, rules::ruff::rules::UnnecessaryIterableAllocationForFirstElement),
(Ruff, "016") => (RuleGroup::Unspecified, rules::ruff::rules::InvalidIndexType),
#[allow(deprecated)]
(Ruff, "017") => (RuleGroup::Nursery, rules::ruff::rules::QuadraticListSummation),
(Ruff, "100") => (RuleGroup::Unspecified, rules::ruff::rules::UnusedNOQA),
(Ruff, "200") => (RuleGroup::Unspecified, rules::ruff::rules::InvalidPyprojectToml),
@@ -866,8 +910,11 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Slots, "002") => (RuleGroup::Unspecified, rules::flake8_slots::rules::NoSlotsInNamedtupleSubclass),
// refurb
#[allow(deprecated)]
(Refurb, "113") => (RuleGroup::Nursery, rules::refurb::rules::RepeatedAppend),
#[allow(deprecated)]
(Refurb, "131") => (RuleGroup::Nursery, rules::refurb::rules::DeleteFullSlice),
#[allow(deprecated)]
(Refurb, "132") => (RuleGroup::Nursery, rules::refurb::rules::CheckAndRemoveFromSet),
_ => return None,

View File

@@ -4,7 +4,7 @@ use std::num::NonZeroUsize;
use colored::Colorize;
use ruff_notebook::{Notebook, NotebookIndex};
use ruff_notebook::NotebookIndex;
use ruff_source_file::OneIndexed;
use crate::fs::relativize_path;
@@ -65,7 +65,7 @@ impl Emitter for GroupedEmitter {
writer,
"{}",
DisplayGroupedMessage {
jupyter_index: context.notebook(message.filename()).map(Notebook::index),
notebook_index: context.notebook_index(message.filename()),
message,
show_fix_status: self.show_fix_status,
show_source: self.show_source,
@@ -92,7 +92,7 @@ struct DisplayGroupedMessage<'a> {
show_source: bool,
row_length: NonZeroUsize,
column_length: NonZeroUsize,
jupyter_index: Option<&'a NotebookIndex>,
notebook_index: Option<&'a NotebookIndex>,
}
impl Display for DisplayGroupedMessage<'_> {
@@ -110,7 +110,7 @@ impl Display for DisplayGroupedMessage<'_> {
)?;
// Check if we're working on a jupyter notebook and translate positions with cell accordingly
let (row, col) = if let Some(jupyter_index) = self.jupyter_index {
let (row, col) = if let Some(jupyter_index) = self.notebook_index {
write!(
f,
"cell {cell}{sep}",
@@ -150,7 +150,7 @@ impl Display for DisplayGroupedMessage<'_> {
"{}",
MessageCodeFrame {
message,
jupyter_index: self.jupyter_index
notebook_index: self.notebook_index
}
)?;
}

View File

@@ -14,7 +14,7 @@ pub use json_lines::JsonLinesEmitter;
pub use junit::JunitEmitter;
pub use pylint::PylintEmitter;
use ruff_diagnostics::{Diagnostic, DiagnosticKind, Fix};
use ruff_notebook::Notebook;
use ruff_notebook::NotebookIndex;
use ruff_source_file::{SourceFile, SourceLocation};
use ruff_text_size::{Ranged, TextRange, TextSize};
pub use text::TextEmitter;
@@ -127,21 +127,21 @@ pub trait Emitter {
/// Context passed to [`Emitter`].
pub struct EmitterContext<'a> {
notebooks: &'a FxHashMap<String, Notebook>,
notebook_indexes: &'a FxHashMap<String, NotebookIndex>,
}
impl<'a> EmitterContext<'a> {
pub fn new(notebooks: &'a FxHashMap<String, Notebook>) -> Self {
Self { notebooks }
pub fn new(notebook_indexes: &'a FxHashMap<String, NotebookIndex>) -> Self {
Self { notebook_indexes }
}
/// Tests if the file with `name` is a jupyter notebook.
pub fn is_notebook(&self, name: &str) -> bool {
self.notebooks.contains_key(name)
self.notebook_indexes.contains_key(name)
}
pub fn notebook(&self, name: &str) -> Option<&Notebook> {
self.notebooks.get(name)
pub fn notebook_index(&self, name: &str) -> Option<&NotebookIndex> {
self.notebook_indexes.get(name)
}
}
@@ -225,8 +225,8 @@ def fibonacci(n):
emitter: &mut dyn Emitter,
messages: &[Message],
) -> String {
let source_kinds = FxHashMap::default();
let context = EmitterContext::new(&source_kinds);
let notebook_indexes = FxHashMap::default();
let context = EmitterContext::new(&notebook_indexes);
let mut output: Vec<u8> = Vec::new();
emitter.emit(&mut output, messages, &context).unwrap();

View File

@@ -7,7 +7,7 @@ use annotate_snippets::snippet::{Annotation, AnnotationType, Slice, Snippet, Sou
use bitflags::bitflags;
use colored::Colorize;
use ruff_notebook::{Notebook, NotebookIndex};
use ruff_notebook::NotebookIndex;
use ruff_source_file::{OneIndexed, SourceLocation};
use ruff_text_size::{Ranged, TextRange, TextSize};
@@ -71,14 +71,14 @@ impl Emitter for TextEmitter {
)?;
let start_location = message.compute_start_location();
let jupyter_index = context.notebook(message.filename()).map(Notebook::index);
let notebook_index = context.notebook_index(message.filename());
// Check if we're working on a jupyter notebook and translate positions with cell accordingly
let diagnostic_location = if let Some(jupyter_index) = jupyter_index {
let diagnostic_location = if let Some(notebook_index) = notebook_index {
write!(
writer,
"cell {cell}{sep}",
cell = jupyter_index
cell = notebook_index
.cell(start_location.row.get())
.unwrap_or_default(),
sep = ":".cyan(),
@@ -86,7 +86,7 @@ impl Emitter for TextEmitter {
SourceLocation {
row: OneIndexed::new(
jupyter_index
notebook_index
.cell_row(start_location.row.get())
.unwrap_or(1) as usize,
)
@@ -115,7 +115,7 @@ impl Emitter for TextEmitter {
"{}",
MessageCodeFrame {
message,
jupyter_index
notebook_index
}
)?;
}
@@ -161,7 +161,7 @@ impl Display for RuleCodeAndBody<'_> {
pub(super) struct MessageCodeFrame<'a> {
pub(crate) message: &'a Message,
pub(crate) jupyter_index: Option<&'a NotebookIndex>,
pub(crate) notebook_index: Option<&'a NotebookIndex>,
}
impl Display for MessageCodeFrame<'_> {
@@ -186,14 +186,12 @@ impl Display for MessageCodeFrame<'_> {
let content_start_index = source_code.line_index(range.start());
let mut start_index = content_start_index.saturating_sub(2);
// If we're working on a jupyter notebook, skip the lines which are
// If we're working with a Jupyter Notebook, skip the lines which are
// outside of the cell containing the diagnostic.
if let Some(jupyter_index) = self.jupyter_index {
let content_start_cell = jupyter_index
.cell(content_start_index.get())
.unwrap_or_default();
if let Some(index) = self.notebook_index {
let content_start_cell = index.cell(content_start_index.get()).unwrap_or_default();
while start_index < content_start_index {
if jupyter_index.cell(start_index.get()).unwrap_or_default() == content_start_cell {
if index.cell(start_index.get()).unwrap_or_default() == content_start_cell {
break;
}
start_index = start_index.saturating_add(1);
@@ -213,14 +211,12 @@ impl Display for MessageCodeFrame<'_> {
.saturating_add(2)
.min(OneIndexed::from_zero_indexed(source_code.line_count()));
// If we're working on a jupyter notebook, skip the lines which are
// If we're working with a Jupyter Notebook, skip the lines which are
// outside of the cell containing the diagnostic.
if let Some(jupyter_index) = self.jupyter_index {
let content_end_cell = jupyter_index
.cell(content_end_index.get())
.unwrap_or_default();
if let Some(index) = self.notebook_index {
let content_end_cell = index.cell(content_end_index.get()).unwrap_or_default();
while end_index > content_end_index {
if jupyter_index.cell(end_index.get()).unwrap_or_default() == content_end_cell {
if index.cell(end_index.get()).unwrap_or_default() == content_end_cell {
break;
}
end_index = end_index.saturating_sub(1);
@@ -256,10 +252,10 @@ impl Display for MessageCodeFrame<'_> {
title: None,
slices: vec![Slice {
source: &source.text,
line_start: self.jupyter_index.map_or_else(
line_start: self.notebook_index.map_or_else(
|| start_index.get(),
|jupyter_index| {
jupyter_index
|notebook_index| {
notebook_index
.cell_row(start_index.get())
.unwrap_or_default() as usize
},

View File

@@ -9,12 +9,16 @@ use crate::codes::RuleCodePrefix;
use crate::codes::RuleIter;
use crate::registry::{Linter, Rule, RuleNamespace};
use crate::rule_redirects::get_redirect;
use crate::settings::types::PreviewMode;
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub enum RuleSelector {
/// Select all stable rules.
/// Select all rules (includes rules in preview if enabled)
All,
/// Select all nursery rules.
/// Category to select all rules in preview (includes legacy nursery rules)
Preview,
/// Legacy category to select all rules in the "nursery" which predated preview mode
#[deprecated(note = "Use `RuleSelector::Preview` for new rules instead")]
Nursery,
/// Legacy category to select both the `mccabe` and `flake8-comprehensions` linters
/// via a single selector.
@@ -29,6 +33,11 @@ pub enum RuleSelector {
prefix: RuleCodePrefix,
redirected_from: Option<&'static str>,
},
/// Select an individual rule with a given prefix.
Rule {
prefix: RuleCodePrefix,
redirected_from: Option<&'static str>,
},
}
impl From<Linter> for RuleSelector {
@@ -43,7 +52,9 @@ impl FromStr for RuleSelector {
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"ALL" => Ok(Self::All),
#[allow(deprecated)]
"NURSERY" => Ok(Self::Nursery),
"PREVIEW" => Ok(Self::Preview),
"C" => Ok(Self::C),
"T" => Ok(Self::T),
_ => {
@@ -59,16 +70,43 @@ impl FromStr for RuleSelector {
return Ok(Self::Linter(linter));
}
Ok(Self::Prefix {
prefix: RuleCodePrefix::parse(&linter, code)
.map_err(|_| ParseError::Unknown(s.to_string()))?,
redirected_from,
})
// Does the selector select a single rule?
let prefix = RuleCodePrefix::parse(&linter, code)
.map_err(|_| ParseError::Unknown(s.to_string()))?;
if is_single_rule_selector(&prefix) {
Ok(Self::Rule {
prefix,
redirected_from,
})
} else {
Ok(Self::Prefix {
prefix,
redirected_from,
})
}
}
}
}
}
/// Returns `true` if the [`RuleCodePrefix`] matches a single rule exactly
/// (e.g., `E225`, as opposed to `E2`).
pub(crate) fn is_single_rule_selector(prefix: &RuleCodePrefix) -> bool {
let mut rules = prefix.rules();
// The selector must match a single rule.
let Some(rule) = rules.next() else {
return false;
};
if rules.next().is_some() {
return false;
}
// The rule must match the selector exactly.
rule.noqa_code().suffix() == prefix.short_code()
}
#[derive(Debug, thiserror::Error)]
pub enum ParseError {
#[error("Unknown rule selector: `{0}`")]
@@ -81,10 +119,12 @@ impl RuleSelector {
pub fn prefix_and_code(&self) -> (&'static str, &'static str) {
match self {
RuleSelector::All => ("", "ALL"),
#[allow(deprecated)]
RuleSelector::Nursery => ("", "NURSERY"),
RuleSelector::Preview => ("", "PREVIEW"),
RuleSelector::C => ("", "C"),
RuleSelector::T => ("", "T"),
RuleSelector::Prefix { prefix, .. } => {
RuleSelector::Prefix { prefix, .. } | RuleSelector::Rule { prefix, .. } => {
(prefix.linter().common_prefix(), prefix.short_code())
}
RuleSelector::Linter(l) => (l.common_prefix(), ""),
@@ -135,27 +175,19 @@ impl Visitor<'_> for SelectorVisitor {
}
}
impl From<RuleCodePrefix> for RuleSelector {
fn from(prefix: RuleCodePrefix) -> Self {
Self::Prefix {
prefix,
redirected_from: None,
}
}
}
impl IntoIterator for &RuleSelector {
type IntoIter = RuleSelectorIter;
type Item = Rule;
fn into_iter(self) -> Self::IntoIter {
impl RuleSelector {
/// Return all matching rules, regardless of whether they're in preview.
pub fn all_rules(&self) -> impl Iterator<Item = Rule> + '_ {
match self {
RuleSelector::All => {
RuleSelectorIter::All(Rule::iter().filter(|rule| !rule.is_nursery()))
}
RuleSelector::All => RuleSelectorIter::All(Rule::iter()),
#[allow(deprecated)]
RuleSelector::Nursery => {
RuleSelectorIter::Nursery(Rule::iter().filter(Rule::is_nursery))
}
RuleSelector::Preview => RuleSelectorIter::Nursery(
Rule::iter().filter(|rule| rule.is_preview() || rule.is_nursery()),
),
RuleSelector::C => RuleSelectorIter::Chain(
Linter::Flake8Comprehensions
.rules()
@@ -167,13 +199,28 @@ impl IntoIterator for &RuleSelector {
.chain(Linter::Flake8Print.rules()),
),
RuleSelector::Linter(linter) => RuleSelectorIter::Vec(linter.rules()),
RuleSelector::Prefix { prefix, .. } => RuleSelectorIter::Vec(prefix.clone().rules()),
RuleSelector::Prefix { prefix, .. } | RuleSelector::Rule { prefix, .. } => {
RuleSelectorIter::Vec(prefix.clone().rules())
}
}
}
/// Returns rules matching the selector, taking into account whether preview mode is enabled.
pub fn rules(&self, preview: PreviewMode) -> impl Iterator<Item = Rule> + '_ {
#[allow(deprecated)]
self.all_rules().filter(move |rule| {
// Always include rules that are not in preview or the nursery
!(rule.is_preview() || rule.is_nursery())
// Backwards compatibility allows selection of nursery rules by exact code or dedicated group
|| (matches!(self, RuleSelector::Rule { .. }) || matches!(self, RuleSelector::Nursery { .. }) && rule.is_nursery())
// Enabling preview includes all preview or nursery rules
|| preview.is_enabled()
})
}
}
pub enum RuleSelectorIter {
All(std::iter::Filter<RuleIter, fn(&Rule) -> bool>),
All(RuleIter),
Nursery(std::iter::Filter<RuleIter, fn(&Rule) -> bool>),
Chain(std::iter::Chain<std::vec::IntoIter<Rule>, std::vec::IntoIter<Rule>>),
Vec(std::vec::IntoIter<Rule>),
@@ -192,18 +239,6 @@ impl Iterator for RuleSelectorIter {
}
}
/// A const alternative to the `impl From<RuleCodePrefix> for RuleSelector`
/// to let us keep the fields of [`RuleSelector`] private.
// Note that Rust doesn't yet support `impl const From<RuleCodePrefix> for
// RuleSelector` (see https://github.com/rust-lang/rust/issues/67792).
// TODO(martin): Remove once RuleSelector is an enum with Linter & Rule variants
pub(crate) const fn prefix_to_selector(prefix: RuleCodePrefix) -> RuleSelector {
RuleSelector::Prefix {
prefix,
redirected_from: None,
}
}
#[cfg(feature = "schemars")]
mod schema {
use itertools::Itertools;
@@ -266,18 +301,20 @@ impl RuleSelector {
pub fn specificity(&self) -> Specificity {
match self {
RuleSelector::All => Specificity::All,
RuleSelector::Preview => Specificity::All,
#[allow(deprecated)]
RuleSelector::Nursery => Specificity::All,
RuleSelector::T => Specificity::LinterGroup,
RuleSelector::C => Specificity::LinterGroup,
RuleSelector::Linter(..) => Specificity::Linter,
RuleSelector::Rule { .. } => Specificity::Rule,
RuleSelector::Prefix { prefix, .. } => {
let prefix: &'static str = prefix.short_code();
match prefix.len() {
1 => Specificity::Code1Char,
2 => Specificity::Code2Chars,
3 => Specificity::Code3Chars,
4 => Specificity::Code4Chars,
5 => Specificity::Code5Chars,
1 => Specificity::Prefix1Char,
2 => Specificity::Prefix2Chars,
3 => Specificity::Prefix3Chars,
4 => Specificity::Prefix4Chars,
_ => panic!("RuleSelector::specificity doesn't yet support codes with so many characters"),
}
}
@@ -285,16 +322,24 @@ impl RuleSelector {
}
}
#[derive(EnumIter, PartialEq, Eq, PartialOrd, Ord, Copy, Clone)]
#[derive(EnumIter, PartialEq, Eq, PartialOrd, Ord, Copy, Clone, Debug)]
pub enum Specificity {
/// The specificity when selecting all rules (e.g., `--select ALL`).
All,
/// The specificity when selecting a legacy linter group (e.g., `--select C` or `--select T`).
LinterGroup,
/// The specificity when selecting a linter (e.g., `--select PLE` or `--select UP`).
Linter,
Code1Char,
Code2Chars,
Code3Chars,
Code4Chars,
Code5Chars,
/// The specificity when selecting via a rule prefix with a one-character code (e.g., `--select PLE1`).
Prefix1Char,
/// The specificity when selecting via a rule prefix with a two-character code (e.g., `--select PLE12`).
Prefix2Chars,
/// The specificity when selecting via a rule prefix with a three-character code (e.g., `--select PLE123`).
Prefix3Chars,
/// The specificity when selecting via a rule prefix with a four-character code (e.g., `--select PLE1234`).
Prefix4Chars,
/// The specificity when selecting an individual rule (e.g., `--select PLE1205`).
Rule,
}
#[cfg(feature = "clap")]

View File

@@ -10,6 +10,24 @@ use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::comparable::ComparableExpr;
use ruff_text_size::Ranged;
/// ## What it does
/// Checks for duplicate union members.
///
/// ## Why is this bad?
/// Duplicate union members are redundant and should be removed.
///
/// ## Example
/// ```python
/// foo: str | str
/// ```
///
/// Use instead:
/// ```python
/// foo: str
/// ```
///
/// ## References
/// - [Python documentation: `typing.Union`](https://docs.python.org/3/library/typing.html#typing.Union)
#[violation]
pub struct DuplicateUnionMember {
duplicate_name: String,

View File

@@ -2,18 +2,12 @@ use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::identifier::Identifier;
use ruff_python_ast::Stmt;
use ruff_python_semantic::analyze::visibility;
use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for any misspelled dunder name method and for any method
/// defined with `_..._` that's not one of the pre-defined methods
///
/// The pre-defined methods encompass all of Python's standard dunder
/// methods.
///
/// Note this includes all methods starting and ending with at least
/// one underscore to detect mistakes.
/// Checks for misspelled and unknown dunder names in method definitions.
///
/// ## Why is this bad?
/// Misspelled dunder name methods may cause your code to not function
@@ -24,6 +18,10 @@ use crate::checkers::ast::Checker;
/// that diverges from standard Python dunder methods could potentially
/// confuse someone reading the code.
///
/// This rule will detect all methods starting and ending with at least
/// one underscore (e.g., `_str_`), but ignores known dunder methods (like
/// `__init__`), as well as methods that are marked with `@override`.
///
/// ## Example
/// ```python
/// class Foo:
@@ -62,6 +60,9 @@ pub(crate) fn bad_dunder_method_name(checker: &mut Checker, class_body: &[Stmt])
method.name.starts_with('_') && method.name.ends_with('_')
})
{
if visibility::is_override(&method.decorator_list, checker.semantic()) {
continue;
}
checker.diagnostics.push(Diagnostic::new(
BadDunderMethodName {
name: method.name.to_string(),

View File

@@ -1,61 +1,61 @@
---
source: crates/ruff/src/rules/pylint/mod.rs
---
bad_dunder_method_name.py:2:9: PLW3201 Bad or misspelled dunder method name `_init_`. (bad-dunder-name)
bad_dunder_method_name.py:5:9: PLW3201 Bad or misspelled dunder method name `_init_`. (bad-dunder-name)
|
1 | class Apples:
2 | def _init_(self): # [bad-dunder-name]
4 | class Apples:
5 | def _init_(self): # [bad-dunder-name]
| ^^^^^^ PLW3201
3 | pass
6 | pass
|
bad_dunder_method_name.py:5:9: PLW3201 Bad or misspelled dunder method name `__hello__`. (bad-dunder-name)
bad_dunder_method_name.py:8:9: PLW3201 Bad or misspelled dunder method name `__hello__`. (bad-dunder-name)
|
3 | pass
4 |
5 | def __hello__(self): # [bad-dunder-name]
6 | pass
7 |
8 | def __hello__(self): # [bad-dunder-name]
| ^^^^^^^^^ PLW3201
6 | print("hello")
9 | print("hello")
|
bad_dunder_method_name.py:8:9: PLW3201 Bad or misspelled dunder method name `__init_`. (bad-dunder-name)
bad_dunder_method_name.py:11:9: PLW3201 Bad or misspelled dunder method name `__init_`. (bad-dunder-name)
|
6 | print("hello")
7 |
8 | def __init_(self): # [bad-dunder-name]
9 | print("hello")
10 |
11 | def __init_(self): # [bad-dunder-name]
| ^^^^^^^ PLW3201
9 | # author likely unintentionally misspelled the correct init dunder.
10 | pass
12 | # author likely unintentionally misspelled the correct init dunder.
13 | pass
|
bad_dunder_method_name.py:12:9: PLW3201 Bad or misspelled dunder method name `_init_`. (bad-dunder-name)
bad_dunder_method_name.py:15:9: PLW3201 Bad or misspelled dunder method name `_init_`. (bad-dunder-name)
|
10 | pass
11 |
12 | def _init_(self): # [bad-dunder-name]
13 | pass
14 |
15 | def _init_(self): # [bad-dunder-name]
| ^^^^^^ PLW3201
13 | # author likely unintentionally misspelled the correct init dunder.
14 | pass
16 | # author likely unintentionally misspelled the correct init dunder.
17 | pass
|
bad_dunder_method_name.py:16:9: PLW3201 Bad or misspelled dunder method name `___neg__`. (bad-dunder-name)
bad_dunder_method_name.py:19:9: PLW3201 Bad or misspelled dunder method name `___neg__`. (bad-dunder-name)
|
14 | pass
15 |
16 | def ___neg__(self): # [bad-dunder-name]
17 | pass
18 |
19 | def ___neg__(self): # [bad-dunder-name]
| ^^^^^^^^ PLW3201
17 | # author likely accidentally added an additional `_`
18 | pass
20 | # author likely accidentally added an additional `_`
21 | pass
|
bad_dunder_method_name.py:20:9: PLW3201 Bad or misspelled dunder method name `__inv__`. (bad-dunder-name)
bad_dunder_method_name.py:23:9: PLW3201 Bad or misspelled dunder method name `__inv__`. (bad-dunder-name)
|
18 | pass
19 |
20 | def __inv__(self): # [bad-dunder-name]
21 | pass
22 |
23 | def __inv__(self): # [bad-dunder-name]
| ^^^^^^^ PLW3201
21 | # author likely meant to call the invert dunder method
22 | pass
24 | # author likely meant to call the invert dunder method
25 | pass
|

View File

@@ -60,66 +60,147 @@ impl AlwaysAutofixableViolation for OutdatedVersionBlock {
}
}
/// Converts a `BigInt` to a `u32`. If the number is negative, it will return 0.
fn bigint_to_u32(number: &BigInt) -> u32 {
let the_number = number.to_u32_digits();
match the_number.0 {
Sign::Minus | Sign::NoSign => 0,
Sign::Plus => *the_number.1.first().unwrap(),
}
}
/// UP036
pub(crate) fn outdated_version_block(checker: &mut Checker, stmt_if: &StmtIf) {
for branch in if_elif_branches(stmt_if) {
let Expr::Compare(ast::ExprCompare {
left,
ops,
comparators,
range: _,
}) = &branch.test
else {
continue;
};
/// Gets the version from the tuple
fn extract_version(elts: &[Expr]) -> Vec<u32> {
let mut version: Vec<u32> = vec![];
for elt in elts {
if let Expr::Constant(ast::ExprConstant {
value: Constant::Int(item),
..
}) = &elt
let ([op], [comparison]) = (ops.as_slice(), comparators.as_slice()) else {
continue;
};
if !checker
.semantic()
.resolve_call_path(left)
.is_some_and(|call_path| matches!(call_path.as_slice(), ["sys", "version_info"]))
{
let number = bigint_to_u32(item);
version.push(number);
} else {
return version;
continue;
}
}
version
}
/// Returns true if the `if_version` is less than the `PythonVersion`
fn compare_version(if_version: &[u32], py_version: PythonVersion, or_equal: bool) -> bool {
let mut if_version_iter = if_version.iter();
if let Some(if_major) = if_version_iter.next() {
let (py_major, py_minor) = py_version.as_tuple();
match if_major.cmp(&py_major) {
Ordering::Less => true,
Ordering::Equal => {
if let Some(if_minor) = if_version_iter.next() {
// Check the if_minor number (the minor version).
if or_equal {
*if_minor <= py_minor
} else {
*if_minor < py_minor
match comparison {
Expr::Tuple(ast::ExprTuple { elts, .. }) => match op {
CmpOp::Lt | CmpOp::LtE => {
let version = extract_version(elts);
let target = checker.settings.target_version;
if compare_version(&version, target, op == &CmpOp::LtE) {
let mut diagnostic =
Diagnostic::new(OutdatedVersionBlock, branch.test.range());
if checker.patch(diagnostic.kind.rule()) {
if let Some(fix) = fix_always_false_branch(checker, stmt_if, &branch) {
diagnostic.set_fix(fix);
}
}
checker.diagnostics.push(diagnostic);
}
}
CmpOp::Gt | CmpOp::GtE => {
let version = extract_version(elts);
let target = checker.settings.target_version;
if compare_version(&version, target, op == &CmpOp::GtE) {
let mut diagnostic =
Diagnostic::new(OutdatedVersionBlock, branch.test.range());
if checker.patch(diagnostic.kind.rule()) {
if let Some(fix) = fix_always_true_branch(checker, stmt_if, &branch) {
diagnostic.set_fix(fix);
}
}
checker.diagnostics.push(diagnostic);
}
}
_ => {}
},
Expr::Constant(ast::ExprConstant {
value: Constant::Int(number),
..
}) => {
if op == &CmpOp::Eq {
match bigint_to_u32(number) {
2 => {
let mut diagnostic =
Diagnostic::new(OutdatedVersionBlock, branch.test.range());
if checker.patch(diagnostic.kind.rule()) {
if let Some(fix) =
fix_always_false_branch(checker, stmt_if, &branch)
{
diagnostic.set_fix(fix);
}
}
checker.diagnostics.push(diagnostic);
}
3 => {
let mut diagnostic =
Diagnostic::new(OutdatedVersionBlock, branch.test.range());
if checker.patch(diagnostic.kind.rule()) {
if let Some(fix) = fix_always_true_branch(checker, stmt_if, &branch)
{
diagnostic.set_fix(fix);
}
}
checker.diagnostics.push(diagnostic);
}
_ => {}
}
} else {
// Assume Python 3.0.
true
}
}
Ordering::Greater => false,
_ => (),
}
} else {
false
}
}
/// For fixing, we have 4 cases:
/// * Just an if: delete as statement (insert pass in parent if required)
/// * If with an elif: delete, turn elif into if
/// * If with an else: delete, dedent else
/// * Just an elif: delete, `elif False` can always be removed
fn fix_py2_block(checker: &Checker, stmt_if: &StmtIf, branch: &IfElifBranch) -> Option<Fix> {
/// Returns true if the `target_version` is always less than the [`PythonVersion`].
fn compare_version(target_version: &[u32], py_version: PythonVersion, or_equal: bool) -> bool {
let mut target_version_iter = target_version.iter();
let Some(if_major) = target_version_iter.next() else {
return false;
};
let (py_major, py_minor) = py_version.as_tuple();
match if_major.cmp(&py_major) {
Ordering::Less => true,
Ordering::Greater => false,
Ordering::Equal => {
let Some(if_minor) = target_version_iter.next() else {
return true;
};
if or_equal {
// Ex) `sys.version_info <= 3.8`. If Python 3.8 is the minimum supported version,
// the condition won't always evaluate to `false`, so we want to return `false`.
*if_minor < py_minor
} else {
// Ex) `sys.version_info < 3.8`. If Python 3.8 is the minimum supported version,
// the condition _will_ always evaluate to `false`, so we want to return `true`.
*if_minor <= py_minor
}
}
}
}
/// Fix a branch that is known to always evaluate to `false`.
///
/// For example, when running with a minimum supported version of Python 3.8, the following branch
/// would be considered redundant:
/// ```python
/// if sys.version_info < (3, 7): ...
/// ```
///
/// In this case, the fix would involve removing the branch; however, there are multiple cases to
/// consider. For example, if the `if` has an `else`, then the `if` should be removed, and the
/// `else` should be inlined at the top level.
fn fix_always_false_branch(
checker: &Checker,
stmt_if: &StmtIf,
branch: &IfElifBranch,
) -> Option<Fix> {
match branch.kind {
BranchKind::If => match stmt_if.elif_else_clauses.first() {
// If we have a lone `if`, delete as statement (insert pass in parent if required)
@@ -210,8 +291,18 @@ fn fix_py2_block(checker: &Checker, stmt_if: &StmtIf, branch: &IfElifBranch) ->
}
}
/// Convert a [`Stmt::If`], removing the `else` block.
fn fix_py3_block(checker: &mut Checker, stmt_if: &StmtIf, branch: &IfElifBranch) -> Option<Fix> {
/// Fix a branch that is known to always evaluate to `true`.
///
/// For example, when running with a minimum supported version of Python 3.8, the following branch
/// would be considered redundant, as it's known to always evaluate to `true`:
/// ```python
/// if sys.version_info >= (3, 8): ...
/// ```
fn fix_always_true_branch(
checker: &mut Checker,
stmt_if: &StmtIf,
branch: &IfElifBranch,
) -> Option<Fix> {
match branch.kind {
BranchKind::If => {
// If the first statement is an `if`, use the body of this statement, and ignore
@@ -262,85 +353,31 @@ fn fix_py3_block(checker: &mut Checker, stmt_if: &StmtIf, branch: &IfElifBranch)
}
}
/// UP036
pub(crate) fn outdated_version_block(checker: &mut Checker, stmt_if: &StmtIf) {
for branch in if_elif_branches(stmt_if) {
let Expr::Compare(ast::ExprCompare {
left,
ops,
comparators,
range: _,
}) = &branch.test
else {
continue;
};
/// Converts a `BigInt` to a `u32`. If the number is negative, it will return 0.
fn bigint_to_u32(number: &BigInt) -> u32 {
let the_number = number.to_u32_digits();
match the_number.0 {
Sign::Minus | Sign::NoSign => 0,
Sign::Plus => *the_number.1.first().unwrap(),
}
}
let ([op], [comparison]) = (ops.as_slice(), comparators.as_slice()) else {
continue;
};
if !checker
.semantic()
.resolve_call_path(left)
.is_some_and(|call_path| matches!(call_path.as_slice(), ["sys", "version_info"]))
/// Gets the version from the tuple
fn extract_version(elts: &[Expr]) -> Vec<u32> {
let mut version: Vec<u32> = vec![];
for elt in elts {
if let Expr::Constant(ast::ExprConstant {
value: Constant::Int(item),
..
}) = &elt
{
continue;
}
match comparison {
Expr::Tuple(ast::ExprTuple { elts, .. }) => {
let version = extract_version(elts);
let target = checker.settings.target_version;
if op == &CmpOp::Lt || op == &CmpOp::LtE {
if compare_version(&version, target, op == &CmpOp::LtE) {
let mut diagnostic =
Diagnostic::new(OutdatedVersionBlock, branch.test.range());
if checker.patch(diagnostic.kind.rule()) {
if let Some(fix) = fix_py2_block(checker, stmt_if, &branch) {
diagnostic.set_fix(fix);
}
}
checker.diagnostics.push(diagnostic);
}
} else if op == &CmpOp::Gt || op == &CmpOp::GtE {
if compare_version(&version, target, op == &CmpOp::GtE) {
let mut diagnostic =
Diagnostic::new(OutdatedVersionBlock, branch.test.range());
if checker.patch(diagnostic.kind.rule()) {
if let Some(fix) = fix_py3_block(checker, stmt_if, &branch) {
diagnostic.set_fix(fix);
}
}
checker.diagnostics.push(diagnostic);
}
}
}
Expr::Constant(ast::ExprConstant {
value: Constant::Int(number),
..
}) => {
let version_number = bigint_to_u32(number);
if version_number == 2 && op == &CmpOp::Eq {
let mut diagnostic = Diagnostic::new(OutdatedVersionBlock, branch.test.range());
if checker.patch(diagnostic.kind.rule()) {
if let Some(fix) = fix_py2_block(checker, stmt_if, &branch) {
diagnostic.set_fix(fix);
}
}
checker.diagnostics.push(diagnostic);
} else if version_number == 3 && op == &CmpOp::Eq {
let mut diagnostic = Diagnostic::new(OutdatedVersionBlock, branch.test.range());
if checker.patch(diagnostic.kind.rule()) {
if let Some(fix) = fix_py3_block(checker, stmt_if, &branch) {
diagnostic.set_fix(fix);
}
}
checker.diagnostics.push(diagnostic);
}
}
_ => (),
let number = bigint_to_u32(item);
version.push(number);
} else {
return version;
}
}
version
}
#[cfg(test)]
@@ -355,8 +392,8 @@ mod tests {
#[test_case(PythonVersion::Py37, &[3, 0], true, true; "compare-3.0-whole")]
#[test_case(PythonVersion::Py37, &[3, 1], true, true; "compare-3.1")]
#[test_case(PythonVersion::Py37, &[3, 5], true, true; "compare-3.5")]
#[test_case(PythonVersion::Py37, &[3, 7], true, true; "compare-3.7")]
#[test_case(PythonVersion::Py37, &[3, 7], false, false; "compare-3.7-not-equal")]
#[test_case(PythonVersion::Py37, &[3, 7], true, false; "compare-3.7")]
#[test_case(PythonVersion::Py37, &[3, 7], false, true; "compare-3.7-not-equal")]
#[test_case(PythonVersion::Py37, &[3, 8], false , false; "compare-3.8")]
#[test_case(PythonVersion::Py310, &[3,9], true, true; "compare-3.9")]
#[test_case(PythonVersion::Py310, &[3, 11], true, false; "compare-3.11")]

View File

@@ -662,5 +662,27 @@ UP036_0.py:179:8: UP036 [*] Version block is outdated for minimum Python version
178 178 | if True:
179 |- if sys.version_info > (3, 0): \
180 179 | expected_error = []
181 180 |
182 181 | if sys.version_info < (3,12):
UP036_0.py:182:4: UP036 [*] Version block is outdated for minimum Python version
|
180 | expected_error = []
181 |
182 | if sys.version_info < (3,12):
| ^^^^^^^^^^^^^^^^^^^^^^^^^ UP036
183 | print("py3")
|
= help: Remove outdated version block
Suggested fix
179 179 | if sys.version_info > (3, 0): \
180 180 | expected_error = []
181 181 |
182 |-if sys.version_info < (3,12):
183 |- print("py3")
184 182 |
185 183 | if sys.version_info <= (3,12):
186 184 | print("py3")

View File

@@ -9,7 +9,7 @@ use super::Settings;
use crate::codes::{self, RuleCodePrefix};
use crate::line_width::{LineLength, TabSize};
use crate::registry::Linter;
use crate::rule_selector::{prefix_to_selector, RuleSelector};
use crate::rule_selector::RuleSelector;
use crate::rules::{
flake8_annotations, flake8_bandit, flake8_bugbear, flake8_builtins, flake8_comprehensions,
flake8_copyright, flake8_errmsg, flake8_gettext, flake8_implicit_str_concat,
@@ -20,7 +20,10 @@ use crate::rules::{
use crate::settings::types::FilePatternSet;
pub const PREFIXES: &[RuleSelector] = &[
prefix_to_selector(RuleCodePrefix::Pycodestyle(codes::Pycodestyle::E)),
RuleSelector::Prefix {
prefix: RuleCodePrefix::Pycodestyle(codes::Pycodestyle::E),
redirected_from: None,
},
RuleSelector::Linter(Linter::Pyflakes),
];
@@ -70,7 +73,10 @@ pub static INCLUDE: Lazy<Vec<FilePattern>> = Lazy::new(|| {
impl Default for Settings {
fn default() -> Self {
Self {
rules: PREFIXES.iter().flat_map(IntoIterator::into_iter).collect(),
rules: PREFIXES
.iter()
.flat_map(|selector| selector.rules(PreviewMode::default()))
.collect(),
allowed_confusables: FxHashSet::from_iter([]),
builtins: vec![],
dummy_variable_rgx: DUMMY_VARIABLE_RGX.clone(),

View File

@@ -194,7 +194,8 @@ pub struct PerFileIgnore {
impl PerFileIgnore {
pub fn new(pattern: String, prefixes: &[RuleSelector], project_root: Option<&Path>) -> Self {
let rules: RuleSet = prefixes.iter().flat_map(IntoIterator::into_iter).collect();
// Rules in preview are included here even if preview mode is disabled; it's safe to ignore disabled rules
let rules: RuleSet = prefixes.iter().flat_map(RuleSelector::all_rules).collect();
let path = Path::new(&pattern);
let absolute = match project_root {
Some(project_root) => fs::normalize_path_to(path, project_root),

View File

@@ -297,7 +297,7 @@ pub(crate) fn print_jupyter_messages(
messages,
&EmitterContext::new(&FxHashMap::from_iter([(
path.file_name().unwrap().to_string_lossy().to_string(),
notebook.clone(),
notebook.index().clone(),
)])),
)
.unwrap();

View File

@@ -79,7 +79,7 @@ fn benchmark_default_rules(criterion: &mut Criterion) {
}
fn benchmark_all_rules(criterion: &mut Criterion) {
let mut rules: RuleTable = RuleSelector::All.into_iter().collect();
let mut rules: RuleTable = RuleSelector::All.all_rules().collect();
// Disable IO based rules because it is a source of flakiness
rules.disable(Rule::ShebangMissingExecutableFile);

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_cli"
version = "0.0.288"
version = "0.0.289"
publish = false
authors = { workspace = true }
edition = { workspace = true }
@@ -75,6 +75,7 @@ colored = { workspace = true, features = ["no-color"]}
insta = { workspace = true, features = ["filters"] }
insta-cmd = { version = "0.4.0" }
tempfile = "3.6.0"
test-case = { workspace = true }
ureq = { version = "2.6.2", features = [] }
[target.'cfg(target_os = "windows")'.dependencies]

View File

@@ -116,7 +116,7 @@ pub struct CheckCommand {
#[arg(long, value_enum)]
pub target_version: Option<PythonVersion>,
/// Enable preview mode; checks will include unstable rules and fixes.
#[arg(long, overrides_with("no_preview"), hide = true)]
#[arg(long, overrides_with("no_preview"))]
preview: bool,
#[clap(long, overrides_with("preview"), hide = true)]
no_preview: bool,

View File

@@ -8,6 +8,7 @@ use std::sync::Mutex;
use std::time::{Duration, SystemTime};
use anyhow::{Context, Result};
use rustc_hash::FxHashMap;
use serde::{Deserialize, Serialize};
use ruff::message::Message;
@@ -15,6 +16,7 @@ use ruff::settings::Settings;
use ruff::warn_user;
use ruff_cache::{CacheKey, CacheKeyHasher};
use ruff_diagnostics::{DiagnosticKind, Fix};
use ruff_notebook::NotebookIndex;
use ruff_python_ast::imports::ImportMap;
use ruff_source_file::SourceFileBuilder;
use ruff_text_size::{TextRange, TextSize};
@@ -193,6 +195,7 @@ impl Cache {
key: T,
messages: &[Message],
imports: &ImportMap,
notebook_index: Option<&NotebookIndex>,
) {
let source = if let Some(msg) = messages.first() {
msg.file.source_text().to_owned()
@@ -226,6 +229,7 @@ impl Cache {
imports: imports.clone(),
messages,
source,
notebook_index: notebook_index.cloned(),
};
self.new_files.lock().unwrap().insert(path, file);
}
@@ -263,6 +267,8 @@ pub(crate) struct FileCache {
///
/// This will be empty if `messages` is empty.
source: String,
/// Notebook index if this file is a Jupyter Notebook.
notebook_index: Option<NotebookIndex>,
}
impl FileCache {
@@ -283,7 +289,12 @@ impl FileCache {
})
.collect()
};
Diagnostics::new(messages, self.imports.clone())
let notebook_indexes = if let Some(notebook_index) = self.notebook_index.as_ref() {
FxHashMap::from_iter([(path.to_string_lossy().to_string(), notebook_index.clone())])
} else {
FxHashMap::default()
};
Diagnostics::new(messages, self.imports.clone(), notebook_indexes)
}
}
@@ -350,16 +361,19 @@ mod tests {
use anyhow::Result;
use ruff_python_ast::imports::ImportMap;
#[test]
fn same_results() {
use test_case::test_case;
#[test_case("../ruff/resources/test/fixtures", "ruff_tests/cache_same_results_ruff"; "ruff_fixtures")]
#[test_case("../ruff_notebook/resources/test/fixtures", "ruff_tests/cache_same_results_ruff_notebook"; "ruff_notebook_fixtures")]
fn same_results(package_root: &str, cache_dir_path: &str) {
let mut cache_dir = temp_dir();
cache_dir.push("ruff_tests/cache_same_results");
cache_dir.push(cache_dir_path);
let _ = fs::remove_dir_all(&cache_dir);
cache::init(&cache_dir).unwrap();
let settings = AllSettings::default();
let package_root = fs::canonicalize("../ruff/resources/test/fixtures").unwrap();
let package_root = fs::canonicalize(package_root).unwrap();
let cache = Cache::open(&cache_dir, package_root.clone(), &settings.lib);
assert_eq!(cache.new_files.lock().unwrap().len(), 0);
@@ -444,9 +458,6 @@ mod tests {
.unwrap();
}
// Not stored in the cache.
expected_diagnostics.notebooks.clear();
got_diagnostics.notebooks.clear();
assert_eq!(expected_diagnostics, got_diagnostics);
}
@@ -614,6 +625,7 @@ mod tests {
imports: ImportMap::new(),
messages: Vec::new(),
source: String::new(),
notebook_index: None,
},
);

View File

@@ -11,6 +11,7 @@ use itertools::Itertools;
use log::{debug, error, warn};
#[cfg(not(target_family = "wasm"))]
use rayon::prelude::*;
use rustc_hash::FxHashMap;
use ruff::message::Message;
use ruff::registry::Rule;
@@ -156,6 +157,7 @@ pub(crate) fn check(
TextSize::default(),
)],
ImportMap::default(),
FxHashMap::default(),
)
} else {
warn!(

View File

@@ -73,12 +73,14 @@ pub(crate) fn format(
return None;
};
let preview = match pyproject_config.settings.lib.preview {
let resolved_settings = resolver.resolve(path, &pyproject_config);
let preview = match resolved_settings.preview {
PreviewMode::Enabled => ruff_python_formatter::PreviewMode::Enabled,
PreviewMode::Disabled => ruff_python_formatter::PreviewMode::Disabled,
};
let line_length = resolved_settings.line_length;
let line_length = resolver.resolve(path, &pyproject_config).line_length;
let options = PyFormatOptions::from_source_type(source_type)
.with_line_width(LineWidth::from(NonZeroU16::from(line_length)))
.with_preview(preview);

View File

@@ -1,8 +1,11 @@
use std::io::{stdout, Write};
use std::num::NonZeroU16;
use std::path::Path;
use anyhow::Result;
use log::warn;
use ruff::settings::types::PreviewMode;
use ruff_formatter::LineWidth;
use ruff_python_formatter::{format_module, PyFormatOptions};
use ruff_workspace::resolver::python_file_at_path;
@@ -35,9 +38,19 @@ pub(crate) fn format_stdin(cli: &FormatArguments, overrides: &Overrides) -> Resu
// Format the file.
let path = cli.stdin_filename.as_deref();
let preview = match pyproject_config.settings.lib.preview {
PreviewMode::Enabled => ruff_python_formatter::PreviewMode::Enabled,
PreviewMode::Disabled => ruff_python_formatter::PreviewMode::Disabled,
};
let line_length = pyproject_config.settings.lib.line_length;
let options = path
.map(PyFormatOptions::from_extension)
.unwrap_or_default();
.unwrap_or_default()
.with_line_width(LineWidth::from(NonZeroU16::from(line_length)))
.with_preview(preview);
match format_source(path, options, mode) {
Ok(result) => match mode {
FormatMode::Write => Ok(ExitStatus::Success),

View File

@@ -19,7 +19,7 @@ struct Explanation<'a> {
message_formats: &'a [&'a str],
autofix: String,
explanation: Option<&'a str>,
nursery: bool,
preview: bool,
}
impl<'a> Explanation<'a> {
@@ -35,7 +35,7 @@ impl<'a> Explanation<'a> {
message_formats: rule.message_formats(),
autofix,
explanation: rule.explanation(),
nursery: rule.is_nursery(),
preview: rule.is_preview(),
}
}
}
@@ -58,13 +58,10 @@ fn format_rule_text(rule: Rule) -> String {
output.push('\n');
}
if rule.is_nursery() {
output.push_str(&format!(
r#"This rule is part of the **nursery**, a collection of newer lints that are
still under development. As such, it must be enabled by explicitly selecting
{}."#,
rule.noqa_code()
));
if rule.is_preview() {
output.push_str(
r#"This rule is in preview and is not stable. The `--preview` flag is required for use."#,
);
output.push('\n');
output.push('\n');
}

View File

@@ -26,7 +26,7 @@ use ruff::source_kind::SourceKind;
use ruff::{fs, IOError, SyntaxError};
use ruff_diagnostics::Diagnostic;
use ruff_macros::CacheKey;
use ruff_notebook::{Cell, Notebook, NotebookError};
use ruff_notebook::{Cell, Notebook, NotebookError, NotebookIndex};
use ruff_python_ast::imports::ImportMap;
use ruff_python_ast::{PySourceType, SourceType, TomlSourceType};
use ruff_source_file::{LineIndex, SourceCode, SourceFileBuilder};
@@ -64,16 +64,20 @@ pub(crate) struct Diagnostics {
pub(crate) messages: Vec<Message>,
pub(crate) fixed: FxHashMap<String, FixTable>,
pub(crate) imports: ImportMap,
pub(crate) notebooks: FxHashMap<String, Notebook>,
pub(crate) notebook_indexes: FxHashMap<String, NotebookIndex>,
}
impl Diagnostics {
pub(crate) fn new(messages: Vec<Message>, imports: ImportMap) -> Self {
pub(crate) fn new(
messages: Vec<Message>,
imports: ImportMap,
notebook_indexes: FxHashMap<String, NotebookIndex>,
) -> Self {
Self {
messages,
fixed: FxHashMap::default(),
imports,
notebooks: FxHashMap::default(),
notebook_indexes,
}
}
@@ -94,6 +98,7 @@ impl Diagnostics {
TextSize::default(),
)],
ImportMap::default(),
FxHashMap::default(),
)
} else {
match path {
@@ -130,7 +135,7 @@ impl AddAssign for Diagnostics {
}
}
}
self.notebooks.extend(other.notebooks);
self.notebook_indexes.extend(other.notebook_indexes);
}
}
@@ -341,7 +346,13 @@ pub(crate) fn lint_path(
if let Some((cache, relative_path, key)) = caching {
// We don't cache parsing errors.
if parse_error.is_none() {
cache.update(relative_path.to_owned(), key, &messages, &imports);
cache.update(
relative_path.to_owned(),
key,
&messages,
&imports,
source_kind.as_ipy_notebook().map(Notebook::index),
);
}
}
@@ -359,12 +370,13 @@ pub(crate) fn lint_path(
);
}
let notebooks = if let SourceKind::IpyNotebook(notebook) = source_kind {
let notebook_indexes = if let SourceKind::IpyNotebook(notebook) = source_kind {
FxHashMap::from_iter([(
path.to_str()
.ok_or_else(|| anyhow!("Unable to parse filename: {:?}", path))?
.to_string(),
notebook,
// The index always needs to be computed so that it can be stored in the cache.
notebook.index().clone(),
)])
} else {
FxHashMap::default()
@@ -374,7 +386,7 @@ pub(crate) fn lint_path(
messages,
fixed: FxHashMap::from_iter([(fs::relativize_path(path), fixed)]),
imports,
notebooks,
notebook_indexes,
})
}
@@ -498,7 +510,7 @@ pub(crate) fn lint_stdin(
fixed,
)]),
imports,
notebooks: FxHashMap::default(),
notebook_indexes: FxHashMap::default(),
})
}

View File

@@ -177,7 +177,7 @@ impl Printer {
return Ok(());
}
let context = EmitterContext::new(&diagnostics.notebooks);
let context = EmitterContext::new(&diagnostics.notebook_indexes);
match self.format {
SerializationFormat::Json => {
@@ -364,7 +364,7 @@ impl Printer {
writeln!(writer)?;
}
let context = EmitterContext::new(&diagnostics.notebooks);
let context = EmitterContext::new(&diagnostics.notebook_indexes);
TextEmitter::default()
.with_show_fix_status(show_fix_status(self.autofix_level))
.with_show_source(self.flags.intersects(Flags::SHOW_SOURCE))

View File

@@ -43,13 +43,10 @@ pub(crate) fn main(args: &Args) -> Result<()> {
output.push('\n');
}
if rule.is_nursery() {
output.push_str(&format!(
r#"This rule is part of the **nursery**, a collection of newer lints that are
still under development. As such, it must be enabled by explicitly selecting
{}."#,
rule.noqa_code()
));
if rule.is_preview() {
output.push_str(
r#"This rule is in preview and is not stable. The `--preview` flag is required for use."#,
);
output.push('\n');
output.push('\n');
}

View File

@@ -10,8 +10,8 @@ use ruff::upstream_categories::UpstreamCategoryAndPrefix;
use ruff_diagnostics::AutofixKind;
use ruff_workspace::options::Options;
const FIX_SYMBOL: &str = "🛠";
const NURSERY_SYMBOL: &str = "🌅";
const FIX_SYMBOL: &str = "🛠";
const PREVIEW_SYMBOL: &str = "🧪";
fn generate_table(table_out: &mut String, rules: impl IntoIterator<Item = Rule>, linter: &Linter) {
table_out.push_str("| Code | Name | Message | |");
@@ -25,12 +25,12 @@ fn generate_table(table_out: &mut String, rules: impl IntoIterator<Item = Rule>,
}
AutofixKind::None => format!("<span style='opacity: 0.1'>{FIX_SYMBOL}</span>"),
};
let nursery_token = if rule.is_nursery() {
format!("<span style='opacity: 1'>{NURSERY_SYMBOL}</span>")
let preview_token = if rule.is_preview() {
format!("<span style='opacity: 1'>{PREVIEW_SYMBOL}</span>")
} else {
format!("<span style='opacity: 0.1'>{NURSERY_SYMBOL}</span>")
format!("<span style='opacity: 0.1'>{PREVIEW_SYMBOL}</span>")
};
let status_token = format!("{fix_token} {nursery_token}");
let status_token = format!("{fix_token} {preview_token}");
let rule_name = rule.as_ref();
@@ -61,7 +61,7 @@ pub(crate) fn generate() -> String {
table_out.push('\n');
table_out.push_str(&format!(
"The {NURSERY_SYMBOL} emoji indicates that a rule is part of the [\"nursery\"](../faq/#what-is-the-nursery)."
"The {PREVIEW_SYMBOL} emoji indicates that a rule in [\"preview\"](../faq/#what-is-preview)."
));
table_out.push('\n');
table_out.push('\n');

View File

@@ -8,7 +8,7 @@ use syn::{
Ident, ItemFn, LitStr, Pat, Path, Stmt, Token,
};
use crate::rule_code_prefix::{get_prefix_ident, if_all_same, is_nursery};
use crate::rule_code_prefix::{get_prefix_ident, if_all_same};
/// A rule entry in the big match statement such as
/// `(Pycodestyle, "E112") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::NoIndentedBlock),`
@@ -113,9 +113,23 @@ pub(crate) fn map_codes(func: &ItemFn) -> syn::Result<TokenStream> {
Self::#linter(linter)
}
}
// Rust doesn't yet support `impl const From<RuleCodePrefix> for RuleSelector`
// See https://github.com/rust-lang/rust/issues/67792
impl From<#linter> for crate::rule_selector::RuleSelector {
fn from(linter: #linter) -> Self {
Self::Prefix{prefix: RuleCodePrefix::#linter(linter), redirected_from: None}
let prefix = RuleCodePrefix::#linter(linter);
if is_single_rule_selector(&prefix) {
Self::Rule {
prefix,
redirected_from: None,
}
} else {
Self::Prefix {
prefix,
redirected_from: None,
}
}
}
}
});
@@ -156,7 +170,7 @@ pub(crate) fn map_codes(func: &ItemFn) -> syn::Result<TokenStream> {
output.extend(quote! {
impl #linter {
pub fn rules(self) -> ::std::vec::IntoIter<Rule> {
pub fn rules(&self) -> ::std::vec::IntoIter<Rule> {
match self { #prefix_into_iter_match_arms }
}
}
@@ -172,7 +186,7 @@ pub(crate) fn map_codes(func: &ItemFn) -> syn::Result<TokenStream> {
})
}
pub fn rules(self) -> ::std::vec::IntoIter<Rule> {
pub fn rules(&self) -> ::std::vec::IntoIter<Rule> {
match self {
#(RuleCodePrefix::#linter_idents(prefix) => prefix.clone().rules(),)*
}
@@ -195,26 +209,12 @@ fn rules_by_prefix(
// TODO(charlie): Why do we do this here _and_ in `rule_code_prefix::expand`?
let mut rules_by_prefix = BTreeMap::new();
for (code, rule) in rules {
// Nursery rules have to be explicitly selected, so we ignore them when looking at
// prefix-level selectors (e.g., `--select SIM10`), but add the rule itself under
// its fully-qualified code (e.g., `--select SIM101`).
if is_nursery(&rule.group) {
rules_by_prefix.insert(code.clone(), vec![(rule.path.clone(), rule.attrs.clone())]);
continue;
}
for code in rules.keys() {
for i in 1..=code.len() {
let prefix = code[..i].to_string();
let rules: Vec<_> = rules
.iter()
.filter_map(|(code, rule)| {
// Nursery rules have to be explicitly selected, so we ignore them when
// looking at prefixes.
if is_nursery(&rule.group) {
return None;
}
if code.starts_with(&prefix) {
Some((rule.path.clone(), rule.attrs.clone()))
} else {
@@ -311,6 +311,11 @@ See also https://github.com/astral-sh/ruff/issues/2186.
}
}
pub fn is_preview(&self) -> bool {
matches!(self.group(), RuleGroup::Preview)
}
#[allow(deprecated)]
pub fn is_nursery(&self) -> bool {
matches!(self.group(), RuleGroup::Nursery)
}
@@ -336,12 +341,10 @@ fn generate_iter_impl(
let mut linter_rules_match_arms = quote!();
let mut linter_all_rules_match_arms = quote!();
for (linter, map) in linter_to_rules {
let rule_paths = map.values().filter(|rule| !is_nursery(&rule.group)).map(
|Rule { attrs, path, .. }| {
let rule_name = path.segments.last().unwrap();
quote!(#(#attrs)* Rule::#rule_name)
},
);
let rule_paths = map.values().map(|Rule { attrs, path, .. }| {
let rule_name = path.segments.last().unwrap();
quote!(#(#attrs)* Rule::#rule_name)
});
linter_rules_match_arms.extend(quote! {
Linter::#linter => vec![#(#rule_paths,)*].into_iter(),
});
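
A minimal sketch of the prefix expansion this file now performs uniformly (a hypothetical `rules_by_prefix` standing in for the real proc-macro logic, not the actual implementation): every rule code is registered under each of its prefixes, so a prefix selector picks up every rule whose code starts with it. With the nursery special case removed, preview rules expand like any other rule.

```rust
use std::collections::BTreeMap;

/// Hypothetical sketch: expand each rule code into all of its prefixes.
fn rules_by_prefix(codes: &[&str]) -> BTreeMap<String, Vec<String>> {
    let mut map: BTreeMap<String, Vec<String>> = BTreeMap::new();
    for &code in codes {
        for i in 1..=code.len() {
            map.entry(code[..i].to_string())
                .or_default()
                .push(code.to_string());
        }
    }
    map
}

fn main() {
    let map = rules_by_prefix(&["SIM101", "SIM102", "SIM110"]);
    // A prefix selects every matching code...
    assert_eq!(map["SIM10"], vec!["SIM101", "SIM102"]);
    // ...while a full code selects exactly that rule.
    assert_eq!(map["SIM110"], vec!["SIM110"]);
}
```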

View File

@@ -12,22 +12,14 @@ pub(crate) fn expand<'a>(
let mut prefix_to_codes: BTreeMap<String, BTreeSet<String>> = BTreeMap::default();
let mut code_to_attributes: BTreeMap<String, &[Attribute]> = BTreeMap::default();
for (variant, group, attr) in variants {
for (variant, .., attr) in variants {
let code_str = variant.to_string();
// Nursery rules have to be explicitly selected, so we ignore them when looking at prefixes.
if is_nursery(group) {
for i in 1..=code_str.len() {
let prefix = code_str[..i].to_string();
prefix_to_codes
.entry(code_str.clone())
.entry(prefix)
.or_default()
.insert(code_str.clone());
} else {
for i in 1..=code_str.len() {
let prefix = code_str[..i].to_string();
prefix_to_codes
.entry(prefix)
.or_default()
.insert(code_str.clone());
}
}
code_to_attributes.insert(code_str, attr);
@@ -125,14 +117,3 @@ pub(crate) fn get_prefix_ident(prefix: &str) -> Ident {
};
Ident::new(&prefix, Span::call_site())
}
/// Returns true if the given group is the "nursery" group.
pub(crate) fn is_nursery(group: &Path) -> bool {
let group = group
.segments
.iter()
.map(|segment| segment.ident.to_string())
.collect::<Vec<String>>()
.join("::");
group == "RuleGroup::Nursery"
}

View File

@@ -1,8 +1,10 @@
use serde::{Deserialize, Serialize};
/// Jupyter Notebook indexing table
///
/// When we lint a Jupyter notebook, we have to translate the [`ruff_text_size::TextSize`]-based
/// row/column into the notebook's cell/row/column.
#[derive(Clone, Debug, Eq, PartialEq)]
#[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize)]
pub struct NotebookIndex {
/// Enter a row (1-based), get back the cell (1-based)
pub(super) row_to_cell: Vec<u32>,
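
For illustration, a minimal sketch (not the actual ruff API) of how a `row_to_cell` table like the one above can map a 1-based row in the concatenated notebook source back to its 1-based cell:

```rust
/// Hypothetical stand-in for `NotebookIndex`; only the `row_to_cell` lookup is sketched.
struct IndexSketch {
    /// `row_to_cell[row - 1]` is the 1-based cell that contains the 1-based source row.
    row_to_cell: Vec<u32>,
}

impl IndexSketch {
    fn cell_for_row(&self, row: u32) -> Option<u32> {
        if row == 0 {
            return None;
        }
        self.row_to_cell.get(row as usize - 1).copied()
    }
}

fn main() {
    // Hypothetical layout: rows 1-2 live in cell 1, rows 3-5 in cell 2.
    let index = IndexSketch {
        row_to_cell: vec![1, 1, 2, 2, 2],
    };
    assert_eq!(index.cell_for_row(4), Some(2));
    assert_eq!(index.cell_for_row(9), None);
}
```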

View File

@@ -320,6 +320,14 @@ rowuses = [(1 << j) | # column ordinal
(1 << (n + 2*n-1 + i+j)) # NE-SW ordinal
for j in rangen]
rowuses = [((1 << j) # column ordinal
)|
(
# comment
(1 << (n + i-j + n-1))) | # NW-SE ordinal
(1 << (n + 2*n-1 + i+j)) # NE-SW ordinal
for j in rangen]
skip_bytes = (
header.timecnt * 5 # Transition times and types
+ header.typecnt * 6 # Local time type records
@@ -328,3 +336,56 @@ skip_bytes = (
+ header.isstdcnt # Standard/wall indicators
+ header.isutcnt # UT/local indicators
)
if (
(1 + 2) # test
or (3 + 4) # other
or (4 + 5) # more
):
pass
if (
(1 and 2) # test
+ (3 and 4) # other
+ (4 and 5) # more
):
pass
if (
(1 + 2) # test
< (3 + 4) # other
> (4 + 5) # more
):
pass
z = (
a
+
# a: extracts this comment
(
# b: and this comment
(
# c: formats it as part of the expression
x and y
)
)
)
z = (
(
(
x and y
# a: formats it as part of the expression
)
# b: extracts this comment
)
# c: and this comment
+ a
)

View File

@@ -169,3 +169,23 @@ c = (a
# test trailing operator comment
b
)
c = ("a" "b" +
# test leading binary comment
"a" "b"
)
(
b + c + d +
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
"bbbbbbbbbbbbbbbbbbbbbbbbbbbbb" +
"cccccccccccccccccccccccccc"
"dddddddddddddddddddddddddd"
% aaaaaaaaaaaa
+ x
)
"a" "b" "c" + "d" "e" + "f" "g" + "h" "i" "j"
class EC2REPATH:
f.write ("Pathway name" + "\t" "Database Identifier" + "\t" "Source database" + "\n")

View File

@@ -102,3 +102,86 @@ def test():
and {k.lower(): v for k, v in self.items()}
== {k.lower(): v for k, v in other.items()}
)
if "_continue" in request.POST or (
# Redirecting after "Save as new".
"_saveasnew" in request.POST
and self.save_as_continue
and self.has_change_permission(request, obj)
):
pass
if True:
if False:
if True:
if (
self.validate_max
and self.total_form_count() - len(self.deleted_forms) > self.max_num
) or self.management_form.cleaned_data[
TOTAL_FORM_COUNT
] > self.absolute_max:
pass
if True:
if (
reference_field_name is None
or
# Unspecified to_field(s).
to_fields is None
or
# Reference to primary key.
(
None in to_fields
and (reference_field is None or reference_field.primary_key)
)
or
# Reference to field.
reference_field_name in to_fields
):
pass
field = opts.get_field(name)
if (
field.is_relation
and
# Generic foreign keys OR reverse relations
((field.many_to_one and not field.related_model) or field.one_to_many)
):
pass
if True:
return (
filtered.exists()
and
# It may happen that the object is deleted from the DB right after
# this check, causing the subsequent UPDATE to return zero matching
# rows. The same result can occur in some rare cases when the
# database returns zero despite the UPDATE being executed
# successfully (a row is matched and updated). In order to
# distinguish these two cases, the object's existence in the
# database is again checked for if the UPDATE query returns 0.
(filtered._update(values) > 0 or filtered.exists())
)
if (self._proc is not None
# has the child process finished?
and self._returncode is None
# the child process has finished, but the
# transport hasn't been notified yet?
and self._proc.poll() is None):
pass
if (self._proc
# has the child process finished?
* self._returncode
# the child process has finished, but the
# transport hasn't been notified yet?
+ self._proc.poll()):
pass

View File

@@ -205,6 +205,9 @@ fn handle_enclosed_comment<'a>(
locator,
)
}
AnyNodeRef::ExprBoolOp(_) | AnyNodeRef::ExprCompare(_) => {
handle_trailing_binary_like_comment(comment, locator)
}
AnyNodeRef::Keyword(keyword) => handle_keyword_comment(comment, keyword, locator),
AnyNodeRef::PatternKeyword(pattern_keyword) => {
handle_pattern_keyword_comment(comment, pattern_keyword, locator)
@@ -836,6 +839,47 @@ fn handle_trailing_binary_expression_left_or_operator_comment<'a>(
}
}
/// Attaches comments between two bool or compare expression operands to the preceding operand if the comment is before the operator.
///
/// ```python
/// a = (
/// 5 > 3
/// # trailing comment
/// and 3 == 3
/// )
/// ```
fn handle_trailing_binary_like_comment<'a>(
comment: DecoratedComment<'a>,
locator: &Locator,
) -> CommentPlacement<'a> {
debug_assert!(
comment.enclosing_node().is_expr_bool_op() || comment.enclosing_node().is_expr_compare()
);
// Only applies when there are both a preceding and a following node (the preceding node is `left` or a middle operand).
let (Some(left_operand), Some(right_operand)) =
(comment.preceding_node(), comment.following_node())
else {
return CommentPlacement::Default(comment);
};
let between_operands_range = TextRange::new(left_operand.end(), right_operand.start());
let mut tokens = SimpleTokenizer::new(locator.contents(), between_operands_range)
.skip_trivia()
.skip_while(|token| token.kind == SimpleTokenKind::RParen);
let operator_offset = tokens
.next()
.expect("Expected a token for the operator")
.start();
if comment.end() < operator_offset {
CommentPlacement::trailing(left_operand, comment)
} else {
CommentPlacement::Default(comment)
}
}
/// Handles own line comments on the module level before a class or function statement.
/// A comment only becomes the leading comment of a class or function if it isn't separated by an empty
/// line from the class. Comments that are separated by at least one empty line from the header of the

View File

@@ -5,14 +5,18 @@ use smallvec::SmallVec;
use ruff_formatter::write;
use ruff_python_ast::{
Constant, Expr, ExprAttribute, ExprBinOp, ExprCompare, ExprConstant, ExprUnaryOp, UnaryOp,
Constant, Expr, ExprAttribute, ExprBinOp, ExprBoolOp, ExprCompare, ExprConstant, ExprUnaryOp,
UnaryOp,
};
use ruff_python_trivia::{SimpleToken, SimpleTokenKind, SimpleTokenizer};
use ruff_text_size::{Ranged, TextRange};
use crate::comments::{leading_comments, trailing_comments, Comments, SourceComment};
use crate::expression::parentheses::{
in_parentheses_only_group, in_parentheses_only_soft_line_break,
in_parentheses_only_soft_line_break_or_space, is_expression_parenthesized,
write_in_parentheses_only_group_end_tag, write_in_parentheses_only_group_start_tag,
Parentheses,
};
use crate::expression::string::{AnyString, FormatString, StringLayout};
use crate::expression::OperatorPrecedence;
@@ -20,8 +24,9 @@ use crate::prelude::*;
#[derive(Copy, Clone, Debug)]
pub(super) enum BinaryLike<'a> {
BinaryExpression(&'a ExprBinOp),
CompareExpression(&'a ExprCompare),
Binary(&'a ExprBinOp),
Compare(&'a ExprCompare),
Bool(&'a ExprBoolOp),
}
impl<'a> BinaryLike<'a> {
@@ -84,6 +89,54 @@ impl<'a> BinaryLike<'a> {
}
}
fn recurse_bool<'a>(
bool_expression: &'a ExprBoolOp,
leading_comments: &'a [SourceComment],
trailing_comments: &'a [SourceComment],
comments: &'a Comments,
source: &str,
parts: &mut SmallVec<[OperandOrOperator<'a>; 8]>,
) {
parts.reserve(bool_expression.values.len() * 2 - 1);
if let Some((left, rest)) = bool_expression.values.split_first() {
rec(
Operand::Left {
expression: left,
leading_comments,
},
comments,
source,
parts,
);
parts.push(OperandOrOperator::Operator(Operator {
symbol: OperatorSymbol::Bool(bool_expression.op),
trailing_comments: &[],
}));
if let Some((right, middle)) = rest.split_last() {
for expression in middle {
rec(Operand::Middle { expression }, comments, source, parts);
parts.push(OperandOrOperator::Operator(Operator {
symbol: OperatorSymbol::Bool(bool_expression.op),
trailing_comments: &[],
}));
}
rec(
Operand::Right {
expression: right,
trailing_comments,
},
comments,
source,
parts,
);
}
}
}
fn recurse_binary<'a>(
binary: &'a ExprBinOp,
leading_comments: &'a [SourceComment],
@@ -164,6 +217,26 @@ impl<'a> BinaryLike<'a> {
parts,
);
}
Expr::BoolOp(bool_op)
if !is_expression_parenthesized(expression.into(), source) =>
{
let leading_comments = operand
.leading_binary_comments()
.unwrap_or_else(|| comments.leading(bool_op));
let trailing_comments = operand
.trailing_binary_comments()
.unwrap_or_else(|| comments.trailing(bool_op));
recurse_bool(
bool_op,
leading_comments,
trailing_comments,
comments,
source,
parts,
);
}
_ => {
parts.push(OperandOrOperator::Operand(operand));
}
@@ -172,18 +245,25 @@ impl<'a> BinaryLike<'a> {
let mut parts = SmallVec::new();
match self {
BinaryLike::BinaryExpression(binary) => {
BinaryLike::Binary(binary) => {
// Leading and trailing comments are handled by the binary's `FormatNodeRule` implementation.
recurse_binary(binary, &[], &[], comments, source, &mut parts);
}
BinaryLike::CompareExpression(compare) => {
BinaryLike::Compare(compare) => {
// Leading and trailing comments are handled by the compare's `FormatNodeRule` implementation.
recurse_compare(compare, &[], &[], comments, source, &mut parts);
}
BinaryLike::Bool(bool) => {
recurse_bool(bool, &[], &[], comments, source, &mut parts);
}
}
FlatBinaryExpression(parts)
}
const fn is_bool_op(self) -> bool {
matches!(self, BinaryLike::Bool(_))
}
}
impl Format<PyFormatContext<'_>> for BinaryLike<'_> {
@@ -191,6 +271,10 @@ impl Format<PyFormatContext<'_>> for BinaryLike<'_> {
let comments = f.context().comments().clone();
let flat_binary = self.flatten(&comments, f.context().source());
if self.is_bool_op() {
return in_parentheses_only_group(&&*flat_binary).fmt(f);
}
let source = f.context().source();
let mut string_operands = flat_binary
.operands()
@@ -233,44 +317,58 @@ impl Format<PyFormatContext<'_>> for BinaryLike<'_> {
// ^^^^^^ this part or ^^^^^^^ this part
// ```
if let Some(left_operator_index) = index.left_operator() {
// Everything between the last implicit concatenated string and the left operator
// right before the implicit concatenated string:
// Handles the case where the left and right side of a binary expression are both
// implicit concatenated strings. In this case, the left operator has already been written
// by the preceding implicit concatenated string. It is only necessary to finish the group,
// wrapping the soft line break and operator.
//
// ```python
// a + b + "c" "d"
// ^--- left_operator
// ^^^^^-- left
// "a" "b" + "c" "d"
// ```
let left =
flat_binary.between_operators(last_operator_index, left_operator_index);
let left_operator = &flat_binary[left_operator_index];
if let Some(leading) = left.first_operand().leading_binary_comments() {
leading_comments(leading).fmt(f)?;
}
// Write the left, the left operator, and the space before the right side
write!(
f,
[
left,
left.last_operand()
.trailing_binary_comments()
.map(trailing_comments),
in_parentheses_only_soft_line_break_or_space(),
left_operator,
]
)?;
// Finish the left-side group (the group was started before the loop or by the
// previous iteration)
write_in_parentheses_only_group_end_tag(f);
if operand.has_leading_comments(f.context().comments())
|| left_operator.has_trailing_comments()
{
hard_line_break().fmt(f)?;
if last_operator_index == Some(left_operator_index) {
write_in_parentheses_only_group_end_tag(f);
} else {
space().fmt(f)?;
// Everything between the last implicit concatenated string and the left operator
// right before the implicit concatenated string:
// ```python
// a + b + "c" "d"
// ^--- left_operator
// ^^^^^-- left
// ```
let left = flat_binary
.between_operators(last_operator_index, left_operator_index);
let left_operator = &flat_binary[left_operator_index];
if let Some(leading) = left.first_operand().leading_binary_comments() {
leading_comments(leading).fmt(f)?;
}
// Write the left, the left operator, and the space before the right side
write!(
f,
[
left,
left.last_operand()
.trailing_binary_comments()
.map(trailing_comments),
in_parentheses_only_soft_line_break_or_space(),
left_operator,
]
)?;
// Finish the left-side group (the group was started before the loop or by the
// previous iteration)
write_in_parentheses_only_group_end_tag(f);
if operand.has_unparenthesized_leading_comments(
f.context().comments(),
f.context().source(),
) || left_operator.has_trailing_comments()
{
hard_line_break().fmt(f)?;
} else {
space().fmt(f)?;
}
}
write!(
@@ -314,8 +412,11 @@ impl Format<PyFormatContext<'_>> for BinaryLike<'_> {
if let Some(right_operator) = flat_binary.get_operator(index.right_operator()) {
write_in_parentheses_only_group_start_tag(f);
let right_operand = &flat_binary[right_operator_index.right_operand()];
let right_operand_has_leading_comments =
right_operand.has_leading_comments(f.context().comments());
let right_operand_has_leading_comments = right_operand
.has_unparenthesized_leading_comments(
f.context().comments(),
f.context().source(),
);
// Keep the operator on the same line if the right side has leading comments (and thus, breaks)
if right_operand_has_leading_comments {
@@ -326,7 +427,11 @@ impl Format<PyFormatContext<'_>> for BinaryLike<'_> {
right_operator.fmt(f)?;
if right_operand_has_leading_comments
if (right_operand_has_leading_comments
&& !is_expression_parenthesized(
right_operand.expression().into(),
f.context().source(),
))
|| right_operator.has_trailing_comments()
{
hard_line_break().fmt(f)?;
@@ -540,7 +645,7 @@ impl Format<PyFormatContext<'_>> for FlatBinaryExpressionSlice<'_> {
fn fmt(&self, f: &mut Formatter<PyFormatContext>) -> FormatResult<()> {
// Single operand slice
if let [OperandOrOperator::Operand(operand)] = &self.0 {
return operand.expression().format().fmt(f);
return operand.fmt(f);
}
let mut last_operator: Option<OperatorIndex> = None;
@@ -577,10 +682,11 @@ impl Format<PyFormatContext<'_>> for FlatBinaryExpressionSlice<'_> {
operator_part.fmt(f)?;
// Format the operator on its own line if the right side has any leading comments.
if right
.first_operand()
.has_leading_comments(f.context().comments())
|| operator_part.has_trailing_comments()
if operator_part.has_trailing_comments()
|| right.first_operand().has_unparenthesized_leading_comments(
f.context().comments(),
f.context().source(),
)
{
hard_line_break().fmt(f)?;
} else if !is_pow {
@@ -682,13 +788,33 @@ impl<'a> Operand<'a> {
}
}
fn has_leading_comments(&self, comments: &Comments) -> bool {
/// Returns `true` if the operand has any leading comments that are not parenthesized.
fn has_unparenthesized_leading_comments(&self, comments: &Comments, source: &str) -> bool {
match self {
Operand::Left {
leading_comments, ..
} => !leading_comments.is_empty(),
Operand::Middle { expression } | Operand::Right { expression, .. } => {
comments.has_leading(*expression)
let leading = comments.leading(*expression);
if is_expression_parenthesized((*expression).into(), source) {
leading.iter().any(|comment| {
!comment.is_formatted()
&& matches!(
SimpleTokenizer::new(
source,
TextRange::new(comment.end(), expression.start()),
)
.skip_trivia()
.next(),
Some(SimpleToken {
kind: SimpleTokenKind::LParen,
..
})
)
})
} else {
!leading.is_empty()
}
}
}
}
@@ -713,6 +839,146 @@ impl<'a> Operand<'a> {
}
}
impl Format<PyFormatContext<'_>> for Operand<'_> {
fn fmt(&self, f: &mut Formatter<PyFormatContext<'_>>) -> FormatResult<()> {
let expression = self.expression();
return if is_expression_parenthesized(expression.into(), f.context().source()) {
let comments = f.context().comments().clone();
let expression_comments = comments.leading_dangling_trailing(expression);
// Format leading comments that come before the innermost `(` outside of the expression's parentheses.
// ```python
// z = (
// a
// +
// # a: extracts this comment
// (
// # b: and this comment
// (
// # c: formats it as part of the expression
// x and y
// )
// )
// )
// ```
//
// Gets formatted as
// ```python
// z = (
// a
// +
// # a: extracts this comment
// # b: and this comment
// (
// # c: formats it as part of the expression
// x and y
// )
// )
// ```
let leading = expression_comments.leading;
let leading_before_parentheses_end = leading
.iter()
.rposition(|comment| {
comment.is_unformatted()
&& matches!(
SimpleTokenizer::new(
f.context().source(),
TextRange::new(comment.end(), expression.start()),
)
.skip_trivia()
.next(),
Some(SimpleToken {
kind: SimpleTokenKind::LParen,
..
})
)
})
.map_or(0, |position| position + 1);
let leading_before_parentheses = &leading[..leading_before_parentheses_end];
// Format trailing comments that come after the innermost `)` outside of the parentheses.
// ```python
// z = (
// (
//
// (
//
// x and y
// # a: extracts this comment
// )
// # b: and this comment
// )
// # c: formats it as part of the expression
// + a
// )
// ```
// Gets formatted as
// ```python
// z = (
// (
// x and y
// # a: extracts this comment
// )
// # b: and this comment
// # c: formats it as part of the expression
// + a
// )
// ```
let trailing = expression_comments.trailing;
let trailing_after_parentheses_start = trailing
.iter()
.position(|comment| {
comment.is_unformatted()
&& matches!(
SimpleTokenizer::new(
f.context().source(),
TextRange::new(expression.end(), comment.start()),
)
.skip_trivia()
.next(),
Some(SimpleToken {
kind: SimpleTokenKind::RParen,
..
})
)
})
.unwrap_or(trailing.len());
let trailing_after_parentheses = &trailing[trailing_after_parentheses_start..];
// Mark the comment as formatted so that formatting the expression
// doesn't emit the trailing comment inside of the parentheses.
for comment in trailing_after_parentheses {
comment.mark_formatted();
}
if !leading_before_parentheses.is_empty() {
leading_comments(leading_before_parentheses).fmt(f)?;
}
expression
.format()
.with_options(Parentheses::Always)
.fmt(f)?;
for comment in trailing_after_parentheses {
comment.mark_unformatted();
}
if !trailing_after_parentheses.is_empty() {
trailing_comments(trailing_after_parentheses).fmt(f)?;
}
Ok(())
} else {
expression.format().with_options(Parentheses::Never).fmt(f)
};
}
}
#[derive(Debug)]
struct Operator<'a> {
symbol: OperatorSymbol,
@@ -739,6 +1005,7 @@ impl Format<PyFormatContext<'_>> for Operator<'_> {
enum OperatorSymbol {
Binary(ruff_python_ast::Operator),
Comparator(ruff_python_ast::CmpOp),
Bool(ruff_python_ast::BoolOp),
}
impl OperatorSymbol {
@@ -750,6 +1017,7 @@ impl OperatorSymbol {
match self {
OperatorSymbol::Binary(operator) => OperatorPrecedence::from(operator),
OperatorSymbol::Comparator(_) => OperatorPrecedence::Comparator,
OperatorSymbol::Bool(_) => OperatorPrecedence::BooleanOperation,
}
}
}
@@ -759,6 +1027,7 @@ impl Format<PyFormatContext<'_>> for OperatorSymbol {
match self {
OperatorSymbol::Binary(operator) => operator.format().fmt(f),
OperatorSymbol::Comparator(operator) => operator.format().fmt(f),
OperatorSymbol::Bool(bool) => bool.format().fmt(f),
}
}
}

View File

@@ -14,7 +14,7 @@ pub struct FormatExprBinOp;
impl FormatNodeRule<ExprBinOp> for FormatExprBinOp {
#[inline]
fn fmt_fields(&self, item: &ExprBinOp, f: &mut PyFormatter) -> FormatResult<()> {
BinaryLike::BinaryExpression(item).fmt(f)
BinaryLike::Binary(item).fmt(f)
}
fn fmt_dangling_comments(

View File

@@ -1,80 +1,18 @@
use ruff_formatter::{write, FormatOwnedWithRule, FormatRefWithRule, FormatRuleWithOptions};
use ruff_formatter::{FormatOwnedWithRule, FormatRefWithRule};
use ruff_python_ast::node::AnyNodeRef;
use ruff_python_ast::{BoolOp, Expr, ExprBoolOp};
use ruff_python_ast::{BoolOp, ExprBoolOp};
use crate::comments::leading_comments;
use crate::expression::parentheses::{
in_parentheses_only_group, in_parentheses_only_soft_line_break_or_space, NeedsParentheses,
OptionalParentheses,
};
use crate::expression::binary_like::BinaryLike;
use crate::expression::parentheses::{NeedsParentheses, OptionalParentheses};
use crate::prelude::*;
use super::parentheses::is_expression_parenthesized;
#[derive(Default)]
pub struct FormatExprBoolOp {
layout: BoolOpLayout,
}
#[derive(Default, Copy, Clone)]
pub enum BoolOpLayout {
#[default]
Default,
Chained,
}
impl FormatRuleWithOptions<ExprBoolOp, PyFormatContext<'_>> for FormatExprBoolOp {
type Options = BoolOpLayout;
fn with_options(mut self, options: Self::Options) -> Self {
self.layout = options;
self
}
}
pub struct FormatExprBoolOp;
impl FormatNodeRule<ExprBoolOp> for FormatExprBoolOp {
#[inline]
fn fmt_fields(&self, item: &ExprBoolOp, f: &mut PyFormatter) -> FormatResult<()> {
let ExprBoolOp {
range: _,
op,
values,
} = item;
let inner = format_with(|f: &mut PyFormatter| {
let mut values = values.iter();
let comments = f.context().comments().clone();
let Some(first) = values.next() else {
return Ok(());
};
FormatValue { value: first }.fmt(f)?;
for value in values {
let leading_value_comments = comments.leading(value);
// Format the expressions leading comments **before** the operator
if leading_value_comments.is_empty() {
write!(f, [in_parentheses_only_soft_line_break_or_space()])?;
} else {
write!(
f,
[hard_line_break(), leading_comments(leading_value_comments)]
)?;
}
write!(f, [op.format(), space()])?;
FormatValue { value }.fmt(f)?;
}
Ok(())
});
if matches!(self.layout, BoolOpLayout::Chained) {
// Chained boolean operations should not be given a new group
inner.fmt(f)
} else {
in_parentheses_only_group(&inner).fmt(f)
}
BinaryLike::Bool(item).fmt(f)
}
}
@@ -88,24 +26,6 @@ impl NeedsParentheses for ExprBoolOp {
}
}
struct FormatValue<'a> {
value: &'a Expr,
}
impl Format<PyFormatContext<'_>> for FormatValue<'_> {
fn fmt(&self, f: &mut PyFormatter) -> FormatResult<()> {
match self.value {
Expr::BoolOp(bool_op)
if !is_expression_parenthesized(bool_op.into(), f.context().source()) =>
{
// Mark chained boolean operations e.g. `x and y or z` and avoid creating a new group
write!(f, [bool_op.format().with_options(BoolOpLayout::Chained)])
}
_ => write!(f, [in_parentheses_only_group(&self.value.format())]),
}
}
}
#[derive(Copy, Clone)]
pub struct FormatBoolOp;

View File

@@ -15,7 +15,7 @@ pub struct FormatExprCompare;
impl FormatNodeRule<ExprCompare> for FormatExprCompare {
#[inline]
fn fmt_fields(&self, item: &ExprCompare, f: &mut PyFormatter) -> FormatResult<()> {
BinaryLike::CompareExpression(item).fmt(f)
BinaryLike::Compare(item).fmt(f)
}
fn fmt_dangling_comments(

View File

@@ -1,12 +1,11 @@
use ruff_formatter::{format_args, write, FormatRuleWithOptions};
use ruff_formatter::{write, FormatRuleWithOptions};
use ruff_python_ast::node::{AnyNodeRef, AstNode};
use ruff_python_ast::{Expr, ExprSubscript};
use crate::comments::{trailing_comments, SourceComment};
use crate::comments::SourceComment;
use crate::context::{NodeLevel, WithNodeLevel};
use crate::expression::expr_tuple::TupleParentheses;
use crate::expression::parentheses::{NeedsParentheses, OptionalParentheses};
use crate::expression::parentheses::{parenthesized, NeedsParentheses, OptionalParentheses};
use crate::expression::CallChainLayout;
use crate::prelude::*;
@@ -67,15 +66,9 @@ impl FormatNodeRule<ExprSubscript> for FormatExprSubscript {
}
});
write!(
f,
[group(&format_args![
token("["),
trailing_comments(dangling_comments),
soft_block_indent(&format_slice),
token("]")
])]
)
parenthesized("[", &format_slice, "]")
.with_dangling_comments(dangling_comments)
.fmt(f)
}
fn fmt_dangling_comments(

View File

@@ -326,6 +326,14 @@ rowuses = [(1 << j) | # column ordinal
(1 << (n + 2*n-1 + i+j)) # NE-SW ordinal
for j in rangen]
rowuses = [((1 << j) # column ordinal
)|
(
# comment
(1 << (n + i-j + n-1))) | # NW-SE ordinal
(1 << (n + 2*n-1 + i+j)) # NE-SW ordinal
for j in rangen]
skip_bytes = (
header.timecnt * 5 # Transition times and types
+ header.typecnt * 6 # Local time type records
@@ -334,6 +342,59 @@ skip_bytes = (
+ header.isstdcnt # Standard/wall indicators
+ header.isutcnt # UT/local indicators
)
if (
(1 + 2) # test
or (3 + 4) # other
or (4 + 5) # more
):
pass
if (
(1 and 2) # test
+ (3 and 4) # other
+ (4 and 5) # more
):
pass
if (
(1 + 2) # test
< (3 + 4) # other
> (4 + 5) # more
):
pass
z = (
a
+
# a: extracts this comment
(
# b: and this comment
(
# c: formats it as part of the expression
x and y
)
)
)
z = (
(
(
x and y
# a: formats it as part of the expression
)
# b: extracts this comment
)
# c: and this comment
+ a
)
```
## Output
@@ -565,19 +626,15 @@ if [
...
if (
[
fffffffffffffffff,
gggggggggggggggggggg,
hhhhhhhhhhhhhhhhhhhhh,
iiiiiiiiiiiiiiii,
jjjjjjjjjjjjj,
]
&
(
# comment
a + b
)
if [
fffffffffffffffff,
gggggggggggggggggggg,
hhhhhhhhhhhhhhhhhhhhh,
iiiiiiiiiiiiiiii,
jjjjjjjjjjjjj,
] & (
# comment
a + b
):
...
@@ -706,8 +763,7 @@ expected_content = (
</sitemap>
</sitemapindex>
"""
%
(
% (
# Needs parentheses
self.base_url
)
@@ -715,14 +771,21 @@ expected_content = (
rowuses = [
(
1 << j # column ordinal
)
(1 << j) # column ordinal
| (1 << (n + i - j + n - 1)) # NW-SE ordinal
| (1 << (n + 2 * n - 1 + i + j)) # NE-SW ordinal
for j in rangen
]
rowuses = [
(1 << j) # column ordinal
|
# comment
(1 << (n + i - j + n - 1)) # NW-SE ordinal
| (1 << (n + 2 * n - 1 + i + j)) # NE-SW ordinal
for j in rangen
]
skip_bytes = (
header.timecnt * 5 # Transition times and types
+ header.typecnt * 6 # Local time type records
@@ -731,6 +794,51 @@ skip_bytes = (
+ header.isstdcnt # Standard/wall indicators
+ header.isutcnt # UT/local indicators
)
if (
(1 + 2) # test
or (3 + 4) # other
or (4 + 5) # more
):
pass
if (
(1 and 2) # test
+ (3 and 4) # other
+ (4 and 5) # more
):
pass
if (
(1 + 2) # test
< (3 + 4) # other
> (4 + 5) # more
):
pass
z = (
a
+
# a: extracts this comment
# b: and this comment
(
# c: formats it as part of the expression
x and y
)
)
z = (
(
x and y
# a: formats it as part of the expression
)
# b: extracts this comment
# c: and this comment
+ a
)
```

View File

@@ -175,6 +175,26 @@ c = (a
# test trailing operator comment
b
)
c = ("a" "b" +
# test leading binary comment
"a" "b"
)
(
b + c + d +
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
"bbbbbbbbbbbbbbbbbbbbbbbbbbbbb" +
"cccccccccccccccccccccccccc"
"dddddddddddddddddddddddddd"
% aaaaaaaaaaaa
+ x
)
"a" "b" "c" + "d" "e" + "f" "g" + "h" "i" "j"
class EC2REPATH:
f.write ("Pathway name" + "\t" "Database Identifier" + "\t" "Source database" + "\n")
```
## Output
@@ -363,6 +383,26 @@ c = (
# test trailing operator comment
b
)
c = (
"a"
"b" +
# test leading binary comment
"a"
"b"
)
(
b + c + d + "aaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
"bbbbbbbbbbbbbbbbbbbbbbbbbbbbb" + "cccccccccccccccccccccccccc"
"dddddddddddddddddddddddddd" % aaaaaaaaaaaa + x
)
"a" "b" "c" + "d" "e" + "f" "g" + "h" "i" "j"
class EC2REPATH:
f.write("Pathway name" + "\t" "Database Identifier" + "\t" "Source database" + "\n")
```

View File

@@ -108,6 +108,89 @@ def test():
and {k.lower(): v for k, v in self.items()}
== {k.lower(): v for k, v in other.items()}
)
if "_continue" in request.POST or (
# Redirecting after "Save as new".
"_saveasnew" in request.POST
and self.save_as_continue
and self.has_change_permission(request, obj)
):
pass
if True:
if False:
if True:
if (
self.validate_max
and self.total_form_count() - len(self.deleted_forms) > self.max_num
) or self.management_form.cleaned_data[
TOTAL_FORM_COUNT
] > self.absolute_max:
pass
if True:
if (
reference_field_name is None
or
# Unspecified to_field(s).
to_fields is None
or
# Reference to primary key.
(
None in to_fields
and (reference_field is None or reference_field.primary_key)
)
or
# Reference to field.
reference_field_name in to_fields
):
pass
field = opts.get_field(name)
if (
field.is_relation
and
# Generic foreign keys OR reverse relations
((field.many_to_one and not field.related_model) or field.one_to_many)
):
pass
if True:
return (
filtered.exists()
and
# It may happen that the object is deleted from the DB right after
# this check, causing the subsequent UPDATE to return zero matching
# rows. The same result can occur in some rare cases when the
# database returns zero despite the UPDATE being executed
# successfully (a row is matched and updated). In order to
# distinguish these two cases, the object's existence in the
# database is again checked for if the UPDATE query returns 0.
(filtered._update(values) > 0 or filtered.exists())
)
if (self._proc is not None
# has the child process finished?
and self._returncode is None
# the child process has finished, but the
# transport hasn't been notified yet?
and self._proc.poll() is None):
pass
if (self._proc
# has the child process finished?
* self._returncode
# the child process has finished, but the
# transport hasn't been notified yet?
+ self._proc.poll()):
pass
```
## Output
@@ -234,6 +317,89 @@ def test():
return isinstance(other, Mapping) and {k.lower(): v for k, v in self.items()} == {
k.lower(): v for k, v in other.items()
}
if "_continue" in request.POST or (
# Redirecting after "Save as new".
"_saveasnew" in request.POST
and self.save_as_continue
and self.has_change_permission(request, obj)
):
pass
if True:
if False:
if True:
if (
self.validate_max
and self.total_form_count() - len(self.deleted_forms) > self.max_num
) or self.management_form.cleaned_data[
TOTAL_FORM_COUNT
] > self.absolute_max:
pass
if True:
if (
reference_field_name is None
or
# Unspecified to_field(s).
to_fields is None
or
# Reference to primary key.
(None in to_fields and (reference_field is None or reference_field.primary_key))
or
# Reference to field.
reference_field_name in to_fields
):
pass
field = opts.get_field(name)
if (
field.is_relation
and
# Generic foreign keys OR reverse relations
((field.many_to_one and not field.related_model) or field.one_to_many)
):
pass
if True:
return (
filtered.exists()
and
# It may happen that the object is deleted from the DB right after
# this check, causing the subsequent UPDATE to return zero matching
# rows. The same result can occur in some rare cases when the
# database returns zero despite the UPDATE being executed
# successfully (a row is matched and updated). In order to
# distinguish these two cases, the object's existence in the
# database is again checked for if the UPDATE query returns 0.
(filtered._update(values) > 0 or filtered.exists())
)
if (
self._proc is not None
# has the child process finished?
and self._returncode is None
# the child process has finished, but the
# transport hasn't been notified yet?
and self._proc.poll() is None
):
pass
if (
self._proc
# has the child process finished?
* self._returncode
# the child process has finished, but the
# transport hasn't been notified yet?
+ self._proc.poll()
):
pass
```

View File

@@ -339,13 +339,13 @@ ct_match = (
== self.get_content_type[obj, rel_obj, using, instance._state.db].id
)
ct_match = {
aaaaaaaaaaaaaaaa
} == self.get_content_type[obj, rel_obj, using, instance._state.db].id
ct_match = {aaaaaaaaaaaaaaaa} == self.get_content_type[
obj, rel_obj, using, instance._state.db
].id
ct_match = (
aaaaaaaaaaaaaaaa
) == self.get_content_type[obj, rel_obj, using, instance._state.db].id
ct_match = (aaaaaaaaaaaaaaaa) == self.get_content_type[
obj, rel_obj, using, instance._state.db
].id
# comments

View File

@@ -90,8 +90,7 @@ a = (
+ b
+ c
+ d
+
( # Hello
+ ( # Hello
e + f + g
)
)

View File

@@ -30,7 +30,6 @@ static_assertions = "1.1.0"
[dev-dependencies]
insta = { workspace = true }
test-case = { workspace = true }
[build-dependencies]
anyhow = { workspace = true }

View File

@@ -1265,11 +1265,7 @@ impl<'a> LexedText<'a> {
#[cfg(test)]
mod tests {
use num_bigint::BigInt;
use ruff_python_ast::IpyEscapeKind;
use insta::assert_debug_snapshot;
use test_case::test_case;
use super::*;
@@ -1277,50 +1273,63 @@ mod tests {
const MAC_EOL: &str = "\r";
const UNIX_EOL: &str = "\n";
pub(crate) fn lex_source(source: &str) -> Vec<Tok> {
let lexer = lex(source, Mode::Module);
lexer.map(|x| x.unwrap().0).collect()
fn lex_source_with_mode(source: &str, mode: Mode) -> Vec<Spanned> {
let lexer = lex(source, mode);
lexer.map(std::result::Result::unwrap).collect()
}
pub(crate) fn lex_jupyter_source(source: &str) -> Vec<Tok> {
let lexer = lex(source, Mode::Ipython);
lexer.map(|x| x.unwrap().0).collect()
fn lex_source(source: &str) -> Vec<Spanned> {
lex_source_with_mode(source, Mode::Module)
}
#[test_case(UNIX_EOL)]
#[test_case(MAC_EOL)]
#[test_case(WINDOWS_EOL)]
fn test_ipython_escape_command_line_continuation_eol(eol: &str) {
fn lex_jupyter_source(source: &str) -> Vec<Spanned> {
lex_source_with_mode(source, Mode::Ipython)
}
fn ipython_escape_command_line_continuation_eol(eol: &str) -> Vec<Spanned> {
let source = format!("%matplotlib \\{eol} --inline");
let tokens = lex_jupyter_source(&source);
assert_eq!(
tokens,
vec![
Tok::IpyEscapeCommand {
value: "matplotlib --inline".to_string(),
kind: IpyEscapeKind::Magic
},
Tok::Newline
]
);
lex_jupyter_source(&source)
}
#[test_case(UNIX_EOL)]
#[test_case(MAC_EOL)]
#[test_case(WINDOWS_EOL)]
fn test_ipython_escape_command_line_continuation_with_eol_and_eof(eol: &str) {
#[test]
fn test_ipython_escape_command_line_continuation_unix_eol() {
assert_debug_snapshot!(ipython_escape_command_line_continuation_eol(UNIX_EOL));
}
#[test]
fn test_ipython_escape_command_line_continuation_mac_eol() {
assert_debug_snapshot!(ipython_escape_command_line_continuation_eol(MAC_EOL));
}
#[test]
fn test_ipython_escape_command_line_continuation_windows_eol() {
assert_debug_snapshot!(ipython_escape_command_line_continuation_eol(WINDOWS_EOL));
}
fn ipython_escape_command_line_continuation_with_eol_and_eof(eol: &str) -> Vec<Spanned> {
let source = format!("%matplotlib \\{eol}");
let tokens = lex_jupyter_source(&source);
assert_eq!(
tokens,
vec![
Tok::IpyEscapeCommand {
value: "matplotlib ".to_string(),
kind: IpyEscapeKind::Magic
},
Tok::Newline
]
);
lex_jupyter_source(&source)
}
#[test]
fn test_ipython_escape_command_line_continuation_with_unix_eol_and_eof() {
assert_debug_snapshot!(ipython_escape_command_line_continuation_with_eol_and_eof(
UNIX_EOL
));
}
#[test]
fn test_ipython_escape_command_line_continuation_with_mac_eol_and_eof() {
assert_debug_snapshot!(ipython_escape_command_line_continuation_with_eol_and_eof(
MAC_EOL
));
}
#[test]
fn test_ipython_escape_command_line_continuation_with_windows_eol_and_eof() {
assert_debug_snapshot!(ipython_escape_command_line_continuation_with_eol_and_eof(
WINDOWS_EOL
));
}
#[test]
@@ -1397,8 +1406,8 @@ baz = %matplotlib \
assert_debug_snapshot!(lex_jupyter_source(source));
}
fn assert_no_ipython_escape_command(tokens: &[Tok]) {
for tok in tokens {
fn assert_no_ipython_escape_command(tokens: &[Spanned]) {
for (tok, _) in tokens {
if let Tok::IpyEscapeCommand { .. } = tok {
panic!("Unexpected escape command token: {tok:?}")
}
@@ -1428,45 +1437,48 @@ def f(arg=%timeit a = b):
assert_debug_snapshot!(lex_source(source));
}
#[test_case(" foo"; "long")]
#[test_case(" "; "whitespace")]
#[test_case(" "; "single whitespace")]
#[test_case(""; "empty")]
fn test_line_comment(comment: &str) {
let source = format!("99232 # {comment}");
let tokens = lex_source(&source);
assert_eq!(
tokens,
vec![
Tok::Int {
value: BigInt::from(99232)
},
Tok::Comment(format!("# {comment}")),
Tok::Newline
]
);
#[test]
fn test_line_comment_long() {
let source = "99232 # foo".to_string();
assert_debug_snapshot!(lex_source(&source));
}
#[test_case(UNIX_EOL)]
#[test_case(MAC_EOL)]
#[test_case(WINDOWS_EOL)]
fn test_comment_until_eol(eol: &str) {
#[test]
fn test_line_comment_whitespace() {
let source = "99232 # ".to_string();
assert_debug_snapshot!(lex_source(&source));
}
#[test]
fn test_line_comment_single_whitespace() {
let source = "99232 # ".to_string();
assert_debug_snapshot!(lex_source(&source));
}
#[test]
fn test_line_comment_empty() {
let source = "99232 #".to_string();
assert_debug_snapshot!(lex_source(&source));
}
fn comment_until_eol(eol: &str) -> Vec<Spanned> {
let source = format!("123 # Foo{eol}456");
let tokens = lex_source(&source);
assert_eq!(
tokens,
vec![
Tok::Int {
value: BigInt::from(123)
},
Tok::Comment("# Foo".to_string()),
Tok::Newline,
Tok::Int {
value: BigInt::from(456)
},
Tok::Newline,
]
);
lex_source(&source)
}
#[test]
fn test_comment_until_unix_eol() {
assert_debug_snapshot!(comment_until_eol(UNIX_EOL));
}
#[test]
fn test_comment_until_mac_eol() {
assert_debug_snapshot!(comment_until_eol(MAC_EOL));
}
#[test]
fn test_comment_until_windows_eol() {
assert_debug_snapshot!(comment_until_eol(WINDOWS_EOL));
}
#[test]
@@ -1475,115 +1487,67 @@ def f(arg=%timeit a = b):
assert_debug_snapshot!(lex_source(source));
}
#[test_case(UNIX_EOL)]
#[test_case(MAC_EOL)]
#[test_case(WINDOWS_EOL)]
fn test_indentation_with_eol(eol: &str) {
fn indentation_with_eol(eol: &str) -> Vec<Spanned> {
let source = format!("def foo():{eol} return 99{eol}{eol}");
let tokens = lex_source(&source);
assert_eq!(
tokens,
vec![
Tok::Def,
Tok::Name {
name: String::from("foo"),
},
Tok::Lpar,
Tok::Rpar,
Tok::Colon,
Tok::Newline,
Tok::Indent,
Tok::Return,
Tok::Int {
value: BigInt::from(99)
},
Tok::Newline,
Tok::NonLogicalNewline,
Tok::Dedent,
]
);
lex_source(&source)
}
#[test_case(UNIX_EOL)]
#[test_case(MAC_EOL)]
#[test_case(WINDOWS_EOL)]
fn test_double_dedent_with_eol(eol: &str) {
#[test]
fn test_indentation_with_unix_eol() {
assert_debug_snapshot!(indentation_with_eol(UNIX_EOL));
}
#[test]
fn test_indentation_with_mac_eol() {
assert_debug_snapshot!(indentation_with_eol(MAC_EOL));
}
#[test]
fn test_indentation_with_windows_eol() {
assert_debug_snapshot!(indentation_with_eol(WINDOWS_EOL));
}
fn double_dedent_with_eol(eol: &str) -> Vec<Spanned> {
let source = format!("def foo():{eol} if x:{eol}{eol} return 99{eol}{eol}");
let tokens = lex_source(&source);
assert_eq!(
tokens,
vec![
Tok::Def,
Tok::Name {
name: String::from("foo"),
},
Tok::Lpar,
Tok::Rpar,
Tok::Colon,
Tok::Newline,
Tok::Indent,
Tok::If,
Tok::Name {
name: String::from("x"),
},
Tok::Colon,
Tok::Newline,
Tok::NonLogicalNewline,
Tok::Indent,
Tok::Return,
Tok::Int {
value: BigInt::from(99)
},
Tok::Newline,
Tok::NonLogicalNewline,
Tok::Dedent,
Tok::Dedent,
]
);
lex_source(&source)
}
#[test_case(UNIX_EOL)]
#[test_case(MAC_EOL)]
#[test_case(WINDOWS_EOL)]
fn test_double_dedent_with_tabs(eol: &str) {
#[test]
fn test_double_dedent_with_unix_eol() {
assert_debug_snapshot!(double_dedent_with_eol(UNIX_EOL));
}
#[test]
fn test_double_dedent_with_mac_eol() {
assert_debug_snapshot!(double_dedent_with_eol(MAC_EOL));
}
#[test]
fn test_double_dedent_with_windows_eol() {
assert_debug_snapshot!(double_dedent_with_eol(WINDOWS_EOL));
}
fn double_dedent_with_tabs_eol(eol: &str) -> Vec<Spanned> {
let source = format!("def foo():{eol}\tif x:{eol}{eol}\t\t return 99{eol}{eol}");
let tokens = lex_source(&source);
assert_eq!(
tokens,
vec![
Tok::Def,
Tok::Name {
name: String::from("foo"),
},
Tok::Lpar,
Tok::Rpar,
Tok::Colon,
Tok::Newline,
Tok::Indent,
Tok::If,
Tok::Name {
name: String::from("x"),
},
Tok::Colon,
Tok::Newline,
Tok::NonLogicalNewline,
Tok::Indent,
Tok::Return,
Tok::Int {
value: BigInt::from(99)
},
Tok::Newline,
Tok::NonLogicalNewline,
Tok::Dedent,
Tok::Dedent,
]
);
lex_source(&source)
}
#[test_case(UNIX_EOL)]
#[test_case(MAC_EOL)]
#[test_case(WINDOWS_EOL)]
fn test_newline_in_brackets(eol: &str) {
#[test]
fn test_double_dedent_with_tabs_unix_eol() {
assert_debug_snapshot!(double_dedent_with_tabs_eol(UNIX_EOL));
}
#[test]
fn test_double_dedent_with_tabs_mac_eol() {
assert_debug_snapshot!(double_dedent_with_tabs_eol(MAC_EOL));
}
#[test]
fn test_double_dedent_with_tabs_windows_eol() {
assert_debug_snapshot!(double_dedent_with_tabs_eol(WINDOWS_EOL));
}
fn newline_in_brackets_eol(eol: &str) -> Vec<Spanned> {
let source = r"x = [
1,2
@@ -1595,59 +1559,22 @@ def f(arg=%timeit a = b):
7}]
"
.replace('\n', eol);
let tokens = lex_source(&source);
assert_eq!(
tokens,
vec![
Tok::Name {
name: String::from("x"),
},
Tok::Equal,
Tok::Lsqb,
Tok::NonLogicalNewline,
Tok::NonLogicalNewline,
Tok::Int {
value: BigInt::from(1)
},
Tok::Comma,
Tok::Int {
value: BigInt::from(2)
},
Tok::NonLogicalNewline,
Tok::Comma,
Tok::Lpar,
Tok::Int {
value: BigInt::from(3)
},
Tok::Comma,
Tok::NonLogicalNewline,
Tok::Int {
value: BigInt::from(4)
},
Tok::Comma,
Tok::NonLogicalNewline,
Tok::Rpar,
Tok::Comma,
Tok::Lbrace,
Tok::NonLogicalNewline,
Tok::Int {
value: BigInt::from(5)
},
Tok::Comma,
Tok::NonLogicalNewline,
Tok::Int {
value: BigInt::from(6)
},
Tok::Comma,
// Continuation here - no NonLogicalNewline.
Tok::Int {
value: BigInt::from(7)
},
Tok::Rbrace,
Tok::Rsqb,
Tok::Newline,
]
);
lex_source(&source)
}
#[test]
fn test_newline_in_brackets_unix_eol() {
assert_debug_snapshot!(newline_in_brackets_eol(UNIX_EOL));
}
#[test]
fn test_newline_in_brackets_mac_eol() {
assert_debug_snapshot!(newline_in_brackets_eol(MAC_EOL));
}
#[test]
fn test_newline_in_brackets_windows_eol() {
assert_debug_snapshot!(newline_in_brackets_eol(WINDOWS_EOL));
}
#[test]
@@ -1680,60 +1607,50 @@ def f(arg=%timeit a = b):
assert_debug_snapshot!(lex_source(source));
}
#[test_case(UNIX_EOL)]
#[test_case(MAC_EOL)]
#[test_case(WINDOWS_EOL)]
fn test_string_continuation_with_eol(eol: &str) {
fn string_continuation_with_eol(eol: &str) -> Vec<Spanned> {
let source = format!("\"abc\\{eol}def\"");
let tokens = lex_source(&source);
lex_source(&source)
}
assert_eq!(
tokens,
vec![
Tok::String {
value: format!("abc\\{eol}def"),
kind: StringKind::String,
triple_quoted: false,
},
Tok::Newline,
]
);
#[test]
fn test_string_continuation_with_unix_eol() {
assert_debug_snapshot!(string_continuation_with_eol(UNIX_EOL));
}
#[test]
fn test_string_continuation_with_mac_eol() {
assert_debug_snapshot!(string_continuation_with_eol(MAC_EOL));
}
#[test]
fn test_string_continuation_with_windows_eol() {
assert_debug_snapshot!(string_continuation_with_eol(WINDOWS_EOL));
}
#[test]
fn test_escape_unicode_name() {
let source = r#""\N{EN SPACE}""#;
let tokens = lex_source(source);
assert_eq!(
tokens,
vec![
Tok::String {
value: r"\N{EN SPACE}".to_string(),
kind: StringKind::String,
triple_quoted: false,
},
Tok::Newline
]
);
assert_debug_snapshot!(lex_source(source));
}
#[test_case(UNIX_EOL)]
#[test_case(MAC_EOL)]
#[test_case(WINDOWS_EOL)]
fn test_triple_quoted(eol: &str) {
fn triple_quoted_eol(eol: &str) -> Vec<Spanned> {
let source = format!("\"\"\"{eol} test string{eol} \"\"\"");
let tokens = lex_source(&source);
assert_eq!(
tokens,
vec![
Tok::String {
value: format!("{eol} test string{eol} "),
kind: StringKind::String,
triple_quoted: true,
},
Tok::Newline,
]
);
lex_source(&source)
}
#[test]
fn test_triple_quoted_unix_eol() {
assert_debug_snapshot!(triple_quoted_eol(UNIX_EOL));
}
#[test]
fn test_triple_quoted_mac_eol() {
assert_debug_snapshot!(triple_quoted_eol(MAC_EOL));
}
#[test]
fn test_triple_quoted_windows_eol() {
assert_debug_snapshot!(triple_quoted_eol(WINDOWS_EOL));
}
// This test case is to just make sure that the lexer doesn't go into

View File

@@ -3,20 +3,44 @@ source: crates/ruff_python_parser/src/lexer.rs
expression: lex_source(source)
---
[
Name {
name: "a_variable",
},
Equal,
Int {
value: 99,
},
Plus,
Int {
value: 2,
},
Minus,
Int {
value: 0,
},
Newline,
(
Name {
name: "a_variable",
},
0..10,
),
(
Equal,
11..12,
),
(
Int {
value: 99,
},
13..15,
),
(
Plus,
16..17,
),
(
Int {
value: 2,
},
18..19,
),
(
Minus,
19..20,
),
(
Int {
value: 0,
},
20..21,
),
(
Newline,
21..21,
),
]

View File

@@ -0,0 +1,32 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: comment_until_eol(MAC_EOL)
---
[
(
Int {
value: 123,
},
0..3,
),
(
Comment(
"# Foo",
),
5..10,
),
(
Newline,
10..11,
),
(
Int {
value: 456,
},
11..14,
),
(
Newline,
14..14,
),
]

View File

@@ -0,0 +1,32 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: comment_until_eol(UNIX_EOL)
---
[
(
Int {
value: 123,
},
0..3,
),
(
Comment(
"# Foo",
),
5..10,
),
(
Newline,
10..11,
),
(
Int {
value: 456,
},
11..14,
),
(
Newline,
14..14,
),
]

View File

@@ -0,0 +1,32 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: comment_until_eol(WINDOWS_EOL)
---
[
(
Int {
value: 123,
},
0..3,
),
(
Comment(
"# Foo",
),
5..10,
),
(
Newline,
10..12,
),
(
Int {
value: 456,
},
12..15,
),
(
Newline,
15..15,
),
]

View File

@@ -0,0 +1,88 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: double_dedent_with_eol(MAC_EOL)
---
[
(
Def,
0..3,
),
(
Name {
name: "foo",
},
4..7,
),
(
Lpar,
7..8,
),
(
Rpar,
8..9,
),
(
Colon,
9..10,
),
(
Newline,
10..11,
),
(
Indent,
11..12,
),
(
If,
12..14,
),
(
Name {
name: "x",
},
15..16,
),
(
Colon,
16..17,
),
(
Newline,
17..18,
),
(
NonLogicalNewline,
18..19,
),
(
Indent,
19..21,
),
(
Return,
21..27,
),
(
Int {
value: 99,
},
28..30,
),
(
Newline,
30..31,
),
(
NonLogicalNewline,
31..32,
),
(
Dedent,
32..32,
),
(
Dedent,
32..32,
),
]

View File

@@ -0,0 +1,88 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: double_dedent_with_tabs_eol(MAC_EOL)
---
[
(
Def,
0..3,
),
(
Name {
name: "foo",
},
4..7,
),
(
Lpar,
7..8,
),
(
Rpar,
8..9,
),
(
Colon,
9..10,
),
(
Newline,
10..11,
),
(
Indent,
11..12,
),
(
If,
12..14,
),
(
Name {
name: "x",
},
15..16,
),
(
Colon,
16..17,
),
(
Newline,
17..18,
),
(
NonLogicalNewline,
18..19,
),
(
Indent,
19..22,
),
(
Return,
22..28,
),
(
Int {
value: 99,
},
29..31,
),
(
Newline,
31..32,
),
(
NonLogicalNewline,
32..33,
),
(
Dedent,
33..33,
),
(
Dedent,
33..33,
),
]

View File

@@ -0,0 +1,88 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: double_dedent_with_tabs_eol(UNIX_EOL)
---
[
(
Def,
0..3,
),
(
Name {
name: "foo",
},
4..7,
),
(
Lpar,
7..8,
),
(
Rpar,
8..9,
),
(
Colon,
9..10,
),
(
Newline,
10..11,
),
(
Indent,
11..12,
),
(
If,
12..14,
),
(
Name {
name: "x",
},
15..16,
),
(
Colon,
16..17,
),
(
Newline,
17..18,
),
(
NonLogicalNewline,
18..19,
),
(
Indent,
19..22,
),
(
Return,
22..28,
),
(
Int {
value: 99,
},
29..31,
),
(
Newline,
31..32,
),
(
NonLogicalNewline,
32..33,
),
(
Dedent,
33..33,
),
(
Dedent,
33..33,
),
]


@@ -0,0 +1,88 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: double_dedent_with_tabs_eol(WINDOWS_EOL)
---
[
(
Def,
0..3,
),
(
Name {
name: "foo",
},
4..7,
),
(
Lpar,
7..8,
),
(
Rpar,
8..9,
),
(
Colon,
9..10,
),
(
Newline,
10..12,
),
(
Indent,
12..13,
),
(
If,
13..15,
),
(
Name {
name: "x",
},
16..17,
),
(
Colon,
17..18,
),
(
Newline,
18..20,
),
(
NonLogicalNewline,
20..22,
),
(
Indent,
22..25,
),
(
Return,
25..31,
),
(
Int {
value: 99,
},
32..34,
),
(
Newline,
34..36,
),
(
NonLogicalNewline,
36..38,
),
(
Dedent,
38..38,
),
(
Dedent,
38..38,
),
]


@@ -0,0 +1,88 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: double_dedent_with_eol(UNIX_EOL)
---
[
(
Def,
0..3,
),
(
Name {
name: "foo",
},
4..7,
),
(
Lpar,
7..8,
),
(
Rpar,
8..9,
),
(
Colon,
9..10,
),
(
Newline,
10..11,
),
(
Indent,
11..12,
),
(
If,
12..14,
),
(
Name {
name: "x",
},
15..16,
),
(
Colon,
16..17,
),
(
Newline,
17..18,
),
(
NonLogicalNewline,
18..19,
),
(
Indent,
19..21,
),
(
Return,
21..27,
),
(
Int {
value: 99,
},
28..30,
),
(
Newline,
30..31,
),
(
NonLogicalNewline,
31..32,
),
(
Dedent,
32..32,
),
(
Dedent,
32..32,
),
]


@@ -0,0 +1,88 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: double_dedent_with_eol(WINDOWS_EOL)
---
[
(
Def,
0..3,
),
(
Name {
name: "foo",
},
4..7,
),
(
Lpar,
7..8,
),
(
Rpar,
8..9,
),
(
Colon,
9..10,
),
(
Newline,
10..12,
),
(
Indent,
12..13,
),
(
If,
13..15,
),
(
Name {
name: "x",
},
16..17,
),
(
Colon,
17..18,
),
(
Newline,
18..20,
),
(
NonLogicalNewline,
20..22,
),
(
Indent,
22..24,
),
(
Return,
24..30,
),
(
Int {
value: 99,
},
31..33,
),
(
Newline,
33..35,
),
(
NonLogicalNewline,
35..37,
),
(
Dedent,
37..37,
),
(
Dedent,
37..37,
),
]


@@ -3,49 +3,103 @@ source: crates/ruff_python_parser/src/lexer.rs
expression: lex_jupyter_source(source)
---
[
IpyEscapeCommand {
value: "",
kind: Magic,
},
Newline,
IpyEscapeCommand {
value: "",
kind: Magic2,
},
Newline,
IpyEscapeCommand {
value: "",
kind: Shell,
},
Newline,
IpyEscapeCommand {
value: "",
kind: ShCap,
},
Newline,
IpyEscapeCommand {
value: "",
kind: Help,
},
Newline,
IpyEscapeCommand {
value: "",
kind: Help2,
},
Newline,
IpyEscapeCommand {
value: "",
kind: Paren,
},
Newline,
IpyEscapeCommand {
value: "",
kind: Quote,
},
Newline,
IpyEscapeCommand {
value: "",
kind: Quote2,
},
Newline,
(
IpyEscapeCommand {
value: "",
kind: Magic,
},
0..1,
),
(
Newline,
1..2,
),
(
IpyEscapeCommand {
value: "",
kind: Magic2,
},
2..4,
),
(
Newline,
4..5,
),
(
IpyEscapeCommand {
value: "",
kind: Shell,
},
5..6,
),
(
Newline,
6..7,
),
(
IpyEscapeCommand {
value: "",
kind: ShCap,
},
7..9,
),
(
Newline,
9..10,
),
(
IpyEscapeCommand {
value: "",
kind: Help,
},
10..11,
),
(
Newline,
11..12,
),
(
IpyEscapeCommand {
value: "",
kind: Help2,
},
12..14,
),
(
Newline,
14..15,
),
(
IpyEscapeCommand {
value: "",
kind: Paren,
},
15..16,
),
(
Newline,
16..17,
),
(
IpyEscapeCommand {
value: "",
kind: Quote,
},
17..18,
),
(
Newline,
18..19,
),
(
IpyEscapeCommand {
value: "",
kind: Quote2,
},
19..20,
),
(
Newline,
20..20,
),
]


@@ -0,0 +1,18 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: lex_source(source)
---
[
(
String {
value: "\\N{EN SPACE}",
kind: String,
triple_quoted: false,
},
0..14,
),
(
Newline,
14..14,
),
]


@@ -0,0 +1,58 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: indentation_with_eol(MAC_EOL)
---
[
(
Def,
0..3,
),
(
Name {
name: "foo",
},
4..7,
),
(
Lpar,
7..8,
),
(
Rpar,
8..9,
),
(
Colon,
9..10,
),
(
Newline,
10..11,
),
(
Indent,
11..15,
),
(
Return,
15..21,
),
(
Int {
value: 99,
},
22..24,
),
(
Newline,
24..25,
),
(
NonLogicalNewline,
25..26,
),
(
Dedent,
26..26,
),
]


@@ -0,0 +1,58 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: indentation_with_eol(UNIX_EOL)
---
[
(
Def,
0..3,
),
(
Name {
name: "foo",
},
4..7,
),
(
Lpar,
7..8,
),
(
Rpar,
8..9,
),
(
Colon,
9..10,
),
(
Newline,
10..11,
),
(
Indent,
11..15,
),
(
Return,
15..21,
),
(
Int {
value: 99,
},
22..24,
),
(
Newline,
24..25,
),
(
NonLogicalNewline,
25..26,
),
(
Dedent,
26..26,
),
]


@@ -0,0 +1,58 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: indentation_with_eol(WINDOWS_EOL)
---
[
(
Def,
0..3,
),
(
Name {
name: "foo",
},
4..7,
),
(
Lpar,
7..8,
),
(
Rpar,
8..9,
),
(
Colon,
9..10,
),
(
Newline,
10..12,
),
(
Indent,
12..16,
),
(
Return,
16..22,
),
(
Int {
value: 99,
},
23..25,
),
(
Newline,
25..27,
),
(
NonLogicalNewline,
27..29,
),
(
Dedent,
29..29,
),
]


@@ -3,59 +3,125 @@ source: crates/ruff_python_parser/src/lexer.rs
expression: lex_jupyter_source(source)
---
[
IpyEscapeCommand {
value: "foo",
kind: Help,
},
Newline,
IpyEscapeCommand {
value: "foo",
kind: Help2,
},
Newline,
IpyEscapeCommand {
value: "timeit a = b",
kind: Magic,
},
Newline,
IpyEscapeCommand {
value: "timeit a % 3",
kind: Magic,
},
Newline,
IpyEscapeCommand {
value: "matplotlib --inline",
kind: Magic,
},
Newline,
IpyEscapeCommand {
value: "pwd && ls -a | sed 's/^/\\\\ /'",
kind: Shell,
},
Newline,
IpyEscapeCommand {
value: "cd /Users/foo/Library/Application\\ Support/",
kind: ShCap,
},
Newline,
IpyEscapeCommand {
value: "foo 1 2",
kind: Paren,
},
Newline,
IpyEscapeCommand {
value: "foo 1 2",
kind: Quote,
},
Newline,
IpyEscapeCommand {
value: "foo 1 2",
kind: Quote2,
},
Newline,
IpyEscapeCommand {
value: "ls",
kind: Shell,
},
Newline,
(
IpyEscapeCommand {
value: "foo",
kind: Help,
},
0..4,
),
(
Newline,
4..5,
),
(
IpyEscapeCommand {
value: "foo",
kind: Help2,
},
5..10,
),
(
Newline,
10..11,
),
(
IpyEscapeCommand {
value: "timeit a = b",
kind: Magic,
},
11..24,
),
(
Newline,
24..25,
),
(
IpyEscapeCommand {
value: "timeit a % 3",
kind: Magic,
},
25..38,
),
(
Newline,
38..39,
),
(
IpyEscapeCommand {
value: "matplotlib --inline",
kind: Magic,
},
39..65,
),
(
Newline,
65..66,
),
(
IpyEscapeCommand {
value: "pwd && ls -a | sed 's/^/\\\\ /'",
kind: Shell,
},
66..103,
),
(
Newline,
103..104,
),
(
IpyEscapeCommand {
value: "cd /Users/foo/Library/Application\\ Support/",
kind: ShCap,
},
104..149,
),
(
Newline,
149..150,
),
(
IpyEscapeCommand {
value: "foo 1 2",
kind: Paren,
},
150..158,
),
(
Newline,
158..159,
),
(
IpyEscapeCommand {
value: "foo 1 2",
kind: Quote,
},
159..167,
),
(
Newline,
167..168,
),
(
IpyEscapeCommand {
value: "foo 1 2",
kind: Quote2,
},
168..176,
),
(
Newline,
176..177,
),
(
IpyEscapeCommand {
value: "ls",
kind: Shell,
},
177..180,
),
(
Newline,
180..180,
),
]


@@ -3,40 +3,88 @@ source: crates/ruff_python_parser/src/lexer.rs
expression: lex_jupyter_source(source)
---
[
Name {
name: "pwd",
},
Equal,
IpyEscapeCommand {
value: "pwd",
kind: Shell,
},
Newline,
Name {
name: "foo",
},
Equal,
IpyEscapeCommand {
value: "timeit a = b",
kind: Magic,
},
Newline,
Name {
name: "bar",
},
Equal,
IpyEscapeCommand {
value: "timeit a % 3",
kind: Magic,
},
Newline,
Name {
name: "baz",
},
Equal,
IpyEscapeCommand {
value: "matplotlib inline",
kind: Magic,
},
Newline,
(
Name {
name: "pwd",
},
0..3,
),
(
Equal,
4..5,
),
(
IpyEscapeCommand {
value: "pwd",
kind: Shell,
},
6..10,
),
(
Newline,
10..11,
),
(
Name {
name: "foo",
},
11..14,
),
(
Equal,
15..16,
),
(
IpyEscapeCommand {
value: "timeit a = b",
kind: Magic,
},
17..30,
),
(
Newline,
30..31,
),
(
Name {
name: "bar",
},
31..34,
),
(
Equal,
35..36,
),
(
IpyEscapeCommand {
value: "timeit a % 3",
kind: Magic,
},
37..50,
),
(
Newline,
50..51,
),
(
Name {
name: "baz",
},
51..54,
),
(
Equal,
55..56,
),
(
IpyEscapeCommand {
value: "matplotlib inline",
kind: Magic,
},
57..85,
),
(
Newline,
85..85,
),
]


@@ -3,15 +3,39 @@ source: crates/ruff_python_parser/src/lexer.rs
expression: lex_jupyter_source(source)
---
[
If,
True,
Colon,
Newline,
Indent,
IpyEscapeCommand {
value: "matplotlib --inline",
kind: Magic,
},
Newline,
Dedent,
(
If,
0..2,
),
(
True,
3..7,
),
(
Colon,
7..8,
),
(
Newline,
8..9,
),
(
Indent,
9..13,
),
(
IpyEscapeCommand {
value: "matplotlib --inline",
kind: Magic,
},
13..43,
),
(
Newline,
43..43,
),
(
Dedent,
43..43,
),
]


@@ -0,0 +1,17 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: ipython_escape_command_line_continuation_eol(MAC_EOL)
---
[
(
IpyEscapeCommand {
value: "matplotlib --inline",
kind: Magic,
},
0..24,
),
(
Newline,
24..24,
),
]


@@ -0,0 +1,17 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: ipython_escape_command_line_continuation_eol(UNIX_EOL)
---
[
(
IpyEscapeCommand {
value: "matplotlib --inline",
kind: Magic,
},
0..24,
),
(
Newline,
24..24,
),
]


@@ -0,0 +1,17 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: ipython_escape_command_line_continuation_eol(WINDOWS_EOL)
---
[
(
IpyEscapeCommand {
value: "matplotlib --inline",
kind: Magic,
},
0..25,
),
(
Newline,
25..25,
),
]


@@ -0,0 +1,17 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: ipython_escape_command_line_continuation_with_eol_and_eof(MAC_EOL)
---
[
(
IpyEscapeCommand {
value: "matplotlib ",
kind: Magic,
},
0..14,
),
(
Newline,
14..14,
),
]


@@ -0,0 +1,17 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: ipython_escape_command_line_continuation_with_eol_and_eof(UNIX_EOL)
---
[
(
IpyEscapeCommand {
value: "matplotlib ",
kind: Magic,
},
0..14,
),
(
Newline,
14..14,
),
]


@@ -0,0 +1,17 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: ipython_escape_command_line_continuation_with_eol_and_eof(WINDOWS_EOL)
---
[
(
IpyEscapeCommand {
value: "matplotlib ",
kind: Magic,
},
0..15,
),
(
Newline,
15..15,
),
]


@@ -3,84 +3,180 @@ source: crates/ruff_python_parser/src/lexer.rs
expression: lex_jupyter_source(source)
---
[
IpyEscapeCommand {
value: "foo",
kind: Help,
},
Newline,
IpyEscapeCommand {
value: "foo",
kind: Help,
},
Newline,
IpyEscapeCommand {
value: " foo ?",
kind: Help2,
},
Newline,
IpyEscapeCommand {
value: "foo",
kind: Help2,
},
Newline,
IpyEscapeCommand {
value: "foo",
kind: Help2,
},
Newline,
IpyEscapeCommand {
value: "foo",
kind: Help,
},
Newline,
IpyEscapeCommand {
value: "foo",
kind: Help2,
},
Newline,
IpyEscapeCommand {
value: "foo???",
kind: Help2,
},
Newline,
IpyEscapeCommand {
value: "?foo???",
kind: Help2,
},
Newline,
IpyEscapeCommand {
value: "foo",
kind: Help,
},
Newline,
IpyEscapeCommand {
value: " ?",
kind: Help2,
},
Newline,
IpyEscapeCommand {
value: "??",
kind: Help2,
},
Newline,
IpyEscapeCommand {
value: "%foo",
kind: Help,
},
Newline,
IpyEscapeCommand {
value: "%foo",
kind: Help2,
},
Newline,
IpyEscapeCommand {
value: "foo???",
kind: Magic2,
},
Newline,
IpyEscapeCommand {
value: "pwd",
kind: Help,
},
Newline,
(
IpyEscapeCommand {
value: "foo",
kind: Help,
},
0..5,
),
(
Newline,
5..6,
),
(
IpyEscapeCommand {
value: "foo",
kind: Help,
},
6..15,
),
(
Newline,
15..16,
),
(
IpyEscapeCommand {
value: " foo ?",
kind: Help2,
},
16..27,
),
(
Newline,
27..28,
),
(
IpyEscapeCommand {
value: "foo",
kind: Help2,
},
28..34,
),
(
Newline,
34..35,
),
(
IpyEscapeCommand {
value: "foo",
kind: Help2,
},
35..42,
),
(
Newline,
42..43,
),
(
IpyEscapeCommand {
value: "foo",
kind: Help,
},
43..50,
),
(
Newline,
50..51,
),
(
IpyEscapeCommand {
value: "foo",
kind: Help2,
},
51..59,
),
(
Newline,
59..60,
),
(
IpyEscapeCommand {
value: "foo???",
kind: Help2,
},
60..68,
),
(
Newline,
68..69,
),
(
IpyEscapeCommand {
value: "?foo???",
kind: Help2,
},
69..78,
),
(
Newline,
78..79,
),
(
IpyEscapeCommand {
value: "foo",
kind: Help,
},
79..92,
),
(
Newline,
92..93,
),
(
IpyEscapeCommand {
value: " ?",
kind: Help2,
},
93..99,
),
(
Newline,
99..100,
),
(
IpyEscapeCommand {
value: "??",
kind: Help2,
},
100..104,
),
(
Newline,
104..105,
),
(
IpyEscapeCommand {
value: "%foo",
kind: Help,
},
105..110,
),
(
Newline,
110..111,
),
(
IpyEscapeCommand {
value: "%foo",
kind: Help2,
},
111..117,
),
(
Newline,
117..118,
),
(
IpyEscapeCommand {
value: "foo???",
kind: Magic2,
},
118..126,
),
(
Newline,
126..127,
),
(
IpyEscapeCommand {
value: "pwd",
kind: Help,
},
127..132,
),
(
Newline,
132..132,
),
]


@@ -0,0 +1,22 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: lex_source(&source)
---
[
(
Int {
value: 99232,
},
0..5,
),
(
Comment(
"#",
),
7..8,
),
(
Newline,
8..8,
),
]


@@ -0,0 +1,22 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: lex_source(&source)
---
[
(
Int {
value: 99232,
},
0..5,
),
(
Comment(
"# foo",
),
7..12,
),
(
Newline,
12..12,
),
]


@@ -0,0 +1,22 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: lex_source(&source)
---
[
(
Int {
value: 99232,
},
0..5,
),
(
Comment(
"# ",
),
7..9,
),
(
Newline,
9..9,
),
]


@@ -0,0 +1,22 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: lex_source(&source)
---
[
(
Int {
value: 99232,
},
0..5,
),
(
Comment(
"# ",
),
7..10,
),
(
Newline,
10..10,
),
]


@@ -3,12 +3,24 @@ source: crates/ruff_python_parser/src/lexer.rs
expression: lex_source(source)
---
[
Comment(
"#Hello",
(
Comment(
"#Hello",
),
0..6,
),
NonLogicalNewline,
Comment(
"#World",
(
NonLogicalNewline,
6..7,
),
(
Comment(
"#World",
),
7..13,
),
(
NonLogicalNewline,
13..14,
),
NonLogicalNewline,
]


@@ -0,0 +1,142 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: newline_in_brackets_eol(MAC_EOL)
---
[
(
Name {
name: "x",
},
0..1,
),
(
Equal,
2..3,
),
(
Lsqb,
4..5,
),
(
NonLogicalNewline,
5..6,
),
(
NonLogicalNewline,
6..7,
),
(
Int {
value: 1,
},
11..12,
),
(
Comma,
12..13,
),
(
Int {
value: 2,
},
13..14,
),
(
NonLogicalNewline,
14..15,
),
(
Comma,
15..16,
),
(
Lpar,
16..17,
),
(
Int {
value: 3,
},
17..18,
),
(
Comma,
18..19,
),
(
NonLogicalNewline,
19..20,
),
(
Int {
value: 4,
},
20..21,
),
(
Comma,
21..22,
),
(
NonLogicalNewline,
22..23,
),
(
Rpar,
23..24,
),
(
Comma,
24..25,
),
(
Lbrace,
26..27,
),
(
NonLogicalNewline,
27..28,
),
(
Int {
value: 5,
},
28..29,
),
(
Comma,
29..30,
),
(
NonLogicalNewline,
30..31,
),
(
Int {
value: 6,
},
31..32,
),
(
Comma,
32..33,
),
(
Int {
value: 7,
},
35..36,
),
(
Rbrace,
36..37,
),
(
Rsqb,
37..38,
),
(
Newline,
38..39,
),
]


@@ -0,0 +1,142 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: newline_in_brackets_eol(UNIX_EOL)
---
[
(
Name {
name: "x",
},
0..1,
),
(
Equal,
2..3,
),
(
Lsqb,
4..5,
),
(
NonLogicalNewline,
5..6,
),
(
NonLogicalNewline,
6..7,
),
(
Int {
value: 1,
},
11..12,
),
(
Comma,
12..13,
),
(
Int {
value: 2,
},
13..14,
),
(
NonLogicalNewline,
14..15,
),
(
Comma,
15..16,
),
(
Lpar,
16..17,
),
(
Int {
value: 3,
},
17..18,
),
(
Comma,
18..19,
),
(
NonLogicalNewline,
19..20,
),
(
Int {
value: 4,
},
20..21,
),
(
Comma,
21..22,
),
(
NonLogicalNewline,
22..23,
),
(
Rpar,
23..24,
),
(
Comma,
24..25,
),
(
Lbrace,
26..27,
),
(
NonLogicalNewline,
27..28,
),
(
Int {
value: 5,
},
28..29,
),
(
Comma,
29..30,
),
(
NonLogicalNewline,
30..31,
),
(
Int {
value: 6,
},
31..32,
),
(
Comma,
32..33,
),
(
Int {
value: 7,
},
35..36,
),
(
Rbrace,
36..37,
),
(
Rsqb,
37..38,
),
(
Newline,
38..39,
),
]


@@ -0,0 +1,142 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: newline_in_brackets_eol(WINDOWS_EOL)
---
[
(
Name {
name: "x",
},
0..1,
),
(
Equal,
2..3,
),
(
Lsqb,
4..5,
),
(
NonLogicalNewline,
5..7,
),
(
NonLogicalNewline,
7..9,
),
(
Int {
value: 1,
},
13..14,
),
(
Comma,
14..15,
),
(
Int {
value: 2,
},
15..16,
),
(
NonLogicalNewline,
16..18,
),
(
Comma,
18..19,
),
(
Lpar,
19..20,
),
(
Int {
value: 3,
},
20..21,
),
(
Comma,
21..22,
),
(
NonLogicalNewline,
22..24,
),
(
Int {
value: 4,
},
24..25,
),
(
Comma,
25..26,
),
(
NonLogicalNewline,
26..28,
),
(
Rpar,
28..29,
),
(
Comma,
29..30,
),
(
Lbrace,
31..32,
),
(
NonLogicalNewline,
32..34,
),
(
Int {
value: 5,
},
34..35,
),
(
Comma,
35..36,
),
(
NonLogicalNewline,
36..38,
),
(
Int {
value: 6,
},
38..39,
),
(
Comma,
39..40,
),
(
Int {
value: 7,
},
43..44,
),
(
Rbrace,
44..45,
),
(
Rsqb,
45..46,
),
(
Newline,
46..48,
),
]


@@ -3,32 +3,68 @@ source: crates/ruff_python_parser/src/lexer.rs
expression: lex_source(source)
---
[
Lpar,
NonLogicalNewline,
String {
value: "a",
kind: String,
triple_quoted: false,
},
NonLogicalNewline,
String {
value: "b",
kind: String,
triple_quoted: false,
},
NonLogicalNewline,
NonLogicalNewline,
String {
value: "c",
kind: String,
triple_quoted: false,
},
String {
value: "d",
kind: String,
triple_quoted: false,
},
NonLogicalNewline,
Rpar,
Newline,
(
Lpar,
0..1,
),
(
NonLogicalNewline,
1..2,
),
(
String {
value: "a",
kind: String,
triple_quoted: false,
},
6..9,
),
(
NonLogicalNewline,
9..10,
),
(
String {
value: "b",
kind: String,
triple_quoted: false,
},
14..17,
),
(
NonLogicalNewline,
17..18,
),
(
NonLogicalNewline,
18..19,
),
(
String {
value: "c",
kind: String,
triple_quoted: false,
},
23..26,
),
(
String {
value: "d",
kind: String,
triple_quoted: false,
},
33..36,
),
(
NonLogicalNewline,
36..37,
),
(
Rpar,
37..38,
),
(
Newline,
38..38,
),
]


@@ -3,40 +3,76 @@ source: crates/ruff_python_parser/src/lexer.rs
expression: lex_source(source)
---
[
Int {
value: 47,
},
Int {
value: 10,
},
Int {
value: 13,
},
Int {
value: 0,
},
Int {
value: 123,
},
Int {
value: 1234567890,
},
Float {
value: 0.2,
},
Float {
value: 100.0,
},
Float {
value: 2100.0,
},
Complex {
real: 0.0,
imag: 2.0,
},
Complex {
real: 0.0,
imag: 2.2,
},
Newline,
(
Int {
value: 47,
},
0..4,
),
(
Int {
value: 10,
},
5..9,
),
(
Int {
value: 13,
},
10..16,
),
(
Int {
value: 0,
},
17..18,
),
(
Int {
value: 123,
},
19..22,
),
(
Int {
value: 1234567890,
},
23..36,
),
(
Float {
value: 0.2,
},
37..40,
),
(
Float {
value: 100.0,
},
41..45,
),
(
Float {
value: 2100.0,
},
46..51,
),
(
Complex {
real: 0.0,
imag: 2.0,
},
52..54,
),
(
Complex {
real: 0.0,
imag: 2.2,
},
55..59,
),
(
Newline,
59..59,
),
]


@@ -3,10 +3,28 @@ source: crates/ruff_python_parser/src/lexer.rs
expression: lex_source(source)
---
[
DoubleSlash,
DoubleSlash,
DoubleSlashEqual,
Slash,
Slash,
Newline,
(
DoubleSlash,
0..2,
),
(
DoubleSlash,
2..4,
),
(
DoubleSlashEqual,
4..7,
),
(
Slash,
7..8,
),
(
Slash,
9..10,
),
(
Newline,
10..10,
),
]


@@ -3,50 +3,80 @@ source: crates/ruff_python_parser/src/lexer.rs
expression: lex_source(source)
---
[
String {
value: "double",
kind: String,
triple_quoted: false,
},
String {
value: "single",
kind: String,
triple_quoted: false,
},
String {
value: "can\\'t",
kind: String,
triple_quoted: false,
},
String {
value: "\\\\\\\"",
kind: String,
triple_quoted: false,
},
String {
value: "\\t\\r\\n",
kind: String,
triple_quoted: false,
},
String {
value: "\\g",
kind: String,
triple_quoted: false,
},
String {
value: "raw\\'",
kind: RawString,
triple_quoted: false,
},
String {
value: "\\420",
kind: String,
triple_quoted: false,
},
String {
value: "\\200\\0a",
kind: String,
triple_quoted: false,
},
Newline,
(
String {
value: "double",
kind: String,
triple_quoted: false,
},
0..8,
),
(
String {
value: "single",
kind: String,
triple_quoted: false,
},
9..17,
),
(
String {
value: "can\\'t",
kind: String,
triple_quoted: false,
},
18..26,
),
(
String {
value: "\\\\\\\"",
kind: String,
triple_quoted: false,
},
27..33,
),
(
String {
value: "\\t\\r\\n",
kind: String,
triple_quoted: false,
},
34..42,
),
(
String {
value: "\\g",
kind: String,
triple_quoted: false,
},
43..47,
),
(
String {
value: "raw\\'",
kind: RawString,
triple_quoted: false,
},
48..56,
),
(
String {
value: "\\420",
kind: String,
triple_quoted: false,
},
57..63,
),
(
String {
value: "\\200\\0a",
kind: String,
triple_quoted: false,
},
64..73,
),
(
Newline,
73..73,
),
]


@@ -0,0 +1,18 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: string_continuation_with_eol(MAC_EOL)
---
[
(
String {
value: "abc\\\rdef",
kind: String,
triple_quoted: false,
},
0..10,
),
(
Newline,
10..10,
),
]


@@ -0,0 +1,18 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: string_continuation_with_eol(UNIX_EOL)
---
[
(
String {
value: "abc\\\ndef",
kind: String,
triple_quoted: false,
},
0..10,
),
(
Newline,
10..10,
),
]


@@ -0,0 +1,18 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: string_continuation_with_eol(WINDOWS_EOL)
---
[
(
String {
value: "abc\\\r\ndef",
kind: String,
triple_quoted: false,
},
0..11,
),
(
Newline,
11..11,
),
]


@@ -0,0 +1,18 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: triple_quoted_eol(MAC_EOL)
---
[
(
String {
value: "\r test string\r ",
kind: String,
triple_quoted: true,
},
0..21,
),
(
Newline,
21..21,
),
]


@@ -0,0 +1,18 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: triple_quoted_eol(UNIX_EOL)
---
[
(
String {
value: "\n test string\n ",
kind: String,
triple_quoted: true,
},
0..21,
),
(
Newline,
21..21,
),
]


@@ -0,0 +1,18 @@
---
source: crates/ruff_python_parser/src/lexer.rs
expression: triple_quoted_eol(WINDOWS_EOL)
---
[
(
String {
value: "\r\n test string\r\n ",
kind: String,
triple_quoted: true,
},
0..23,
),
(
Newline,
23..23,
),
]


@@ -440,14 +440,17 @@ impl Configuration {
}
pub fn as_rule_table(&self) -> RuleTable {
let preview = self.preview.unwrap_or_default();
// The select_set keeps track of which rules have been selected.
let mut select_set: RuleSet = defaults::PREFIXES.iter().flatten().collect();
// The fixable set keeps track of which rules are fixable.
let mut fixable_set: RuleSet = RuleSelector::All
.into_iter()
.chain(&RuleSelector::Nursery)
let mut select_set: RuleSet = defaults::PREFIXES
.iter()
.flat_map(|selector| selector.rules(preview))
.collect();
// The fixable set keeps track of which rules are fixable.
let mut fixable_set: RuleSet = RuleSelector::All.rules(preview).collect();
// Ignores normally only subtract from the current set of selected
// rules. By that logic the ignore in `select = [], ignore = ["E501"]`
// would be effectless. Instead we carry over the ignores to the next
@@ -482,7 +485,7 @@ impl Configuration {
.chain(selection.extend_select.iter())
.filter(|s| s.specificity() == spec)
{
for rule in selector {
for rule in selector.rules(preview) {
select_map_updates.insert(rule, true);
}
}
@@ -492,7 +495,7 @@ impl Configuration {
.chain(carriedover_ignores.into_iter().flatten())
.filter(|s| s.specificity() == spec)
{
for rule in selector {
for rule in selector.rules(preview) {
select_map_updates.insert(rule, false);
}
}
@@ -504,7 +507,7 @@ impl Configuration {
.chain(selection.extend_fixable.iter())
.filter(|s| s.specificity() == spec)
{
for rule in selector {
for rule in selector.rules(preview) {
fixable_map_updates.insert(rule, true);
}
}
@@ -514,7 +517,7 @@ impl Configuration {
.chain(carriedover_unfixables.into_iter().flatten())
.filter(|s| s.specificity() == spec)
{
for rule in selector {
for rule in selector.rules(preview) {
fixable_map_updates.insert(rule, false);
}
}
@@ -761,26 +764,122 @@ pub fn resolve_src(src: &[String], project_root: &Path) -> Result<Vec<PathBuf>>
#[cfg(test)]
mod tests {
use crate::configuration::{Configuration, RuleSelection};
use ruff::codes::Pycodestyle;
use ruff::registry::{Rule, RuleSet};
use ruff::codes::{Flake8Copyright, Pycodestyle};
use ruff::registry::{Linter, Rule, RuleSet};
use ruff::settings::types::PreviewMode;
use ruff::RuleSelector;
const NURSERY_RULES: &[Rule] = &[
Rule::MissingCopyrightNotice,
Rule::IndentationWithInvalidMultiple,
Rule::NoIndentedBlock,
Rule::UnexpectedIndentation,
Rule::IndentationWithInvalidMultipleComment,
Rule::NoIndentedBlockComment,
Rule::UnexpectedIndentationComment,
Rule::OverIndented,
Rule::WhitespaceAfterOpenBracket,
Rule::WhitespaceBeforeCloseBracket,
Rule::WhitespaceBeforePunctuation,
Rule::WhitespaceBeforeParameters,
Rule::MultipleSpacesBeforeOperator,
Rule::MultipleSpacesAfterOperator,
Rule::TabBeforeOperator,
Rule::TabAfterOperator,
Rule::MissingWhitespaceAroundOperator,
Rule::MissingWhitespaceAroundArithmeticOperator,
Rule::MissingWhitespaceAroundBitwiseOrShiftOperator,
Rule::MissingWhitespaceAroundModuloOperator,
Rule::MissingWhitespace,
Rule::MultipleSpacesAfterComma,
Rule::TabAfterComma,
Rule::UnexpectedSpacesAroundKeywordParameterEquals,
Rule::MissingWhitespaceAroundParameterEquals,
Rule::TooFewSpacesBeforeInlineComment,
Rule::NoSpaceAfterInlineComment,
Rule::NoSpaceAfterBlockComment,
Rule::MultipleLeadingHashesForBlockComment,
Rule::MultipleSpacesAfterKeyword,
Rule::MultipleSpacesBeforeKeyword,
Rule::TabAfterKeyword,
Rule::TabBeforeKeyword,
Rule::MissingWhitespaceAfterKeyword,
Rule::CompareToEmptyString,
Rule::NoSelfUse,
Rule::EqWithoutHash,
Rule::BadDunderMethodName,
Rule::RepeatedAppend,
Rule::DeleteFullSlice,
Rule::CheckAndRemoveFromSet,
Rule::QuadraticListSummation,
];
#[allow(clippy::needless_pass_by_value)]
fn resolve_rules(selections: impl IntoIterator<Item = RuleSelection>) -> RuleSet {
fn resolve_rules(
selections: impl IntoIterator<Item = RuleSelection>,
preview: Option<PreviewMode>,
) -> RuleSet {
Configuration {
rule_selections: selections.into_iter().collect(),
preview,
..Configuration::default()
}
.as_rule_table()
.iter_enabled()
// Filter out the rule gated behind `#[cfg(feature = "unreachable-code")]`, which is off by default
.filter(|rule| rule.noqa_code() != "RUF014")
.collect()
}
#[test]
fn rule_codes() {
let actual = resolve_rules([RuleSelection {
select: Some(vec![Pycodestyle::W.into()]),
..RuleSelection::default()
}]);
fn select_linter() {
let actual = resolve_rules(
[RuleSelection {
select: Some(vec![Linter::Pycodestyle.into()]),
..RuleSelection::default()
}],
None,
);
let expected = RuleSet::from_rules(&[
Rule::MixedSpacesAndTabs,
Rule::MultipleImportsOnOneLine,
Rule::ModuleImportNotAtTopOfFile,
Rule::LineTooLong,
Rule::MultipleStatementsOnOneLineColon,
Rule::MultipleStatementsOnOneLineSemicolon,
Rule::UselessSemicolon,
Rule::NoneComparison,
Rule::TrueFalseComparison,
Rule::NotInTest,
Rule::NotIsTest,
Rule::TypeComparison,
Rule::BareExcept,
Rule::LambdaAssignment,
Rule::AmbiguousVariableName,
Rule::AmbiguousClassName,
Rule::AmbiguousFunctionName,
Rule::IOError,
Rule::SyntaxError,
Rule::TabIndentation,
Rule::TrailingWhitespace,
Rule::MissingNewlineAtEndOfFile,
Rule::BlankLineWithWhitespace,
Rule::DocLineTooLong,
Rule::InvalidEscapeSequence,
]);
assert_eq!(actual, expected);
}
#[test]
fn select_one_char_prefix() {
let actual = resolve_rules(
[RuleSelection {
select: Some(vec![Pycodestyle::W.into()]),
..RuleSelection::default()
}],
None,
);
let expected = RuleSet::from_rules(&[
Rule::TrailingWhitespace,
@@ -791,19 +890,31 @@ mod tests {
Rule::TabIndentation,
]);
assert_eq!(actual, expected);
}
let actual = resolve_rules([RuleSelection {
select: Some(vec![Pycodestyle::W6.into()]),
..RuleSelection::default()
}]);
#[test]
fn select_two_char_prefix() {
let actual = resolve_rules(
[RuleSelection {
select: Some(vec![Pycodestyle::W6.into()]),
..RuleSelection::default()
}],
None,
);
let expected = RuleSet::from_rule(Rule::InvalidEscapeSequence);
assert_eq!(actual, expected);
}
let actual = resolve_rules([RuleSelection {
select: Some(vec![Pycodestyle::W.into()]),
ignore: vec![Pycodestyle::W292.into()],
..RuleSelection::default()
}]);
#[test]
fn select_prefix_ignore_code() {
let actual = resolve_rules(
[RuleSelection {
select: Some(vec![Pycodestyle::W.into()]),
ignore: vec![Pycodestyle::W292.into()],
..RuleSelection::default()
}],
None,
);
let expected = RuleSet::from_rules(&[
Rule::TrailingWhitespace,
Rule::BlankLineWithWhitespace,
@@ -812,73 +923,100 @@ mod tests {
Rule::TabIndentation,
]);
assert_eq!(actual, expected);
}
let actual = resolve_rules([RuleSelection {
select: Some(vec![Pycodestyle::W292.into()]),
ignore: vec![Pycodestyle::W.into()],
..RuleSelection::default()
}]);
let expected = RuleSet::from_rule(Rule::MissingNewlineAtEndOfFile);
assert_eq!(actual, expected);
let actual = resolve_rules([RuleSelection {
select: Some(vec![Pycodestyle::W605.into()]),
ignore: vec![Pycodestyle::W605.into()],
..RuleSelection::default()
}]);
let expected = RuleSet::empty();
assert_eq!(actual, expected);
let actual = resolve_rules([
RuleSelection {
select: Some(vec![Pycodestyle::W.into()]),
ignore: vec![Pycodestyle::W292.into()],
..RuleSelection::default()
},
RuleSelection {
extend_select: vec![Pycodestyle::W292.into()],
..RuleSelection::default()
},
]);
let expected = RuleSet::from_rules(&[
Rule::TrailingWhitespace,
Rule::MissingNewlineAtEndOfFile,
Rule::BlankLineWithWhitespace,
Rule::DocLineTooLong,
Rule::InvalidEscapeSequence,
Rule::TabIndentation,
]);
assert_eq!(actual, expected);
let actual = resolve_rules([
RuleSelection {
select: Some(vec![Pycodestyle::W.into()]),
ignore: vec![Pycodestyle::W292.into()],
..RuleSelection::default()
},
RuleSelection {
extend_select: vec![Pycodestyle::W292.into()],
#[test]
fn select_code_ignore_prefix() {
let actual = resolve_rules(
[RuleSelection {
select: Some(vec![Pycodestyle::W292.into()]),
ignore: vec![Pycodestyle::W.into()],
..RuleSelection::default()
},
]);
}],
None,
);
let expected = RuleSet::from_rule(Rule::MissingNewlineAtEndOfFile);
assert_eq!(actual, expected);
}
#[test]
fn carry_over_ignore() {
let actual = resolve_rules([
RuleSelection {
select: Some(vec![]),
ignore: vec![Pycodestyle::W292.into()],
fn select_code_ignore_code() {
let actual = resolve_rules(
[RuleSelection {
select: Some(vec![Pycodestyle::W605.into()]),
ignore: vec![Pycodestyle::W605.into()],
..RuleSelection::default()
},
RuleSelection {
select: Some(vec![Pycodestyle::W.into()]),
..RuleSelection::default()
},
}],
None,
);
let expected = RuleSet::empty();
assert_eq!(actual, expected);
}
#[test]
fn select_prefix_ignore_code_then_extend_select_code() {
let actual = resolve_rules(
[
RuleSelection {
select: Some(vec![Pycodestyle::W.into()]),
ignore: vec![Pycodestyle::W292.into()],
..RuleSelection::default()
},
RuleSelection {
extend_select: vec![Pycodestyle::W292.into()],
..RuleSelection::default()
},
],
None,
);
let expected = RuleSet::from_rules(&[
Rule::TrailingWhitespace,
Rule::MissingNewlineAtEndOfFile,
Rule::BlankLineWithWhitespace,
Rule::DocLineTooLong,
Rule::InvalidEscapeSequence,
Rule::TabIndentation,
]);
assert_eq!(actual, expected);
}
#[test]
fn select_prefix_ignore_code_then_extend_select_code_ignore_prefix() {
let actual = resolve_rules(
[
RuleSelection {
select: Some(vec![Pycodestyle::W.into()]),
ignore: vec![Pycodestyle::W292.into()],
..RuleSelection::default()
},
RuleSelection {
extend_select: vec![Pycodestyle::W292.into()],
ignore: vec![Pycodestyle::W.into()],
..RuleSelection::default()
},
],
None,
);
let expected = RuleSet::from_rule(Rule::MissingNewlineAtEndOfFile);
assert_eq!(actual, expected);
}
#[test]
fn ignore_code_then_select_prefix() {
let actual = resolve_rules(
[
RuleSelection {
select: Some(vec![]),
ignore: vec![Pycodestyle::W292.into()],
..RuleSelection::default()
},
RuleSelection {
select: Some(vec![Pycodestyle::W.into()]),
..RuleSelection::default()
},
],
None,
);
let expected = RuleSet::from_rules(&[
Rule::TrailingWhitespace,
Rule::BlankLineWithWhitespace,
@@ -887,19 +1025,25 @@ mod tests {
Rule::TabIndentation,
]);
assert_eq!(actual, expected);
}
let actual = resolve_rules([
RuleSelection {
select: Some(vec![]),
ignore: vec![Pycodestyle::W292.into()],
..RuleSelection::default()
},
RuleSelection {
select: Some(vec![Pycodestyle::W.into()]),
ignore: vec![Pycodestyle::W505.into()],
..RuleSelection::default()
},
]);
#[test]
fn ignore_code_then_select_prefix_ignore_code() {
let actual = resolve_rules(
[
RuleSelection {
select: Some(vec![]),
ignore: vec![Pycodestyle::W292.into()],
..RuleSelection::default()
},
RuleSelection {
select: Some(vec![Pycodestyle::W.into()]),
ignore: vec![Pycodestyle::W505.into()],
..RuleSelection::default()
},
],
None,
);
let expected = RuleSet::from_rules(&[
Rule::TrailingWhitespace,
Rule::BlankLineWithWhitespace,
@@ -908,4 +1052,124 @@ mod tests {
]);
assert_eq!(actual, expected);
}
#[test]
fn select_linter_preview() {
let actual = resolve_rules(
[RuleSelection {
select: Some(vec![Linter::Flake8Copyright.into()]),
..RuleSelection::default()
}],
Some(PreviewMode::Disabled),
);
let expected = RuleSet::empty();
assert_eq!(actual, expected);
let actual = resolve_rules(
[RuleSelection {
select: Some(vec![Linter::Flake8Copyright.into()]),
..RuleSelection::default()
}],
Some(PreviewMode::Enabled),
);
let expected = RuleSet::from_rule(Rule::MissingCopyrightNotice);
assert_eq!(actual, expected);
}
#[test]
fn select_prefix_preview() {
let actual = resolve_rules(
[RuleSelection {
select: Some(vec![Flake8Copyright::_0.into()]),
..RuleSelection::default()
}],
Some(PreviewMode::Disabled),
);
let expected = RuleSet::empty();
assert_eq!(actual, expected);
let actual = resolve_rules(
[RuleSelection {
select: Some(vec![Flake8Copyright::_0.into()]),
..RuleSelection::default()
}],
Some(PreviewMode::Enabled),
);
let expected = RuleSet::from_rule(Rule::MissingCopyrightNotice);
assert_eq!(actual, expected);
}
#[test]
fn select_preview() {
let actual = resolve_rules(
[RuleSelection {
select: Some(vec![RuleSelector::Preview]),
..RuleSelection::default()
}],
Some(PreviewMode::Disabled),
);
let expected = RuleSet::empty();
assert_eq!(actual, expected);
let actual = resolve_rules(
[RuleSelection {
select: Some(vec![RuleSelector::Preview]),
..RuleSelection::default()
}],
Some(PreviewMode::Enabled),
);
let expected = RuleSet::from_rules(NURSERY_RULES);
assert_eq!(actual, expected);
}
#[test]
fn nursery_select_code() {
// Backwards compatible behavior allows selection of nursery rules with their exact code
// when preview is disabled
let actual = resolve_rules(
[RuleSelection {
select: Some(vec![Flake8Copyright::_001.into()]),
..RuleSelection::default()
}],
Some(PreviewMode::Disabled),
);
let expected = RuleSet::from_rule(Rule::MissingCopyrightNotice);
assert_eq!(actual, expected);
let actual = resolve_rules(
[RuleSelection {
select: Some(vec![Flake8Copyright::_001.into()]),
..RuleSelection::default()
}],
Some(PreviewMode::Enabled),
);
let expected = RuleSet::from_rule(Rule::MissingCopyrightNotice);
assert_eq!(actual, expected);
}
#[test]
#[allow(deprecated)]
fn select_nursery() {
// Backwards compatible behavior allows selection of nursery rules with the nursery selector
// when preview is disabled
let actual = resolve_rules(
[RuleSelection {
select: Some(vec![RuleSelector::Nursery]),
..RuleSelection::default()
}],
Some(PreviewMode::Disabled),
);
let expected = RuleSet::from_rules(NURSERY_RULES);
assert_eq!(actual, expected);
let actual = resolve_rules(
[RuleSelection {
select: Some(vec![RuleSelector::Nursery]),
..RuleSelection::default()
}],
Some(PreviewMode::Enabled),
);
let expected = RuleSet::from_rules(NURSERY_RULES);
assert_eq!(actual, expected);
}
}


@@ -157,7 +157,6 @@ mod tests {
use crate::tests::test_resource_path;
use anyhow::Result;
use ruff::codes;
use ruff::codes::RuleCodePrefix;
use ruff::line_width::LineLength;
use ruff::settings::types::PatternPrefixPair;
use rustc_hash::FxHashMap;
@@ -307,7 +306,7 @@ other-attribute = 1
]),
per_file_ignores: Some(FxHashMap::from_iter([(
"__init__.py".to_string(),
vec![RuleCodePrefix::Pyflakes(codes::Pyflakes::_401).into()]
vec![codes::Pyflakes::_401.into()]
)])),
..Options::default()
}


@@ -212,6 +212,8 @@ Options:
Specify file to write the linter output to (default: stdout)
--target-version <TARGET_VERSION>
The minimum Python version that should be supported [possible values: py37, py38, py39, py310, py311, py312]
--preview
Enable preview mode; checks will include unstable rules and fixes
--config <CONFIG>
Path to the `pyproject.toml` or `ruff.toml` file to use for configuration
--statistics


@@ -382,37 +382,9 @@ matter how they're provided, which avoids accidental incompatibilities and simpl
By default, no `convention` is set, and so the enabled rules are determined by the `select` setting
alone.
## What is the "nursery"?
## What is preview?
The "nursery" is a collection of newer rules that are considered experimental or unstable.
If a rule is marked as part of the "nursery", it can only be enabled via direct selection. For
example, consider a hypothetical rule, `HYP001`. If `HYP001` were included in the "nursery", it
could be enabled by adding the following to your `pyproject.toml`:
```toml
[tool.ruff]
extend-select = ["HYP001"]
```
However, it would _not_ be enabled by selecting the `HYP` category, like so:
```toml
[tool.ruff]
extend-select = ["HYP"]
```
Similarly, it would _not_ be enabled via the `ALL` selector:
```toml
[tool.ruff]
select = ["ALL"]
```
(The "nursery" terminology comes from [Clippy](https://doc.rust-lang.org/nightly/clippy/), a similar
tool for linting Rust code.)
To see which rules are currently in the "nursery", visit the [rules reference](https://beta.ruff.rs/docs/rules/).
Preview enables a collection of newer rules and fixes that are considered experimental or unstable. See the [preview documentation](https://beta.ruff.rs/docs/preview/) for more details; or, to see which rules are currently in preview, visit the [rules reference](https://beta.ruff.rs/docs/rules/).
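As an illustrative sketch (not part of the original FAQ hunk), preview mode can be enabled in project configuration via the top-level `preview` setting introduced alongside the `--preview` CLI flag shown earlier in this diff; the exact rule codes that become eligible are assumptions here and depend on the release:
```toml
[tool.ruff]
# Enable preview mode so that preview-only rules and fixes become eligible for selection.
preview = true
# Hypothetical selection: with preview enabled, preview-gated rules can be picked up
# by their usual selectors (e.g. a linter prefix) rather than requiring exact codes.
extend-select = ["CPY"]
```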
## How can I tell what settings Ruff is using to check my code?

Some files were not shown because too many files have changed in this diff.