Compare commits

...

37 Commits

Author SHA1 Message Date
Zanie Blue
b9a5a32e6e Increase recursion limit in ruff_linter 2024-03-12 17:40:47 -05:00
Charlie Marsh
c56fb6e15a Sort hash maps in Settings display (#10370)
## Summary

We had a report of a test failure on a specific architecture, and
looking into it, I think the test assumes that the hash keys are
iterated in a specific order. This PR thus adds a variant to our
settings display macro specifically for maps and sets. Like `CacheKey`,
it sorts the keys when printing.
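
As an illustration of the approach, here is a minimal Python sketch of the idea (not the actual Rust display macro):

```python
# Minimal sketch: render map entries in sorted key order so the display
# output is deterministic regardless of hash iteration order.
def display_map(mapping: dict) -> str:
    entries = [f"\t{key} = {value}" for key, value in sorted(mapping.items())]
    return "{\n" + "\n".join(entries) + "\n}"

print(display_map({"pandas": "pd", "numpy": "np", "altair": "alt"}))
```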

Closes https://github.com/astral-sh/ruff/issues/10359.
2024-03-12 15:59:38 -04:00
Charlie Marsh
dbf82233b8 Gate f-string struct size test for Rustc < 1.76 (#10371)
Closes https://github.com/astral-sh/ruff/issues/10319.
2024-03-12 15:46:36 -04:00
Zanie Blue
87afe36c87 Add test case for F401 in __init__ files (#10364)
In preparation for https://github.com/astral-sh/ruff/pull/5845
2024-03-12 13:30:17 -05:00
Alex Waygood
704fefc7ab F821: Fix false negatives in .py files when from __future__ import annotations is active (#10362) 2024-03-12 17:07:44 +00:00
Auguste Lalande
dacec7377c Fix Indexer fails to identify continuation preceded by newline #10351 (#10354)
## Summary

Fixes #10351

It seems the bug was caused by this section of code

b669306c87/crates/ruff_python_index/src/indexer.rs (L55-L58)

It's true that newline tokens cannot be immediately followed by line continuations, but only outside parentheses. For example, this is an exception:
```
(
    1
    \
    + 2)
```

But why was this check put there in the first place? Is it guarding
against something else?



## Test Plan

A new test was added to the indexer
2024-03-12 00:35:41 -04:00
Anuraag (Rag) Agrawal
b669306c87 Fix typo in docs snippt -> snippet (#10353) 2024-03-11 22:33:40 -04:00
Auguste Lalande
b117f33075 [pycodestyle] Implement blank-line-at-end-of-file (W391) (#10243)
## Summary

Implements the [blank line at end of
file](https://pycodestyle.pycqa.org/en/latest/intro.html#error-codes)
rule (W391) from pycodestyle. Renamed to TooManyNewlinesAtEndOfFile for
clarity.
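
For illustration, a file of this shape (my own example, not one of the new fixtures) would be flagged:

```python
def foo() -> None:
    pass

# W391 / too-many-newlines-at-end-of-file: the blank lines below this comment
# leave more than one trailing newline at the end of the file.


```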

## Test Plan

New fixtures have been added

Part of #2402
2024-03-11 22:07:59 -04:00
Auguste Lalande
c746912b9e [pycodestyle] Implement redundant-backslash (E502) (#10292)
## Summary

Implements the
[redundant-backslash](https://pycodestyle.pycqa.org/en/latest/intro.html#error-codes)
rule (E502) from pycodestyle.
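
For example (my own illustration, not the fixture itself), the rule distinguishes backslashes inside and outside brackets:

```python
# E502: inside parentheses the backslash is redundant, because the expression
# already continues across lines without it.
x = (1 + \
     2)

# OK: outside any brackets, the backslash is required to continue the line.
y = 1 + \
    2
```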

## Test Plan

New fixture has been added

Part of #2402
2024-03-11 21:15:06 -04:00
Mathieu Kniewallner
fc7139d9a5 [flake8-bandit]: Implement S610 rule (#10316)
Part of https://github.com/astral-sh/ruff/issues/1646.

## Summary

Implement `S610` rule from `flake8-bandit`. 

Upstream references:
- Implementation:
https://github.com/PyCQA/bandit/blob/1.7.8/bandit/plugins/django_sql_injection.py#L20-L97
- Test cases:
https://github.com/PyCQA/bandit/blob/1.7.8/examples/django_sql_injection_extra.py
- Test assertion:
https://github.com/PyCQA/bandit/blob/1.7.8/tests/functional/test_functional.py#L517-L524

The implementation in `bandit` targets additional arguments (`params`,
`order_by` and `select_params`) but doesn't seem to do anything with
them in the end, so I did not include them in the implementation.

Note that this rule could be prone to false positives, as ideally we
would want to check if `extra()` is tied to a [Django
queryset](https://docs.djangoproject.com/en/5.0/ref/models/querysets/),
but AFAIK Ruff is not able to resolve classes outside of the current
module.
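
Abridged from the new fixture (shown in full in the diff below), the rule flags `.extra()` calls with dynamic or string-formatted arguments while leaving fully literal ones alone:

```python
from django.contrib.auth.models import User

User.objects.filter(username='admin').extra(select={'test': '%secure' % 'nos'})  # S610
User.objects.filter(username='admin').extra(where=['{}secure'.format('no')])     # S610
User.objects.filter(username='admin').extra(select={'test': 'secure'})           # OK
```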

## Test Plan

Snapshot tests
2024-03-11 20:22:02 -04:00
Charlie Marsh
f8f56186b3 [pylint] Avoid false-positive slot non-assignment for __dict__ (PLE0237) (#10348)
Closes https://github.com/astral-sh/ruff/issues/10306.
2024-03-11 18:48:56 -04:00
Charlie Marsh
02fc521369 Wrap expressions in parentheses when negating (#10346)
## Summary

When negating an expression like `a or b`, we need to wrap it in
parentheses, e.g., `not (a or b)` instead of `not a or b`, due to
operator precedence.
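
A quick check of the precedence argument (a sketch, not part of the PR):

```python
a, b = False, True

# `not a or b` parses as `(not a) or b`, which is not the negation of `a or b`.
assert (not (a or b)) != ((not a) or b)
```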

Closes https://github.com/astral-sh/ruff/issues/10335.

## Test Plan

`cargo test`
2024-03-11 18:20:55 -04:00
Alex Waygood
4b0666919b F821, F822: fix false positive for .pyi files; add more test coverage for .pyi files (#10341)
This PR fixes the following false positive in a `.pyi` stub file:

```py
x: int
y = x  # F821 currently emitted here, but shouldn't be in a stub file
```

In a `.py` file, this is invalid regardless of whether `from __future__ import annotations` is enabled or not. In a `.pyi` stub file, however, it's always valid, as an annotation counts as a binding in a stub file even if no value is assigned to the variable.

I also added more test coverage for `.pyi` stub files in various edge cases where ruff's behaviour is currently correct, but where `.pyi` stub files do slightly different things to `.py` files.
2024-03-11 22:15:24 +00:00
Zanie Blue
06284c3700 Add release script (#10305)
Copied over from `uv`
2024-03-11 16:26:21 -05:00
Hoël Bagard
8d73866f70 [pycodestyle] Do not trigger E225 and E275 when the next token is a ')' (#10315)
## Summary

Fixes #10295.

`E225` (`Missing whitespace around operator`) and `E275` (`Missing whitespace after keyword`) try to add whitespace even when the next character is a `)` (which is a syntax error in most cases, the exceptions already being handled). This causes `E202` (`Whitespace before close bracket`) to try to remove the added whitespace, resulting in an infinite loop as `E225`/`E275` re-add it. This PR adds an exception to `E225` and `E275` so that they do not trigger when the next token is a `)`. It is a bit simplistic, but it solves the example given in the issue without introducing a change in behavior (according to the fixtures).
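
To illustrate the non-convergence described above, here is a hypothetical snippet of that general shape, held as a string because it is intentionally invalid syntax (it is not the actual example from the issue):

```python
# E225/E275 would insert a space between the operator and the `)`, while E202
# would then remove the whitespace before the `)`, so the fixes never settle.
snippet = "x = (a ==)"
```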

## Test Plan

`cargo test` and the `ruff-ecosystem` check were used to verify that the PR's changes do not have side effects.
A new fixture was added to check that running the three rules on the example given in the issue does not cause ruff to fail to converge.
2024-03-11 21:23:18 +00:00
Mathieu Kniewallner
bc693ea13a [flake8-bandit] Implement upstream updates for S311, S324 and S605 (#10313)
## Summary

Pick up updates made in latest
[releases](https://github.com/PyCQA/bandit/releases) of `bandit`:
- `S311`: https://github.com/PyCQA/bandit/pull/940 and
https://github.com/PyCQA/bandit/pull/1096
- `S324`: https://github.com/PyCQA/bandit/pull/1018
- `S605`: https://github.com/PyCQA/bandit/pull/1116

## Test Plan

Snapshot tests
2024-03-11 21:07:58 +00:00
dependabot[bot]
ad84eedc18 Bump chrono from 0.4.34 to 0.4.35 (#10333) 2024-03-11 10:57:30 -04:00
dependabot[bot]
96a4f95a44 Bump js-sys from 0.3.68 to 0.3.69 (#10331) 2024-03-11 10:50:23 -04:00
dependabot[bot]
bae26b49a6 Bump wasm-bindgen from 0.2.91 to 0.2.92 (#10329)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-11 09:29:08 +00:00
dependabot[bot]
3d7adbc0ed Bump unicode_names2 from 1.2.1 to 1.2.2 (#10330)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-11 09:28:25 +00:00
dependabot[bot]
c6456b882c Bump clap from 4.5.1 to 4.5.2 (#10332)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-11 09:26:42 +00:00
dependabot[bot]
49eb97879a Bump the actions group with 1 update (#10334)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-11 09:24:59 +00:00
Jane Lewis
0c84fbb6db ruff server - A new built-in LSP for Ruff, written in Rust (#10158)

## Summary

This PR introduces the `ruff_server` crate and a new `ruff server`
command. `ruff_server` is a re-implementation of
[`ruff-lsp`](https://github.com/astral-sh/ruff-lsp), written entirely in
Rust. It brings significant performance improvements, much tighter
integration with Ruff, a foundation for supporting entirely new language
server features, and more!

This PR is an early version of `ruff_server` that we're calling the **pre-release** version. Anyone is more than welcome to use it and submit bug reports for any issues they encounter - we'll have some documentation on how to set it up with a few common editors, and we'll also provide a pre-release VSCode extension for those interested.

This pre-release version supports:
- **Diagnostics for `.py` files**
- **Quick fixes**
- **Full-file formatting**
- **Range formatting**
- **Multiple workspace folders**
- **Automatic linter/formatter configuration** - taken from any
`pyproject.toml` files in the workspace.

Many thanks to @MichaReiser for his [proof-of-concept
work](https://github.com/astral-sh/ruff/pull/7262), which was important
groundwork for making this PR possible.

## Architectural Decisions

I've made an executive choice to go with `lsp-server` as a base
framework for the LSP, in favor of `tower-lsp`. There were several
reasons for this:

1. I would like to avoid `async` in our implementation. LSPs are mostly computationally bound rather than I/O bound, and `async` adds a lot of complexity to the API, while also making it harder to reason about execution order. This leads into the second reason, which is...
2. Any handlers that mutate state should be blocking and run in the
event loop, and the state should be lock-free. This is the approach that
`rust-analyzer` uses (also with the `lsp-server`/`lsp-types` crates as a
framework), and it gives us assurances about data mutation and execution
order. `tower-lsp` doesn't support this, which has caused some
[issues](https://github.com/ebkalderon/tower-lsp/issues/284) around data
races and out-of-order handler execution.
3. In general, I think it makes sense to have tight control over
scheduling and the specifics of our implementation, in exchange for a
slightly higher up-front cost of writing it ourselves. We'll be able to
fine-tune it to our needs and support future LSP features without
depending on an upstream maintainer.

## Test Plan

The pre-release of `ruff_server` will have snapshot tests for common
document editing scenarios. An expanded test suite is on the roadmap for future versions of `ruff_server`.
2024-03-08 20:57:23 -08:00
Charlie Marsh
a892fc755d Bump version to v0.3.2 (#10304) 2024-03-09 00:24:22 +00:00
Gautier Moin
a067d87ccc Fix incorrect Parameter range for *args and **kwargs (#10283)
## Summary

Fix #10282 

This PR updates the Python grammar to include the `*` character in `*args` and `**kwargs` in the range of the `Parameter`:
```
def f(*args, **kwargs): pass
#      ~~~~    ~~~~~~    <-- range before the PR
#     ^^^^^  ^^^^^^^^    <-- range after
```

The invalid syntax `def f(*, **kwargs): ...` is also now correctly
reported.

## Test Plan

Test cases were added to `function.rs`.
2024-03-08 18:57:49 -05:00
Micha Reiser
b64f2ea401 Formatter: Improve single-with item formatting for Python 3.8 or older (#10276)
## Summary

This PR changes how we format `with` statements with a single with item
for Python 3.8 or older. This change is not compatible with Black.

This is how we format a single-item `with` statement today:

```python
def run(data_path, model_uri):
    with pyspark.sql.SparkSession.builder.config(
        key="spark.python.worker.reuse", value=True
    ).config(key="spark.ui.enabled", value=False).master(
        "local-cluster[2, 1, 1024]"
    ).getOrCreate():
        # ignore spark log output
        spark.sparkContext.setLogLevel("OFF")
        print(score_model(spark, data_path, model_uri))
```

This is different from how we would format the same expression when it appears inside any other clause header (`while`, `if`, ...):

```python
def run(data_path, model_uri):
    while (
        pyspark.sql.SparkSession.builder.config(
            key="spark.python.worker.reuse", value=True
        )
        .config(key="spark.ui.enabled", value=False)
        .master("local-cluster[2, 1, 1024]")
        .getOrCreate()
    ):
        # ignore spark log output
        spark.sparkContext.setLogLevel("OFF")
        print(score_model(spark, data_path, model_uri))

```

This seems inconsistent to me.

This PR changes the formatting of the single-item `with` statement for Python 3.8 or older to match that of other clause headers:

```python
def run(data_path, model_uri):
    with (
        pyspark.sql.SparkSession.builder.config(
            key="spark.python.worker.reuse", value=True
        )
        .config(key="spark.ui.enabled", value=False)
        .master("local-cluster[2, 1, 1024]")
        .getOrCreate()
    ):
        # ignore spark log output
        spark.sparkContext.setLogLevel("OFF")
        print(score_model(spark, data_path, model_uri))
```

According to our versioning policy, this style change is gated behind a
preview flag.

## Test Plan

See added tests.

2024-03-08 23:56:02 +00:00
Micha Reiser
4bce801065 Fix unstable with-items formatting (#10274)
## Summary

Fixes https://github.com/astral-sh/ruff/issues/10267

The issue with the current formatting is that the formatter flips between the `SingleParenthesizedContextManager` layout and `ParenthesizeIfExpands` or `SingleWithTarget` because the layouts use incompatible formatting (`SingleParenthesizedContextManager`: `maybe_parenthesize_expression(context)` vs. `ParenthesizeIfExpands`: `parenthesize_if_expands(item)` and `SingleWithTarget`: `optional_parentheses(item)`).

The fix is to ensure that the layouts between which the formatter flips
when adding or removing parentheses are the same. I do this by
introducing a new `SingleWithoutTarget` layout that uses the same
formatting as `SingleParenthesizedContextManager` if it has no target
and prefer `SingleWithoutTarget` over using `ParenthesizeIfExpands` or
`SingleWithTarget`.

## Formatting change

The downside is that we now use `maybe_parenthesize_expression` over
`parenthesize_if_expands` for expressions where
`can_omit_optional_parentheses` returns `false`. This can lead to stable
formatting changes. I only found one formatting change in our ecosystem
check and, unfortunately, this is necessary to fix the instability (and
instability fixes are okay to have as part of minor changes according to
our versioning policy).

The benefit of the change is that `with` items with a single context
manager and without a target are now formatted identically to how the
same expression would be formatted in other clause headers.

## Test Plan

I ran the ecosystem check locally
2024-03-08 23:48:47 +00:00
Micha Reiser
a56d42f183 Refactor with statement formatting to have explicit layouts (#10296)
## Summary

This PR refactors the with item formatting to use more explicit layouts
to make it easier to understand the different formatting cases.

The benefit of the explicit layout is that it makes it easier to reason about layout transitions between format runs. For example, today it's
possible that `SingleWithTarget` or `ParenthesizeIfExpands` add
parentheses around the with items for `with aaaaaaaaaa + bbbbbbbbbbbb:
pass`, resulting in `with (aaaaaaaaaa + bbbbbbbbbbbb): pass`. The
problem with this is that the next formatting pass uses the
`SingleParenthesizedContextExpression` layout that uses
`maybe_parenthesize_expression` which is different from
`parenthesize_if_expands(&expr)` or `optional_parentheses(&expr)`.

## Test Plan

`cargo test`

I ran the ecosystem checks locally and there are no changes.
2024-03-08 18:40:39 -05:00
Alex Waygood
1d97f27335 Start tracking quoting style in the AST (#10298)
This PR modifies our AST so that nodes for string literals, bytes literals and f-strings all retain the following information:
- The quoting style used (double or single quotes)
- Whether the string is triple-quoted or not
- Whether the string is raw or not

This PR is a followup to #10256. Like with that PR, this PR does not, in itself, fix any bugs. However, it means that we will have the necessary information to preserve quoting style and rawness of strings in the `ExprGenerator` in a followup PR, which will allow us to provide a fix for https://github.com/astral-sh/ruff/issues/7799.

The information is recorded on the AST nodes using a bitflag field on each node, similarly to how we recorded the information on `Tok::String`, `Tok::FStringStart` and `Tok::FStringMiddle` tokens in #10256. Rather than reusing the bitflag I used for the tokens, however, I decided to create a custom bitflag for each AST node.

Using different bitflags for each node allows us to make invalid states unrepresentable: it is valid to set a `u` prefix on a string literal, but not on a bytes literal or an f-string. It also allows us to have better debug representations for each AST node modified in this PR.
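
The string variants whose details are now retained look like this (an illustrative sketch, not taken from the PR):

```python
# Each literal below differs in quoting style, triple-quoting, rawness, or
# prefix, and the corresponding AST nodes now record those details.
s1 = 'single-quoted'
s2 = "double-quoted"
s3 = """triple-quoted"""
s4 = r"raw \string"
b1 = b"bytes literal"
f1 = f"f-string {s1}"
```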
2024-03-08 19:11:47 +00:00
Micha Reiser
965adbed4b Fix trailing kwargs end of line comment after slash (#10297)
## Summary

Fixes the handling of end-of-line comments that belong to `**kwargs` when the `**kwargs` come after a slash.

The issue was that we failed to include the `**kwargs` start position when determining the start of the next node coming after the `/`.
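
A sketch of the affected pattern (the actual reproducer is in the linked issue):

```python
def f(
    a,
    /,
    **kwargs,  # this end-of-line comment on `**kwargs` was previously misplaced
):
    pass
```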

Fixes https://github.com/astral-sh/ruff/issues/10281

## Test Plan

Added test
2024-03-08 14:45:26 +00:00
Alex Waygood
c504d7ab11 Track quoting style in the tokenizer (#10256) 2024-03-08 08:40:06 +00:00
Tom Kuson
72c9f7e4c9 Include actual conditions in E712 diagnostics (#10254)
## Summary

Changes the recommendation for replacing

```python
if foo == True: ...
```

from the generic `if cond:` to the specific `if foo:`.

Still uses a generic message for compound comparisons as a specific
message starts to become confusing. For example,

```python
if foo == True != False: ...
```

produces two recommendations, one of which would recommend `if True:`,
which is confusing.

Resolves [recommendation in a previous
PR](https://github.com/astral-sh/ruff/pull/8613/files#r1514915070).

## Test Plan

`cargo nextest run`
2024-03-08 01:20:56 +00:00
Charlie Marsh
57be3fce90 Treat typing.Annotated subscripts as type definitions (#10285)
## Summary

I think this code has existed since the start of `typing.Annotated`
support (https://github.com/astral-sh/ruff/pull/333), and was then
overlooked over a series of refactors.

Closes https://github.com/astral-sh/ruff/issues/10279.
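
This mirrors the new test fixture (also visible in the diff below): the quoted name inside `Annotated[...]` is a type definition and should resolve as a forward reference rather than be flagged:

```python
from pathlib import Path
from typing import Annotated

p: Annotated["Path", int] = 1  # "Path" is treated as a type definition
```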
2024-03-07 19:51:54 -05:00
Charlie Marsh
7a675cd822 Remove Maturin pin (#10284)
## Summary

As of
https://github.com/pypa/gh-action-pypi-publish/releases/tag/v1.8.13, all
relevant dependencies have been updated to support Metadata 2.2, so we
can remove our Maturin pin.
2024-03-07 19:42:22 -05:00
Samuel Cormier-Iijima
7b4a73d421 Fix E203 false positive for slices in format strings (#10280)
## Summary

The code later in this file that checks for slices relies on the stack
of brackets to determine the position. I'm not sure why format strings
were being excluded from this, but the tests still pass with these match
guards removed.

Closes #10278

## Test Plan

~Still needs a test.~ Test case added for this example.
2024-03-07 17:09:05 -05:00
Charlie Marsh
91af5a4b74 [pyupgrade] Allow fixes for f-string rule regardless of line length (UP032) (#10263)
## Summary

This is a follow-up to https://github.com/astral-sh/ruff/pull/10238 to
offer fixes for the f-string rule regardless of the line length of the
resulting fix. To quote Alex in the linked PR:

> Yes, from the user's perspective I'd rather have a fix that may lead
to line length issues than have to fix them myself :-) Cleaning up line
lengths is easier than changing from `"".format()` to `f""`

I agree with this position, which is that if we're going to offer a
diagnostic, we should really be offering the user the ability to fix it
-- otherwise, we're just inconveniencing them.
2024-03-07 08:59:29 -05:00
Charlie Marsh
461cdad53a Avoid repeating function calls in f-string conversions (#10265)
## Summary

Given a format string like `"{x} {x}".format(x=foo())`, we should avoid
converting to an f-string, since doing so would require repeating the
function call (`f"{foo()} {foo()}"`), which could introduce side
effects.
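
A small sketch of the distinction (mirroring the fixture additions further down):

```python
def foo():
    print("side effect")
    return 1

"{x} {x}".format(x=foo())       # calls foo() once; converting would duplicate it
# f"{foo()} {foo()}"            # the naive conversion repeats the side effect

"{0} {1}".format(foo(), foo())  # already repeats the call, so conversion is safe
```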

Closes https://github.com/astral-sh/ruff/issues/10258.
2024-03-06 23:33:19 -05:00
312 changed files with 39783 additions and 30121 deletions

2
.gitattributes vendored
View File

@@ -2,6 +2,8 @@
crates/ruff_linter/resources/test/fixtures/isort/line_ending_crlf.py text eol=crlf
crates/ruff_linter/resources/test/fixtures/pycodestyle/W605_1.py text eol=crlf
crates/ruff_linter/resources/test/fixtures/pycodestyle/W391_2.py text eol=crlf
crates/ruff_linter/resources/test/fixtures/pycodestyle/W391_3.py text eol=crlf
crates/ruff_python_formatter/resources/test/fixtures/ruff/docstring_code_examples_crlf.py text eol=crlf
crates/ruff_python_formatter/tests/snapshots/format@docstring_code_examples_crlf.py.snap text eol=crlf

View File

@@ -28,7 +28,6 @@ env:
CARGO_NET_RETRY: 10
CARGO_TERM_COLOR: always
RUSTUP_MAX_RETRIES: 10
MATURIN_VERSION: "1.4.0"
jobs:
sdist:
@@ -45,7 +44,6 @@ jobs:
- name: "Build sdist"
uses: PyO3/maturin-action@v1
with:
maturin-version: ${{ env.MATURIN_VERSION }}
command: sdist
args: --out dist
- name: "Test sdist"
@@ -74,7 +72,6 @@ jobs:
- name: "Build wheels - x86_64"
uses: PyO3/maturin-action@v1
with:
maturin-version: ${{ env.MATURIN_VERSION }}
target: x86_64
args: --release --locked --out dist
- name: "Test wheel - x86_64"
@@ -115,7 +112,6 @@ jobs:
- name: "Build wheels - universal2"
uses: PyO3/maturin-action@v1
with:
maturin-version: ${{ env.MATURIN_VERSION }}
args: --release --locked --target universal2-apple-darwin --out dist
- name: "Test wheel - universal2"
run: |
@@ -164,7 +160,6 @@ jobs:
- name: "Build wheels"
uses: PyO3/maturin-action@v1
with:
maturin-version: ${{ env.MATURIN_VERSION }}
target: ${{ matrix.platform.target }}
args: --release --locked --out dist
- name: "Test wheel"
@@ -213,7 +208,6 @@ jobs:
- name: "Build wheels"
uses: PyO3/maturin-action@v1
with:
maturin-version: ${{ env.MATURIN_VERSION }}
target: ${{ matrix.target }}
manylinux: auto
args: --release --locked --out dist
@@ -276,7 +270,6 @@ jobs:
- name: "Build wheels"
uses: PyO3/maturin-action@v1
with:
maturin-version: ${{ env.MATURIN_VERSION }}
target: ${{ matrix.platform.target }}
manylinux: auto
docker-options: ${{ matrix.platform.maturin_docker_options }}
@@ -333,7 +326,6 @@ jobs:
- name: "Build wheels"
uses: PyO3/maturin-action@v1
with:
maturin-version: ${{ env.MATURIN_VERSION }}
target: ${{ matrix.target }}
manylinux: musllinux_1_2
args: --release --locked --out dist
@@ -389,7 +381,6 @@ jobs:
- name: "Build wheels"
uses: PyO3/maturin-action@v1
with:
maturin-version: ${{ env.MATURIN_VERSION }}
target: ${{ matrix.platform.target }}
manylinux: musllinux_1_2
args: --release --locked --out dist
@@ -526,7 +517,7 @@ jobs:
path: binaries
merge-multiple: true
- name: "Publish to GitHub"
uses: softprops/action-gh-release@v1
uses: softprops/action-gh-release@v2
with:
draft: true
files: binaries/*

View File

@@ -1,5 +1,25 @@
# Changelog
## 0.3.2
### Preview features
- Improve single-`with` item formatting for Python 3.8 or older ([#10276](https://github.com/astral-sh/ruff/pull/10276))
### Rule changes
- \[`pyupgrade`\] Allow fixes for f-string rule regardless of line length (`UP032`) ([#10263](https://github.com/astral-sh/ruff/pull/10263))
- \[`pycodestyle`\] Include actual conditions in E712 diagnostics ([#10254](https://github.com/astral-sh/ruff/pull/10254))
### Bug fixes
- Fix trailing kwargs end of line comment after slash ([#10297](https://github.com/astral-sh/ruff/pull/10297))
- Fix unstable `with` items formatting ([#10274](https://github.com/astral-sh/ruff/pull/10274))
- Avoid repeating function calls in f-string conversions ([#10265](https://github.com/astral-sh/ruff/pull/10265))
- Fix E203 false positive for slices in format strings ([#10280](https://github.com/astral-sh/ruff/pull/10280))
- Fix incorrect `Parameter` range for `*args` and `**kwargs` ([#10283](https://github.com/astral-sh/ruff/pull/10283))
- Treat `typing.Annotated` subscripts as type definitions ([#10285](https://github.com/astral-sh/ruff/pull/10285))
## 0.3.1
### Preview features

View File

@@ -329,13 +329,13 @@ even patch releases may contain [non-backwards-compatible changes](https://semve
### Creating a new release
We use an experimental in-house tool for managing releases.
1. Install `rooster`: `pip install git+https://github.com/zanieb/rooster@main`
1. Run `rooster release`; this command will:
1. Install `uv`: `curl -LsSf https://astral.sh/uv/install.sh | sh`
1. Run `./scripts/release/bump.sh`; this command will:
- Generate a temporary virtual environment with `rooster`
- Generate a changelog entry in `CHANGELOG.md`
- Update versions in `pyproject.toml` and `Cargo.toml`
- Update references to versions in the `README.md` and documentation
- Display contributors for the release
1. The changelog should then be editorialized for consistency
- Often labels will be missing from pull requests they will need to be manually organized into the proper section
- Changes should be edited to be user-facing descriptions, avoiding internal details
@@ -359,7 +359,7 @@ We use an experimental in-house tool for managing releases.
1. Open the draft release in the GitHub release section
1. Copy the changelog for the release into the GitHub release
- See previous releases for formatting of section headers
1. Generate the contributor list with `rooster contributors` and add to the release notes
1. Append the contributors from the `bump.sh` script
1. If needed, [update the schemastore](https://github.com/astral-sh/ruff/blob/main/scripts/update_schemastore.py).
1. One can determine if an update is needed when
`git diff old-version-tag new-version-tag -- ruff.schema.json` returns a non-empty diff.

190
Cargo.lock generated
View File

@@ -270,9 +270,9 @@ dependencies = [
[[package]]
name = "chrono"
version = "0.4.34"
version = "0.4.35"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5bc015644b92d5890fab7489e49d21f879d5c990186827d42ec511919404f38b"
checksum = "8eaf5903dcbc0a39312feb77df2ff4c76387d591b9fc7b04a238dcf8bb62639a"
dependencies = [
"android-tzdata",
"iana-time-zone",
@@ -309,9 +309,9 @@ dependencies = [
[[package]]
name = "clap"
version = "4.5.1"
version = "4.5.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c918d541ef2913577a0f9566e9ce27cb35b6df072075769e0b26cb5a554520da"
checksum = "b230ab84b0ffdf890d5a10abdbc8b83ae1c4918275daea1ab8801f71536b2651"
dependencies = [
"clap_builder",
"clap_derive",
@@ -319,9 +319,9 @@ dependencies = [
[[package]]
name = "clap_builder"
version = "4.5.1"
version = "4.5.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9f3e7391dad68afb0c2ede1bf619f579a3dc9c2ec67f089baa397123a2f3d1eb"
checksum = "ae129e2e766ae0ec03484e609954119f123cc1fe650337e155d03b022f24f7b4"
dependencies = [
"anstream",
"anstyle",
@@ -528,6 +528,19 @@ dependencies = [
"itertools 0.10.5",
]
[[package]]
name = "crossbeam"
version = "0.8.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1137cd7e7fc0fb5d3c5a8678be38ec56e819125d8d7907411fe24ccb943faca8"
dependencies = [
"crossbeam-channel",
"crossbeam-deque",
"crossbeam-epoch",
"crossbeam-queue",
"crossbeam-utils",
]
[[package]]
name = "crossbeam-channel"
version = "0.5.12"
@@ -556,6 +569,15 @@ dependencies = [
"crossbeam-utils",
]
[[package]]
name = "crossbeam-queue"
version = "0.3.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df0346b5d5e76ac2fe4e327c5fd1118d6be7c51dfb18f9b7922923f287471e35"
dependencies = [
"crossbeam-utils",
]
[[package]]
name = "crossbeam-utils"
version = "0.8.19"
@@ -1156,10 +1178,16 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b1a46d1a171d865aa5f83f92695765caa047a9b4cbae2cbf37dbd613a793fd4c"
[[package]]
name = "js-sys"
version = "0.3.68"
name = "jod-thread"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "406cda4b368d531c842222cf9d2600a9a4acce8d29423695379c6868a143a9ee"
checksum = "8b23360e99b8717f20aaa4598f5a6541efbe30630039fbc7706cf954a87947ae"
[[package]]
name = "js-sys"
version = "0.3.69"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "29c15563dc2726973df627357ce0c9ddddbea194836909d655df6a75d2cf296d"
dependencies = [
"wasm-bindgen",
]
@@ -1327,6 +1355,31 @@ version = "0.4.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "90ed8c1e510134f979dbc4f070f87d4313098b704861a105fe34231c70a3901c"
[[package]]
name = "lsp-server"
version = "0.7.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "248f65b78f6db5d8e1b1604b4098a28b43d21a8eb1deeca22b1c421b276c7095"
dependencies = [
"crossbeam-channel",
"log",
"serde",
"serde_json",
]
[[package]]
name = "lsp-types"
version = "0.95.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "158c1911354ef73e8fe42da6b10c0484cb65c7f1007f28022e847706c1ab6984"
dependencies = [
"bitflags 1.3.2",
"serde",
"serde_json",
"serde_repr",
"url",
]
[[package]]
name = "matchers"
version = "0.1.0"
@@ -1950,7 +2003,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.3.1"
version = "0.3.2"
dependencies = [
"anyhow",
"argfile",
@@ -1982,6 +2035,7 @@ dependencies = [
"ruff_notebook",
"ruff_python_ast",
"ruff_python_formatter",
"ruff_server",
"ruff_source_file",
"ruff_text_size",
"ruff_workspace",
@@ -1996,6 +2050,8 @@ dependencies = [
"tikv-jemallocator",
"toml",
"tracing",
"tracing-subscriber",
"tracing-tree",
"walkdir",
"wild",
]
@@ -2111,7 +2167,7 @@ dependencies = [
[[package]]
name = "ruff_linter"
version = "0.3.1"
version = "0.3.2"
dependencies = [
"aho-corasick",
"annotate-snippets 0.9.2",
@@ -2360,9 +2416,38 @@ dependencies = [
"unicode-ident",
]
[[package]]
name = "ruff_server"
version = "0.2.2"
dependencies = [
"anyhow",
"crossbeam",
"insta",
"jod-thread",
"libc",
"lsp-server",
"lsp-types",
"ruff_diagnostics",
"ruff_formatter",
"ruff_linter",
"ruff_python_ast",
"ruff_python_codegen",
"ruff_python_formatter",
"ruff_python_index",
"ruff_python_parser",
"ruff_source_file",
"ruff_text_size",
"ruff_workspace",
"rustc-hash",
"serde",
"serde_json",
"similar",
"tracing",
]
[[package]]
name = "ruff_shrinking"
version = "0.3.1"
version = "0.3.2"
dependencies = [
"anyhow",
"clap",
@@ -2631,6 +2716,17 @@ dependencies = [
"serde",
]
[[package]]
name = "serde_repr"
version = "0.1.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0b2e6b945e9d3df726b65d6ee24060aff8e3533d431f677a9695db04eff9dfdb"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.52",
]
[[package]]
name = "serde_spanned"
version = "0.6.5"
@@ -2954,22 +3050,6 @@ dependencies = [
"tikv-jemalloc-sys",
]
[[package]]
name = "time"
version = "0.3.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cd0cbfecb4d19b5ea75bb31ad904eb5b9fa13f21079c3b92017ebdf4999a5890"
dependencies = [
"serde",
"time-core",
]
[[package]]
name = "time-core"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2e153e1f1acaef8acc537e68b44906d2db6436e2b35ac2c6b42640fff91f00fd"
[[package]]
name = "tiny-keccak"
version = "2.0.2"
@@ -3083,6 +3163,17 @@ dependencies = [
"tracing-subscriber",
]
[[package]]
name = "tracing-log"
version = "0.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f751112709b4e791d8ce53e32c4ed2d353565a795ce84da2285393f41557bdf2"
dependencies = [
"log",
"once_cell",
"tracing-core",
]
[[package]]
name = "tracing-log"
version = "0.2.0"
@@ -3109,7 +3200,19 @@ dependencies = [
"thread_local",
"tracing",
"tracing-core",
"tracing-log",
"tracing-log 0.2.0",
]
[[package]]
name = "tracing-tree"
version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2ec6adcab41b1391b08a308cc6302b79f8095d1673f6947c2dc65ffb028b0b2d"
dependencies = [
"nu-ansi-term",
"tracing-core",
"tracing-log 0.1.4",
"tracing-subscriber",
]
[[package]]
@@ -3195,9 +3298,9 @@ checksum = "f962df74c8c05a667b5ee8bcf162993134c104e96440b663c8daa176dc772d8c"
[[package]]
name = "unicode_names2"
version = "1.2.1"
version = "1.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac64ef2f016dc69dfa8283394a70b057066eb054d5fcb6b9eb17bd2ec5097211"
checksum = "addeebf294df7922a1164f729fb27ebbbcea99cc32b3bf08afab62757f707677"
dependencies = [
"phf",
"unicode_names2_generator",
@@ -3205,15 +3308,14 @@ dependencies = [
[[package]]
name = "unicode_names2_generator"
version = "1.2.1"
version = "1.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "013f6a731e80f3930de580e55ba41dfa846de4e0fdee4a701f97989cb1597d6a"
checksum = "f444b8bba042fe3c1251ffaca35c603f2dc2ccc08d595c65a8c4f76f3e8426c0"
dependencies = [
"getopts",
"log",
"phf_codegen",
"rand",
"time",
]
[[package]]
@@ -3352,9 +3454,9 @@ checksum = "9c8d87e72b64a3b4db28d11ce29237c246188f4f51057d65a7eab63b7987e423"
[[package]]
name = "wasm-bindgen"
version = "0.2.91"
version = "0.2.92"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c1e124130aee3fb58c5bdd6b639a0509486b0338acaaae0c84a5124b0f588b7f"
checksum = "4be2531df63900aeb2bca0daaaddec08491ee64ceecbee5076636a3b026795a8"
dependencies = [
"cfg-if",
"wasm-bindgen-macro",
@@ -3362,9 +3464,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-backend"
version = "0.2.91"
version = "0.2.92"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c9e7e1900c352b609c8488ad12639a311045f40a35491fb69ba8c12f758af70b"
checksum = "614d787b966d3989fa7bb98a654e369c762374fd3213d212cfc0251257e747da"
dependencies = [
"bumpalo",
"log",
@@ -3389,9 +3491,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-macro"
version = "0.2.91"
version = "0.2.92"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b30af9e2d358182b5c7449424f017eba305ed32a7010509ede96cdc4696c46ed"
checksum = "a1f8823de937b71b9460c0c34e25f3da88250760bec0ebac694b49997550d726"
dependencies = [
"quote",
"wasm-bindgen-macro-support",
@@ -3399,9 +3501,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-macro-support"
version = "0.2.91"
version = "0.2.92"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "642f325be6301eb8107a83d12a8ac6c1e1c54345a7ef1a9261962dfefda09e66"
checksum = "e94f17b526d0a461a191c78ea52bbce64071ed5c04c9ffe424dcb38f74171bb7"
dependencies = [
"proc-macro2",
"quote",
@@ -3412,9 +3514,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-shared"
version = "0.2.91"
version = "0.2.92"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4f186bd2dcf04330886ce82d6f33dd75a7bfcf69ecf5763b89fcde53b6ac9838"
checksum = "af190c94f2773fdb3729c55b007a722abb5384da03bc0986df4c289bf5567e96"
[[package]]
name = "wasm-bindgen-test"

View File

@@ -21,8 +21,8 @@ bincode = { version = "1.3.3" }
bitflags = { version = "2.4.1" }
bstr = { version = "1.9.1" }
cachedir = { version = "0.3.1" }
chrono = { version = "0.4.34", default-features = false, features = ["clock"] }
clap = { version = "4.5.1", features = ["derive"] }
chrono = { version = "0.4.35", default-features = false, features = ["clock"] }
clap = { version = "4.5.2", features = ["derive"] }
clap_complete_command = { version = "0.5.1" }
clearscreen = { version = "2.0.0" }
codspeed-criterion-compat = { version = "2.4.0", default-features = false }
@@ -32,6 +32,7 @@ console_error_panic_hook = { version = "0.1.7" }
console_log = { version = "1.0.0" }
countme = { version = "3.0.1" }
criterion = { version = "0.5.1", default-features = false }
crossbeam = { version = "0.8.4" }
dirs = { version = "5.0.0" }
drop_bomb = { version = "0.1.5" }
env_logger = { version = "0.10.1" }
@@ -51,11 +52,15 @@ insta-cmd = { version = "0.4.0" }
is-macro = { version = "0.3.5" }
is-wsl = { version = "0.4.0" }
itertools = { version = "0.12.1" }
js-sys = { version = "0.3.67" }
js-sys = { version = "0.3.69" }
jod-thread = { version = "0.1.2" }
lalrpop-util = { version = "0.20.0", default-features = false }
lexical-parse-float = { version = "0.8.0", features = ["format"] }
libc = { version = "0.2.153" }
libcst = { version = "1.1.0", default-features = false }
log = { version = "0.4.17" }
lsp-server = { version = "0.7.6" }
lsp-types = { version = "0.95.0", features = ["proposed"] }
memchr = { version = "2.7.1" }
mimalloc = { version = "0.1.39" }
natord = { version = "1.0.9" }
@@ -97,16 +102,17 @@ toml = { version = "0.8.9" }
tracing = { version = "0.1.40" }
tracing-indicatif = { version = "0.3.6" }
tracing-subscriber = { version = "0.3.18", features = ["env-filter"] }
tracing-tree = { version = "0.2.4" }
typed-arena = { version = "2.0.2" }
unic-ucd-category = { version = "0.9" }
unicode-ident = { version = "1.0.12" }
unicode-width = { version = "0.1.11" }
unicode_names2 = { version = "1.2.1" }
unicode_names2 = { version = "1.2.2" }
ureq = { version = "2.9.6" }
url = { version = "2.5.0" }
uuid = { version = "1.6.1", features = ["v4", "fast-rng", "macro-diagnostics", "js"] }
walkdir = { version = "2.3.2" }
wasm-bindgen = { version = "0.2.84" }
wasm-bindgen = { version = "0.2.92" }
wasm-bindgen-test = { version = "0.3.40" }
wild = { version = "2" }

View File

@@ -151,7 +151,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.3.1
rev: v0.3.2
hooks:
# Run the linter.
- id: ruff

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.3.1"
version = "0.3.2"
publish = false
authors = { workspace = true }
edition = { workspace = true }
@@ -20,6 +20,7 @@ ruff_macros = { path = "../ruff_macros" }
ruff_notebook = { path = "../ruff_notebook" }
ruff_python_ast = { path = "../ruff_python_ast" }
ruff_python_formatter = { path = "../ruff_python_formatter" }
ruff_server = { path = "../ruff_server" }
ruff_source_file = { path = "../ruff_source_file" }
ruff_text_size = { path = "../ruff_text_size" }
ruff_workspace = { path = "../ruff_workspace" }
@@ -52,6 +53,8 @@ tempfile = { workspace = true }
thiserror = { workspace = true }
toml = { workspace = true }
tracing = { workspace = true, features = ["log"] }
tracing-subscriber = { workspace = true, features = ["registry"]}
tracing-tree = { workspace = true }
walkdir = { workspace = true }
wild = { workspace = true }

View File

@@ -126,6 +126,8 @@ pub enum Command {
GenerateShellCompletion { shell: clap_complete_command::Shell },
/// Run the Ruff formatter on the given files or directories.
Format(FormatCommand),
/// Run the language server.
Server(ServerCommand),
/// Display Ruff's version
Version {
#[arg(long, value_enum, default_value = "text")]
@@ -494,6 +496,9 @@ pub struct FormatCommand {
pub range: Option<FormatRange>,
}
#[derive(Clone, Debug, clap::Parser)]
pub struct ServerCommand;
#[derive(Debug, Clone, Copy, clap::ValueEnum)]
pub enum HelpFormat {
Text,

View File

@@ -7,6 +7,7 @@ pub(crate) mod format;
pub(crate) mod format_stdin;
pub(crate) mod linter;
pub(crate) mod rule;
pub(crate) mod server;
pub(crate) mod show_files;
pub(crate) mod show_settings;
pub(crate) mod version;

View File

@@ -0,0 +1,69 @@
use crate::ExitStatus;
use anyhow::Result;
use ruff_linter::logging::LogLevel;
use ruff_server::Server;
use tracing::{level_filters::LevelFilter, metadata::Level, subscriber::Interest, Metadata};
use tracing_subscriber::{
layer::{Context, Filter, SubscriberExt},
Layer, Registry,
};
use tracing_tree::time::Uptime;
pub(crate) fn run_server(log_level: LogLevel) -> Result<ExitStatus> {
let trace_level = if log_level == LogLevel::Verbose {
Level::TRACE
} else {
Level::DEBUG
};
let subscriber = Registry::default().with(
tracing_tree::HierarchicalLayer::default()
.with_indent_lines(true)
.with_indent_amount(2)
.with_bracketed_fields(true)
.with_targets(true)
.with_writer(|| Box::new(std::io::stderr()))
.with_timer(Uptime::default())
.with_filter(LoggingFilter { trace_level }),
);
tracing::subscriber::set_global_default(subscriber)?;
let server = Server::new()?;
server.run().map(|()| ExitStatus::Success)
}
struct LoggingFilter {
trace_level: Level,
}
impl LoggingFilter {
fn is_enabled(&self, meta: &Metadata<'_>) -> bool {
let filter = if meta.target().starts_with("ruff") {
self.trace_level
} else {
Level::INFO
};
meta.level() <= &filter
}
}
impl<S> Filter<S> for LoggingFilter {
fn enabled(&self, meta: &Metadata<'_>, _cx: &Context<'_, S>) -> bool {
self.is_enabled(meta)
}
fn callsite_enabled(&self, meta: &'static Metadata<'static>) -> Interest {
if self.is_enabled(meta) {
Interest::always()
} else {
Interest::never()
}
}
fn max_level_hint(&self) -> Option<LevelFilter> {
Some(LevelFilter::from_level(self.trace_level))
}
}

View File

@@ -7,7 +7,7 @@ use std::process::ExitCode;
use std::sync::mpsc::channel;
use anyhow::Result;
use args::GlobalConfigArgs;
use args::{GlobalConfigArgs, ServerCommand};
use clap::CommandFactory;
use colored::Colorize;
use log::warn;
@@ -190,6 +190,7 @@ pub fn run(
}
Command::Check(args) => check(args, global_options),
Command::Format(args) => format(args, global_options),
Command::Server(args) => server(args, global_options.log_level()),
}
}
@@ -203,6 +204,12 @@ fn format(args: FormatCommand, global_options: GlobalConfigArgs) -> Result<ExitS
}
}
#[allow(clippy::needless_pass_by_value)] // TODO: remove once we start taking arguments from here
fn server(args: ServerCommand, log_level: LogLevel) -> Result<ExitStatus> {
let ServerCommand {} = args;
commands::server::run_server(log_level)
}
pub fn check(args: CheckCommand, global_options: GlobalConfigArgs) -> Result<ExitStatus> {
let (cli, config_arguments) = args.partition(global_options)?;

View File

@@ -241,7 +241,22 @@ linter.flake8_gettext.functions_names = [
ngettext,
]
linter.flake8_implicit_str_concat.allow_multiline = true
linter.flake8_import_conventions.aliases = {"matplotlib": "mpl", "matplotlib.pyplot": "plt", "pandas": "pd", "seaborn": "sns", "tensorflow": "tf", "networkx": "nx", "plotly.express": "px", "polars": "pl", "numpy": "np", "panel": "pn", "pyarrow": "pa", "altair": "alt", "tkinter": "tk", "holoviews": "hv"}
linter.flake8_import_conventions.aliases = {
altair = alt,
holoviews = hv,
matplotlib = mpl,
matplotlib.pyplot = plt,
networkx = nx,
numpy = np,
pandas = pd,
panel = pn,
plotly.express = px,
polars = pl,
pyarrow = pa,
seaborn = sns,
tensorflow = tf,
tkinter = tk,
}
linter.flake8_import_conventions.banned_aliases = {}
linter.flake8_import_conventions.banned_from = []
linter.flake8_pytest_style.fixture_parentheses = true

View File

@@ -545,6 +545,10 @@ impl PrintedRange {
&self.code
}
pub fn into_code(self) -> String {
self.code
}
/// The range the formatted code corresponds to in the source document.
pub fn source_range(&self) -> TextRange {
self.source_range

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.3.1"
version = "0.3.2"
publish = false
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -0,0 +1,22 @@
import os
import random
import a_lib
# OK
random.SystemRandom()
# Errors
random.Random()
random.random()
random.randrange()
random.randint()
random.choice()
random.choices()
random.uniform()
random.triangular()
random.randbytes()
# Unrelated
os.urandom()
a_lib.random()

View File

@@ -1,52 +1,47 @@
import crypt
import hashlib
from hashlib import new as hashlib_new
from hashlib import sha1 as hashlib_sha1
# Invalid
# Errors
hashlib.new('md5')
hashlib.new('md4', b'test')
hashlib.new(name='md5', data=b'test')
hashlib.new('MD4', data=b'test')
hashlib.new('sha1')
hashlib.new('sha1', data=b'test')
hashlib.new('sha', data=b'test')
hashlib.new(name='SHA', data=b'test')
hashlib.sha(data=b'test')
hashlib.md5()
hashlib_new('sha1')
hashlib_sha1('sha1')
# usedforsecurity arg only available in Python 3.9+
hashlib.new('sha1', usedforsecurity=True)
# Valid
crypt.crypt("test", salt=crypt.METHOD_CRYPT)
crypt.crypt("test", salt=crypt.METHOD_MD5)
crypt.crypt("test", salt=crypt.METHOD_BLOWFISH)
crypt.crypt("test", crypt.METHOD_BLOWFISH)
crypt.mksalt(crypt.METHOD_CRYPT)
crypt.mksalt(crypt.METHOD_MD5)
crypt.mksalt(crypt.METHOD_BLOWFISH)
# OK
hashlib.new('sha256')
hashlib.new('SHA512')
hashlib.sha256(data=b'test')
# usedforsecurity arg only available in Python 3.9+
hashlib_new(name='sha1', usedforsecurity=False)
# usedforsecurity arg only available in Python 3.9+
hashlib_sha1(name='sha1', usedforsecurity=False)
# usedforsecurity arg only available in Python 3.9+
hashlib.md4(usedforsecurity=False)
# usedforsecurity arg only available in Python 3.9+
hashlib.new(name='sha256', usedforsecurity=False)
crypt.crypt("test")
crypt.crypt("test", salt=crypt.METHOD_SHA256)
crypt.crypt("test", salt=crypt.METHOD_SHA512)
crypt.mksalt()
crypt.mksalt(crypt.METHOD_SHA256)
crypt.mksalt(crypt.METHOD_SHA512)

View File

@@ -1,4 +1,5 @@
import os
import subprocess
import commands
import popen2
@@ -16,6 +17,8 @@ popen2.Popen3("true")
popen2.Popen4("true")
commands.getoutput("true")
commands.getstatusoutput("true")
subprocess.getoutput("true")
subprocess.getstatusoutput("true")
# Check command argument looks unsafe.

View File

@@ -0,0 +1,34 @@
from django.contrib.auth.models import User
# Errors
User.objects.filter(username='admin').extra(dict(could_be='insecure'))
User.objects.filter(username='admin').extra(select=dict(could_be='insecure'))
User.objects.filter(username='admin').extra(select={'test': '%secure' % 'nos'})
User.objects.filter(username='admin').extra(select={'test': '{}secure'.format('nos')})
User.objects.filter(username='admin').extra(where=['%secure' % 'nos'])
User.objects.filter(username='admin').extra(where=['{}secure'.format('no')])
query = '"username") AS "username", * FROM "auth_user" WHERE 1=1 OR "username"=? --'
User.objects.filter(username='admin').extra(select={'test': query})
where_var = ['1=1) OR 1=1 AND (1=1']
User.objects.filter(username='admin').extra(where=where_var)
where_str = '1=1) OR 1=1 AND (1=1'
User.objects.filter(username='admin').extra(where=[where_str])
tables_var = ['django_content_type" WHERE "auth_user"."username"="admin']
User.objects.all().extra(tables=tables_var).distinct()
tables_str = 'django_content_type" WHERE "auth_user"."username"="admin'
User.objects.all().extra(tables=[tables_str]).distinct()
# OK
User.objects.filter(username='admin').extra(
select={'test': 'secure'},
where=['secure'],
tables=['secure']
)
User.objects.filter(username='admin').extra({'test': 'secure'})
User.objects.filter(username='admin').extra(select={'test': 'secure'})
User.objects.filter(username='admin').extra(where=['secure'])

View File

@@ -14,9 +14,6 @@ reversed(sorted(x, reverse=not x))
reversed(sorted(i for i in range(42)))
reversed(sorted((i for i in range(42)), reverse=True))
def reversed(*args, **kwargs):
return None
reversed(sorted(x, reverse=True))
# Regression test for: https://github.com/astral-sh/ruff/issues/10335
reversed(sorted([1, 2, 3], reverse=False or True))
reversed(sorted([1, 2, 3], reverse=(False or True)))

View File

@@ -40,4 +40,7 @@ f"\'normal\' {f"\'nested\' {"other"} 'single quotes'"} normal" # Q004
# Make sure we do not unescape quotes
this_is_fine = "This is an \\'escaped\\' quote"
this_should_raise_Q004 = "This is an \\\'escaped\\\' quote with an extra backslash"
this_should_raise_Q004 = "This is an \\\'escaped\\\' quote with an extra backslash" # Q004
# Invalid escapes in bytestrings are also triggered:
x = b"\xe7\xeb\x0c\xa1\x1b\x83tN\xce=x\xe9\xbe\x01\xb9\x13B_\xba\xe7\x0c2\xce\'rm\x0e\xcd\xe9.\xf8\xd2" # Q004

View File

@@ -153,3 +153,9 @@ ham[lower +1 :, "columnname"]
#: E203:1:13
ham[lower + 1 :, "columnname"]
#: Okay
f"{ham[lower +1 :, "columnname"]}"
#: E203:1:13
f"{ham[lower + 1 :, "columnname"]}"

View File

@@ -0,0 +1 @@
a = (1 or)

View File

@@ -0,0 +1,88 @@
a = 2 + 2
a = (2 + 2)
a = 2 + \
3 \
+ 4
a = (3 -\
2 + \
7)
z = 5 + \
(3 -\
2 + \
7) + \
4
b = [2 +
2]
b = [
2 + 4 + 5 + \
44 \
- 5
]
c = (True and
False \
or False \
and True \
)
c = (True and
False)
d = True and \
False or \
False \
and not True
s = {
'x': 2 + \
2
}
s = {
'x': 2 +
2
}
x = {2 + 4 \
+ 3}
y = (
2 + 2 # \
+ 3 # \
+ 4 \
+ 3
)
x = """
(\\
)
"""
("""hello \
""")
("hello \
")
x = "abc" \
"xyz"
x = ("abc" \
"xyz")
def foo():
x = (a + \
2)

View File

@@ -0,0 +1,14 @@
# Unix style
def foo() -> None:
pass
def bar() -> None:
pass
if __name__ == '__main__':
foo()
bar()

View File

@@ -0,0 +1,13 @@
# Unix style
def foo() -> None:
pass
def bar() -> None:
pass
if __name__ == '__main__':
foo()
bar()

View File

@@ -0,0 +1,17 @@
# Windows style
def foo() -> None:
pass
def bar() -> None:
pass
if __name__ == '__main__':
foo()
bar()

View File

@@ -0,0 +1,13 @@
# Windows style
def foo() -> None:
pass
def bar() -> None:
pass
if __name__ == '__main__':
foo()
bar()

View File

@@ -0,0 +1,5 @@
# This is fine
def foo():
pass
# Some comment

View File

@@ -0,0 +1,7 @@
"""Test: ensure that we treat strings in `typing.Annotation` as type definitions."""
from pathlib import Path
from re import RegexFlag
from typing import Annotated
p: Annotated["Path", int] = 1

View File

@@ -0,0 +1,16 @@
"""Test case: strings used within calls within type annotations."""
from typing import Callable
import bpy
from mypy_extensions import VarArg
class LightShow(bpy.types.Operator):
label = "Create Character"
name = "lightshow.letter_creation"
filepath: bpy.props.StringProperty(subtype="FILE_PATH") # OK
def f(x: Callable[[VarArg("os")], None]): # F821
pass

View File

@@ -0,0 +1,44 @@
"""Tests for constructs allowed in `.pyi` stub files but not at runtime"""
from typing import Optional, TypeAlias, Union
__version__: str
__author__: str
# Forward references:
MaybeCStr: TypeAlias = Optional[CStr] # valid in a `.pyi` stub file, not in a `.py` runtime file
MaybeCStr2: TypeAlias = Optional["CStr"] # always okay
CStr: TypeAlias = Union[C, str] # valid in a `.pyi` stub file, not in a `.py` runtime file
CStr2: TypeAlias = Union["C", str] # always okay
# References to a class from inside the class:
class C:
other: C = ... # valid in a `.pyi` stub file, not in a `.py` runtime file
other2: "C" = ... # always okay
def from_str(self, s: str) -> C: ... # valid in a `.pyi` stub file, not in a `.py` runtime file
def from_str2(self, s: str) -> "C": ... # always okay
# Circular references:
class A:
foo: B # valid in a `.pyi` stub file, not in a `.py` runtime file
foo2: "B" # always okay
bar: dict[str, B] # valid in a `.pyi` stub file, not in a `.py` runtime file
bar2: dict[str, "A"] # always okay
class B:
foo: A # always okay
bar: dict[str, A] # always okay
class Leaf: ...
class Tree(list[Tree | Leaf]): ... # valid in a `.pyi` stub file, not in a `.py` runtime file
class Tree2(list["Tree | Leaf"]): ... # always okay
# Annotations are treated as assignments in .pyi files, but not in .py files
class MyClass:
foo: int
bar = foo # valid in a `.pyi` stub file, not in a `.py` runtime file
bar = "foo" # always okay
baz: MyClass
eggs = baz # valid in a `.pyi` stub file, not in a `.py` runtime file
eggs = "baz" # always okay

View File

@@ -0,0 +1,44 @@
"""Tests for constructs allowed in `.pyi` stub files but not at runtime"""
from typing import Optional, TypeAlias, Union
__version__: str
__author__: str
# Forward references:
MaybeCStr: TypeAlias = Optional[CStr] # valid in a `.pyi` stub file, not in a `.py` runtime file
MaybeCStr2: TypeAlias = Optional["CStr"] # always okay
CStr: TypeAlias = Union[C, str] # valid in a `.pyi` stub file, not in a `.py` runtime file
CStr2: TypeAlias = Union["C", str] # always okay
# References to a class from inside the class:
class C:
other: C = ... # valid in a `.pyi` stub file, not in a `.py` runtime file
other2: "C" = ... # always okay
def from_str(self, s: str) -> C: ... # valid in a `.pyi` stub file, not in a `.py` runtime file
def from_str2(self, s: str) -> "C": ... # always okay
# Circular references:
class A:
foo: B # valid in a `.pyi` stub file, not in a `.py` runtime file
foo2: "B" # always okay
bar: dict[str, B] # valid in a `.pyi` stub file, not in a `.py` runtime file
bar2: dict[str, "A"] # always okay
class B:
foo: A # always okay
bar: dict[str, A] # always okay
class Leaf: ...
class Tree(list[Tree | Leaf]): ... # valid in a `.pyi` stub file, not in a `.py` runtime file
class Tree2(list["Tree | Leaf"]): ... # always okay
# Annotations are treated as assignments in .pyi files, but not in .py files
class MyClass:
foo: int
bar = foo # valid in a `.pyi` stub file, not in a `.py` runtime file
bar = "foo" # always okay
baz: MyClass
eggs = baz # valid in a `.pyi` stub file, not in a `.py` runtime file
eggs = "baz" # always okay

View File

@@ -0,0 +1,48 @@
"""Tests for constructs allowed when `__future__` annotations are enabled but not otherwise"""
from __future__ import annotations
from typing import Optional, TypeAlias, Union
__version__: str
__author__: str
# References to a class from inside the class:
class C:
other: C = ... # valid when `__future__.annotations are enabled
other2: "C" = ... # always okay
def from_str(self, s: str) -> C: ... # valid when `__future__.annotations are enabled
def from_str2(self, s: str) -> "C": ... # always okay
# Circular references:
class A:
foo: B # valid when `__future__.annotations are enabled
foo2: "B" # always okay
bar: dict[str, B] # valid when `__future__.annotations are enabled
bar2: dict[str, "A"] # always okay
class B:
foo: A # always okay
bar: dict[str, A] # always okay
# Annotations are treated as assignments in .pyi files, but not in .py files
class MyClass:
foo: int
bar = foo # Still invalid even when `__future__.annotations` are enabled
bar = "foo" # always okay
baz: MyClass
eggs = baz # Still invalid even when `__future__.annotations` are enabled
eggs = "baz" # always okay
# Forward references:
MaybeDStr: TypeAlias = Optional[DStr] # Still invalid even when `__future__.annotations` are enabled
MaybeDStr2: TypeAlias = Optional["DStr"] # always okay
DStr: TypeAlias = Union[D, str] # Still invalid even when `__future__.annotations` are enabled
DStr2: TypeAlias = Union["D", str] # always okay
class D: ...
# More circular references
class Leaf: ...
class Tree(list[Tree | Leaf]): ... # Still invalid even when `__future__.annotations` are enabled
class Tree2(list["Tree | Leaf"]): ... # always okay

View File

@@ -0,0 +1,10 @@
"""Test: inner class annotation."""
class RandomClass:
def bad_func(self) -> InnerClass: ... # F821
def good_func(self) -> OuterClass.InnerClass: ... # Okay
class OuterClass:
class InnerClass: ...
def good_func(self) -> InnerClass: ... # Okay

View File

@@ -0,0 +1,4 @@
a = 1
b: int # Considered a binding in a `.pyi` stub file, not in a `.py` runtime file
__all__ = ["a", "b", "c"] # c is flagged as missing; b is not

View File

@@ -54,3 +54,15 @@ class StudentE(StudentD):
def setup(self):
pass
class StudentF(object):
__slots__ = ("name", "__dict__")
def __init__(self, name, middle_name):
self.name = name
self.middle_name = middle_name # [assigning-non-slot]
self.setup()
def setup(self):
pass

View File

@@ -252,3 +252,10 @@ raise ValueError(
# The dictionary should be parenthesized.
"{}".format({0: 1}())
# The string shouldn't be converted, since it would require repeating the function call.
"{x} {x}".format(x=foo())
"{0} {0}".format(foo())
# The string _should_ be converted, since the function call is repeated in the arguments.
"{0} {1}".format(foo(), foo())

View File

@@ -427,7 +427,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
pyupgrade::rules::format_literals(checker, call, &summary);
}
if checker.enabled(Rule::FString) {
pyupgrade::rules::f_strings(checker, call, &summary, value);
pyupgrade::rules::f_strings(checker, call, &summary);
}
}
}
@@ -632,6 +632,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
]) {
flake8_bandit::rules::shell_injection(checker, call);
}
if checker.enabled(Rule::DjangoExtra) {
flake8_bandit::rules::django_extra(checker, call);
}
if checker.enabled(Rule::DjangoRawSql) {
flake8_bandit::rules::django_raw_sql(checker, call);
}

View File

@@ -9,7 +9,7 @@ use crate::rules::{flake8_builtins, pep8_naming, pycodestyle};
pub(crate) fn parameter(parameter: &Parameter, checker: &mut Checker) {
if checker.enabled(Rule::AmbiguousVariableName) {
if let Some(diagnostic) =
pycodestyle::rules::ambiguous_variable_name(&parameter.name, parameter.range())
pycodestyle::rules::ambiguous_variable_name(&parameter.name, parameter.name.range())
{
checker.diagnostics.push(diagnostic);
}

View File

@@ -938,6 +938,7 @@ impl<'a> Visitor<'a> for Checker<'a> {
&& !self.semantic.in_deferred_type_definition()
&& self.semantic.in_type_definition()
&& self.semantic.future_annotations()
&& (self.semantic.in_typing_only_annotation() || self.source_type.is_stub())
{
if let Expr::StringLiteral(ast::ExprStringLiteral { value, .. }) = expr {
self.visit.string_type_definitions.push((
@@ -1348,7 +1349,7 @@ impl<'a> Visitor<'a> for Checker<'a> {
{
let mut iter = elts.iter();
if let Some(expr) = iter.next() {
self.visit_expr(expr);
self.visit_type_definition(expr);
}
for expr in iter {
self.visit_non_type_definition(expr);
@@ -1839,11 +1840,13 @@ impl<'a> Checker<'a> {
flags.insert(BindingFlags::UNPACKED_ASSIGNMENT);
}
// Match the left-hand side of an annotated assignment, like `x` in `x: int`.
// Match the left-hand side of an annotated assignment without a value,
// like `x` in `x: int`. N.B. In stub files, these should be viewed
// as assignments on par with statements such as `x: int = 5`.
if matches!(
parent,
Stmt::AnnAssign(ast::StmtAnnAssign { value: None, .. })
) && !self.semantic.in_annotation()
) && !(self.semantic.in_annotation() || self.source_type.is_stub())
{
self.add_binding(id, expr.range(), BindingKind::Annotation, flags);
return;

View File

@@ -1,6 +1,7 @@
use crate::line_width::IndentWidth;
use ruff_diagnostics::Diagnostic;
use ruff_python_codegen::Stylist;
use ruff_python_index::Indexer;
use ruff_python_parser::lexer::LexResult;
use ruff_python_parser::TokenKind;
use ruff_source_file::Locator;
@@ -9,8 +10,8 @@ use ruff_text_size::{Ranged, TextRange};
use crate::registry::AsRule;
use crate::rules::pycodestyle::rules::logical_lines::{
extraneous_whitespace, indentation, missing_whitespace, missing_whitespace_after_keyword,
missing_whitespace_around_operator, space_after_comma, space_around_operator,
whitespace_around_keywords, whitespace_around_named_parameter_equals,
missing_whitespace_around_operator, redundant_backslash, space_after_comma,
space_around_operator, whitespace_around_keywords, whitespace_around_named_parameter_equals,
whitespace_before_comment, whitespace_before_parameters, LogicalLines, TokenFlags,
};
use crate::settings::LinterSettings;
@@ -35,6 +36,7 @@ pub(crate) fn expand_indent(line: &str, indent_width: IndentWidth) -> usize {
pub(crate) fn check_logical_lines(
tokens: &[LexResult],
locator: &Locator,
indexer: &Indexer,
stylist: &Stylist,
settings: &LinterSettings,
) -> Vec<Diagnostic> {
@@ -73,6 +75,7 @@ pub(crate) fn check_logical_lines(
if line.flags().contains(TokenFlags::BRACKET) {
whitespace_before_parameters(&line, &mut context);
redundant_backslash(&line, locator, indexer, &mut context);
}
// Extract the indentation level.
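
Since `redundant_backslash` only runs when the logical line contains brackets, a minimal sketch of the E502 semantics it enforces (variable names are illustrative):

```python
total = (1 +
         2)   # no continuation needed inside parentheses
bad = (3 + \
       4)     # E502: the backslash is redundant between brackets
ok = 5 + \
     6        # outside brackets the backslash still does real work
```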

View File

@@ -203,6 +203,10 @@ pub(crate) fn check_tokens(
flake8_fixme::rules::todos(&mut diagnostics, &todo_comments);
}
if settings.rules.enabled(Rule::TooManyNewlinesAtEndOfFile) {
pycodestyle::rules::too_many_newlines_at_end_of_file(&mut diagnostics, tokens);
}
diagnostics.retain(|diagnostic| settings.rules.enabled(diagnostic.kind.rule()));
diagnostics

View File

@@ -146,6 +146,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pycodestyle, "E401") => (RuleGroup::Stable, rules::pycodestyle::rules::MultipleImportsOnOneLine),
(Pycodestyle, "E402") => (RuleGroup::Stable, rules::pycodestyle::rules::ModuleImportNotAtTopOfFile),
(Pycodestyle, "E501") => (RuleGroup::Stable, rules::pycodestyle::rules::LineTooLong),
(Pycodestyle, "E502") => (RuleGroup::Preview, rules::pycodestyle::rules::logical_lines::RedundantBackslash),
(Pycodestyle, "E701") => (RuleGroup::Stable, rules::pycodestyle::rules::MultipleStatementsOnOneLineColon),
(Pycodestyle, "E702") => (RuleGroup::Stable, rules::pycodestyle::rules::MultipleStatementsOnOneLineSemicolon),
(Pycodestyle, "E703") => (RuleGroup::Stable, rules::pycodestyle::rules::UselessSemicolon),
@@ -167,6 +168,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pycodestyle, "W291") => (RuleGroup::Stable, rules::pycodestyle::rules::TrailingWhitespace),
(Pycodestyle, "W292") => (RuleGroup::Stable, rules::pycodestyle::rules::MissingNewlineAtEndOfFile),
(Pycodestyle, "W293") => (RuleGroup::Stable, rules::pycodestyle::rules::BlankLineWithWhitespace),
(Pycodestyle, "W391") => (RuleGroup::Preview, rules::pycodestyle::rules::TooManyNewlinesAtEndOfFile),
(Pycodestyle, "W505") => (RuleGroup::Stable, rules::pycodestyle::rules::DocLineTooLong),
(Pycodestyle, "W605") => (RuleGroup::Stable, rules::pycodestyle::rules::InvalidEscapeSequence),
@@ -680,6 +682,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Bandit, "607") => (RuleGroup::Stable, rules::flake8_bandit::rules::StartProcessWithPartialPath),
(Flake8Bandit, "608") => (RuleGroup::Stable, rules::flake8_bandit::rules::HardcodedSQLExpression),
(Flake8Bandit, "609") => (RuleGroup::Stable, rules::flake8_bandit::rules::UnixCommandWildcardInjection),
(Flake8Bandit, "610") => (RuleGroup::Preview, rules::flake8_bandit::rules::DjangoExtra),
(Flake8Bandit, "611") => (RuleGroup::Stable, rules::flake8_bandit::rules::DjangoRawSql),
(Flake8Bandit, "612") => (RuleGroup::Stable, rules::flake8_bandit::rules::LoggingConfigInsecureListen),
(Flake8Bandit, "701") => (RuleGroup::Stable, rules::flake8_bandit::rules::Jinja2AutoescapeFalse),

View File

@@ -1,5 +1,6 @@
use libcst_native::{
Expression, Name, ParenthesizableWhitespace, SimpleWhitespace, UnaryOperation,
Expression, LeftParen, Name, ParenthesizableWhitespace, ParenthesizedNode, RightParen,
SimpleWhitespace, UnaryOperation,
};
/// Return a [`ParenthesizableWhitespace`] containing a single space.
@@ -24,6 +25,7 @@ pub(crate) fn negate<'a>(expression: &Expression<'a>) -> Expression<'a> {
}
}
// If the expression is `True` or `False`, return the opposite.
if let Expression::Name(ref expression) = expression {
match expression.value {
"True" => {
@@ -44,11 +46,32 @@ pub(crate) fn negate<'a>(expression: &Expression<'a>) -> Expression<'a> {
}
}
// If the expression is lower precedence than the unary `not`, we need to wrap it in
// parentheses.
//
// For example: given `a and b`, we need to return `not (a and b)`, rather than `not a and b`.
//
// See: <https://docs.python.org/3/reference/expressions.html#operator-precedence>
let needs_parens = matches!(
expression,
Expression::BooleanOperation(_)
| Expression::IfExp(_)
| Expression::Lambda(_)
| Expression::NamedExpr(_)
);
let has_parens = !expression.lpar().is_empty() && !expression.rpar().is_empty();
// Otherwise, wrap in a `not` operator.
Expression::UnaryOperation(Box::new(UnaryOperation {
operator: libcst_native::UnaryOp::Not {
whitespace_after: space(),
},
expression: Box::new(expression.clone()),
expression: Box::new(if needs_parens && !has_parens {
expression
.clone()
.with_parens(LeftParen::default(), RightParen::default())
} else {
expression.clone()
}),
lpar: vec![],
rpar: vec![],
}))
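
The precedence concern is easy to reproduce in plain Python, since `not` binds more tightly than `and`:

```python
a, b = True, False
print(not a and b)    # False: parsed as (not a) and b
print(not (a and b))  # True: the whole conjunction is negated
```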

View File

@@ -131,10 +131,7 @@ fn extract_noqa_line_for(lxr: &[LexResult], locator: &Locator, indexer: &Indexer
// For multi-line strings, we expect `noqa` directives on the last line of the
// string.
Tok::String {
triple_quoted: true,
..
} => {
Tok::String { kind, .. } if kind.is_triple_quoted() => {
if locator.contains_line_break(*range) {
string_mappings.push(TextRange::new(
locator.line_start(range.start()),
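
A hedged sketch of the mapping being built here: every line of a triple-quoted string is mapped to its final line, so a `noqa` placed on the closing line covers diagnostics raised anywhere inside the string (the rule code below is purely illustrative):

```python
BANNER = """
a line in here might be long enough to trigger a line-length diagnostic
"""  # noqa: E501
```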

View File

@@ -418,29 +418,6 @@ pub(crate) fn fits(
all_lines_fit(fix, node, locator, line_length.value() as usize, tab_size)
}
/// Returns `true` if the fix fits within the maximum configured line length, or produces lines that
/// are shorter than the maximum length of the existing AST node.
pub(crate) fn fits_or_shrinks(
fix: &str,
node: AnyNodeRef,
locator: &Locator,
line_length: LineLength,
tab_size: IndentWidth,
) -> bool {
// Use the larger of the line length limit, or the longest line in the existing AST node.
let line_length = std::iter::once(line_length.value() as usize)
.chain(
locator
.slice(locator.lines_range(node.range()))
.universal_newlines()
.map(|line| LineWidthBuilder::new(tab_size).add_str(&line).get()),
)
.max()
.unwrap_or(line_length.value() as usize);
all_lines_fit(fix, node, locator, line_length, tab_size)
}
/// Returns `true` if all lines in the fix are shorter than the given line length.
fn all_lines_fit(
fix: &str,

View File

@@ -4,6 +4,7 @@
//! and subject to change drastically.
//!
//! [Ruff]: https://github.com/astral-sh/ruff
#![recursion_limit = "256"]
#[cfg(feature = "clap")]
pub use registry::clap_completion::RuleParser;

View File

@@ -132,7 +132,7 @@ pub fn check_path(
.any(|rule_code| rule_code.lint_source().is_logical_lines())
{
diagnostics.extend(crate::checkers::logical_lines::check_logical_lines(
&tokens, locator, stylist, settings,
&tokens, locator, indexer, stylist, settings,
));
}

View File

@@ -300,6 +300,7 @@ impl Rule {
| Rule::SingleLineImplicitStringConcatenation
| Rule::TabIndentation
| Rule::TooManyBlankLines
| Rule::TooManyNewlinesAtEndOfFile
| Rule::TrailingCommaOnBareTuple
| Rule::TypeCommentInStub
| Rule::UselessSemicolon
@@ -327,6 +328,7 @@ impl Rule {
| Rule::NoSpaceAfterBlockComment
| Rule::NoSpaceAfterInlineComment
| Rule::OverIndented
| Rule::RedundantBackslash
| Rule::TabAfterComma
| Rule::TabAfterKeyword
| Rule::TabAfterOperator

View File

@@ -48,6 +48,7 @@ mod tests {
#[test_case(Rule::SuspiciousEvalUsage, Path::new("S307.py"))]
#[test_case(Rule::SuspiciousMarkSafeUsage, Path::new("S308.py"))]
#[test_case(Rule::SuspiciousURLOpenUsage, Path::new("S310.py"))]
#[test_case(Rule::SuspiciousNonCryptographicRandomUsage, Path::new("S311.py"))]
#[test_case(Rule::SuspiciousTelnetUsage, Path::new("S312.py"))]
#[test_case(Rule::SuspiciousTelnetlibImport, Path::new("S401.py"))]
#[test_case(Rule::SuspiciousFtplibImport, Path::new("S402.py"))]
@@ -68,6 +69,7 @@ mod tests {
#[test_case(Rule::UnixCommandWildcardInjection, Path::new("S609.py"))]
#[test_case(Rule::UnsafeYAMLLoad, Path::new("S506.py"))]
#[test_case(Rule::WeakCryptographicKey, Path::new("S505.py"))]
#[test_case(Rule::DjangoExtra, Path::new("S610.py"))]
#[test_case(Rule::DjangoRawSql, Path::new("S611.py"))]
#[test_case(Rule::TarfileUnsafeMembers, Path::new("S202.py"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {

View File

@@ -0,0 +1,81 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast, Expr, ExprAttribute, ExprDict, ExprList};
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for uses of Django's `extra` function.
///
/// ## Why is this bad?
/// Django's `extra` function can be used to execute arbitrary SQL queries,
/// which can in turn lead to SQL injection vulnerabilities.
///
/// ## Example
/// ```python
/// from django.contrib.auth.models import User
///
/// User.objects.all().extra(select={"test": "%secure" % "nos"})
/// ```
///
/// ## References
/// - [Django documentation: SQL injection protection](https://docs.djangoproject.com/en/dev/topics/security/#sql-injection-protection)
/// - [Common Weakness Enumeration: CWE-89](https://cwe.mitre.org/data/definitions/89.html)
#[violation]
pub struct DjangoExtra;
impl Violation for DjangoExtra {
#[derive_message_formats]
fn message(&self) -> String {
format!("Use of Django `extra` can lead to SQL injection vulnerabilities")
}
}
/// S610
pub(crate) fn django_extra(checker: &mut Checker, call: &ast::ExprCall) {
let Expr::Attribute(ExprAttribute { attr, .. }) = call.func.as_ref() else {
return;
};
if attr.as_str() != "extra" {
return;
}
if is_call_insecure(call) {
checker
.diagnostics
.push(Diagnostic::new(DjangoExtra, call.arguments.range()));
}
}
fn is_call_insecure(call: &ast::ExprCall) -> bool {
for (argument_name, position) in [("select", 0), ("where", 1), ("tables", 3)] {
if let Some(argument) = call.arguments.find_argument(argument_name, position) {
match argument_name {
"select" => match argument {
Expr::Dict(ExprDict { keys, values, .. }) => {
if !keys.iter().flatten().all(Expr::is_string_literal_expr) {
return true;
}
if !values.iter().all(Expr::is_string_literal_expr) {
return true;
}
}
_ => return true,
},
"where" | "tables" => match argument {
Expr::List(ExprList { elts, .. }) => {
if !elts.iter().all(Expr::is_string_literal_expr) {
return true;
}
}
_ => return true,
},
_ => (),
}
}
}
false
}
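
A hedged reading of `is_call_insecure`: `select`, `where`, and `tables` arguments made up entirely of string literals pass, while anything dynamic is reported. The snippet below only illustrates what the check sees; it assumes a configured Django project and is not meant to run against a real database.

```python
from django.contrib.auth.models import User

where_clause = "1=1"

User.objects.all().extra(select={"test": "%secure" % "nos"})   # reported: value is not a literal
User.objects.all().extra(where=[where_clause])                 # reported: element is a variable
User.objects.all().extra(select={"is_recent": "year > 2020"})  # not reported: literals only
```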

View File

@@ -9,7 +9,8 @@ use crate::checkers::ast::Checker;
use super::super::helpers::string_literal;
/// ## What it does
/// Checks for uses of weak or broken cryptographic hash functions.
/// Checks for uses of weak or broken cryptographic hash functions in
/// `hashlib` and `crypt` libraries.
///
/// ## Why is this bad?
/// Weak or broken cryptographic hash functions may be susceptible to
@@ -43,68 +44,134 @@ use super::super::helpers::string_literal;
///
/// ## References
/// - [Python documentation: `hashlib` — Secure hashes and message digests](https://docs.python.org/3/library/hashlib.html)
/// - [Python documentation: `crypt` — Function to check Unix passwords](https://docs.python.org/3/library/crypt.html)
/// - [Common Weakness Enumeration: CWE-327](https://cwe.mitre.org/data/definitions/327.html)
/// - [Common Weakness Enumeration: CWE-328](https://cwe.mitre.org/data/definitions/328.html)
/// - [Common Weakness Enumeration: CWE-916](https://cwe.mitre.org/data/definitions/916.html)
#[violation]
pub struct HashlibInsecureHashFunction {
library: String,
string: String,
}
impl Violation for HashlibInsecureHashFunction {
#[derive_message_formats]
fn message(&self) -> String {
let HashlibInsecureHashFunction { string } = self;
format!("Probable use of insecure hash functions in `hashlib`: `{string}`")
let HashlibInsecureHashFunction { library, string } = self;
format!("Probable use of insecure hash functions in `{library}`: `{string}`")
}
}
/// S324
pub(crate) fn hashlib_insecure_hash_functions(checker: &mut Checker, call: &ast::ExprCall) {
if let Some(hashlib_call) = checker
if let Some(weak_hash_call) = checker
.semantic()
.resolve_qualified_name(&call.func)
.and_then(|qualified_name| match qualified_name.segments() {
["hashlib", "new"] => Some(HashlibCall::New),
["hashlib", "md4"] => Some(HashlibCall::WeakHash("md4")),
["hashlib", "md5"] => Some(HashlibCall::WeakHash("md5")),
["hashlib", "sha"] => Some(HashlibCall::WeakHash("sha")),
["hashlib", "sha1"] => Some(HashlibCall::WeakHash("sha1")),
["hashlib", "new"] => Some(WeakHashCall::Hashlib {
call: HashlibCall::New,
}),
["hashlib", "md4"] => Some(WeakHashCall::Hashlib {
call: HashlibCall::WeakHash("md4"),
}),
["hashlib", "md5"] => Some(WeakHashCall::Hashlib {
call: HashlibCall::WeakHash("md5"),
}),
["hashlib", "sha"] => Some(WeakHashCall::Hashlib {
call: HashlibCall::WeakHash("sha"),
}),
["hashlib", "sha1"] => Some(WeakHashCall::Hashlib {
call: HashlibCall::WeakHash("sha1"),
}),
["crypt", "crypt" | "mksalt"] => Some(WeakHashCall::Crypt),
_ => None,
})
{
if !is_used_for_security(&call.arguments) {
return;
}
match hashlib_call {
HashlibCall::New => {
if let Some(name_arg) = call.arguments.find_argument("name", 0) {
if let Some(hash_func_name) = string_literal(name_arg) {
// `hashlib.new` accepts both lowercase and uppercase names for hash
// functions.
if matches!(
hash_func_name,
"md4" | "md5" | "sha" | "sha1" | "MD4" | "MD5" | "SHA" | "SHA1"
) {
checker.diagnostics.push(Diagnostic::new(
HashlibInsecureHashFunction {
string: hash_func_name.to_string(),
},
name_arg.range(),
));
}
}
}
match weak_hash_call {
WeakHashCall::Hashlib { call: hashlib_call } => {
detect_insecure_hashlib_calls(checker, call, hashlib_call);
}
HashlibCall::WeakHash(func_name) => {
WeakHashCall::Crypt => detect_insecure_crypt_calls(checker, call),
}
}
}
fn detect_insecure_hashlib_calls(
checker: &mut Checker,
call: &ast::ExprCall,
hashlib_call: HashlibCall,
) {
if !is_used_for_security(&call.arguments) {
return;
}
match hashlib_call {
HashlibCall::New => {
let Some(name_arg) = call.arguments.find_argument("name", 0) else {
return;
};
let Some(hash_func_name) = string_literal(name_arg) else {
return;
};
// `hashlib.new` accepts both lowercase and uppercase names for hash
// functions.
if matches!(
hash_func_name,
"md4" | "md5" | "sha" | "sha1" | "MD4" | "MD5" | "SHA" | "SHA1"
) {
checker.diagnostics.push(Diagnostic::new(
HashlibInsecureHashFunction {
string: (*func_name).to_string(),
library: "hashlib".to_string(),
string: hash_func_name.to_string(),
},
call.func.range(),
name_arg.range(),
));
}
}
HashlibCall::WeakHash(func_name) => {
checker.diagnostics.push(Diagnostic::new(
HashlibInsecureHashFunction {
library: "hashlib".to_string(),
string: (*func_name).to_string(),
},
call.func.range(),
));
}
}
}
fn detect_insecure_crypt_calls(checker: &mut Checker, call: &ast::ExprCall) {
let Some(method) = checker
.semantic()
.resolve_qualified_name(&call.func)
.and_then(|qualified_name| match qualified_name.segments() {
["crypt", "crypt"] => Some(("salt", 1)),
["crypt", "mksalt"] => Some(("method", 0)),
_ => None,
})
.and_then(|(argument_name, position)| {
call.arguments.find_argument(argument_name, position)
})
else {
return;
};
let Some(qualified_name) = checker.semantic().resolve_qualified_name(method) else {
return;
};
if matches!(
qualified_name.segments(),
["crypt", "METHOD_CRYPT" | "METHOD_MD5" | "METHOD_BLOWFISH"]
) {
checker.diagnostics.push(Diagnostic::new(
HashlibInsecureHashFunction {
library: "crypt".to_string(),
string: qualified_name.to_string(),
},
method.range(),
));
}
}
@@ -114,7 +181,13 @@ fn is_used_for_security(arguments: &Arguments) -> bool {
.map_or(true, |keyword| !is_const_false(&keyword.value))
}
#[derive(Debug)]
#[derive(Debug, Copy, Clone)]
enum WeakHashCall {
Hashlib { call: HashlibCall },
Crypt,
}
#[derive(Debug, Copy, Clone)]
enum HashlibCall {
New,
WeakHash(&'static str),
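
Putting the two branches together, a short sketch of what the updated rule reports; note that `crypt` is Unix-only and deprecated upstream, so the import is illustrative:

```python
import crypt  # Unix-only, deprecated upstream; shown for illustration
import hashlib

hashlib.md5()                             # reported: weak hash
hashlib.new("SHA1")                       # reported: weak hash requested by name
crypt.crypt("pw", salt=crypt.METHOD_MD5)  # reported: weak crypt method
hashlib.md5(usedforsecurity=False)        # not reported: explicitly non-security use
hashlib.sha256()                          # not reported
```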

View File

@@ -1,5 +1,6 @@
pub(crate) use assert_used::*;
pub(crate) use bad_file_permissions::*;
pub(crate) use django_extra::*;
pub(crate) use django_raw_sql::*;
pub(crate) use exec_used::*;
pub(crate) use flask_debug_true::*;
@@ -33,6 +34,7 @@ pub(crate) use weak_cryptographic_key::*;
mod assert_used;
mod bad_file_permissions;
mod django_extra;
mod django_raw_sql;
mod exec_used;
mod flask_debug_true;

View File

@@ -433,6 +433,7 @@ fn get_call_kind(func: &Expr, semantic: &SemanticModel) -> Option<CallKind> {
"Popen" | "call" | "check_call" | "check_output" | "run" => {
Some(CallKind::Subprocess)
}
"getoutput" | "getstatusoutput" => Some(CallKind::Shell),
_ => None,
},
"popen2" => match submodule {

View File

@@ -867,7 +867,7 @@ pub(crate) fn suspicious_function_call(checker: &mut Checker, call: &ExprCall) {
["urllib", "request", "URLopener" | "FancyURLopener"] |
["six", "moves", "urllib", "request", "URLopener" | "FancyURLopener"] => Some(SuspiciousURLOpenUsage.into()),
// NonCryptographicRandom
["random", "random" | "randrange" | "randint" | "choice" | "choices" | "uniform" | "triangular"] => Some(SuspiciousNonCryptographicRandomUsage.into()),
["random", "Random" | "random" | "randrange" | "randint" | "choice" | "choices" | "uniform" | "triangular" | "randbytes"] => Some(SuspiciousNonCryptographicRandomUsage.into()),
// UnverifiedContext
["ssl", "_create_unverified_context"] => Some(SuspiciousUnverifiedContextUsage.into()),
// XMLCElementTree
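
The widened match now also catches `random.Random` and `random.randbytes`; when the value genuinely needs to be unpredictable, the standard alternative is the `secrets` module (a hedged sketch, not taken from the diff):

```python
import random
import secrets

token = random.randbytes(16)     # reported (S311): not a cryptographic generator
token = secrets.token_bytes(16)  # suitable for security-sensitive values
```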

View File

@@ -0,0 +1,90 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S311.py:10:1: S311 Standard pseudo-random generators are not suitable for cryptographic purposes
|
9 | # Errors
10 | random.Random()
| ^^^^^^^^^^^^^^^ S311
11 | random.random()
12 | random.randrange()
|
S311.py:11:1: S311 Standard pseudo-random generators are not suitable for cryptographic purposes
|
9 | # Errors
10 | random.Random()
11 | random.random()
| ^^^^^^^^^^^^^^^ S311
12 | random.randrange()
13 | random.randint()
|
S311.py:12:1: S311 Standard pseudo-random generators are not suitable for cryptographic purposes
|
10 | random.Random()
11 | random.random()
12 | random.randrange()
| ^^^^^^^^^^^^^^^^^^ S311
13 | random.randint()
14 | random.choice()
|
S311.py:13:1: S311 Standard pseudo-random generators are not suitable for cryptographic purposes
|
11 | random.random()
12 | random.randrange()
13 | random.randint()
| ^^^^^^^^^^^^^^^^ S311
14 | random.choice()
15 | random.choices()
|
S311.py:14:1: S311 Standard pseudo-random generators are not suitable for cryptographic purposes
|
12 | random.randrange()
13 | random.randint()
14 | random.choice()
| ^^^^^^^^^^^^^^^ S311
15 | random.choices()
16 | random.uniform()
|
S311.py:15:1: S311 Standard pseudo-random generators are not suitable for cryptographic purposes
|
13 | random.randint()
14 | random.choice()
15 | random.choices()
| ^^^^^^^^^^^^^^^^ S311
16 | random.uniform()
17 | random.triangular()
|
S311.py:16:1: S311 Standard pseudo-random generators are not suitable for cryptographic purposes
|
14 | random.choice()
15 | random.choices()
16 | random.uniform()
| ^^^^^^^^^^^^^^^^ S311
17 | random.triangular()
18 | random.randbytes()
|
S311.py:17:1: S311 Standard pseudo-random generators are not suitable for cryptographic purposes
|
15 | random.choices()
16 | random.uniform()
17 | random.triangular()
| ^^^^^^^^^^^^^^^^^^^ S311
18 | random.randbytes()
|
S311.py:18:1: S311 Standard pseudo-random generators are not suitable for cryptographic purposes
|
16 | random.uniform()
17 | random.triangular()
18 | random.randbytes()
| ^^^^^^^^^^^^^^^^^^ S311
19 |
20 | # Unrelated
|

View File

@@ -3,131 +3,195 @@ source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S324.py:7:13: S324 Probable use of insecure hash functions in `hashlib`: `md5`
|
5 | # Invalid
6 |
6 | # Errors
7 | hashlib.new('md5')
| ^^^^^ S324
8 |
9 | hashlib.new('md4', b'test')
8 | hashlib.new('md4', b'test')
9 | hashlib.new(name='md5', data=b'test')
|
S324.py:9:13: S324 Probable use of insecure hash functions in `hashlib`: `md4`
S324.py:8:13: S324 Probable use of insecure hash functions in `hashlib`: `md4`
|
6 | # Errors
7 | hashlib.new('md5')
8 | hashlib.new('md4', b'test')
| ^^^^^ S324
9 | hashlib.new(name='md5', data=b'test')
10 | hashlib.new('MD4', data=b'test')
|
S324.py:9:18: S324 Probable use of insecure hash functions in `hashlib`: `md5`
|
7 | hashlib.new('md5')
8 |
9 | hashlib.new('md4', b'test')
| ^^^^^ S324
10 |
11 | hashlib.new(name='md5', data=b'test')
|
S324.py:11:18: S324 Probable use of insecure hash functions in `hashlib`: `md5`
|
9 | hashlib.new('md4', b'test')
10 |
11 | hashlib.new(name='md5', data=b'test')
8 | hashlib.new('md4', b'test')
9 | hashlib.new(name='md5', data=b'test')
| ^^^^^ S324
12 |
13 | hashlib.new('MD4', data=b'test')
10 | hashlib.new('MD4', data=b'test')
11 | hashlib.new('sha1')
|
S324.py:13:13: S324 Probable use of insecure hash functions in `hashlib`: `MD4`
S324.py:10:13: S324 Probable use of insecure hash functions in `hashlib`: `MD4`
|
11 | hashlib.new(name='md5', data=b'test')
12 |
13 | hashlib.new('MD4', data=b'test')
8 | hashlib.new('md4', b'test')
9 | hashlib.new(name='md5', data=b'test')
10 | hashlib.new('MD4', data=b'test')
| ^^^^^ S324
14 |
15 | hashlib.new('sha1')
11 | hashlib.new('sha1')
12 | hashlib.new('sha1', data=b'test')
|
S324.py:15:13: S324 Probable use of insecure hash functions in `hashlib`: `sha1`
S324.py:11:13: S324 Probable use of insecure hash functions in `hashlib`: `sha1`
|
13 | hashlib.new('MD4', data=b'test')
14 |
15 | hashlib.new('sha1')
9 | hashlib.new(name='md5', data=b'test')
10 | hashlib.new('MD4', data=b'test')
11 | hashlib.new('sha1')
| ^^^^^^ S324
16 |
17 | hashlib.new('sha1', data=b'test')
12 | hashlib.new('sha1', data=b'test')
13 | hashlib.new('sha', data=b'test')
|
S324.py:12:13: S324 Probable use of insecure hash functions in `hashlib`: `sha1`
|
10 | hashlib.new('MD4', data=b'test')
11 | hashlib.new('sha1')
12 | hashlib.new('sha1', data=b'test')
| ^^^^^^ S324
13 | hashlib.new('sha', data=b'test')
14 | hashlib.new(name='SHA', data=b'test')
|
S324.py:13:13: S324 Probable use of insecure hash functions in `hashlib`: `sha`
|
11 | hashlib.new('sha1')
12 | hashlib.new('sha1', data=b'test')
13 | hashlib.new('sha', data=b'test')
| ^^^^^ S324
14 | hashlib.new(name='SHA', data=b'test')
15 | hashlib.sha(data=b'test')
|
S324.py:14:18: S324 Probable use of insecure hash functions in `hashlib`: `SHA`
|
12 | hashlib.new('sha1', data=b'test')
13 | hashlib.new('sha', data=b'test')
14 | hashlib.new(name='SHA', data=b'test')
| ^^^^^ S324
15 | hashlib.sha(data=b'test')
16 | hashlib.md5()
|
S324.py:15:1: S324 Probable use of insecure hash functions in `hashlib`: `sha`
|
13 | hashlib.new('sha', data=b'test')
14 | hashlib.new(name='SHA', data=b'test')
15 | hashlib.sha(data=b'test')
| ^^^^^^^^^^^ S324
16 | hashlib.md5()
17 | hashlib_new('sha1')
|
S324.py:16:1: S324 Probable use of insecure hash functions in `hashlib`: `md5`
|
14 | hashlib.new(name='SHA', data=b'test')
15 | hashlib.sha(data=b'test')
16 | hashlib.md5()
| ^^^^^^^^^^^ S324
17 | hashlib_new('sha1')
18 | hashlib_sha1('sha1')
|
S324.py:17:13: S324 Probable use of insecure hash functions in `hashlib`: `sha1`
|
15 | hashlib.new('sha1')
16 |
17 | hashlib.new('sha1', data=b'test')
15 | hashlib.sha(data=b'test')
16 | hashlib.md5()
17 | hashlib_new('sha1')
| ^^^^^^ S324
18 |
19 | hashlib.new('sha', data=b'test')
18 | hashlib_sha1('sha1')
19 | # usedforsecurity arg only available in Python 3.9+
|
S324.py:19:13: S324 Probable use of insecure hash functions in `hashlib`: `sha`
S324.py:18:1: S324 Probable use of insecure hash functions in `hashlib`: `sha1`
|
17 | hashlib.new('sha1', data=b'test')
18 |
19 | hashlib.new('sha', data=b'test')
| ^^^^^ S324
20 |
21 | hashlib.new(name='SHA', data=b'test')
|
S324.py:21:18: S324 Probable use of insecure hash functions in `hashlib`: `SHA`
|
19 | hashlib.new('sha', data=b'test')
20 |
21 | hashlib.new(name='SHA', data=b'test')
| ^^^^^ S324
22 |
23 | hashlib.sha(data=b'test')
|
S324.py:23:1: S324 Probable use of insecure hash functions in `hashlib`: `sha`
|
21 | hashlib.new(name='SHA', data=b'test')
22 |
23 | hashlib.sha(data=b'test')
| ^^^^^^^^^^^ S324
24 |
25 | hashlib.md5()
|
S324.py:25:1: S324 Probable use of insecure hash functions in `hashlib`: `md5`
|
23 | hashlib.sha(data=b'test')
24 |
25 | hashlib.md5()
| ^^^^^^^^^^^ S324
26 |
27 | hashlib_new('sha1')
|
S324.py:27:13: S324 Probable use of insecure hash functions in `hashlib`: `sha1`
|
25 | hashlib.md5()
26 |
27 | hashlib_new('sha1')
| ^^^^^^ S324
28 |
29 | hashlib_sha1('sha1')
|
S324.py:29:1: S324 Probable use of insecure hash functions in `hashlib`: `sha1`
|
27 | hashlib_new('sha1')
28 |
29 | hashlib_sha1('sha1')
16 | hashlib.md5()
17 | hashlib_new('sha1')
18 | hashlib_sha1('sha1')
| ^^^^^^^^^^^^ S324
30 |
31 | # usedforsecurity arg only available in Python 3.9+
19 | # usedforsecurity arg only available in Python 3.9+
20 | hashlib.new('sha1', usedforsecurity=True)
|
S324.py:32:13: S324 Probable use of insecure hash functions in `hashlib`: `sha1`
S324.py:20:13: S324 Probable use of insecure hash functions in `hashlib`: `sha1`
|
31 | # usedforsecurity arg only available in Python 3.9+
32 | hashlib.new('sha1', usedforsecurity=True)
18 | hashlib_sha1('sha1')
19 | # usedforsecurity arg only available in Python 3.9+
20 | hashlib.new('sha1', usedforsecurity=True)
| ^^^^^^ S324
33 |
34 | # Valid
21 |
22 | crypt.crypt("test", salt=crypt.METHOD_CRYPT)
|
S324.py:22:26: S324 Probable use of insecure hash functions in `crypt`: `crypt.METHOD_CRYPT`
|
20 | hashlib.new('sha1', usedforsecurity=True)
21 |
22 | crypt.crypt("test", salt=crypt.METHOD_CRYPT)
| ^^^^^^^^^^^^^^^^^^ S324
23 | crypt.crypt("test", salt=crypt.METHOD_MD5)
24 | crypt.crypt("test", salt=crypt.METHOD_BLOWFISH)
|
S324.py:23:26: S324 Probable use of insecure hash functions in `crypt`: `crypt.METHOD_MD5`
|
22 | crypt.crypt("test", salt=crypt.METHOD_CRYPT)
23 | crypt.crypt("test", salt=crypt.METHOD_MD5)
| ^^^^^^^^^^^^^^^^ S324
24 | crypt.crypt("test", salt=crypt.METHOD_BLOWFISH)
25 | crypt.crypt("test", crypt.METHOD_BLOWFISH)
|
S324.py:24:26: S324 Probable use of insecure hash functions in `crypt`: `crypt.METHOD_BLOWFISH`
|
22 | crypt.crypt("test", salt=crypt.METHOD_CRYPT)
23 | crypt.crypt("test", salt=crypt.METHOD_MD5)
24 | crypt.crypt("test", salt=crypt.METHOD_BLOWFISH)
| ^^^^^^^^^^^^^^^^^^^^^ S324
25 | crypt.crypt("test", crypt.METHOD_BLOWFISH)
|
S324.py:25:21: S324 Probable use of insecure hash functions in `crypt`: `crypt.METHOD_BLOWFISH`
|
23 | crypt.crypt("test", salt=crypt.METHOD_MD5)
24 | crypt.crypt("test", salt=crypt.METHOD_BLOWFISH)
25 | crypt.crypt("test", crypt.METHOD_BLOWFISH)
| ^^^^^^^^^^^^^^^^^^^^^ S324
26 |
27 | crypt.mksalt(crypt.METHOD_CRYPT)
|
S324.py:27:14: S324 Probable use of insecure hash functions in `crypt`: `crypt.METHOD_CRYPT`
|
25 | crypt.crypt("test", crypt.METHOD_BLOWFISH)
26 |
27 | crypt.mksalt(crypt.METHOD_CRYPT)
| ^^^^^^^^^^^^^^^^^^ S324
28 | crypt.mksalt(crypt.METHOD_MD5)
29 | crypt.mksalt(crypt.METHOD_BLOWFISH)
|
S324.py:28:14: S324 Probable use of insecure hash functions in `crypt`: `crypt.METHOD_MD5`
|
27 | crypt.mksalt(crypt.METHOD_CRYPT)
28 | crypt.mksalt(crypt.METHOD_MD5)
| ^^^^^^^^^^^^^^^^ S324
29 | crypt.mksalt(crypt.METHOD_BLOWFISH)
|
S324.py:29:14: S324 Probable use of insecure hash functions in `crypt`: `crypt.METHOD_BLOWFISH`
|
27 | crypt.mksalt(crypt.METHOD_CRYPT)
28 | crypt.mksalt(crypt.METHOD_MD5)
29 | crypt.mksalt(crypt.METHOD_BLOWFISH)
| ^^^^^^^^^^^^^^^^^^^^^ S324
30 |
31 | # OK
|

View File

@@ -1,147 +1,165 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S605.py:7:11: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
6 | # Check all shell functions.
7 | os.system("true")
| ^^^^^^ S605
8 | os.popen("true")
9 | os.popen2("true")
|
S605.py:8:10: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
S605.py:8:11: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
6 | # Check all shell functions.
7 | os.system("true")
8 | os.popen("true")
| ^^^^^^ S605
9 | os.popen2("true")
10 | os.popen3("true")
|
S605.py:9:11: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
7 | os.system("true")
8 | os.popen("true")
9 | os.popen2("true")
7 | # Check all shell functions.
8 | os.system("true")
| ^^^^^^ S605
10 | os.popen3("true")
11 | os.popen4("true")
9 | os.popen("true")
10 | os.popen2("true")
|
S605.py:9:10: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
7 | # Check all shell functions.
8 | os.system("true")
9 | os.popen("true")
| ^^^^^^ S605
10 | os.popen2("true")
11 | os.popen3("true")
|
S605.py:10:11: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
8 | os.popen("true")
9 | os.popen2("true")
10 | os.popen3("true")
8 | os.system("true")
9 | os.popen("true")
10 | os.popen2("true")
| ^^^^^^ S605
11 | os.popen4("true")
12 | popen2.popen2("true")
11 | os.popen3("true")
12 | os.popen4("true")
|
S605.py:11:11: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
9 | os.popen2("true")
10 | os.popen3("true")
11 | os.popen4("true")
9 | os.popen("true")
10 | os.popen2("true")
11 | os.popen3("true")
| ^^^^^^ S605
12 | popen2.popen2("true")
13 | popen2.popen3("true")
12 | os.popen4("true")
13 | popen2.popen2("true")
|
S605.py:12:15: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
S605.py:12:11: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
10 | os.popen3("true")
11 | os.popen4("true")
12 | popen2.popen2("true")
| ^^^^^^ S605
13 | popen2.popen3("true")
14 | popen2.popen4("true")
10 | os.popen2("true")
11 | os.popen3("true")
12 | os.popen4("true")
| ^^^^^^ S605
13 | popen2.popen2("true")
14 | popen2.popen3("true")
|
S605.py:13:15: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
11 | os.popen4("true")
12 | popen2.popen2("true")
13 | popen2.popen3("true")
11 | os.popen3("true")
12 | os.popen4("true")
13 | popen2.popen2("true")
| ^^^^^^ S605
14 | popen2.popen4("true")
15 | popen2.Popen3("true")
14 | popen2.popen3("true")
15 | popen2.popen4("true")
|
S605.py:14:15: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
12 | popen2.popen2("true")
13 | popen2.popen3("true")
14 | popen2.popen4("true")
12 | os.popen4("true")
13 | popen2.popen2("true")
14 | popen2.popen3("true")
| ^^^^^^ S605
15 | popen2.Popen3("true")
16 | popen2.Popen4("true")
15 | popen2.popen4("true")
16 | popen2.Popen3("true")
|
S605.py:15:15: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
13 | popen2.popen3("true")
14 | popen2.popen4("true")
15 | popen2.Popen3("true")
13 | popen2.popen2("true")
14 | popen2.popen3("true")
15 | popen2.popen4("true")
| ^^^^^^ S605
16 | popen2.Popen4("true")
17 | commands.getoutput("true")
16 | popen2.Popen3("true")
17 | popen2.Popen4("true")
|
S605.py:16:15: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
14 | popen2.popen4("true")
15 | popen2.Popen3("true")
16 | popen2.Popen4("true")
14 | popen2.popen3("true")
15 | popen2.popen4("true")
16 | popen2.Popen3("true")
| ^^^^^^ S605
17 | commands.getoutput("true")
18 | commands.getstatusoutput("true")
17 | popen2.Popen4("true")
18 | commands.getoutput("true")
|
S605.py:17:20: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
S605.py:17:15: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
15 | popen2.Popen3("true")
16 | popen2.Popen4("true")
17 | commands.getoutput("true")
15 | popen2.popen4("true")
16 | popen2.Popen3("true")
17 | popen2.Popen4("true")
| ^^^^^^ S605
18 | commands.getoutput("true")
19 | commands.getstatusoutput("true")
|
S605.py:18:20: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
16 | popen2.Popen3("true")
17 | popen2.Popen4("true")
18 | commands.getoutput("true")
| ^^^^^^ S605
18 | commands.getstatusoutput("true")
19 | commands.getstatusoutput("true")
20 | subprocess.getoutput("true")
|
S605.py:18:26: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
S605.py:19:26: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
16 | popen2.Popen4("true")
17 | commands.getoutput("true")
18 | commands.getstatusoutput("true")
17 | popen2.Popen4("true")
18 | commands.getoutput("true")
19 | commands.getstatusoutput("true")
| ^^^^^^ S605
20 | subprocess.getoutput("true")
21 | subprocess.getstatusoutput("true")
|
S605.py:23:11: S605 Starting a process with a shell, possible injection detected
S605.py:20:22: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
21 | # Check command argument looks unsafe.
22 | var_string = "true"
23 | os.system(var_string)
18 | commands.getoutput("true")
19 | commands.getstatusoutput("true")
20 | subprocess.getoutput("true")
| ^^^^^^ S605
21 | subprocess.getstatusoutput("true")
|
S605.py:21:28: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
19 | commands.getstatusoutput("true")
20 | subprocess.getoutput("true")
21 | subprocess.getstatusoutput("true")
| ^^^^^^ S605
|
S605.py:26:11: S605 Starting a process with a shell, possible injection detected
|
24 | # Check command argument looks unsafe.
25 | var_string = "true"
26 | os.system(var_string)
| ^^^^^^^^^^ S605
24 | os.system([var_string])
25 | os.system([var_string, ""])
27 | os.system([var_string])
28 | os.system([var_string, ""])
|
S605.py:24:11: S605 Starting a process with a shell, possible injection detected
S605.py:27:11: S605 Starting a process with a shell, possible injection detected
|
22 | var_string = "true"
23 | os.system(var_string)
24 | os.system([var_string])
25 | var_string = "true"
26 | os.system(var_string)
27 | os.system([var_string])
| ^^^^^^^^^^^^ S605
25 | os.system([var_string, ""])
28 | os.system([var_string, ""])
|
S605.py:25:11: S605 Starting a process with a shell, possible injection detected
S605.py:28:11: S605 Starting a process with a shell, possible injection detected
|
23 | os.system(var_string)
24 | os.system([var_string])
25 | os.system([var_string, ""])
26 | os.system(var_string)
27 | os.system([var_string])
28 | os.system([var_string, ""])
| ^^^^^^^^^^^^^^^^ S605
|

View File

@@ -0,0 +1,105 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S610.py:4:44: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
3 | # Errors
4 | User.objects.filter(username='admin').extra(dict(could_be='insecure'))
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ S610
5 | User.objects.filter(username='admin').extra(select=dict(could_be='insecure'))
6 | User.objects.filter(username='admin').extra(select={'test': '%secure' % 'nos'})
|
S610.py:5:44: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
3 | # Errors
4 | User.objects.filter(username='admin').extra(dict(could_be='insecure'))
5 | User.objects.filter(username='admin').extra(select=dict(could_be='insecure'))
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ S610
6 | User.objects.filter(username='admin').extra(select={'test': '%secure' % 'nos'})
7 | User.objects.filter(username='admin').extra(select={'test': '{}secure'.format('nos')})
|
S610.py:6:44: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
4 | User.objects.filter(username='admin').extra(dict(could_be='insecure'))
5 | User.objects.filter(username='admin').extra(select=dict(could_be='insecure'))
6 | User.objects.filter(username='admin').extra(select={'test': '%secure' % 'nos'})
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ S610
7 | User.objects.filter(username='admin').extra(select={'test': '{}secure'.format('nos')})
8 | User.objects.filter(username='admin').extra(where=['%secure' % 'nos'])
|
S610.py:7:44: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
5 | User.objects.filter(username='admin').extra(select=dict(could_be='insecure'))
6 | User.objects.filter(username='admin').extra(select={'test': '%secure' % 'nos'})
7 | User.objects.filter(username='admin').extra(select={'test': '{}secure'.format('nos')})
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ S610
8 | User.objects.filter(username='admin').extra(where=['%secure' % 'nos'])
9 | User.objects.filter(username='admin').extra(where=['{}secure'.format('no')])
|
S610.py:8:44: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
6 | User.objects.filter(username='admin').extra(select={'test': '%secure' % 'nos'})
7 | User.objects.filter(username='admin').extra(select={'test': '{}secure'.format('nos')})
8 | User.objects.filter(username='admin').extra(where=['%secure' % 'nos'])
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ S610
9 | User.objects.filter(username='admin').extra(where=['{}secure'.format('no')])
|
S610.py:9:44: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
7 | User.objects.filter(username='admin').extra(select={'test': '{}secure'.format('nos')})
8 | User.objects.filter(username='admin').extra(where=['%secure' % 'nos'])
9 | User.objects.filter(username='admin').extra(where=['{}secure'.format('no')])
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ S610
10 |
11 | query = '"username") AS "username", * FROM "auth_user" WHERE 1=1 OR "username"=? --'
|
S610.py:12:44: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
11 | query = '"username") AS "username", * FROM "auth_user" WHERE 1=1 OR "username"=? --'
12 | User.objects.filter(username='admin').extra(select={'test': query})
| ^^^^^^^^^^^^^^^^^^^^^^^^ S610
13 |
14 | where_var = ['1=1) OR 1=1 AND (1=1']
|
S610.py:15:44: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
14 | where_var = ['1=1) OR 1=1 AND (1=1']
15 | User.objects.filter(username='admin').extra(where=where_var)
| ^^^^^^^^^^^^^^^^^ S610
16 |
17 | where_str = '1=1) OR 1=1 AND (1=1'
|
S610.py:18:44: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
17 | where_str = '1=1) OR 1=1 AND (1=1'
18 | User.objects.filter(username='admin').extra(where=[where_str])
| ^^^^^^^^^^^^^^^^^^^ S610
19 |
20 | tables_var = ['django_content_type" WHERE "auth_user"."username"="admin']
|
S610.py:21:25: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
20 | tables_var = ['django_content_type" WHERE "auth_user"."username"="admin']
21 | User.objects.all().extra(tables=tables_var).distinct()
| ^^^^^^^^^^^^^^^^^^^ S610
22 |
23 | tables_str = 'django_content_type" WHERE "auth_user"."username"="admin'
|
S610.py:24:25: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
23 | tables_str = 'django_content_type" WHERE "auth_user"."username"="admin'
24 | User.objects.all().extra(tables=[tables_str]).distinct()
| ^^^^^^^^^^^^^^^^^^^^^ S610
25 |
26 | # OK
|

View File

@@ -73,7 +73,7 @@ pub(crate) fn builtin_argument_shadowing(checker: &mut Checker, parameter: &Para
BuiltinArgumentShadowing {
name: parameter.name.to_string(),
},
parameter.range(),
parameter.name.range(),
));
}
}

View File

@@ -243,7 +243,7 @@ pub(crate) fn trailing_commas(
// F-strings are handled as `String` token type with the complete range
// of the outermost f-string. This means that the expression inside the
// f-string is not checked for trailing commas.
Tok::FStringStart => {
Tok::FStringStart(_) => {
fstrings = fstrings.saturating_add(1);
None
}

View File

@@ -205,7 +205,7 @@ C413.py:14:1: C413 [*] Unnecessary `reversed` call around `sorted()`
14 |+sorted((i for i in range(42)), reverse=True)
15 15 | reversed(sorted((i for i in range(42)), reverse=True))
16 16 |
17 17 |
17 17 | # Regression test for: https://github.com/astral-sh/ruff/issues/10335
C413.py:15:1: C413 [*] Unnecessary `reversed` call around `sorted()`
|
@@ -213,6 +213,8 @@ C413.py:15:1: C413 [*] Unnecessary `reversed` call around `sorted()`
14 | reversed(sorted(i for i in range(42)))
15 | reversed(sorted((i for i in range(42)), reverse=True))
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ C413
16 |
17 | # Regression test for: https://github.com/astral-sh/ruff/issues/10335
|
= help: Remove unnecessary `reversed` call
@@ -223,7 +225,38 @@ C413.py:15:1: C413 [*] Unnecessary `reversed` call around `sorted()`
15 |-reversed(sorted((i for i in range(42)), reverse=True))
15 |+sorted((i for i in range(42)), reverse=False)
16 16 |
17 17 |
18 18 | def reversed(*args, **kwargs):
17 17 | # Regression test for: https://github.com/astral-sh/ruff/issues/10335
18 18 | reversed(sorted([1, 2, 3], reverse=False or True))
C413.py:18:1: C413 [*] Unnecessary `reversed` call around `sorted()`
|
17 | # Regression test for: https://github.com/astral-sh/ruff/issues/10335
18 | reversed(sorted([1, 2, 3], reverse=False or True))
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ C413
19 | reversed(sorted([1, 2, 3], reverse=(False or True)))
|
= help: Remove unnecessary `reversed` call
Unsafe fix
15 15 | reversed(sorted((i for i in range(42)), reverse=True))
16 16 |
17 17 | # Regression test for: https://github.com/astral-sh/ruff/issues/10335
18 |-reversed(sorted([1, 2, 3], reverse=False or True))
18 |+sorted([1, 2, 3], reverse=not (False or True))
19 19 | reversed(sorted([1, 2, 3], reverse=(False or True)))
C413.py:19:1: C413 [*] Unnecessary `reversed` call around `sorted()`
|
17 | # Regression test for: https://github.com/astral-sh/ruff/issues/10335
18 | reversed(sorted([1, 2, 3], reverse=False or True))
19 | reversed(sorted([1, 2, 3], reverse=(False or True)))
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ C413
|
= help: Remove unnecessary `reversed` call
Unsafe fix
16 16 |
17 17 | # Regression test for: https://github.com/astral-sh/ruff/issues/10335
18 18 | reversed(sorted([1, 2, 3], reverse=False or True))
19 |-reversed(sorted([1, 2, 3], reverse=(False or True)))
19 |+sorted([1, 2, 3], reverse=not (False or True))

View File

@@ -110,7 +110,7 @@ pub(crate) fn implicit(
{
let (a_range, b_range) = match (a_tok, b_tok) {
(Tok::String { .. }, Tok::String { .. }) => (*a_range, *b_range),
(Tok::String { .. }, Tok::FStringStart) => {
(Tok::String { .. }, Tok::FStringStart(_)) => {
match indexer.fstring_ranges().innermost(b_range.start()) {
Some(b_range) => (*a_range, b_range),
None => continue,
@@ -122,7 +122,7 @@ pub(crate) fn implicit(
None => continue,
}
}
(Tok::FStringEnd, Tok::FStringStart) => {
(Tok::FStringEnd, Tok::FStringStart(_)) => {
match (
indexer.fstring_ranges().innermost(a_range.start()),
indexer.fstring_ranges().innermost(b_range.start()),
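
The matched token pairs correspond to implicit concatenations that involve f-strings; a minimal sketch of the two shapes handled here:

```python
greeting = "Hello, " f"{'world'}"  # plain string followed by an f-string
numbers = f"{1}" f"{2}"            # one f-string followed by another
```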

View File

@@ -11,7 +11,7 @@ mod tests {
use crate::assert_messages;
use crate::registry::Rule;
use crate::rules::flake8_import_conventions::settings::default_aliases;
use crate::rules::flake8_import_conventions::settings::{default_aliases, BannedAliases};
use crate::settings::LinterSettings;
use crate::test::test_path;
@@ -57,17 +57,20 @@ mod tests {
banned_aliases: FxHashMap::from_iter([
(
"typing".to_string(),
vec!["t".to_string(), "ty".to_string()],
BannedAliases::from_iter(["t".to_string(), "ty".to_string()]),
),
(
"numpy".to_string(),
vec!["nmp".to_string(), "npy".to_string()],
BannedAliases::from_iter(["nmp".to_string(), "npy".to_string()]),
),
(
"tensorflow.keras.backend".to_string(),
vec!["K".to_string()],
BannedAliases::from_iter(["K".to_string()]),
),
(
"torch.nn.functional".to_string(),
BannedAliases::from_iter(["F".to_string()]),
),
("torch.nn.functional".to_string(), vec!["F".to_string()]),
]),
banned_from: FxHashSet::default(),
},

View File

@@ -1,10 +1,12 @@
use ruff_python_ast::Stmt;
use rustc_hash::FxHashMap;
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::Stmt;
use ruff_text_size::Ranged;
use crate::rules::flake8_import_conventions::settings::BannedAliases;
/// ## What it does
/// Checks for imports that use non-standard naming conventions, like
/// `import tensorflow.keras.backend as K`.
@@ -49,7 +51,7 @@ pub(crate) fn banned_import_alias(
stmt: &Stmt,
name: &str,
asname: &str,
banned_conventions: &FxHashMap<String, Vec<String>>,
banned_conventions: &FxHashMap<String, BannedAliases>,
) -> Option<Diagnostic> {
if let Some(banned_aliases) = banned_conventions.get(name) {
if banned_aliases

View File

@@ -1,11 +1,14 @@
//! Settings for import conventions.
use rustc_hash::{FxHashMap, FxHashSet};
use std::fmt::{Display, Formatter};
use crate::display_settings;
use rustc_hash::{FxHashMap, FxHashSet};
use serde::{Deserialize, Serialize};
use ruff_macros::CacheKey;
use crate::display_settings;
const CONVENTIONAL_ALIASES: &[(&str, &str)] = &[
("altair", "alt"),
("matplotlib", "mpl"),
@@ -23,10 +26,41 @@ const CONVENTIONAL_ALIASES: &[(&str, &str)] = &[
("pyarrow", "pa"),
];
#[derive(Debug, Default, Clone, PartialEq, Eq, Serialize, Deserialize, CacheKey)]
#[serde(deny_unknown_fields, rename_all = "kebab-case")]
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
pub struct BannedAliases(Vec<String>);
impl Display for BannedAliases {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
write!(f, "[")?;
for (i, alias) in self.0.iter().enumerate() {
if i > 0 {
write!(f, ", ")?;
}
write!(f, "{alias}")?;
}
write!(f, "]")
}
}
impl BannedAliases {
/// Returns an iterator over the banned aliases.
pub fn iter(&self) -> impl Iterator<Item = &str> {
self.0.iter().map(String::as_str)
}
}
impl FromIterator<String> for BannedAliases {
fn from_iter<I: IntoIterator<Item = String>>(iter: I) -> Self {
Self(iter.into_iter().collect())
}
}
#[derive(Debug, CacheKey)]
pub struct Settings {
pub aliases: FxHashMap<String, String>,
pub banned_aliases: FxHashMap<String, Vec<String>>,
pub banned_aliases: FxHashMap<String, BannedAliases>,
pub banned_from: FxHashSet<String>,
}
@@ -53,9 +87,9 @@ impl Display for Settings {
formatter = f,
namespace = "linter.flake8_import_conventions",
fields = [
self.aliases | debug,
self.banned_aliases | debug,
self.banned_from | array,
self.aliases | map,
self.banned_aliases | map,
self.banned_from | set,
]
}
Ok(())

View File

@@ -413,5 +413,3 @@ PT018.py:65:5: PT018 [*] Assertion should be broken down into multiple parts
70 72 |
71 73 | assert (not self.find_graph_output(node.output[0]) or
72 74 | self.find_graph_input(node.input[0]))

View File

@@ -1,6 +1,5 @@
use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::str::{is_triple_quote, leading_quote};
use ruff_python_parser::lexer::LexResult;
use ruff_python_parser::Tok;
use ruff_source_file::Locator;
@@ -158,7 +157,7 @@ pub(crate) fn avoidable_escaped_quote(
// ```python
// f'"foo" {'nested'}"
// ```
if matches!(tok, Tok::String { .. } | Tok::FStringStart) {
if matches!(tok, Tok::String { .. } | Tok::FStringStart(_)) {
if let Some(fstring_context) = fstrings.last_mut() {
fstring_context.ignore_escaped_quotes();
continue;
@@ -170,16 +169,13 @@ pub(crate) fn avoidable_escaped_quote(
Tok::String {
value: string_contents,
kind,
triple_quoted,
} => {
if kind.is_raw() || *triple_quoted {
if kind.is_raw_string() || kind.is_triple_quoted() {
continue;
}
// Check if we're using the preferred quotation style.
if !leading_quote(locator.slice(tok_range)).is_some_and(|text| {
contains_quote(text, quotes_settings.inline_quotes.as_char())
}) {
if Quote::from(kind.quote_style()) != quotes_settings.inline_quotes {
continue;
}
@@ -192,7 +188,7 @@ pub(crate) fn avoidable_escaped_quote(
let mut diagnostic = Diagnostic::new(AvoidableEscapedQuote, tok_range);
let fixed_contents = format!(
"{prefix}{quote}{value}{quote}",
prefix = kind.as_str(),
prefix = kind.prefix_str(),
quote = quotes_settings.inline_quotes.opposite().as_char(),
value = unescape_string(
string_contents,
@@ -206,12 +202,11 @@ pub(crate) fn avoidable_escaped_quote(
diagnostics.push(diagnostic);
}
}
Tok::FStringStart => {
let text = locator.slice(tok_range);
Tok::FStringStart(kind) => {
// Check for escaped quote only if we're using the preferred quotation
// style and it isn't a triple-quoted f-string.
let check_for_escaped_quote = !is_triple_quote(text)
&& contains_quote(text, quotes_settings.inline_quotes.as_char());
let check_for_escaped_quote = !kind.is_triple_quoted()
&& Quote::from(kind.quote_style()) == quotes_settings.inline_quotes;
fstrings.push(FStringContext::new(
check_for_escaped_quote,
tok_range,
@@ -220,9 +215,8 @@ pub(crate) fn avoidable_escaped_quote(
}
Tok::FStringMiddle {
value: string_contents,
is_raw,
triple_quoted: _,
} if !is_raw => {
kind,
} if !kind.is_raw_string() => {
let Some(context) = fstrings.last_mut() else {
continue;
};
@@ -315,17 +309,12 @@ pub(crate) fn unnecessary_escaped_quote(
Tok::String {
value: string_contents,
kind,
triple_quoted,
} => {
if kind.is_raw() || *triple_quoted {
if kind.is_raw_string() || kind.is_triple_quoted() {
continue;
}
let leading = match leading_quote(locator.slice(tok_range)) {
Some("\"") => Quote::Double,
Some("'") => Quote::Single,
_ => continue,
};
let leading = kind.quote_style();
if !contains_escaped_quote(string_contents, leading.opposite().as_char()) {
continue;
}
@@ -333,7 +322,7 @@ pub(crate) fn unnecessary_escaped_quote(
let mut diagnostic = Diagnostic::new(UnnecessaryEscapedQuote, tok_range);
let fixed_contents = format!(
"{prefix}{quote}{value}{quote}",
prefix = kind.as_str(),
prefix = kind.prefix_str(),
quote = leading.as_char(),
value = unescape_string(string_contents, leading.opposite().as_char())
);
@@ -343,16 +332,11 @@ pub(crate) fn unnecessary_escaped_quote(
)));
diagnostics.push(diagnostic);
}
Tok::FStringStart => {
let text = locator.slice(tok_range);
Tok::FStringStart(kind) => {
// Check for escaped quote only if we're using the preferred quotation
// style and it isn't a triple-quoted f-string.
let check_for_escaped_quote = !is_triple_quote(text);
let quote_style = if contains_quote(text, Quote::Single.as_char()) {
Quote::Single
} else {
Quote::Double
};
let check_for_escaped_quote = !kind.is_triple_quoted();
let quote_style = Quote::from(kind.quote_style());
fstrings.push(FStringContext::new(
check_for_escaped_quote,
tok_range,
@@ -361,9 +345,8 @@ pub(crate) fn unnecessary_escaped_quote(
}
Tok::FStringMiddle {
value: string_contents,
is_raw,
triple_quoted: _,
} if !is_raw => {
kind,
} if !kind.is_raw_string() => {
let Some(context) = fstrings.last_mut() else {
continue;
};

View File

@@ -383,7 +383,7 @@ struct FStringRangeBuilder {
impl FStringRangeBuilder {
fn visit_token(&mut self, token: &Tok, range: TextRange) {
match token {
Tok::FStringStart => {
Tok::FStringStart(_) => {
if self.nesting == 0 {
self.start_location = range.start();
}

View File

@@ -22,6 +22,15 @@ impl Default for Quote {
}
}
impl From<ruff_python_ast::str::QuoteStyle> for Quote {
fn from(value: ruff_python_ast::str::QuoteStyle) -> Self {
match value {
ruff_python_ast::str::QuoteStyle::Double => Self::Double,
ruff_python_ast::str::QuoteStyle::Single => Self::Single,
}
}
}
#[derive(Debug, CacheKey)]
pub struct Settings {
pub inline_quotes: Quote,

View File

@@ -326,8 +326,10 @@ singles_escaped_unnecessary.py:43:26: Q004 [*] Unnecessary escape on inner quote
|
41 | # Make sure we do not unescape quotes
42 | this_is_fine = "This is an \\'escaped\\' quote"
43 | this_should_raise_Q004 = "This is an \\\'escaped\\\' quote with an extra backslash"
43 | this_should_raise_Q004 = "This is an \\\'escaped\\\' quote with an extra backslash" # Q004
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
44 |
45 | # Invalid escapes in bytestrings are also triggered:
|
= help: Remove backslash
@@ -335,7 +337,23 @@ singles_escaped_unnecessary.py:43:26: Q004 [*] Unnecessary escape on inner quote
40 40 |
41 41 | # Make sure we do not unescape quotes
42 42 | this_is_fine = "This is an \\'escaped\\' quote"
43 |-this_should_raise_Q004 = "This is an \\\'escaped\\\' quote with an extra backslash"
43 |+this_should_raise_Q004 = "This is an \\'escaped\\' quote with an extra backslash"
43 |-this_should_raise_Q004 = "This is an \\\'escaped\\\' quote with an extra backslash" # Q004
43 |+this_should_raise_Q004 = "This is an \\'escaped\\' quote with an extra backslash" # Q004
44 44 |
45 45 | # Invalid escapes in bytestrings are also triggered:
46 46 | x = b"\xe7\xeb\x0c\xa1\x1b\x83tN\xce=x\xe9\xbe\x01\xb9\x13B_\xba\xe7\x0c2\xce\'rm\x0e\xcd\xe9.\xf8\xd2" # Q004
singles_escaped_unnecessary.py:46:5: Q004 [*] Unnecessary escape on inner quote character
|
45 | # Invalid escapes in bytestrings are also triggered:
46 | x = b"\xe7\xeb\x0c\xa1\x1b\x83tN\xce=x\xe9\xbe\x01\xb9\x13B_\xba\xe7\x0c2\xce\'rm\x0e\xcd\xe9.\xf8\xd2" # Q004
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
|
= help: Remove backslash
Safe fix
43 43 | this_should_raise_Q004 = "This is an \\\'escaped\\\' quote with an extra backslash" # Q004
44 44 |
45 45 | # Invalid escapes in bytestrings are also triggered:
46 |-x = b"\xe7\xeb\x0c\xa1\x1b\x83tN\xce=x\xe9\xbe\x01\xb9\x13B_\xba\xe7\x0c2\xce\'rm\x0e\xcd\xe9.\xf8\xd2" # Q004
46 |+x = b"\xe7\xeb\x0c\xa1\x1b\x83tN\xce=x\xe9\xbe\x01\xb9\x13B_\xba\xe7\x0c2\xce'rm\x0e\xcd\xe9.\xf8\xd2" # Q004

View File

@@ -1,3 +1,4 @@
use ast::{StringLiteralFlags, StringLiteralPrefix};
use ruff_python_ast::{self as ast, Arguments, Expr};
use ruff_text_size::Ranged;
@@ -218,7 +219,13 @@ fn check_os_environ_subscript(checker: &mut Checker, expr: &Expr) {
);
let node = ast::StringLiteral {
value: capital_env_var.into_boxed_str(),
unicode: env_var.is_unicode(),
flags: StringLiteralFlags::default().with_prefix({
if env_var.is_unicode() {
StringLiteralPrefix::UString
} else {
StringLiteralPrefix::None
}
}),
..ast::StringLiteral::default()
};
let new_env_var = node.into();
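
A hedged sketch of the fix behaviour implied by the new `flags` handling: the suggested key is upper-cased, and a `u`-prefixed literal keeps its prefix (the `setdefault` line is only there so the snippet runs cleanly):

```python
import os

os.environ.setdefault("api_key", "")  # illustrative setup so the lookups below succeed
os.environ["api_key"]    # the fix would suggest os.environ["API_KEY"]
os.environ[u"api_key"]   # the fix would keep the prefix: os.environ[u"API_KEY"]
```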

View File

@@ -13,6 +13,12 @@ pub struct ApiBan {
pub msg: String,
}
impl Display for ApiBan {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.msg)
}
}
#[derive(Debug, Copy, Clone, PartialEq, Eq, Serialize, Deserialize, CacheKey, Default)]
#[serde(deny_unknown_fields, rename_all = "kebab-case")]
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
@@ -47,7 +53,7 @@ impl Display for Settings {
namespace = "linter.flake8_tidy_imports",
fields = [
self.ban_relative_imports,
self.banned_api | debug,
self.banned_api | map,
self.banned_module_level_imports | array,
]
}

View File

@@ -1,3 +1,4 @@
use ast::FStringFlags;
use itertools::Itertools;
use crate::fix::edits::pad;
@@ -97,6 +98,7 @@ fn build_fstring(joiner: &str, joinees: &[Expr]) -> Option<Expr> {
let node = ast::FString {
elements: f_string_elements,
range: TextRange::default(),
flags: FStringFlags::default(),
};
Some(node.into())
}

View File

@@ -278,7 +278,7 @@ mod tests {
use std::path::Path;
use anyhow::Result;
use rustc_hash::FxHashMap;
use rustc_hash::{FxHashMap, FxHashSet};
use test_case::test_case;
use ruff_text_size::Ranged;
@@ -495,7 +495,7 @@ mod tests {
Path::new("isort").join(path).as_path(),
&LinterSettings {
isort: super::settings::Settings {
force_to_top: BTreeSet::from([
force_to_top: FxHashSet::from_iter([
"z".to_string(),
"lib1".to_string(),
"lib3".to_string(),
@@ -575,9 +575,10 @@ mod tests {
&LinterSettings {
isort: super::settings::Settings {
force_single_line: true,
single_line_exclusions: vec!["os".to_string(), "logging.handlers".to_string()]
.into_iter()
.collect::<BTreeSet<_>>(),
single_line_exclusions: FxHashSet::from_iter([
"os".to_string(),
"logging.handlers".to_string(),
]),
..super::settings::Settings::default()
},
src: vec![test_resource_path("fixtures/isort")],
@@ -636,7 +637,7 @@ mod tests {
&LinterSettings {
isort: super::settings::Settings {
order_by_type: true,
classes: BTreeSet::from([
classes: FxHashSet::from_iter([
"SVC".to_string(),
"SELU".to_string(),
"N_CLASS".to_string(),
@@ -664,7 +665,7 @@ mod tests {
&LinterSettings {
isort: super::settings::Settings {
order_by_type: true,
constants: BTreeSet::from([
constants: FxHashSet::from_iter([
"Const".to_string(),
"constant".to_string(),
"First".to_string(),
@@ -694,7 +695,7 @@ mod tests {
&LinterSettings {
isort: super::settings::Settings {
order_by_type: true,
variables: BTreeSet::from([
variables: FxHashSet::from_iter([
"VAR".to_string(),
"Variable".to_string(),
"MyVar".to_string(),
@@ -721,7 +722,7 @@ mod tests {
&LinterSettings {
isort: super::settings::Settings {
force_sort_within_sections: true,
force_to_top: BTreeSet::from(["z".to_string()]),
force_to_top: FxHashSet::from_iter(["z".to_string()]),
..super::settings::Settings::default()
},
src: vec![test_resource_path("fixtures/isort")],
@@ -771,7 +772,7 @@ mod tests {
&LinterSettings {
src: vec![test_resource_path("fixtures/isort")],
isort: super::settings::Settings {
required_imports: BTreeSet::from([
required_imports: BTreeSet::from_iter([
"from __future__ import annotations".to_string()
]),
..super::settings::Settings::default()
@@ -801,7 +802,7 @@ mod tests {
&LinterSettings {
src: vec![test_resource_path("fixtures/isort")],
isort: super::settings::Settings {
required_imports: BTreeSet::from([
required_imports: BTreeSet::from_iter([
"from __future__ import annotations as _annotations".to_string(),
]),
..super::settings::Settings::default()
@@ -824,7 +825,7 @@ mod tests {
&LinterSettings {
src: vec![test_resource_path("fixtures/isort")],
isort: super::settings::Settings {
required_imports: BTreeSet::from([
required_imports: BTreeSet::from_iter([
"from __future__ import annotations".to_string(),
"from __future__ import generator_stop".to_string(),
]),
@@ -848,7 +849,7 @@ mod tests {
&LinterSettings {
src: vec![test_resource_path("fixtures/isort")],
isort: super::settings::Settings {
required_imports: BTreeSet::from(["from __future__ import annotations, \
required_imports: BTreeSet::from_iter(["from __future__ import annotations, \
generator_stop"
.to_string()]),
..super::settings::Settings::default()
@@ -871,7 +872,7 @@ mod tests {
&LinterSettings {
src: vec![test_resource_path("fixtures/isort")],
isort: super::settings::Settings {
required_imports: BTreeSet::from(["import os".to_string()]),
required_imports: BTreeSet::from_iter(["import os".to_string()]),
..super::settings::Settings::default()
},
..LinterSettings::for_rule(Rule::MissingRequiredImport)
@@ -1002,7 +1003,7 @@ mod tests {
Path::new("isort").join(path).as_path(),
&LinterSettings {
isort: super::settings::Settings {
no_lines_before: BTreeSet::from([
no_lines_before: FxHashSet::from_iter([
ImportSection::Known(ImportType::Future),
ImportSection::Known(ImportType::StandardLibrary),
ImportSection::Known(ImportType::ThirdParty),
@@ -1030,7 +1031,7 @@ mod tests {
Path::new("isort").join(path).as_path(),
&LinterSettings {
isort: super::settings::Settings {
no_lines_before: BTreeSet::from([
no_lines_before: FxHashSet::from_iter([
ImportSection::Known(ImportType::StandardLibrary),
ImportSection::Known(ImportType::LocalFolder),
]),

View File

@@ -5,12 +5,13 @@ use std::error::Error;
use std::fmt;
use std::fmt::{Display, Formatter};
use rustc_hash::FxHashSet;
use serde::{Deserialize, Serialize};
use strum::IntoEnumIterator;
use crate::display_settings;
use ruff_macros::CacheKey;
use crate::display_settings;
use crate::rules::isort::categorize::KnownModules;
use crate::rules::isort::ImportType;
@@ -52,17 +53,17 @@ pub struct Settings {
pub force_sort_within_sections: bool,
pub case_sensitive: bool,
pub force_wrap_aliases: bool,
pub force_to_top: BTreeSet<String>,
pub force_to_top: FxHashSet<String>,
pub known_modules: KnownModules,
pub detect_same_package: bool,
pub order_by_type: bool,
pub relative_imports_order: RelativeImportsOrder,
pub single_line_exclusions: BTreeSet<String>,
pub single_line_exclusions: FxHashSet<String>,
pub split_on_trailing_comma: bool,
pub classes: BTreeSet<String>,
pub constants: BTreeSet<String>,
pub variables: BTreeSet<String>,
pub no_lines_before: BTreeSet<ImportSection>,
pub classes: FxHashSet<String>,
pub constants: FxHashSet<String>,
pub variables: FxHashSet<String>,
pub no_lines_before: FxHashSet<ImportSection>,
pub lines_after_imports: isize,
pub lines_between_types: usize,
pub forced_separate: Vec<String>,
@@ -77,23 +78,23 @@ pub struct Settings {
impl Default for Settings {
fn default() -> Self {
Self {
required_imports: BTreeSet::new(),
required_imports: BTreeSet::default(),
combine_as_imports: false,
force_single_line: false,
force_sort_within_sections: false,
detect_same_package: true,
case_sensitive: false,
force_wrap_aliases: false,
force_to_top: BTreeSet::new(),
force_to_top: FxHashSet::default(),
known_modules: KnownModules::default(),
order_by_type: true,
relative_imports_order: RelativeImportsOrder::default(),
single_line_exclusions: BTreeSet::new(),
single_line_exclusions: FxHashSet::default(),
split_on_trailing_comma: true,
classes: BTreeSet::new(),
constants: BTreeSet::new(),
variables: BTreeSet::new(),
no_lines_before: BTreeSet::new(),
classes: FxHashSet::default(),
constants: FxHashSet::default(),
variables: FxHashSet::default(),
no_lines_before: FxHashSet::default(),
lines_after_imports: -1,
lines_between_types: 0,
forced_separate: Vec::new(),
@@ -113,23 +114,23 @@ impl Display for Settings {
formatter = f,
namespace = "linter.isort",
fields = [
self.required_imports | array,
self.required_imports | set,
self.combine_as_imports,
self.force_single_line,
self.force_sort_within_sections,
self.detect_same_package,
self.case_sensitive,
self.force_wrap_aliases,
self.force_to_top | array,
self.force_to_top | set,
self.known_modules,
self.order_by_type,
self.relative_imports_order,
self.single_line_exclusions | array,
self.single_line_exclusions | set,
self.split_on_trailing_comma,
self.classes | array,
self.constants | array,
self.variables | array,
self.no_lines_before | array,
self.classes | set,
self.constants | set,
self.variables | set,
self.no_lines_before | set,
self.lines_after_imports,
self.lines_between_types,
self.forced_separate | array,
@@ -155,7 +156,7 @@ pub enum SettingsError {
InvalidUserDefinedSection(glob::PatternError),
}
impl fmt::Display for SettingsError {
impl Display for SettingsError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
SettingsError::InvalidKnownThirdParty(err) => {

View File

@@ -71,6 +71,12 @@ mod tests {
#[test_case(Rule::IsLiteral, Path::new("constant_literals.py"))]
#[test_case(Rule::TypeComparison, Path::new("E721.py"))]
#[test_case(Rule::ModuleImportNotAtTopOfFile, Path::new("E402_2.py"))]
#[test_case(Rule::RedundantBackslash, Path::new("E502.py"))]
#[test_case(Rule::TooManyNewlinesAtEndOfFile, Path::new("W391_0.py"))]
#[test_case(Rule::TooManyNewlinesAtEndOfFile, Path::new("W391_1.py"))]
#[test_case(Rule::TooManyNewlinesAtEndOfFile, Path::new("W391_2.py"))]
#[test_case(Rule::TooManyNewlinesAtEndOfFile, Path::new("W391_3.py"))]
#[test_case(Rule::TooManyNewlinesAtEndOfFile, Path::new("W391_4.py"))]
fn preview_rules(rule_code: Rule, path: &Path) -> Result<()> {
let snapshot = format!(
"preview__{}_{}",
@@ -148,6 +154,23 @@ mod tests {
Ok(())
}
/// Tests the compatibility of E2 rules (E202, E225 and E275) on syntactically incorrect code.
#[test]
fn white_space_syntax_error_compatibility() -> Result<()> {
let diagnostics = test_path(
Path::new("pycodestyle").join("E2_syntax_error.py"),
&settings::LinterSettings {
..settings::LinterSettings::for_rules([
Rule::MissingWhitespaceAroundOperator,
Rule::MissingWhitespaceAfterKeyword,
Rule::WhitespaceBeforeCloseBracket,
])
},
)?;
assert_messages!(diagnostics);
Ok(())
}
#[test_case(Rule::BlankLineBetweenMethods, Path::new("E30.py"))]
#[test_case(Rule::BlankLinesTopLevel, Path::new("E30.py"))]
#[test_case(Rule::TooManyBlankLines, Path::new("E30.py"))]

View File

@@ -3,7 +3,7 @@ use memchr::memchr_iter;
use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_index::Indexer;
use ruff_python_parser::{StringKind, Tok};
use ruff_python_parser::Tok;
use ruff_source_file::Locator;
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
@@ -66,21 +66,21 @@ pub(crate) fn invalid_escape_sequence(
token: &Tok,
token_range: TextRange,
) {
let (token_source_code, string_start_location) = match token {
Tok::FStringMiddle { value, is_raw, .. } => {
if *is_raw {
let (token_source_code, string_start_location, kind) = match token {
Tok::FStringMiddle { value, kind } => {
if kind.is_raw_string() {
return;
}
let Some(range) = indexer.fstring_ranges().innermost(token_range.start()) else {
return;
};
(&**value, range.start())
(&**value, range.start(), kind)
}
Tok::String { kind, .. } => {
if kind.is_raw() {
if kind.is_raw_string() {
return;
}
(locator.slice(token_range), token_range.start())
(locator.slice(token_range), token_range.start(), kind)
}
_ => return,
};
@@ -207,13 +207,7 @@ pub(crate) fn invalid_escape_sequence(
invalid_escape_char.range(),
);
if matches!(
token,
Tok::String {
kind: StringKind::Unicode,
..
}
) {
if kind.is_u_string() {
// Replace the Unicode prefix with `r`.
diagnostic.set_fix(Fix::safe_edit(Edit::replacement(
"r".to_string(),

View File

@@ -9,6 +9,7 @@ use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
use crate::codes::Rule;
use crate::fix::snippet::SourceCodeSnippet;
#[derive(Debug, PartialEq, Eq, Copy, Clone)]
enum EqCmpOp {
@@ -102,39 +103,52 @@ impl AlwaysFixableViolation for NoneComparison {
///
/// [PEP 8]: https://peps.python.org/pep-0008/#programming-recommendations
#[violation]
pub struct TrueFalseComparison(bool, EqCmpOp);
pub struct TrueFalseComparison {
value: bool,
op: EqCmpOp,
cond: Option<SourceCodeSnippet>,
}
impl AlwaysFixableViolation for TrueFalseComparison {
#[derive_message_formats]
fn message(&self) -> String {
let TrueFalseComparison(value, op) = self;
let TrueFalseComparison { value, op, cond } = self;
let Some(cond) = cond else {
return "Avoid equality comparisons to `True` or `False`".to_string();
};
let cond = cond.truncated_display();
match (value, op) {
(true, EqCmpOp::Eq) => {
format!("Avoid equality comparisons to `True`; use `if cond:` for truth checks")
format!("Avoid equality comparisons to `True`; use `if {cond}:` for truth checks")
}
(true, EqCmpOp::NotEq) => {
format!(
"Avoid inequality comparisons to `True`; use `if not cond:` for false checks"
"Avoid inequality comparisons to `True`; use `if not {cond}:` for false checks"
)
}
(false, EqCmpOp::Eq) => {
format!(
"Avoid equality comparisons to `False`; use `if not cond:` for false checks"
"Avoid equality comparisons to `False`; use `if not {cond}:` for false checks"
)
}
(false, EqCmpOp::NotEq) => {
format!("Avoid inequality comparisons to `False`; use `if cond:` for truth checks")
format!(
"Avoid inequality comparisons to `False`; use `if {cond}:` for truth checks"
)
}
}
}
fn fix_title(&self) -> String {
let TrueFalseComparison(value, op) = self;
let TrueFalseComparison { value, op, cond } = self;
let Some(cond) = cond.as_ref().and_then(|cond| cond.full_display()) else {
return "Replace comparison".to_string();
};
match (value, op) {
(true, EqCmpOp::Eq) => "Replace with `cond`".to_string(),
(true, EqCmpOp::NotEq) => "Replace with `not cond`".to_string(),
(false, EqCmpOp::Eq) => "Replace with `not cond`".to_string(),
(false, EqCmpOp::NotEq) => "Replace with `cond`".to_string(),
(true, EqCmpOp::Eq) => format!("Replace with `{cond}`"),
(true, EqCmpOp::NotEq) => format!("Replace with `not {cond}`"),
(false, EqCmpOp::Eq) => format!("Replace with `not {cond}`"),
(false, EqCmpOp::NotEq) => format!("Replace with `{cond}`"),
}
}
}
@@ -178,17 +192,35 @@ pub(crate) fn literal_comparisons(checker: &mut Checker, compare: &ast::ExprComp
if let Expr::BooleanLiteral(ast::ExprBooleanLiteral { value, .. }) = comparator {
match op {
EqCmpOp::Eq => {
let cond = if compare.ops.len() == 1 {
Some(SourceCodeSnippet::from_str(checker.locator().slice(next)))
} else {
None
};
let diagnostic = Diagnostic::new(
TrueFalseComparison(*value, op),
comparator.range(),
TrueFalseComparison {
value: *value,
op,
cond,
},
compare.range(),
);
bad_ops.insert(0, CmpOp::Is);
diagnostics.push(diagnostic);
}
EqCmpOp::NotEq => {
let cond = if compare.ops.len() == 1 {
Some(SourceCodeSnippet::from_str(checker.locator().slice(next)))
} else {
None
};
let diagnostic = Diagnostic::new(
TrueFalseComparison(*value, op),
comparator.range(),
TrueFalseComparison {
value: *value,
op,
cond,
},
compare.range(),
);
bad_ops.insert(0, CmpOp::IsNot);
diagnostics.push(diagnostic);
@@ -231,14 +263,40 @@ pub(crate) fn literal_comparisons(checker: &mut Checker, compare: &ast::ExprComp
if let Expr::BooleanLiteral(ast::ExprBooleanLiteral { value, .. }) = next {
match op {
EqCmpOp::Eq => {
let diagnostic =
Diagnostic::new(TrueFalseComparison(*value, op), next.range());
let cond = if compare.ops.len() == 1 {
Some(SourceCodeSnippet::from_str(
checker.locator().slice(comparator),
))
} else {
None
};
let diagnostic = Diagnostic::new(
TrueFalseComparison {
value: *value,
op,
cond,
},
compare.range(),
);
bad_ops.insert(index, CmpOp::Is);
diagnostics.push(diagnostic);
}
EqCmpOp::NotEq => {
let diagnostic =
Diagnostic::new(TrueFalseComparison(*value, op), next.range());
let cond = if compare.ops.len() == 1 {
Some(SourceCodeSnippet::from_str(
checker.locator().slice(comparator),
))
} else {
None
};
let diagnostic = Diagnostic::new(
TrueFalseComparison {
value: *value,
op,
cond,
},
compare.range(),
);
bad_ops.insert(index, CmpOp::IsNot);
diagnostics.push(diagnostic);
}
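To make the revised messages concrete, a small hypothetical example of code that triggers E712; with the new `cond` field above, the diagnostic echoes the actual condition text instead of a generic placeholder:

```python
res = len("abc") > 2

# Flagged by E712; the message now reads
# "Avoid equality comparisons to `True`; use `if res:` for truth checks",
# and the fix title becomes "Replace with `res`".
if res == True:
    print("truthy")

# Preferred form suggested by the fix.
if res:
    print("truthy")
```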

View File

@@ -137,16 +137,16 @@ pub(crate) fn extraneous_whitespace(line: &LogicalLine, context: &mut LogicalLin
match kind {
TokenKind::FStringStart => fstrings += 1,
TokenKind::FStringEnd => fstrings = fstrings.saturating_sub(1),
TokenKind::Lsqb if fstrings == 0 => {
TokenKind::Lsqb => {
brackets.push(kind);
}
TokenKind::Rsqb if fstrings == 0 => {
TokenKind::Rsqb => {
brackets.pop();
}
TokenKind::Lbrace if fstrings == 0 => {
TokenKind::Lbrace => {
brackets.push(kind);
}
TokenKind::Rbrace if fstrings == 0 => {
TokenKind::Rbrace => {
brackets.pop();
}
_ => {}

View File

@@ -59,7 +59,13 @@ pub(crate) fn missing_whitespace_after_keyword(
|| tok0_kind == TokenKind::Yield && tok1_kind == TokenKind::Rpar
|| matches!(
tok1_kind,
TokenKind::Colon | TokenKind::Newline | TokenKind::NonLogicalNewline
TokenKind::Colon
| TokenKind::Newline
| TokenKind::NonLogicalNewline
// In the event of a syntax error, do not attempt to add a whitespace.
| TokenKind::Rpar
| TokenKind::Rsqb
| TokenKind::Rbrace
))
&& tok0.end() == tok1.start()
{

View File

@@ -211,6 +211,21 @@ pub(crate) fn missing_whitespace_around_operator(
} else {
NeedsSpace::No
}
} else if tokens.peek().is_some_and(|token| {
matches!(
token.kind(),
TokenKind::Rpar | TokenKind::Rsqb | TokenKind::Rbrace
)
}) {
// There should not be a closing bracket directly after an operator, as that is a syntax
// error. For example:
// ```
// 1+)
// ```
//
// However, allow it in order to prevent entering an infinite loop in which E225 adds a
// space only for E202 to remove it.
NeedsSpace::No
} else if is_whitespace_needed(kind) {
NeedsSpace::Yes
} else {

View File

@@ -3,6 +3,7 @@ pub(crate) use indentation::*;
pub(crate) use missing_whitespace::*;
pub(crate) use missing_whitespace_after_keyword::*;
pub(crate) use missing_whitespace_around_operator::*;
pub(crate) use redundant_backslash::*;
pub(crate) use space_around_operator::*;
pub(crate) use whitespace_around_keywords::*;
pub(crate) use whitespace_around_named_parameter_equals::*;
@@ -25,6 +26,7 @@ mod indentation;
mod missing_whitespace;
mod missing_whitespace_after_keyword;
mod missing_whitespace_around_operator;
mod redundant_backslash;
mod space_around_operator;
mod whitespace_around_keywords;
mod whitespace_around_named_parameter_equals;

View File

@@ -0,0 +1,92 @@
use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_index::Indexer;
use ruff_python_parser::TokenKind;
use ruff_source_file::Locator;
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::checkers::logical_lines::LogicalLinesContext;
use super::LogicalLine;
/// ## What it does
/// Checks for redundant backslashes between brackets.
///
/// ## Why is this bad?
/// Explicit line joins using a backslash are redundant between brackets.
///
/// ## Example
/// ```python
/// x = (2 + \
/// 2)
/// ```
///
/// Use instead:
/// ```python
/// x = (2 +
/// 2)
/// ```
///
/// [PEP 8]: https://peps.python.org/pep-0008/#maximum-line-length
#[violation]
pub struct RedundantBackslash;
impl AlwaysFixableViolation for RedundantBackslash {
#[derive_message_formats]
fn message(&self) -> String {
format!("Redundant backslash")
}
fn fix_title(&self) -> String {
"Remove redundant backslash".to_string()
}
}
/// E502
pub(crate) fn redundant_backslash(
line: &LogicalLine,
locator: &Locator,
indexer: &Indexer,
context: &mut LogicalLinesContext,
) {
let mut parens = 0;
let continuation_lines = indexer.continuation_line_starts();
let mut start_index = 0;
for token in line.tokens() {
match token.kind() {
TokenKind::Lpar | TokenKind::Lsqb | TokenKind::Lbrace => {
if parens == 0 {
let start = locator.line_start(token.start());
start_index = continuation_lines
.binary_search(&start)
.map_or_else(|err_index| err_index, |ok_index| ok_index);
}
parens += 1;
}
TokenKind::Rpar | TokenKind::Rsqb | TokenKind::Rbrace => {
parens -= 1;
if parens == 0 {
let end = locator.line_start(token.start());
let end_index = continuation_lines
.binary_search(&end)
.map_or_else(|err_index| err_index, |ok_index| ok_index);
for continuation_line in &continuation_lines[start_index..end_index] {
let backslash_end = locator.line_end(*continuation_line);
let backslash_start = backslash_end - TextSize::new(1);
let mut diagnostic = Diagnostic::new(
RedundantBackslash,
TextRange::new(backslash_start, backslash_end),
);
diagnostic.set_fix(Fix::safe_edit(Edit::deletion(
backslash_start,
backslash_end,
)));
context.push_diagnostic(diagnostic);
}
}
}
_ => continue,
}
}
}
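To illustrate what the rule above reports, a minimal hypothetical example: E502 only fires on backslashes that sit between brackets, where line joining is already implicit; the final continuation is outside any brackets and is therefore left alone:

```python
# Flagged by E502: inside parentheses the line join is implicit,
# so the trailing backslash is redundant.
total = (1 + \
         2)

# Not flagged: outside any brackets the backslash is the only way
# to continue the statement onto the next line.
total = 1 + \
        2

print(total)
```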

View File

@@ -42,7 +42,7 @@ pub(crate) fn no_newline_at_end_of_file(
) -> Option<Diagnostic> {
let source = locator.contents();
// Ignore empty and BOM only files
// Ignore empty and BOM only files.
if source.is_empty() || source == "\u{feff}" {
return None;
}

View File

@@ -17,6 +17,7 @@ pub(crate) use module_import_not_at_top_of_file::*;
pub(crate) use multiple_imports_on_one_line::*;
pub(crate) use not_tests::*;
pub(crate) use tab_indentation::*;
pub(crate) use too_many_newlines_at_end_of_file::*;
pub(crate) use trailing_whitespace::*;
pub(crate) use type_comparison::*;
@@ -39,5 +40,6 @@ mod module_import_not_at_top_of_file;
mod multiple_imports_on_one_line;
mod not_tests;
mod tab_indentation;
mod too_many_newlines_at_end_of_file;
mod trailing_whitespace;
mod type_comparison;

View File

@@ -0,0 +1,99 @@
use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_parser::lexer::LexResult;
use ruff_python_parser::Tok;
use ruff_text_size::{TextRange, TextSize};
/// ## What it does
/// Checks for files with multiple trailing blank lines.
///
/// ## Why is this bad?
/// Trailing blank lines in a file are superfluous.
///
/// However, the last line of the file should end with a newline.
///
/// ## Example
/// ```python
/// spam(1)\n\n\n
/// ```
///
/// Use instead:
/// ```python
/// spam(1)\n
/// ```
#[violation]
pub struct TooManyNewlinesAtEndOfFile {
num_trailing_newlines: u32,
}
impl AlwaysFixableViolation for TooManyNewlinesAtEndOfFile {
#[derive_message_formats]
fn message(&self) -> String {
let TooManyNewlinesAtEndOfFile {
num_trailing_newlines,
} = self;
// We expect a single trailing newline; so two trailing newlines is one too many, three
// trailing newlines is two too many, etc.
if *num_trailing_newlines > 2 {
format!("Too many newlines at end of file")
} else {
format!("Extra newline at end of file")
}
}
fn fix_title(&self) -> String {
let TooManyNewlinesAtEndOfFile {
num_trailing_newlines,
} = self;
if *num_trailing_newlines > 2 {
"Remove trailing newlines".to_string()
} else {
"Remove trailing newline".to_string()
}
}
}
/// W391
pub(crate) fn too_many_newlines_at_end_of_file(
diagnostics: &mut Vec<Diagnostic>,
lxr: &[LexResult],
) {
let mut num_trailing_newlines = 0u32;
let mut start: Option<TextSize> = None;
let mut end: Option<TextSize> = None;
// Count the number of trailing newlines.
for (tok, range) in lxr.iter().rev().flatten() {
match tok {
Tok::NonLogicalNewline | Tok::Newline => {
if num_trailing_newlines == 0 {
end = Some(range.end());
}
start = Some(range.end());
num_trailing_newlines += 1;
}
Tok::Dedent => continue,
_ => {
break;
}
}
}
if num_trailing_newlines == 0 || num_trailing_newlines == 1 {
return;
}
let range = match (start, end) {
(Some(start), Some(end)) => TextRange::new(start, end),
_ => return,
};
let mut diagnostic = Diagnostic::new(
TooManyNewlinesAtEndOfFile {
num_trailing_newlines,
},
range,
);
diagnostic.set_fix(Fix::safe_edit(Edit::range_deletion(range)));
diagnostics.push(diagnostic);
}
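As a rough, text-based sketch of the counting logic above (the real implementation walks lexer tokens in reverse rather than raw characters, and this simplification only handles `\n` line endings):

```python
def extra_trailing_newline_range(source: str) -> tuple[int, int] | None:
    """Return the (start, end) offsets of surplus trailing newlines, if any.

    A single trailing newline is expected, so the range keeps the first
    trailing newline and covers everything after it.
    """
    stripped = source.rstrip("\n")
    num_trailing_newlines = len(source) - len(stripped)
    if num_trailing_newlines <= 1:
        return None
    # Keep one newline; delete the rest.
    return len(stripped) + 1, len(source)


print(extra_trailing_newline_range("spam(1)\n\n\n"))  # (8, 10)
```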

View File

@@ -251,6 +251,8 @@ E20.py:155:14: E203 [*] Whitespace before ':'
154 | #: E203:1:13
155 | ham[lower + 1 :, "columnname"]
| ^^ E203
156 |
157 | #: Okay
|
= help: Remove whitespace before ':'
@@ -260,3 +262,21 @@ E20.py:155:14: E203 [*] Whitespace before ':'
154 154 | #: E203:1:13
155 |-ham[lower + 1 :, "columnname"]
155 |+ham[lower + 1:, "columnname"]
156 156 |
157 157 | #: Okay
158 158 | f"{ham[lower +1 :, "columnname"]}"
E20.py:161:17: E203 [*] Whitespace before ':'
|
160 | #: E203:1:13
161 | f"{ham[lower + 1 :, "columnname"]}"
| ^^ E203
|
= help: Remove whitespace before ':'
Safe fix
158 158 | f"{ham[lower +1 :, "columnname"]}"
159 159 |
160 160 | #: E203:1:13
161 |-f"{ham[lower + 1 :, "columnname"]}"
161 |+f"{ham[lower + 1:, "columnname"]}"

View File

@@ -1,15 +1,15 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
---
E712.py:2:11: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for truth checks
E712.py:2:4: E712 [*] Avoid equality comparisons to `True`; use `if res:` for truth checks
|
1 | #: E712
2 | if res == True:
| ^^^^ E712
| ^^^^^^^^^^^ E712
3 | pass
4 | #: E712
|
= help: Replace with `cond`
= help: Replace with `res`
Unsafe fix
1 1 | #: E712
@@ -19,16 +19,16 @@ E712.py:2:11: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for
4 4 | #: E712
5 5 | if res != False:
E712.py:5:11: E712 [*] Avoid inequality comparisons to `False`; use `if cond:` for truth checks
E712.py:5:4: E712 [*] Avoid inequality comparisons to `False`; use `if res:` for truth checks
|
3 | pass
4 | #: E712
5 | if res != False:
| ^^^^^ E712
| ^^^^^^^^^^^^ E712
6 | pass
7 | #: E712
|
= help: Replace with `cond`
= help: Replace with `res`
Unsafe fix
2 2 | if res == True:
@@ -40,16 +40,16 @@ E712.py:5:11: E712 [*] Avoid inequality comparisons to `False`; use `if cond:` f
7 7 | #: E712
8 8 | if True != res:
E712.py:8:4: E712 [*] Avoid inequality comparisons to `True`; use `if not cond:` for false checks
E712.py:8:4: E712 [*] Avoid inequality comparisons to `True`; use `if not res:` for false checks
|
6 | pass
7 | #: E712
8 | if True != res:
| ^^^^ E712
| ^^^^^^^^^^^ E712
9 | pass
10 | #: E712
|
= help: Replace with `not cond`
= help: Replace with `not res`
Unsafe fix
5 5 | if res != False:
@@ -61,16 +61,16 @@ E712.py:8:4: E712 [*] Avoid inequality comparisons to `True`; use `if not cond:`
10 10 | #: E712
11 11 | if False == res:
E712.py:11:4: E712 [*] Avoid equality comparisons to `False`; use `if not cond:` for false checks
E712.py:11:4: E712 [*] Avoid equality comparisons to `False`; use `if not res:` for false checks
|
9 | pass
10 | #: E712
11 | if False == res:
| ^^^^^ E712
| ^^^^^^^^^^^^ E712
12 | pass
13 | #: E712
|
= help: Replace with `not cond`
= help: Replace with `not res`
Unsafe fix
8 8 | if True != res:
@@ -82,16 +82,16 @@ E712.py:11:4: E712 [*] Avoid equality comparisons to `False`; use `if not cond:`
13 13 | #: E712
14 14 | if res[1] == True:
E712.py:14:14: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for truth checks
E712.py:14:4: E712 [*] Avoid equality comparisons to `True`; use `if res[1]:` for truth checks
|
12 | pass
13 | #: E712
14 | if res[1] == True:
| ^^^^ E712
| ^^^^^^^^^^^^^^ E712
15 | pass
16 | #: E712
|
= help: Replace with `cond`
= help: Replace with `res[1]`
Unsafe fix
11 11 | if False == res:
@@ -103,16 +103,16 @@ E712.py:14:14: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for
16 16 | #: E712
17 17 | if res[1] != False:
E712.py:17:14: E712 [*] Avoid inequality comparisons to `False`; use `if cond:` for truth checks
E712.py:17:4: E712 [*] Avoid inequality comparisons to `False`; use `if res[1]:` for truth checks
|
15 | pass
16 | #: E712
17 | if res[1] != False:
| ^^^^^ E712
| ^^^^^^^^^^^^^^^ E712
18 | pass
19 | #: E712
|
= help: Replace with `cond`
= help: Replace with `res[1]`
Unsafe fix
14 14 | if res[1] == True:
@@ -124,12 +124,12 @@ E712.py:17:14: E712 [*] Avoid inequality comparisons to `False`; use `if cond:`
19 19 | #: E712
20 20 | var = 1 if cond == True else -1 if cond == False else cond
E712.py:20:20: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for truth checks
E712.py:20:12: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for truth checks
|
18 | pass
19 | #: E712
20 | var = 1 if cond == True else -1 if cond == False else cond
| ^^^^ E712
| ^^^^^^^^^^^^ E712
21 | #: E712
22 | if (True) == TrueElement or x == TrueElement:
|
@@ -145,12 +145,12 @@ E712.py:20:20: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for
22 22 | if (True) == TrueElement or x == TrueElement:
23 23 | pass
E712.py:20:44: E712 [*] Avoid equality comparisons to `False`; use `if not cond:` for false checks
E712.py:20:36: E712 [*] Avoid equality comparisons to `False`; use `if not cond:` for false checks
|
18 | pass
19 | #: E712
20 | var = 1 if cond == True else -1 if cond == False else cond
| ^^^^^ E712
| ^^^^^^^^^^^^^ E712
21 | #: E712
22 | if (True) == TrueElement or x == TrueElement:
|
@@ -166,15 +166,15 @@ E712.py:20:44: E712 [*] Avoid equality comparisons to `False`; use `if not cond:
22 22 | if (True) == TrueElement or x == TrueElement:
23 23 | pass
E712.py:22:5: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for truth checks
E712.py:22:4: E712 [*] Avoid equality comparisons to `True`; use `if TrueElement:` for truth checks
|
20 | var = 1 if cond == True else -1 if cond == False else cond
21 | #: E712
22 | if (True) == TrueElement or x == TrueElement:
| ^^^^ E712
| ^^^^^^^^^^^^^^^^^^^^^ E712
23 | pass
|
= help: Replace with `cond`
= help: Replace with `TrueElement`
Unsafe fix
19 19 | #: E712
@@ -186,15 +186,15 @@ E712.py:22:5: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for
24 24 |
25 25 | if res == True != False:
E712.py:25:11: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for truth checks
E712.py:25:4: E712 [*] Avoid equality comparisons to `True` or `False`
|
23 | pass
24 |
25 | if res == True != False:
| ^^^^ E712
| ^^^^^^^^^^^^^^^^^^^^ E712
26 | pass
|
= help: Replace with `cond`
= help: Replace comparison
Unsafe fix
22 22 | if (True) == TrueElement or x == TrueElement:
@@ -206,15 +206,15 @@ E712.py:25:11: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for
27 27 |
28 28 | if(True) == TrueElement or x == TrueElement:
E712.py:25:19: E712 [*] Avoid inequality comparisons to `False`; use `if cond:` for truth checks
E712.py:25:4: E712 [*] Avoid equality comparisons to `True` or `False`
|
23 | pass
24 |
25 | if res == True != False:
| ^^^^^ E712
| ^^^^^^^^^^^^^^^^^^^^ E712
26 | pass
|
= help: Replace with `cond`
= help: Replace comparison
Unsafe fix
22 22 | if (True) == TrueElement or x == TrueElement:
@@ -226,15 +226,15 @@ E712.py:25:19: E712 [*] Avoid inequality comparisons to `False`; use `if cond:`
27 27 |
28 28 | if(True) == TrueElement or x == TrueElement:
E712.py:28:4: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for truth checks
E712.py:28:3: E712 [*] Avoid equality comparisons to `True`; use `if TrueElement:` for truth checks
|
26 | pass
27 |
28 | if(True) == TrueElement or x == TrueElement:
| ^^^^ E712
| ^^^^^^^^^^^^^^^^^^^^^ E712
29 | pass
|
= help: Replace with `cond`
= help: Replace with `TrueElement`
Unsafe fix
25 25 | if res == True != False:
@@ -246,15 +246,15 @@ E712.py:28:4: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for
30 30 |
31 31 | if (yield i) == True:
E712.py:31:17: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for truth checks
E712.py:31:4: E712 [*] Avoid equality comparisons to `True`; use `if yield i:` for truth checks
|
29 | pass
30 |
31 | if (yield i) == True:
| ^^^^ E712
| ^^^^^^^^^^^^^^^^^ E712
32 | print("even")
|
= help: Replace with `cond`
= help: Replace with `yield i`
Unsafe fix
28 28 | if(True) == TrueElement or x == TrueElement:
@@ -265,5 +265,3 @@ E712.py:31:17: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for
32 32 | print("even")
33 33 |
34 34 | #: Okay

View File

@@ -106,16 +106,16 @@ constant_literals.py:12:4: F632 [*] Use `==` to compare constant literals
14 14 | if False == None: # E711, E712 (fix)
15 15 | pass
constant_literals.py:14:4: E712 [*] Avoid equality comparisons to `False`; use `if not cond:` for false checks
constant_literals.py:14:4: E712 [*] Avoid equality comparisons to `False`; use `if not None:` for false checks
|
12 | if False is "abc": # F632 (fix, but leaves behind unfixable E712)
13 | pass
14 | if False == None: # E711, E712 (fix)
| ^^^^^ E712
| ^^^^^^^^^^^^^ E712
15 | pass
16 | if None == False: # E711, E712 (fix)
|
= help: Replace with `not cond`
= help: Replace with `not None`
Unsafe fix
11 11 | pass
@@ -168,15 +168,15 @@ constant_literals.py:16:4: E711 [*] Comparison to `None` should be `cond is None
18 18 |
19 19 | named_var = []
constant_literals.py:16:12: E712 [*] Avoid equality comparisons to `False`; use `if not cond:` for false checks
constant_literals.py:16:4: E712 [*] Avoid equality comparisons to `False`; use `if not None:` for false checks
|
14 | if False == None: # E711, E712 (fix)
15 | pass
16 | if None == False: # E711, E712 (fix)
| ^^^^^ E712
| ^^^^^^^^^^^^^ E712
17 | pass
|
= help: Replace with `not cond`
= help: Replace with `not None`
Unsafe fix
13 13 | pass
@@ -187,5 +187,3 @@ constant_literals.py:16:12: E712 [*] Avoid equality comparisons to `False`; use
17 17 | pass
18 18 |
19 19 | named_var = []

View File

@@ -0,0 +1,281 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
---
E502.py:9:9: E502 [*] Redundant backslash
|
7 | + 4
8 |
9 | a = (3 -\
| ^ E502
10 | 2 + \
11 | 7)
|
= help: Remove redundant backslash
Safe fix
6 6 | 3 \
7 7 | + 4
8 8 |
9 |-a = (3 -\
9 |+a = (3 -
10 10 | 2 + \
11 11 | 7)
12 12 |
E502.py:10:11: E502 [*] Redundant backslash
|
9 | a = (3 -\
10 | 2 + \
| ^ E502
11 | 7)
|
= help: Remove redundant backslash
Safe fix
7 7 | + 4
8 8 |
9 9 | a = (3 -\
10 |- 2 + \
10 |+ 2 +
11 11 | 7)
12 12 |
13 13 | z = 5 + \
E502.py:14:9: E502 [*] Redundant backslash
|
13 | z = 5 + \
14 | (3 -\
| ^ E502
15 | 2 + \
16 | 7) + \
|
= help: Remove redundant backslash
Safe fix
11 11 | 7)
12 12 |
13 13 | z = 5 + \
14 |- (3 -\
14 |+ (3 -
15 15 | 2 + \
16 16 | 7) + \
17 17 | 4
E502.py:15:11: E502 [*] Redundant backslash
|
13 | z = 5 + \
14 | (3 -\
15 | 2 + \
| ^ E502
16 | 7) + \
17 | 4
|
= help: Remove redundant backslash
Safe fix
12 12 |
13 13 | z = 5 + \
14 14 | (3 -\
15 |- 2 + \
15 |+ 2 +
16 16 | 7) + \
17 17 | 4
18 18 |
E502.py:23:17: E502 [*] Redundant backslash
|
22 | b = [
23 | 2 + 4 + 5 + \
| ^ E502
24 | 44 \
25 | - 5
|
= help: Remove redundant backslash
Safe fix
20 20 | 2]
21 21 |
22 22 | b = [
23 |- 2 + 4 + 5 + \
23 |+ 2 + 4 + 5 +
24 24 | 44 \
25 25 | - 5
26 26 | ]
E502.py:24:8: E502 [*] Redundant backslash
|
22 | b = [
23 | 2 + 4 + 5 + \
24 | 44 \
| ^ E502
25 | - 5
26 | ]
|
= help: Remove redundant backslash
Safe fix
21 21 |
22 22 | b = [
23 23 | 2 + 4 + 5 + \
24 |- 44 \
24 |+ 44
25 25 | - 5
26 26 | ]
27 27 |
E502.py:29:11: E502 [*] Redundant backslash
|
28 | c = (True and
29 | False \
| ^ E502
30 | or False \
31 | and True \
|
= help: Remove redundant backslash
Safe fix
26 26 | ]
27 27 |
28 28 | c = (True and
29 |- False \
29 |+ False
30 30 | or False \
31 31 | and True \
32 32 | )
E502.py:30:14: E502 [*] Redundant backslash
|
28 | c = (True and
29 | False \
30 | or False \
| ^ E502
31 | and True \
32 | )
|
= help: Remove redundant backslash
Safe fix
27 27 |
28 28 | c = (True and
29 29 | False \
30 |- or False \
30 |+ or False
31 31 | and True \
32 32 | )
33 33 |
E502.py:31:14: E502 [*] Redundant backslash
|
29 | False \
30 | or False \
31 | and True \
| ^ E502
32 | )
|
= help: Remove redundant backslash
Safe fix
28 28 | c = (True and
29 29 | False \
30 30 | or False \
31 |- and True \
31 |+ and True
32 32 | )
33 33 |
34 34 | c = (True and
E502.py:44:14: E502 [*] Redundant backslash
|
43 | s = {
44 | 'x': 2 + \
| ^ E502
45 | 2
46 | }
|
= help: Remove redundant backslash
Safe fix
41 41 |
42 42 |
43 43 | s = {
44 |- 'x': 2 + \
44 |+ 'x': 2 +
45 45 | 2
46 46 | }
47 47 |
E502.py:55:12: E502 [*] Redundant backslash
|
55 | x = {2 + 4 \
| ^ E502
56 | + 3}
|
= help: Remove redundant backslash
Safe fix
52 52 | }
53 53 |
54 54 |
55 |-x = {2 + 4 \
55 |+x = {2 + 4
56 56 | + 3}
57 57 |
58 58 | y = (
E502.py:61:9: E502 [*] Redundant backslash
|
59 | 2 + 2 # \
60 | + 3 # \
61 | + 4 \
| ^ E502
62 | + 3
63 | )
|
= help: Remove redundant backslash
Safe fix
58 58 | y = (
59 59 | 2 + 2 # \
60 60 | + 3 # \
61 |- + 4 \
61 |+ + 4
62 62 | + 3
63 63 | )
64 64 |
E502.py:82:12: E502 [*] Redundant backslash
|
80 | "xyz"
81 |
82 | x = ("abc" \
| ^ E502
83 | "xyz")
|
= help: Remove redundant backslash
Safe fix
79 79 | x = "abc" \
80 80 | "xyz"
81 81 |
82 |-x = ("abc" \
82 |+x = ("abc"
83 83 | "xyz")
84 84 |
85 85 |
E502.py:87:14: E502 [*] Redundant backslash
|
86 | def foo():
87 | x = (a + \
| ^ E502
88 | 2)
|
= help: Remove redundant backslash
Safe fix
84 84 |
85 85 |
86 86 | def foo():
87 |- x = (a + \
87 |+ x = (a +
88 88 | 2)

View File

@@ -0,0 +1,17 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
---
W391_0.py:14:1: W391 [*] Extra newline at end of file
|
12 | foo()
13 | bar()
14 |
| ^ W391
|
= help: Remove trailing newline
Safe fix
11 11 | if __name__ == '__main__':
12 12 | foo()
13 13 | bar()
14 |-

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
---

View File

@@ -0,0 +1,22 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
---
W391_2.py:14:1: W391 [*] Too many newlines at end of file
|
12 | foo()
13 | bar()
14 | /
15 | |
16 | |
17 | |
|
= help: Remove trailing newlines
Safe fix
11 11 | if __name__ == '__main__':
12 12 | foo()
13 13 | bar()
14 |-
15 |-
16 |-
17 |-

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
---

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
---

View File

@@ -0,0 +1,4 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
---

View File

@@ -97,8 +97,8 @@ impl fmt::Display for Settings {
namespace = "linter.pydocstyle",
fields = [
self.convention | optional,
self.ignore_decorators | debug,
self.property_decorators | debug
self.ignore_decorators | set,
self.property_decorators | set
]
}
Ok(())

View File

@@ -55,6 +55,7 @@ mod tests {
#[test_case(Rule::UnusedImport, Path::new("F401_20.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_21.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_22.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_23.py"))]
#[test_case(Rule::ImportShadowedByLoopVar, Path::new("F402.py"))]
#[test_case(Rule::ImportShadowedByLoopVar, Path::new("F402.ipynb"))]
#[test_case(Rule::UndefinedLocalWithImportStar, Path::new("F403.py"))]
@@ -129,12 +130,14 @@ mod tests {
#[test_case(Rule::UndefinedName, Path::new("F821_3.py"))]
#[test_case(Rule::UndefinedName, Path::new("F821_4.py"))]
#[test_case(Rule::UndefinedName, Path::new("F821_5.py"))]
#[test_case(Rule::UndefinedName, Path::new("F821_5.pyi"))]
#[test_case(Rule::UndefinedName, Path::new("F821_6.py"))]
#[test_case(Rule::UndefinedName, Path::new("F821_7.py"))]
#[test_case(Rule::UndefinedName, Path::new("F821_8.pyi"))]
#[test_case(Rule::UndefinedName, Path::new("F821_9.py"))]
#[test_case(Rule::UndefinedName, Path::new("F821_10.py"))]
#[test_case(Rule::UndefinedName, Path::new("F821_11.py"))]
#[test_case(Rule::UndefinedName, Path::new("F821_11.pyi"))]
#[test_case(Rule::UndefinedName, Path::new("F821_12.py"))]
#[test_case(Rule::UndefinedName, Path::new("F821_13.py"))]
#[test_case(Rule::UndefinedName, Path::new("F821_14.py"))]
@@ -149,7 +152,11 @@ mod tests {
#[test_case(Rule::UndefinedName, Path::new("F821_23.py"))]
#[test_case(Rule::UndefinedName, Path::new("F821_24.py"))]
#[test_case(Rule::UndefinedName, Path::new("F821_25.py"))]
#[test_case(Rule::UndefinedName, Path::new("F821_26.py"))]
#[test_case(Rule::UndefinedName, Path::new("F821_26.pyi"))]
#[test_case(Rule::UndefinedName, Path::new("F821_27.py"))]
#[test_case(Rule::UndefinedExport, Path::new("F822_0.py"))]
#[test_case(Rule::UndefinedExport, Path::new("F822_0.pyi"))]
#[test_case(Rule::UndefinedExport, Path::new("F822_1.py"))]
#[test_case(Rule::UndefinedExport, Path::new("F822_2.py"))]
#[test_case(Rule::UndefinedLocal, Path::new("F823.py"))]
@@ -205,7 +212,11 @@ mod tests {
fn init() -> Result<()> {
let diagnostics = test_path(
Path::new("pyflakes/__init__.py"),
&LinterSettings::for_rules(vec![Rule::UndefinedName, Rule::UndefinedExport]),
&LinterSettings::for_rules(vec![
Rule::UndefinedName,
Rule::UndefinedExport,
Rule::UnusedImport,
]),
)?;
assert_messages!(diagnostics);
Ok(())

Some files were not shown because too many files have changed in this diff.