Compare commits

...

38 Commits

Author SHA1 Message Date
Charlie Marsh
7fcf4c067c Remove string from comment 2024-01-06 16:19:48 -05:00
Charlie Marsh
f6841757eb Use comment_ranges for isort directive extraction (#9414)
## Summary

No need to iterate over the token stream to find comments -- we already
know where they are.
2024-01-06 16:05:13 -05:00
Charlie Marsh
1666c7a5cb Add size hints to string parser (#9413) 2024-01-06 15:59:34 -05:00
Charlie Marsh
e80b3db10d Remove duplicated NameFinder struct (#9412) 2024-01-06 20:47:28 +00:00
Charlie Marsh
701697c37e Support variable keys in static dictionary key rule (#9411)
Closes https://github.com/astral-sh/ruff/issues/9410.
2024-01-06 20:44:40 +00:00
Charlie Marsh
c2c9997682 Use DisplayParseError for stdin parser errors (#9409)
Just looks like an oversight in refactoring.
2024-01-06 15:28:12 +00:00
Charlie Marsh
cee09765ef Use transformed source code for diagnostic locations (#9408)
## Summary

After we apply fixes, the source code might be transformed. And yet,
we're using the _unmodified_ source code to compute locations in some
cases (e.g., for displaying parse errors, or Jupyter Notebook cells).
This can lead to subtle errors in reporting, or even panics. This PR
modifies the linter to use the _transformed_ source code for such
computations.

Closes https://github.com/astral-sh/ruff/issues/9407.
2024-01-06 10:22:34 -05:00
Alex Waygood
cde4a7d7bf [flake8-pyi] Fix false negative for PYI046 with unused generic protocols (#9405)
I just fixed this false negative in flake8-pyi
(https://github.com/PyCQA/flake8-pyi/pull/460), and then realised ruff
has the exact same bug! Luckily it's a very easy fix.

(The bug is that unused protocols go undetected if they're generic.)
2024-01-05 12:56:04 -06:00
Charlie Marsh
62eca330a8 Remove an unwrap from unnecessary_literal_union.rs (#9404) 2024-01-05 13:19:37 -05:00
Mikael Arguedas
59078c5403 homogenize PLR0914 message to match other PLR 09XX rules and pylint message (#9399) 2024-01-05 07:25:26 -05:00
Jack McIvor
6bf6521197 Fix minor typos (#9402) 2024-01-05 07:24:59 -05:00
qdegraaf
c11f65381f [flake8-bandit] Implement S503 SslWithBadDefaults rule (#9391)
## Summary

Adds S503 rule for the
[flake8-bandit](https://github.com/tylerwince/flake8-bandit) plugin
port.

Checks for function defs argument defaults which have an insecure
ssl_version value. See also
https://bandit.readthedocs.io/en/latest/_modules/bandit/plugins/insecure_ssl_tls.html#ssl_with_bad_defaults

Some logic and the `const` can be shared with
https://github.com/astral-sh/ruff/pull/9390 once one of the two is
merged.

## Test Plan

Fixture added

## Issue Link

Refers: https://github.com/astral-sh/ruff/issues/1646
2024-01-05 01:38:41 +00:00
qdegraaf
6dfc1ccd6f [flake8-bandit] Implement S502 SslInsecureVersion rule (#9390)
## Summary

Adds S502 rule for the
[flake8-bandit](https://github.com/tylerwince/flake8-bandit) plugin
port.

Checks for calls to any function with the keyword arguments `ssl_version`
or `method`, as well as the `method` kwarg in calls to `OpenSSL.SSL.Context`
and the `ssl_version` kwarg in calls to `ssl.wrap_socket`, where the value
is an insecure ssl_version. See also
https://bandit.readthedocs.io/en/latest/_modules/bandit/plugins/insecure_ssl_tls.html#ssl_with_bad_version

## Test Plan

Fixture added

## Issue Link

Refers: https://github.com/astral-sh/ruff/issues/1646
2024-01-05 01:27:41 +00:00
Charlie Marsh
60ba7a7c0d Allow # fmt: skip with interspersed same-line comments (#9395)
## Summary

This is similar to https://github.com/astral-sh/ruff/pull/8876, but more
limited in scope:

1. It only applies to `# fmt: skip` (like Black). Like `# isort: on`, `#
fmt: on` needs to be on its own line (still).
2. It only delimits on `#`, so you can do `# fmt: skip # noqa`, but not
`# fmt: skip - some other content` or `# fmt: skip; noqa`.

If we want to support the `;`-delimited version, we should revisit
later, since we don't support that in the linter (so `# fmt: skip; noqa`
wouldn't register a `noqa`).

Closes https://github.com/astral-sh/ruff/issues/8892.
2024-01-04 19:39:37 -05:00
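The delimiting behavior described in point 2 can be sketched as a toy predicate (an illustration of the rule as stated above, not the formatter's actual parser):

```python
def has_fmt_skip(comment: str) -> bool:
    # Delimit only on '#': the directive counts when some
    # '#'-separated segment is exactly "fmt: skip".
    return any(part.strip() == "fmt: skip" for part in comment.split("#"))

print(has_fmt_skip("# fmt: skip # noqa"))                # True
print(has_fmt_skip("# fmt: skip; noqa"))                 # False
print(has_fmt_skip("# fmt: skip - some other content"))  # False
```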
Zanie Blue
4b8b3a1ced Add jupyter notebooks to ecosystem checks (#9293)
Implements https://github.com/astral-sh/ruff/pull/8873 via
https://github.com/astral-sh/ruff/pull/9286
2024-01-04 15:38:42 -06:00
Zanie Blue
967b2dcaf4 Fix ibis ecosystem branch (#9392) 2024-01-04 14:18:08 -05:00
Zanie Blue
aaa00976ae Generate deterministic ids when formatting notebooks (#9359)
When formatting notebooks, we populate the `id` field for cells that do
not have one. Previously, we generated a UUID v4 which resulted in
non-deterministic formatting. Here, we generate the UUID from a seeded
random number generator instead of using true randomness. For example,
here are the first five ids it would generate:

```
7fb27b94-1602-401d-9154-2211134fc71a
acae54e3-7e7d-407b-bb7b-55eff062a284
9a63283c-baf0-4dbc-ab1f-6479b197f3a8
8dd0d809-2fe7-4a7c-9628-1538738b07e2
72eea511-9410-473a-a328-ad9291626812
```

We also add a check that an id is not present in another cell to prevent
accidental introduction of duplicate ids.

The specification is lax, and we could just use incrementing integers
e.g. `0`, `1`, ... but I have a minor preference for retaining the UUID
format. Some discussion
[here](https://github.com/astral-sh/ruff/pull/9359#discussion_r1439607121)
— I'm happy to go either way though.

Discovered via #9293
2024-01-04 09:19:00 -06:00
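In Python terms, deriving UUIDs from a seeded PRNG looks roughly like this (a sketch of the idea only; ruff's implementation is in Rust, and its RNG produces the different ids shown above):

```python
import random
import uuid

def seeded_cell_ids(n: int, seed: int = 0) -> list[str]:
    # Draw 128 bits per id from a seeded PRNG, then shape them as a
    # version-4 UUID, so repeated formatting runs yield identical ids.
    rng = random.Random(seed)
    return [str(uuid.UUID(int=rng.getrandbits(128), version=4)) for _ in range(n)]

assert seeded_cell_ids(5) == seeded_cell_ids(5)  # deterministic across runs
assert len(set(seeded_cell_ids(5))) == 5         # distinct within a run
```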
Charlie Marsh
328262bfac Add cell indexes to all diagnostics (#9387)
## Summary

Ensures that any lint rules that include line locations render them as
relative to the cell (and include the cell number) when inside a Jupyter
notebook.

Closes https://github.com/astral-sh/ruff/issues/6672.

## Test Plan

`cargo test`
2024-01-04 14:02:23 +00:00
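Translating a line number in the concatenated notebook source into a cell-relative location can be sketched like so (illustrative only; the function name and shape are made up, not ruff's internals):

```python
def to_cell_location(line: int, cell_line_counts: list[int]) -> tuple[int, int]:
    # Walk the cells, subtracting each cell's line count until the
    # target line falls inside the current cell (all numbers 1-based).
    for cell_number, count in enumerate(cell_line_counts, start=1):
        if line <= count:
            return cell_number, line
        line -= count
    raise ValueError("line is past the end of the notebook")

print(to_cell_location(5, [3, 4]))  # (2, 2): line 5 overall is line 2 of cell 2
```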
Charlie Marsh
f0d43dafcf Ignore trailing quotes for unclosed l-brace errors (#9388)
## Summary

Given:

```python
F"{"ڤ
```

We try to locate the "unclosed left brace" error by subtracting the
quote size from the lexer offset -- so we subtract 1 from the end of the
source, which puts us in the middle of a Unicode character. I don't
think we should try to adjust the offset in this way, since there can be
content _after_ the quote. For example, with the advent of PEP 701, this
string could reasonably be fixed as:

```python
F"{"ڤ"}"
```

Closes https://github.com/astral-sh/ruff/issues/9379.
2024-01-04 05:00:55 +00:00
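The failure mode is easy to reproduce: subtracting one byte from the end of that source lands inside the multi-byte character (a Python illustration of the offset problem, not ruff's code):

```python
source = 'F"{"ڤ'
data = source.encode("utf-8")
assert len(data) == 6  # 'ڤ' takes two bytes in UTF-8

try:
    # Rewinding the offset by one byte splits the character...
    data[: len(data) - 1].decode("utf-8")
except UnicodeDecodeError:
    print("offset landed mid-character")  # ...so it no longer decodes
```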
Charlie Marsh
9a14f403c8 Add missing preview link (#9386) 2024-01-03 19:54:25 -05:00
Noah Jenner
1293383cdc [docs] - Fix admonition hyperlink colouring (#9385)

## Summary

Fix the colouration of hyperlinks within admonitions on dark theme to be
more readable. Closes #9046

## Test Plan

Documentation was regenerated via mkdocs and the supplied requirements.

Signed-off-by: 64815328+Eutropios@users.noreply.github.com
2024-01-03 19:41:27 -05:00
qdegraaf
3b323a09cb [flake8-bandit] Add S504 SslWithNoVersion rule (#9384)
## Summary
Adds `S504` rule for the
[flake8-bandit](https://github.com/tylerwince/flake8-bandit) plugin
port.

Checks for calls to `ssl.wrap_socket` which have no `ssl_version`
argument set. See also
https://bandit.readthedocs.io/en/latest/_modules/bandit/plugins/insecure_ssl_tls.html#ssl_with_no_version

## Test Plan

Fixture added 

## Issue Link

Refers: https://github.com/astral-sh/ruff/issues/1646
2024-01-03 21:56:41 +00:00
qdegraaf
5c93a524f1 [flake8-bandit] Implement S4XX suspicious import rules (#8831)
## Summary

Adds all `S4XX` rules to the
[flake8-bandit](https://github.com/tylerwince/flake8-bandit) plugin
port.

There is a lot of documentation to write, some tests can be expanded, and
the implementation can probably be refactored to be more compact. Since
there is some discussion on whether this is actually useful (see
https://github.com/astral-sh/ruff/issues/1646#issuecomment-1732331441), I
wanted to check which rules we want to have before I go through the
process of polishing this up.

## Test Plan

Fixtures for all rules based on `flake8-bandit`
[tests](https://github.com/tylerwince/flake8-bandit/tree/main/tests)

## Issue link

Refers: https://github.com/astral-sh/ruff/issues/1646
2024-01-03 18:26:26 +00:00
Steve C
e3ad163785 [pylint] Implement unnecessary-dunder-call (C2801) (#9166)
## Summary

Implements
[`C2801`/`unnecessary-dunder-calls`](https://pylint.readthedocs.io/en/stable/user_guide/messages/convention/unnecessary-dunder-call.html)

There are more calls that this could cover, but the implementations get a
little less straightforward and uglier. I might come back to it in a future
PR, or someone else can!

See: #970 

## Test Plan

`cargo test`
2024-01-03 18:08:37 +00:00
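A minimal before/after for what the rule flags (illustrative; see the pylint docs for the full set of dunders covered):

```python
# Flagged: calling the dunder directly where an operator spelling exists.
total = (1).__add__(2)

# Preferred: the operator spelling.
total = 1 + 2

assert (1).__add__(2) == 1 + 2 == 3
```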
Charlie Marsh
0e202718fd Misc. small tweaks from perusing modules (#9383) 2024-01-03 12:30:25 -05:00
Charlie Marsh
7b6baff734 Respect multi-segment submodule imports when resolving qualified names (#9382)
Ensures that if the user has `import collections.abc`, then
`get_or_import_symbol` returns `collections.abc.Iterator` (or similar)
when requested.
2024-01-03 11:24:20 -05:00
Alex Waygood
1ffc738c84 [flake8-pyi] Add autofix for PYI058 (#9355)
## Summary

This PR adds an autofix for the newly added PYI058 rule (added in
#9313). ~~The PR's current implementation is that the fix is only
available if the fully qualified name of `Generator` or `AsyncGenerator`
is being used:~~
- ~~`-> typing.Generator` is converted to `-> typing.Iterator`;~~
- ~~`-> collections.abc.AsyncGenerator[str, Any]` is converted to `->
collections.abc.AsyncIterator[str]`;~~
- ~~but `-> Generator` is _not_ converted to `-> Iterator`. (It would
require more work to figure out if `Iterator` was already imported or
not. And if it wasn't, where should we import it from? `typing`,
`typing_extensions`, or `collections.abc`? It seems much more
complicated.)~~

The fix is marked as always safe for `__iter__` or `__aiter__` methods
in `.pyi` files, but unsafe for all such methods in `.py` files that
have more than one statement in the method body.

This felt slightly fiddly to accomplish, but I couldn't _see_ any
utilities in
https://github.com/astral-sh/ruff/tree/main/crates/ruff_linter/src/fix
that would have made it simpler to implement. Lmk if I'm missing
something, though -- my first time implementing an autofix! :)

## Test Plan

`cargo test` / `cargo insta review`.
2024-01-03 11:11:16 -05:00
Charlie Marsh
dc5094d42a Handle raises with implicit alternate branches (#9377)
Closes
https://github.com/astral-sh/ruff/issues/9304#issuecomment-1874739740.
2024-01-02 22:59:12 -05:00
Charlie Marsh
fd36754beb Avoid infinite loop in constant vs. None comparisons (#9376)
## Summary

We had an early `continue` in this loop, and we weren't setting
`comparator = next;` when continuing... This PR removes the early
continue altogether for clarity.

Closes https://github.com/astral-sh/ruff/issues/9374.

## Test Plan

`cargo test`
2024-01-02 22:04:52 -05:00
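The bug class is the classic "continue without advancing the cursor" shape; a safe version advances unconditionally before any early exit (a generic sketch of the pattern, not ruff's rule code):

```python
def adjacent_pairs(values: list) -> list[tuple]:
    # Walk adjacent pairs with an explicit cursor. Advancing `i` before
    # any `continue` guarantees no path re-tests the same pair forever.
    pairs = []
    i = 0
    while i + 1 < len(values):
        left, right = values[i], values[i + 1]
        i += 1  # unconditional advance: the `continue` below cannot stall
        if left is None:
            continue
        pairs.append((left, right))
    return pairs

print(adjacent_pairs([1, None, 2, 3]))  # [(1, None), (2, 3)]
```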
Addison Crump
154d3b9f4b Minor fuzzer improvements (#9375)

## Summary

- Adds timeouts to fuzzer cmin stages in the case of an infinite loop
- Adds executable flag to reinit-fuzzer.sh because it was annoying me

## Test Plan

Not needed.
2024-01-03 01:52:42 +00:00
Charlie Marsh
6c0734680e Re-enable cargo fuzz in CI (#9372) 2024-01-02 19:45:30 -05:00
Adrian
eac67a9464 Add CERN/Indico to "Who's Using Ruff?" (#9371)
Might be a nice name to have in that list ;)

Corresponding PR that added ruff to the project:
https://github.com/indico/indico/pull/6037
2024-01-02 17:47:22 -05:00
Charlie Marsh
fefc7e8199 Bump version to 0.1.11 (#9370) 2024-01-02 17:46:06 -05:00
Zanie Blue
973ae7e922 Disable the fuzzer CI job (#9369)
The job is failing to compile. We should resolve separately but I am
disabling for now since it breaks pull requests.

See https://github.com/astral-sh/ruff/issues/9368
2024-01-02 16:05:39 -06:00
Steve C
3fcc1402f6 [pylint] - implement super-without-brackets/W0245 (#9257)
## Summary

Implement
[`super-without-brackets`/`W0245`](https://pylint.readthedocs.io/en/latest/user_guide/messages/warning/super-without-brackets.html)

See: #970 

## Test Plan

`cargo test`
2024-01-02 21:57:53 +00:00
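A minimal example of what the rule catches (illustrative):

```python
class Base:
    def greet(self) -> str:
        return "hi"

class Child(Base):
    def greet(self) -> str:
        # W0245 flags `super.greet(self)`: without the brackets, `super`
        # is the type itself rather than a bound proxy, so the call fails
        # at runtime. The working spelling calls `super()`:
        return super().greet() + "!"

print(Child().greet())  # hi!
```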
Nick Drozd
08c60f513b Check path string properly (#9367)
A minor whoopsie, 158367bf9 forgot to update this line.

I'm not sure how this gets tested in CI.
2024-01-02 21:02:34 +00:00
Tom Kuson
38f4d9e335 Tweak relative-imports message (#9365)
## Summary

Changes message from `"Relative imports are banned"` to `"Prefer
absolute imports over relative imports from parent modules"`.

Closes #9363.

## Test Plan

`cargo test`
2024-01-02 20:11:24 +00:00
Charlie Marsh
f07d35051c Add fix safety note for yield-in-for-loop (#9364)
See: https://github.com/astral-sh/ruff/issues/8482.
2024-01-02 14:44:03 -05:00
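The rule in question rewrites a bare `for`/`yield` loop to `yield from`; the two are equivalent for plain iteration but diverge once `send()` or generator return values are involved, hence the safety note (a sketch of the equivalence, not the rule's exact wording):

```python
def loop_version(items):
    for item in items:  # candidate for the fix
        yield item

def yield_from_version(items):
    # Equivalent for plain iteration, but `yield from` also forwards
    # `send()` values and the subgenerator's return value, which the
    # loop form does not.
    yield from items

assert list(loop_version([1, 2, 3])) == list(yield_from_version([1, 2, 3])) == [1, 2, 3]
```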
158 changed files with 4682 additions and 961 deletions

View File

@@ -180,7 +180,7 @@ jobs:
- name: "Install cargo-fuzz"
uses: taiki-e/install-action@v2
with:
tool: cargo-fuzz@0.11
tool: cargo-fuzz@0.11.2
- run: cargo fuzz build -s none
scripts:

View File

@@ -1,5 +1,20 @@
# Changelog
## 0.1.11
### Preview features
- \[`pylint`\] Implement `super-without-brackets` (`W0245`) ([#9257](https://github.com/astral-sh/ruff/pull/9257))
### Bug fixes
- Check path string properly in `python -m ruff` invocations ([#9367](https://github.com/astral-sh/ruff/pull/9367))
### Documentation
- Tweak `relative-imports` message ([#9365](https://github.com/astral-sh/ruff/pull/9365))
- Add fix safety note for `yield-in-for-loop` ([#9364](https://github.com/astral-sh/ruff/pull/9364))
## 0.1.10
### Preview features

View File

@@ -370,7 +370,7 @@ See the [ruff-ecosystem package](https://github.com/astral-sh/ruff/tree/main/pyt
We have several ways of benchmarking and profiling Ruff:
- Our main performance benchmark comparing Ruff with other tools on the CPython codebase
- Microbenchmarks which the linter or the formatter on individual files. There run on pull requests.
- Microbenchmarks which run the linter or the formatter on individual files. These run on pull requests.
- Profiling the linter on either the microbenchmarks or entire projects
### CPython Benchmark

Cargo.lock generated
View File

@@ -2049,7 +2049,7 @@ dependencies = [
[[package]]
name = "ruff_cli"
version = "0.1.10"
version = "0.1.11"
dependencies = [
"anyhow",
"argfile",
@@ -2176,7 +2176,7 @@ dependencies = [
[[package]]
name = "ruff_linter"
version = "0.1.10"
version = "0.1.11"
dependencies = [
"aho-corasick",
"annotate-snippets 0.9.2",
@@ -2259,6 +2259,7 @@ dependencies = [
"insta",
"itertools 0.12.0",
"once_cell",
"rand",
"ruff_diagnostics",
"ruff_source_file",
"ruff_text_size",
@@ -2427,7 +2428,7 @@ dependencies = [
[[package]]
name = "ruff_shrinking"
version = "0.1.10"
version = "0.1.11"
dependencies = [
"anyhow",
"clap",

View File

@@ -150,7 +150,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.1.10
rev: v0.1.11
hooks:
# Run the linter.
- id: ruff
@@ -386,6 +386,7 @@ Ruff is used by a number of major open-source projects and companies, including:
- Benchling ([Refac](https://github.com/benchling/refac))
- [Bokeh](https://github.com/bokeh/bokeh)
- [Cryptography (PyCA)](https://github.com/pyca/cryptography)
- CERN ([Indico](https://getindico.io/))
- [DVC](https://github.com/iterative/dvc)
- [Dagger](https://github.com/dagger/dagger)
- [Dagster](https://github.com/dagster-io/dagster)

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_cli"
version = "0.1.10"
version = "0.1.11"
publish = false
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -1,5 +1,6 @@
#![cfg_attr(target_family = "wasm", allow(dead_code))]
use std::borrow::Cow;
use std::fs::File;
use std::io;
use std::ops::{Add, AddAssign};
@@ -273,6 +274,7 @@ pub(crate) fn lint_path(
data: (messages, imports),
error: parse_error,
},
transformed,
fixed,
) = if matches!(fix_mode, flags::FixMode::Apply | flags::FixMode::Diff) {
if let Ok(FixerResult {
@@ -301,7 +303,12 @@ pub(crate) fn lint_path(
flags::FixMode::Generate => {}
}
}
(result, fixed)
let transformed = if let Cow::Owned(transformed) = transformed {
transformed
} else {
source_kind
};
(result, transformed, fixed)
} else {
// If we fail to fix, lint the original source code.
let result = lint_only(
@@ -313,8 +320,9 @@ pub(crate) fn lint_path(
source_type,
ParseSource::None,
);
let transformed = source_kind;
let fixed = FxHashMap::default();
(result, fixed)
(result, transformed, fixed)
}
} else {
let result = lint_only(
@@ -326,8 +334,9 @@ pub(crate) fn lint_path(
source_type,
ParseSource::None,
);
let transformed = source_kind;
let fixed = FxHashMap::default();
(result, fixed)
(result, transformed, fixed)
};
let imports = imports.unwrap_or_default();
@@ -335,7 +344,7 @@ pub(crate) fn lint_path(
if let Some((cache, relative_path, key)) = caching {
// We don't cache parsing errors.
if parse_error.is_none() {
// `FixMode::Generate` and `FixMode::Diff` rely on side-effects (writing to disk,
// `FixMode::Apply` and `FixMode::Diff` rely on side-effects (writing to disk,
// and writing the diff to stdout, respectively). If a file has diagnostics, we
// need to avoid reading from and writing to the cache in these modes.
if match fix_mode {
@@ -350,7 +359,7 @@ pub(crate) fn lint_path(
LintCacheData::from_messages(
&messages,
imports.clone(),
source_kind.as_ipy_notebook().map(Notebook::index).cloned(),
transformed.as_ipy_notebook().map(Notebook::index).cloned(),
),
);
}
@@ -360,11 +369,11 @@ pub(crate) fn lint_path(
if let Some(error) = parse_error {
error!(
"{}",
DisplayParseError::from_source_kind(error, Some(path.to_path_buf()), &source_kind,)
DisplayParseError::from_source_kind(error, Some(path.to_path_buf()), &transformed)
);
}
let notebook_indexes = if let SourceKind::IpyNotebook(notebook) = source_kind {
let notebook_indexes = if let SourceKind::IpyNotebook(notebook) = transformed {
FxHashMap::from_iter([(path.to_string_lossy().to_string(), notebook.into_index())])
} else {
FxHashMap::default()
@@ -415,6 +424,7 @@ pub(crate) fn lint_stdin(
data: (messages, imports),
error: parse_error,
},
transformed,
fixed,
) = if matches!(fix_mode, flags::FixMode::Apply | flags::FixMode::Diff) {
if let Ok(FixerResult {
@@ -443,8 +453,12 @@ pub(crate) fn lint_stdin(
}
flags::FixMode::Generate => {}
}
(result, fixed)
let transformed = if let Cow::Owned(transformed) = transformed {
transformed
} else {
source_kind
};
(result, transformed, fixed)
} else {
// If we fail to fix, lint the original source code.
let result = lint_only(
@@ -456,14 +470,15 @@ pub(crate) fn lint_stdin(
source_type,
ParseSource::None,
);
let fixed = FxHashMap::default();
// Write the contents to stdout anyway.
if fix_mode.is_apply() {
source_kind.write(&mut io::stdout().lock())?;
}
(result, fixed)
let transformed = source_kind;
let fixed = FxHashMap::default();
(result, transformed, fixed)
}
} else {
let result = lint_only(
@@ -475,20 +490,21 @@ pub(crate) fn lint_stdin(
source_type,
ParseSource::None,
);
let transformed = source_kind;
let fixed = FxHashMap::default();
(result, fixed)
(result, transformed, fixed)
};
let imports = imports.unwrap_or_default();
if let Some(err) = parse_error {
if let Some(error) = parse_error {
error!(
"Failed to parse {}: {err}",
path.map_or_else(|| "-".into(), fs::relativize_path).bold()
"{}",
DisplayParseError::from_source_kind(error, path.map(Path::to_path_buf), &transformed)
);
}
let notebook_indexes = if let SourceKind::IpyNotebook(notebook) = source_kind {
let notebook_indexes = if let SourceKind::IpyNotebook(notebook) = transformed {
FxHashMap::from_iter([(
path.map_or_else(|| "-".into(), |path| path.to_string_lossy().to_string()),
notebook.into_index(),

View File

@@ -284,7 +284,8 @@ fn stdin_fix_jupyter() {
"metadata": {},
"outputs": [],
"source": [
"import os"
"import os\n",
"print(1)"
]
},
{
@@ -302,7 +303,8 @@ fn stdin_fix_jupyter() {
"metadata": {},
"outputs": [],
"source": [
"import sys"
"import sys\n",
"print(x)"
]
},
{
@@ -354,8 +356,8 @@ fn stdin_fix_jupyter() {
"nbformat": 4,
"nbformat_minor": 5
}"#), @r###"
success: true
exit_code: 0
success: false
exit_code: 1
----- stdout -----
{
"cells": [
@@ -365,7 +367,9 @@ fn stdin_fix_jupyter() {
"id": "dccc687c-96e2-4604-b957-a8a89b5bec06",
"metadata": {},
"outputs": [],
"source": []
"source": [
"print(1)"
]
},
{
"cell_type": "markdown",
@@ -381,7 +385,9 @@ fn stdin_fix_jupyter() {
"id": "cdce7b92-b0fb-4c02-86f6-e233b26fa84f",
"metadata": {},
"outputs": [],
"source": []
"source": [
"print(x)"
]
},
{
"cell_type": "code",
@@ -433,7 +439,8 @@ fn stdin_fix_jupyter() {
"nbformat_minor": 5
}
----- stderr -----
Found 2 errors (2 fixed, 0 remaining).
Jupyter.ipynb:cell 3:1:7: F821 Undefined name `x`
Found 3 errors (2 fixed, 1 remaining).
"###);
}
@@ -719,6 +726,22 @@ fn stdin_format_jupyter() {
"###);
}
#[test]
fn stdin_parse_error() {
let mut cmd = RuffCheck::default().build();
assert_cmd_snapshot!(cmd
.pass_stdin("from foo import =\n"), @r###"
success: false
exit_code: 1
----- stdout -----
-:1:17: E999 SyntaxError: Unexpected token '='
Found 1 error.
----- stderr -----
error: Failed to parse at 1:17: Unexpected token '='
"###);
}
#[test]
fn show_source() {
let mut cmd = RuffCheck::default().args(["--show-source"]).build();
@@ -743,6 +766,7 @@ fn show_source() {
fn explain_status_codes_f401() {
assert_cmd_snapshot!(ruff_cmd().args(["--explain", "F401"]));
}
#[test]
fn explain_status_codes_ruf404() {
assert_cmd_snapshot!(ruff_cmd().args(["--explain", "RUF404"]), @r###"

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.1.10"
version = "0.1.11"
publish = false
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -264,3 +264,41 @@ def func(x: int):
if x > 0:
return 1
raise ValueError
def func(x: int):
if x > 5:
raise ValueError
else:
pass
def func(x: int):
if x > 5:
raise ValueError
elif x > 10:
pass
def func(x: int):
if x > 5:
raise ValueError
elif x > 10:
return 5
def func():
try:
return 5
except:
pass
raise ValueError
def func(x: int):
match x:
case [1, 2, 3]:
return 1
case y:
return "foo"

View File

@@ -0,0 +1,2 @@
import telnetlib # S401
from telnetlib import Telnet # S401

View File

@@ -0,0 +1,2 @@
import ftplib # S402
from ftplib import FTP # S402

View File

@@ -0,0 +1,8 @@
import dill # S403
from dill import objects # S403
import shelve
from shelve import open
import cPickle
from cPickle import load
import pickle
from pickle import load

View File

@@ -0,0 +1,3 @@
import subprocess # S404
from subprocess import Popen # S404
from subprocess import Popen as pop # S404

View File

@@ -0,0 +1,4 @@
import xml.etree.cElementTree # S405
from xml.etree import cElementTree # S405
import xml.etree.ElementTree # S405
from xml.etree import ElementTree # S405

View File

@@ -0,0 +1,3 @@
from xml import sax # S406
import xml.sax as xmls # S406
import xml.sax # S406

View File

@@ -0,0 +1,2 @@
from xml.dom import expatbuilder # S407
import xml.dom.expatbuilder # S407

View File

@@ -0,0 +1,2 @@
from xml.dom.minidom import parseString # S408
import xml.dom.minidom # S408

View File

@@ -0,0 +1,2 @@
from xml.dom.pulldom import parseString # S409
import xml.dom.pulldom # S409

View File

@@ -0,0 +1,2 @@
import lxml # S410
from lxml import etree # S410

View File

@@ -0,0 +1,2 @@
import xmlrpc # S411
from xmlrpc import server # S411

View File

@@ -0,0 +1 @@
from twisted.web.twcgi import CGIScript # S412

View File

@@ -0,0 +1,4 @@
import Crypto.Hash # S413
from Crypto.Hash import MD2 # S413
import Crypto.PublicKey # S413
from Crypto.PublicKey import RSA # S413

View File

@@ -0,0 +1,3 @@
import pyghmi # S415
from pyghmi import foo # S415

View File

@@ -0,0 +1,16 @@
import ssl
from ssl import wrap_socket
from OpenSSL import SSL
from OpenSSL.SSL import Context
wrap_socket(ssl_version=ssl.PROTOCOL_SSLv3) # S502
ssl.wrap_socket(ssl_version=ssl.PROTOCOL_TLSv1) # S502
ssl.wrap_socket(ssl_version=ssl.PROTOCOL_SSLv2) # S502
SSL.Context(method=SSL.SSLv2_METHOD) # S502
SSL.Context(method=SSL.SSLv23_METHOD) # S502
Context(method=SSL.SSLv3_METHOD) # S502
Context(method=SSL.TLSv1_METHOD) # S502
wrap_socket(ssl_version=ssl.PROTOCOL_TLS_CLIENT) # OK
SSL.Context(method=SSL.TLS_SERVER_METHOD) # OK
func(ssl_version=ssl.PROTOCOL_TLSv1_2) # OK

View File

@@ -0,0 +1,23 @@
import ssl
from OpenSSL import SSL
from ssl import PROTOCOL_TLSv1
def func(version=ssl.PROTOCOL_SSLv2): # S503
pass
def func(protocol=SSL.SSLv2_METHOD): # S503
pass
def func(version=SSL.SSLv23_METHOD): # S503
pass
def func(protocol=PROTOCOL_TLSv1): # S503
pass
def func(version=SSL.TLSv1_2_METHOD): # OK
pass

View File

@@ -0,0 +1,15 @@
import ssl
from ssl import wrap_socket
ssl.wrap_socket() # S504
wrap_socket() # S504
ssl.wrap_socket(ssl_version=ssl.PROTOCOL_TLSv1_2) # OK
class Class:
def wrap_socket(self):
pass
obj = Class()
obj.wrap_socket() # OK

View File

@@ -1,5 +1,5 @@
import typing
from typing import Protocol
from typing import Protocol, TypeVar
class _Foo(Protocol):
@@ -10,9 +10,23 @@ class _Bar(typing.Protocol):
bar: int
_T = TypeVar("_T")
class _Baz(Protocol[_T]):
x: _T
# OK
class _UsedPrivateProtocol(Protocol):
bar: int
def uses__UsedPrivateProtocol(arg: _UsedPrivateProtocol) -> None: ...
# Also OK
class _UsedGenericPrivateProtocol(Protocol[_T]):
x: _T
def uses_some_private_protocols(
arg: _UsedPrivateProtocol, arg2: _UsedGenericPrivateProtocol[int]
) -> None: ...

View File

@@ -1,5 +1,5 @@
import typing
from typing import Protocol
from typing import Protocol, TypeVar
class _Foo(object, Protocol):
@@ -10,9 +10,23 @@ class _Bar(typing.Protocol):
bar: int
_T = TypeVar("_T")
class _Baz(Protocol[_T]):
x: _T
# OK
class _UsedPrivateProtocol(Protocol):
bar: int
def uses__UsedPrivateProtocol(arg: _UsedPrivateProtocol) -> None: ...
# Also OK
class _UsedGenericPrivateProtocol(Protocol[_T]):
x: _T
def uses_some_private_protocols(
arg: _UsedPrivateProtocol, arg2: _UsedGenericPrivateProtocol[int]
) -> None: ...

View File

@@ -1,82 +1,174 @@
import collections.abc
import typing
from collections.abc import AsyncGenerator, Generator
from typing import Any
def scope():
from collections.abc import Generator
class IteratorReturningSimpleGenerator1:
def __iter__(self) -> Generator: # PYI058 (use `Iterator`)
return (x for x in range(42))
class IteratorReturningSimpleGenerator1:
def __iter__(self) -> Generator:
... # PYI058 (use `Iterator`)
class IteratorReturningSimpleGenerator2:
def __iter__(self, /) -> collections.abc.Generator[str, Any, None]: # PYI058 (use `Iterator`)
"""Fully documented, because I'm a runtime function!"""
yield from "abcdefg"
return None
class IteratorReturningSimpleGenerator3:
def __iter__(self, /) -> collections.abc.Generator[str, None, typing.Any]: # PYI058 (use `Iterator`)
yield "a"
yield "b"
yield "c"
return
def scope():
import typing
class AsyncIteratorReturningSimpleAsyncGenerator1:
def __aiter__(self) -> typing.AsyncGenerator: pass # PYI058 (Use `AsyncIterator`)
class IteratorReturningSimpleGenerator2:
def __iter__(self) -> typing.Generator:
... # PYI058 (use `Iterator`)
class AsyncIteratorReturningSimpleAsyncGenerator2:
def __aiter__(self, /) -> collections.abc.AsyncGenerator[str, Any]: ... # PYI058 (Use `AsyncIterator`)
class AsyncIteratorReturningSimpleAsyncGenerator3:
def __aiter__(self, /) -> collections.abc.AsyncGenerator[str, None]: pass # PYI058 (Use `AsyncIterator`)
def scope():
import collections.abc
class CorrectIterator:
def __iter__(self) -> Iterator[str]: ... # OK
class IteratorReturningSimpleGenerator3:
def __iter__(self) -> collections.abc.Generator:
... # PYI058 (use `Iterator`)
class CorrectAsyncIterator:
def __aiter__(self) -> collections.abc.AsyncIterator[int]: ... # OK
class Fine:
def __iter__(self) -> typing.Self: ... # OK
def scope():
import collections.abc
from typing import Any
class StrangeButWeWontComplainHere:
def __aiter__(self) -> list[bytes]: ... # OK
class IteratorReturningSimpleGenerator4:
def __iter__(self, /) -> collections.abc.Generator[str, Any, None]:
... # PYI058 (use `Iterator`)
def __iter__(self) -> Generator: ... # OK (not in class scope)
def __aiter__(self) -> AsyncGenerator: ... # OK (not in class scope)
class IteratorReturningComplexGenerator:
def __iter__(self) -> Generator[str, int, bytes]: ... # OK
def scope():
import collections.abc
import typing
class AsyncIteratorReturningComplexAsyncGenerator:
def __aiter__(self) -> AsyncGenerator[str, int]: ... # OK
class IteratorReturningSimpleGenerator5:
def __iter__(self, /) -> collections.abc.Generator[str, None, typing.Any]:
... # PYI058 (use `Iterator`)
class ClassWithInvalidAsyncAiterMethod:
async def __aiter__(self) -> AsyncGenerator: ... # OK
class IteratorWithUnusualParameters1:
def __iter__(self, foo) -> Generator: ... # OK
def scope():
from collections.abc import Generator
class IteratorWithUnusualParameters2:
def __iter__(self, *, bar) -> Generator: ... # OK
class IteratorReturningSimpleGenerator6:
def __iter__(self, /) -> Generator[str, None, None]:
... # PYI058 (use `Iterator`)
class IteratorWithUnusualParameters3:
def __iter__(self, *args) -> Generator: ... # OK
class IteratorWithUnusualParameters4:
def __iter__(self, **kwargs) -> Generator: ... # OK
def scope():
import typing_extensions
class IteratorWithIterMethodThatReturnsThings:
def __iter__(self) -> Generator: # OK
yield
return 42
class AsyncIteratorReturningSimpleAsyncGenerator1:
def __aiter__(
self,
) -> typing_extensions.AsyncGenerator:
... # PYI058 (Use `AsyncIterator`)
class IteratorWithIterMethodThatReceivesThingsFromSend:
def __iter__(self) -> Generator: # OK
x = yield 42
class IteratorWithNonTrivialIterBody:
def __iter__(self) -> Generator: # OK
foo, bar, baz = (1, 2, 3)
yield foo
yield bar
yield baz
def scope():
import collections.abc
class AsyncIteratorReturningSimpleAsyncGenerator2:
def __aiter__(self, /) -> collections.abc.AsyncGenerator[str, Any]:
... # PYI058 (Use `AsyncIterator`)
def scope():
import collections.abc
class AsyncIteratorReturningSimpleAsyncGenerator3:
def __aiter__(self, /) -> collections.abc.AsyncGenerator[str, None]:
... # PYI058 (Use `AsyncIterator`)
def scope():
from typing import Iterator
class CorrectIterator:
def __iter__(self) -> Iterator[str]:
... # OK
def scope():
import collections.abc
class CorrectAsyncIterator:
def __aiter__(self) -> collections.abc.AsyncIterator[int]:
... # OK
def scope():
import typing
class Fine:
def __iter__(self) -> typing.Self:
... # OK
def scope():
class StrangeButWeWontComplainHere:
def __aiter__(self) -> list[bytes]:
... # OK
def scope():
from collections.abc import Generator
def __iter__(self) -> Generator:
... # OK (not in class scope)
def scope():
from collections.abc import AsyncGenerator
def __aiter__(self) -> AsyncGenerator:
... # OK (not in class scope)
def scope():
from collections.abc import Generator
class IteratorReturningComplexGenerator:
def __iter__(self) -> Generator[str, int, bytes]:
... # OK
def scope():
from collections.abc import AsyncGenerator
class AsyncIteratorReturningComplexAsyncGenerator:
def __aiter__(self) -> AsyncGenerator[str, int]:
... # OK
def scope():
from collections.abc import AsyncGenerator
class ClassWithInvalidAsyncAiterMethod:
async def __aiter__(self) -> AsyncGenerator:
... # OK
def scope():
from collections.abc import Generator
class IteratorWithUnusualParameters1:
def __iter__(self, foo) -> Generator:
... # OK
def scope():
from collections.abc import Generator
class IteratorWithUnusualParameters2:
def __iter__(self, *, bar) -> Generator:
... # OK
def scope():
from collections.abc import Generator
class IteratorWithUnusualParameters3:
def __iter__(self, *args) -> Generator:
... # OK
def scope():
from collections.abc import Generator
class IteratorWithUnusualParameters4:
def __iter__(self, **kwargs) -> Generator:
... # OK

View File
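The fixture above exercises PYI058, which flags `__iter__`/`__aiter__` methods annotated as returning a bare (or trivially parameterized) `Generator`/`AsyncGenerator` and suggests `Iterator`/`AsyncIterator` instead. A minimal sketch of the before/after shape, with hypothetical class names:

```python
from collections.abc import Generator, Iterator


class IteratorReturningGenerator:
    # What PYI058 flags: a `Generator` annotation with no interesting
    # send/return types says nothing more than `Iterator` would.
    def __iter__(self) -> Generator:
        yield from (1, 2, 3)


class IteratorReturningIterator:
    # The suggested rewrite: the simpler, more precise annotation.
    def __iter__(self) -> Iterator[int]:
        yield from (1, 2, 3)


# Both classes behave identically at runtime; only the annotation differs.
assert list(IteratorReturningGenerator()) == [1, 2, 3]
assert list(IteratorReturningIterator()) == [1, 2, 3]
```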

@@ -1,58 +1,128 @@
import collections.abc
import typing
from collections.abc import AsyncGenerator, Generator
from typing import Any
def scope():
from collections.abc import Generator
class IteratorReturningSimpleGenerator1:
def __iter__(self) -> Generator: ... # PYI058 (use `Iterator`)
class IteratorReturningSimpleGenerator1:
def __iter__(self) -> Generator: ... # PYI058 (use `Iterator`)
class IteratorReturningSimpleGenerator2:
def __iter__(self, /) -> collections.abc.Generator[str, Any, None]: ... # PYI058 (use `Iterator`)
def scope():
import typing
class IteratorReturningSimpleGenerator3:
def __iter__(self, /) -> collections.abc.Generator[str, None, typing.Any]: ... # PYI058 (use `Iterator`)
class IteratorReturningSimpleGenerator2:
def __iter__(self) -> typing.Generator: ... # PYI058 (use `Iterator`)
class AsyncIteratorReturningSimpleAsyncGenerator1:
def __aiter__(self) -> typing.AsyncGenerator: ... # PYI058 (Use `AsyncIterator`)
def scope():
import collections.abc
class AsyncIteratorReturningSimpleAsyncGenerator2:
def __aiter__(self, /) -> collections.abc.AsyncGenerator[str, Any]: ... # PYI058 (Use `AsyncIterator`)
class IteratorReturningSimpleGenerator3:
def __iter__(self) -> collections.abc.Generator: ... # PYI058 (use `Iterator`)
class AsyncIteratorReturningSimpleAsyncGenerator3:
def __aiter__(self, /) -> collections.abc.AsyncGenerator[str, None]: ... # PYI058 (Use `AsyncIterator`)
def scope():
import collections.abc
from typing import Any
class CorrectIterator:
def __iter__(self) -> Iterator[str]: ... # OK
class IteratorReturningSimpleGenerator4:
def __iter__(self, /) -> collections.abc.Generator[str, Any, None]: ... # PYI058 (use `Iterator`)
class CorrectAsyncIterator:
def __aiter__(self) -> collections.abc.AsyncIterator[int]: ... # OK
def scope():
import collections.abc
import typing
class Fine:
def __iter__(self) -> typing.Self: ... # OK
class IteratorReturningSimpleGenerator5:
def __iter__(self, /) -> collections.abc.Generator[str, None, typing.Any]: ... # PYI058 (use `Iterator`)
class StrangeButWeWontComplainHere:
def __aiter__(self) -> list[bytes]: ... # OK
def scope():
from collections.abc import Generator
def __iter__(self) -> Generator: ... # OK (not in class scope)
def __aiter__(self) -> AsyncGenerator: ... # OK (not in class scope)
class IteratorReturningSimpleGenerator6:
def __iter__(self, /) -> Generator[str, None, None]: ... # PYI058 (use `Iterator`)
class IteratorReturningComplexGenerator:
def __iter__(self) -> Generator[str, int, bytes]: ... # OK
def scope():
import typing_extensions
class AsyncIteratorReturningComplexAsyncGenerator:
def __aiter__(self) -> AsyncGenerator[str, int]: ... # OK
class AsyncIteratorReturningSimpleAsyncGenerator1:
def __aiter__(self,) -> typing_extensions.AsyncGenerator: ... # PYI058 (Use `AsyncIterator`)
class ClassWithInvalidAsyncAiterMethod:
async def __aiter__(self) -> AsyncGenerator: ... # OK
def scope():
import collections.abc
class IteratorWithUnusualParameters1:
def __iter__(self, foo) -> Generator: ... # OK
class AsyncIteratorReturningSimpleAsyncGenerator3:
def __aiter__(self, /) -> collections.abc.AsyncGenerator[str, None]:
... # PYI058 (Use `AsyncIterator`)
class IteratorWithUnusualParameters2:
def __iter__(self, *, bar) -> Generator: ... # OK
def scope():
import collections.abc
class IteratorWithUnusualParameters3:
def __iter__(self, *args) -> Generator: ... # OK
class AsyncIteratorReturningSimpleAsyncGenerator3:
def __aiter__(self, /) -> collections.abc.AsyncGenerator[str, None]: ... # PYI058 (Use `AsyncIterator`)
class IteratorWithUnusualParameters4:
def __iter__(self, **kwargs) -> Generator: ... # OK
def scope():
from typing import Iterator
class CorrectIterator:
def __iter__(self) -> Iterator[str]: ... # OK
def scope():
import collections.abc
class CorrectAsyncIterator:
def __aiter__(self) -> collections.abc.AsyncIterator[int]: ... # OK
def scope():
import typing
class Fine:
def __iter__(self) -> typing.Self: ... # OK
def scope():
class StrangeButWeWontComplainHere:
def __aiter__(self) -> list[bytes]: ... # OK
def scope():
from collections.abc import Generator
def __iter__(self) -> Generator: ... # OK (not in class scope)
def scope():
from collections.abc import AsyncGenerator
def __aiter__(self) -> AsyncGenerator: ... # OK (not in class scope)
def scope():
from collections.abc import Generator
class IteratorReturningComplexGenerator:
def __iter__(self) -> Generator[str, int, bytes]: ... # OK
def scope():
from collections.abc import AsyncGenerator
class AsyncIteratorReturningComplexAsyncGenerator:
def __aiter__(self) -> AsyncGenerator[str, int]: ... # OK
def scope():
from collections.abc import AsyncGenerator
class ClassWithInvalidAsyncAiterMethod:
async def __aiter__(self) -> AsyncGenerator: ... # OK
def scope():
from collections.abc import Generator
class IteratorWithUnusualParameters1:
def __iter__(self, foo) -> Generator: ... # OK
def scope():
from collections.abc import Generator
class IteratorWithUnusualParameters2:
def __iter__(self, *, bar) -> Generator: ... # OK
def scope():
from collections.abc import Generator
class IteratorWithUnusualParameters3:
def __iter__(self, *args) -> Generator: ... # OK
def scope():
from collections.abc import Generator
class IteratorWithUnusualParameters4:
def __iter__(self, **kwargs) -> Generator: ... # OK

View File

@@ -51,3 +51,5 @@ if (True) == TrueElement or x == TrueElement:
assert (not foo) in bar
assert {"x": not foo} in bar
assert [42, not foo] in bar
assert x in c > 0 == None

View File

@@ -0,0 +1,60 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "33faf7ad-a3fd-4ac4-a0c3-52e507ed49df",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import os.path as path"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "481fb4bf-c1b9-47da-927f-3cfdfe4b49ec",
"metadata": {},
"outputs": [],
"source": [
"for os in range(3):\n",
" pass"
]
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"for path in range(3):\n",
" pass"
],
"metadata": {
"collapsed": false
},
"id": "2f0c65a5-0a0e-4080-afce-5a8ed0d706df"
}
],
"metadata": {
"kernelspec": {
"display_name": "Python (ruff-playground)",
"language": "python",
"name": "ruff-playground"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File
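The notebook fixture above tests diagnostics whose locations must be reported per cell: `for os in range(3)` in a later cell shadows `import os` from an earlier one (pyflakes' import-shadowed-by-loop-var rule). The same pattern reproduced as plain Python, outside a notebook:

```python
# A loop variable shadowing an earlier import: after this, `os` no
# longer refers to the module but to the last loop value.
import os

for os in range(3):
    pass

assert os == 2  # the module binding has been overwritten
```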

@@ -0,0 +1,33 @@
class Animal:
@staticmethod
def speak():
return f"This animal says something."
class BadDog(Animal):
@staticmethod
def speak():
original_speak = super.speak() # PLW0245
return f"{original_speak} But as a dog, it barks!"
class GoodDog(Animal):
@staticmethod
def speak():
original_speak = super().speak() # OK
return f"{original_speak} But as a dog, it barks!"
class FineDog(Animal):
@staticmethod
def speak():
super = "super"
original_speak = super.speak() # OK
return f"{original_speak} But as a dog, it barks!"
def super_without_class() -> None:
super.blah() # OK
super.blah() # OK

View File
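The fixture above is for PLW0245 (`super` without brackets). Written without parentheses, `super` is the built-in type itself, so `super.speak()` raises `AttributeError` at runtime; the rule flags that mistake. A minimal sketch using ordinary instance methods:

```python
class Animal:
    def speak(self):
        return "This animal says something."


class Dog(Animal):
    def speak(self):
        # `super` alone is the built-in type; `super.speak()` would fail.
        # Calling `super()` first yields a proxy bound to this instance.
        original = super().speak()
        return f"{original} But as a dog, it barks!"


assert isinstance(super, type)  # bare `super` is just the type object
assert Dog().speak() == "This animal says something. But as a dog, it barks!"
```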

@@ -0,0 +1,30 @@
from typing import Any
print((3.0).__add__(4.0)) # PLC2801
print((3.0).__sub__(4.0)) # PLC2801
print((3.0).__mul__(4.0)) # PLC2801
print((3.0).__truediv__(4.0)) # PLC2801
print((3.0).__floordiv__(4.0)) # PLC2801
print((3.0).__mod__(4.0)) # PLC2801
print((3.0).__eq__(4.0)) # PLC2801
print((3.0).__ne__(4.0)) # PLC2801
print((3.0).__lt__(4.0)) # PLC2801
print((3.0).__le__(4.0)) # PLC2801
print((3.0).__gt__(4.0)) # PLC2801
print((3.0).__ge__(4.0)) # PLC2801
print((3.0).__str__()) # PLC2801
print((3.0).__repr__()) # PLC2801
print([1, 2, 3].__len__()) # PLC2801
print((1).__neg__()) # PLC2801
class Thing:
def __init__(self, stuff: Any) -> None:
super().__init__() # OK
super().__class__(stuff=(1, 2, 3)) # OK
blah = lambda: {"a": 1}.__delitem__("a") # OK
blah = dict[{"a": 1}.__delitem__("a")] # OK

View File
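The fixture above is for PLC2801 (unnecessary dunder call): calling a dunder method directly when an operator or built-in expresses the same thing. A small sketch of the equivalences the rule relies on:

```python
# The operator form and the direct dunder call are equivalent here,
# but the operator is the idiomatic spelling PLC2801 prefers.
dunder_sum = (3.0).__add__(4.0)  # flagged
operator_sum = 3.0 + 4.0         # preferred

assert dunder_sum == operator_sum == 7.0

# Likewise, `len(...)` is preferred over calling `__len__` directly.
assert [1, 2, 3].__len__() == len([1, 2, 3]) == 3
```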

@@ -1,13 +1,17 @@
data = ["some", "Data"]
constant = 5
# Ok
# OK
{value: value.upper() for value in data}
{value.lower(): value.upper() for value in data}
{v: v*v for v in range(10)}
{(0, "a", v): v*v for v in range(10)} # Tuple with variable
{v: v * v for v in range(10)}
{(0, "a", v): v * v for v in range(10)} # Tuple with variable
{constant: value.upper() for value in data for constant in data}
# Errors
{"key": value.upper() for value in data}
{True: value.upper() for value in data}
{0: value.upper() for value in data}
{(1, "a"): value.upper() for value in data} # constant tuple
{(1, "a"): value.upper() for value in data} # Constant tuple
{constant: value.upper() for value in data}
{constant + constant: value.upper() for value in data}

View File
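The fixture above covers the static-key dict comprehension rule, now extended (per the commit above) to variables that resolve to constants. The underlying problem: a constant key means every iteration writes the same entry, so only the last value survives. A short demonstration:

```python
data = ["some", "Data"]

# Constant key: each iteration overwrites the same entry, so the
# comprehension collapses to a single pair (the last one wins).
static = {"key": value.upper() for value in data}
assert static == {"key": "DATA"}

# Key derived from the loop variable: one entry per element, as intended.
dynamic = {value.lower(): value.upper() for value in data}
assert dynamic == {"some": "SOME", "data": "DATA"}
```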

@@ -151,13 +151,10 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
continue;
}
#[allow(deprecated)]
let line = checker.locator.compute_line_index(shadowed.start());
checker.diagnostics.push(Diagnostic::new(
pyflakes::rules::ImportShadowedByLoopVar {
name: name.to_string(),
line,
row: checker.compute_source_row(shadowed.start()),
},
binding.range(),
));
@@ -243,12 +240,10 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
continue;
}
#[allow(deprecated)]
let line = checker.locator.compute_line_index(shadowed.start());
let mut diagnostic = Diagnostic::new(
pyflakes::rules::RedefinedWhileUnused {
name: (*name).to_string(),
line,
row: checker.compute_source_row(shadowed.start()),
},
binding.range(),
);

View File

@@ -438,6 +438,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
}
}
}
if checker.enabled(Rule::SuperWithoutBrackets) {
pylint::rules::super_without_brackets(checker, func);
}
if checker.enabled(Rule::BitCount) {
refurb::rules::bit_count(checker, call);
}
@@ -959,6 +962,15 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.enabled(Rule::TrioZeroSleepCall) {
flake8_trio::rules::zero_sleep_call(checker, call);
}
if checker.enabled(Rule::UnnecessaryDunderCall) {
pylint::rules::unnecessary_dunder_call(checker, call);
}
if checker.enabled(Rule::SslWithNoVersion) {
flake8_bandit::rules::ssl_with_no_version(checker, call);
}
if checker.enabled(Rule::SslInsecureVersion) {
flake8_bandit::rules::ssl_insecure_version(checker, call);
}
}
Expr::Dict(dict) => {
if checker.any_enabled(&[
@@ -1384,12 +1396,14 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
refurb::rules::reimplemented_starmap(checker, &comp.into());
}
}
Expr::DictComp(ast::ExprDictComp {
key,
value,
generators,
range: _,
}) => {
Expr::DictComp(
dict_comp @ ast::ExprDictComp {
key,
value,
generators,
range: _,
},
) => {
if checker.enabled(Rule::UnnecessaryListIndexLookup) {
pylint::rules::unnecessary_list_index_lookup_comprehension(checker, expr);
}
@@ -1410,7 +1424,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
}
}
if checker.enabled(Rule::StaticKeyDictComprehension) {
ruff::rules::static_key_dict_comprehension(checker, key);
ruff::rules::static_key_dict_comprehension(checker, dict_comp);
}
}
Expr::GeneratorExp(

View File

@@ -374,6 +374,9 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::ReimplementedOperator) {
refurb::rules::reimplemented_operator(checker, &function_def.into());
}
if checker.enabled(Rule::SslWithBadDefaults) {
flake8_bandit::rules::ssl_with_bad_defaults(checker, function_def);
}
}
Stmt::Return(_) => {
if checker.enabled(Rule::ReturnOutsideFunction) {
@@ -552,6 +555,24 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
if checker.enabled(Rule::DeprecatedMockImport) {
pyupgrade::rules::deprecated_mock_import(checker, stmt);
}
if checker.any_enabled(&[
Rule::SuspiciousTelnetlibImport,
Rule::SuspiciousFtplibImport,
Rule::SuspiciousPickleImport,
Rule::SuspiciousSubprocessImport,
Rule::SuspiciousXmlEtreeImport,
Rule::SuspiciousXmlSaxImport,
Rule::SuspiciousXmlExpatImport,
Rule::SuspiciousXmlMinidomImport,
Rule::SuspiciousXmlPulldomImport,
Rule::SuspiciousLxmlImport,
Rule::SuspiciousXmlrpcImport,
Rule::SuspiciousHttpoxyImport,
Rule::SuspiciousPycryptoImport,
Rule::SuspiciousPyghmiImport,
]) {
flake8_bandit::rules::suspicious_imports(checker, stmt);
}
for alias in names {
if checker.enabled(Rule::NonAsciiImportName) {
@@ -751,6 +772,24 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
pyupgrade::rules::unnecessary_builtin_import(checker, stmt, module, names);
}
}
if checker.any_enabled(&[
Rule::SuspiciousTelnetlibImport,
Rule::SuspiciousFtplibImport,
Rule::SuspiciousPickleImport,
Rule::SuspiciousSubprocessImport,
Rule::SuspiciousXmlEtreeImport,
Rule::SuspiciousXmlSaxImport,
Rule::SuspiciousXmlExpatImport,
Rule::SuspiciousXmlMinidomImport,
Rule::SuspiciousXmlPulldomImport,
Rule::SuspiciousLxmlImport,
Rule::SuspiciousXmlrpcImport,
Rule::SuspiciousHttpoxyImport,
Rule::SuspiciousPycryptoImport,
Rule::SuspiciousPyghmiImport,
]) {
flake8_bandit::rules::suspicious_imports(checker, stmt);
}
if checker.enabled(Rule::BannedApi) {
if let Some(module) =
helpers::resolve_imported_module_path(level, module, checker.module_path)

View File

@@ -37,7 +37,7 @@ use ruff_python_ast::{
use ruff_text_size::{Ranged, TextRange, TextSize};
use ruff_diagnostics::{Diagnostic, IsolationLevel};
use ruff_notebook::CellOffsets;
use ruff_notebook::{CellOffsets, NotebookIndex};
use ruff_python_ast::all::{extract_all_names, DunderAllFlags};
use ruff_python_ast::helpers::{
collect_import_from_member, extract_handled_exceptions, to_module_path,
@@ -56,7 +56,7 @@ use ruff_python_semantic::{
StarImport, SubmoduleImport,
};
use ruff_python_stdlib::builtins::{IPYTHON_BUILTINS, MAGIC_GLOBALS, PYTHON_BUILTINS};
use ruff_source_file::Locator;
use ruff_source_file::{Locator, OneIndexed, SourceRow};
use crate::checkers::ast::annotation::AnnotationContext;
use crate::checkers::ast::deferred::Deferred;
@@ -83,6 +83,8 @@ pub(crate) struct Checker<'a> {
pub(crate) source_type: PySourceType,
/// The [`CellOffsets`] for the current file, if it's a Jupyter notebook.
cell_offsets: Option<&'a CellOffsets>,
/// The [`NotebookIndex`] for the current file, if it's a Jupyter notebook.
notebook_index: Option<&'a NotebookIndex>,
/// The [`flags::Noqa`] for the current analysis (i.e., whether to respect suppression
/// comments).
noqa: flags::Noqa,
@@ -128,6 +130,7 @@ impl<'a> Checker<'a> {
importer: Importer<'a>,
source_type: PySourceType,
cell_offsets: Option<&'a CellOffsets>,
notebook_index: Option<&'a NotebookIndex>,
) -> Checker<'a> {
Checker {
settings,
@@ -146,6 +149,7 @@ impl<'a> Checker<'a> {
diagnostics: Vec::default(),
flake8_bugbear_seen: Vec::default(),
cell_offsets,
notebook_index,
last_stmt_end: TextSize::default(),
}
}
@@ -198,6 +202,20 @@ impl<'a> Checker<'a> {
}
}
/// Returns the [`SourceRow`] for the given offset.
pub(crate) fn compute_source_row(&self, offset: TextSize) -> SourceRow {
#[allow(deprecated)]
let line = self.locator.compute_line_index(offset);
if let Some(notebook_index) = self.notebook_index {
let cell = notebook_index.cell(line).unwrap_or(OneIndexed::MIN);
let line = notebook_index.cell_row(line).unwrap_or(OneIndexed::MIN);
SourceRow::Notebook { cell, line }
} else {
SourceRow::SourceFile { line }
}
}
/// The [`Locator`] for the current file, which enables extraction of source code from byte
/// offsets.
pub(crate) const fn locator(&self) -> &'a Locator<'a> {
@@ -1984,6 +2002,7 @@ pub(crate) fn check_ast(
package: Option<&Path>,
source_type: PySourceType,
cell_offsets: Option<&CellOffsets>,
notebook_index: Option<&NotebookIndex>,
) -> Vec<Diagnostic> {
let module_path = package.and_then(|package| to_module_path(package, path));
let module = Module {
@@ -2013,6 +2032,7 @@ pub(crate) fn check_ast(
Importer::new(python_ast, locator, stylist),
source_type,
cell_offsets,
notebook_index,
);
checker.bind_builtins();

View File

@@ -61,7 +61,7 @@ pub(crate) fn check_tokens(
}
}
Tok::FStringMiddle { .. } => Context::String,
Tok::Comment(_) => Context::Comment,
Tok::Comment => Context::Comment,
_ => continue,
};
ruff::rules::ambiguous_unicode_character(

View File

@@ -214,6 +214,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pylint, "C0415") => (RuleGroup::Preview, rules::pylint::rules::ImportOutsideTopLevel),
(Pylint, "C2401") => (RuleGroup::Preview, rules::pylint::rules::NonAsciiName),
(Pylint, "C2403") => (RuleGroup::Preview, rules::pylint::rules::NonAsciiImportName),
(Pylint, "C2801") => (RuleGroup::Preview, rules::pylint::rules::UnnecessaryDunderCall),
#[allow(deprecated)]
(Pylint, "C1901") => (RuleGroup::Nursery, rules::pylint::rules::CompareToEmptyString),
(Pylint, "C3002") => (RuleGroup::Stable, rules::pylint::rules::UnnecessaryDirectLambdaCall),
@@ -275,6 +276,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pylint, "W0127") => (RuleGroup::Stable, rules::pylint::rules::SelfAssigningVariable),
(Pylint, "W0129") => (RuleGroup::Stable, rules::pylint::rules::AssertOnStringLiteral),
(Pylint, "W0131") => (RuleGroup::Stable, rules::pylint::rules::NamedExprWithoutContext),
(Pylint, "W0245") => (RuleGroup::Preview, rules::pylint::rules::SuperWithoutBrackets),
(Pylint, "W0406") => (RuleGroup::Stable, rules::pylint::rules::ImportSelf),
(Pylint, "W0602") => (RuleGroup::Stable, rules::pylint::rules::GlobalVariableNotAssigned),
(Pylint, "W0604") => (RuleGroup::Preview, rules::pylint::rules::GlobalAtModuleLevel),
@@ -625,7 +627,24 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Bandit, "321") => (RuleGroup::Stable, rules::flake8_bandit::rules::SuspiciousFTPLibUsage),
(Flake8Bandit, "323") => (RuleGroup::Stable, rules::flake8_bandit::rules::SuspiciousUnverifiedContextUsage),
(Flake8Bandit, "324") => (RuleGroup::Stable, rules::flake8_bandit::rules::HashlibInsecureHashFunction),
(Flake8Bandit, "401") => (RuleGroup::Preview, rules::flake8_bandit::rules::SuspiciousTelnetlibImport),
(Flake8Bandit, "402") => (RuleGroup::Preview, rules::flake8_bandit::rules::SuspiciousFtplibImport),
(Flake8Bandit, "403") => (RuleGroup::Preview, rules::flake8_bandit::rules::SuspiciousPickleImport),
(Flake8Bandit, "404") => (RuleGroup::Preview, rules::flake8_bandit::rules::SuspiciousSubprocessImport),
(Flake8Bandit, "405") => (RuleGroup::Preview, rules::flake8_bandit::rules::SuspiciousXmlEtreeImport),
(Flake8Bandit, "406") => (RuleGroup::Preview, rules::flake8_bandit::rules::SuspiciousXmlSaxImport),
(Flake8Bandit, "407") => (RuleGroup::Preview, rules::flake8_bandit::rules::SuspiciousXmlExpatImport),
(Flake8Bandit, "408") => (RuleGroup::Preview, rules::flake8_bandit::rules::SuspiciousXmlMinidomImport),
(Flake8Bandit, "409") => (RuleGroup::Preview, rules::flake8_bandit::rules::SuspiciousXmlPulldomImport),
(Flake8Bandit, "410") => (RuleGroup::Preview, rules::flake8_bandit::rules::SuspiciousLxmlImport),
(Flake8Bandit, "411") => (RuleGroup::Preview, rules::flake8_bandit::rules::SuspiciousXmlrpcImport),
(Flake8Bandit, "412") => (RuleGroup::Preview, rules::flake8_bandit::rules::SuspiciousHttpoxyImport),
(Flake8Bandit, "413") => (RuleGroup::Preview, rules::flake8_bandit::rules::SuspiciousPycryptoImport),
(Flake8Bandit, "415") => (RuleGroup::Preview, rules::flake8_bandit::rules::SuspiciousPyghmiImport),
(Flake8Bandit, "501") => (RuleGroup::Stable, rules::flake8_bandit::rules::RequestWithNoCertValidation),
(Flake8Bandit, "502") => (RuleGroup::Preview, rules::flake8_bandit::rules::SslInsecureVersion),
(Flake8Bandit, "503") => (RuleGroup::Preview, rules::flake8_bandit::rules::SslWithBadDefaults),
(Flake8Bandit, "504") => (RuleGroup::Preview, rules::flake8_bandit::rules::SslWithNoVersion),
(Flake8Bandit, "505") => (RuleGroup::Preview, rules::flake8_bandit::rules::WeakCryptographicKey),
(Flake8Bandit, "506") => (RuleGroup::Stable, rules::flake8_bandit::rules::UnsafeYAMLLoad),
(Flake8Bandit, "507") => (RuleGroup::Preview, rules::flake8_bandit::rules::SSHNoHostKeyVerification),

View File

@@ -79,7 +79,7 @@ pub fn extract_directives(
NoqaMapping::default()
},
isort: if flags.intersects(Flags::ISORT) {
extract_isort_directives(lxr, locator)
extract_isort_directives(locator, indexer)
} else {
IsortDirectives::default()
},
@@ -215,15 +215,13 @@ fn extract_noqa_line_for(lxr: &[LexResult], locator: &Locator, indexer: &Indexer
}
/// Extract a set of ranges over which to disable isort.
fn extract_isort_directives(lxr: &[LexResult], locator: &Locator) -> IsortDirectives {
fn extract_isort_directives(locator: &Locator, indexer: &Indexer) -> IsortDirectives {
let mut exclusions: Vec<TextRange> = Vec::default();
let mut splits: Vec<TextSize> = Vec::default();
let mut off: Option<TextSize> = None;
for &(ref tok, range) in lxr.iter().flatten() {
let Tok::Comment(comment_text) = tok else {
continue;
};
for range in indexer.comment_ranges() {
let comment_text = locator.slice(range);
// `isort` allows for `# isort: skip` and `# isort: skip_file` to include or
// omit a space after the colon. The remaining action comments are
@@ -592,8 +590,10 @@ assert foo, \
y = 2
z = x + 1";
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
let locator = Locator::new(contents);
let indexer = Indexer::from_tokens(&lxr, &locator);
assert_eq!(
extract_isort_directives(&lxr, &Locator::new(contents)).exclusions,
extract_isort_directives(&locator, &indexer).exclusions,
Vec::default()
);
@@ -603,8 +603,10 @@ y = 2
# isort: on
z = x + 1";
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
let locator = Locator::new(contents);
let indexer = Indexer::from_tokens(&lxr, &locator);
assert_eq!(
extract_isort_directives(&lxr, &Locator::new(contents)).exclusions,
extract_isort_directives(&locator, &indexer).exclusions,
Vec::from_iter([TextRange::new(TextSize::from(0), TextSize::from(25))])
);
@@ -616,8 +618,10 @@ y = 2
z = x + 1
# isort: on";
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
let locator = Locator::new(contents);
let indexer = Indexer::from_tokens(&lxr, &locator);
assert_eq!(
extract_isort_directives(&lxr, &Locator::new(contents)).exclusions,
extract_isort_directives(&locator, &indexer).exclusions,
Vec::from_iter([TextRange::new(TextSize::from(0), TextSize::from(38))])
);
@@ -626,8 +630,10 @@ x = 1
y = 2
z = x + 1";
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
let locator = Locator::new(contents);
let indexer = Indexer::from_tokens(&lxr, &locator);
assert_eq!(
extract_isort_directives(&lxr, &Locator::new(contents)).exclusions,
extract_isort_directives(&locator, &indexer).exclusions,
Vec::from_iter([TextRange::at(TextSize::from(0), contents.text_len())])
);
@@ -636,8 +642,10 @@ x = 1
y = 2
z = x + 1";
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
let locator = Locator::new(contents);
let indexer = Indexer::from_tokens(&lxr, &locator);
assert_eq!(
extract_isort_directives(&lxr, &Locator::new(contents)).exclusions,
extract_isort_directives(&locator, &indexer).exclusions,
Vec::default()
);
@@ -648,8 +656,10 @@ y = 2
# isort: skip_file
z = x + 1";
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
let locator = Locator::new(contents);
let indexer = Indexer::from_tokens(&lxr, &locator);
assert_eq!(
extract_isort_directives(&lxr, &Locator::new(contents)).exclusions,
extract_isort_directives(&locator, &indexer).exclusions,
Vec::default()
);
}
@@ -660,8 +670,10 @@ z = x + 1";
y = 2
z = x + 1";
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
let locator = Locator::new(contents);
let indexer = Indexer::from_tokens(&lxr, &locator);
assert_eq!(
extract_isort_directives(&lxr, &Locator::new(contents)).splits,
extract_isort_directives(&locator, &indexer).splits,
Vec::new()
);
@@ -670,8 +682,10 @@ y = 2
# isort: split
z = x + 1";
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
let locator = Locator::new(contents);
let indexer = Indexer::from_tokens(&lxr, &locator);
assert_eq!(
extract_isort_directives(&lxr, &Locator::new(contents)).splits,
extract_isort_directives(&locator, &indexer).splits,
vec![TextSize::from(12)]
);
@@ -679,8 +693,10 @@ z = x + 1";
y = 2 # isort: split
z = x + 1";
let lxr: Vec<LexResult> = lexer::lex(contents, Mode::Module).collect();
let locator = Locator::new(contents);
let indexer = Indexer::from_tokens(&lxr, &locator);
assert_eq!(
extract_isort_directives(&lxr, &Locator::new(contents)).splits,
extract_isort_directives(&locator, &indexer).splits,
vec![TextSize::from(13)]
);
}

View File

@@ -39,7 +39,7 @@ impl Iterator for DocLines<'_> {
let (tok, range) = self.inner.next()?;
match tok {
Tok::Comment(..) => {
Tok::Comment => {
if at_start_of_line {
break Some(range.start());
}

View File

@@ -174,7 +174,7 @@ impl<'a> Insertion<'a> {
// Once we've seen the colon, we're looking for a newline; otherwise, there's no
// block body (e.g. `if True: pass`).
Awaiting::Newline => match tok {
Tok::Comment(..) => {}
Tok::Comment => {}
Tok::Newline => {
state = Awaiting::Indent;
}
@@ -185,7 +185,7 @@ impl<'a> Insertion<'a> {
},
// Once we've seen the newline, we're looking for the indentation of the block body.
Awaiting::Indent => match tok {
Tok::Comment(..) => {}
Tok::Comment => {}
Tok::NonLogicalNewline => {}
Tok::Indent => {
// This is like:

View File

@@ -33,11 +33,9 @@ pub(crate) struct StateMachine {
impl StateMachine {
pub(crate) fn consume(&mut self, tok: &Tok) -> bool {
match tok {
Tok::NonLogicalNewline
| Tok::Newline
| Tok::Indent
| Tok::Dedent
| Tok::Comment(..) => false,
Tok::NonLogicalNewline | Tok::Newline | Tok::Indent | Tok::Dedent | Tok::Comment => {
false
}
Tok::String { .. } => {
if matches!(

View File

@@ -148,6 +148,7 @@ pub fn check_path(
match tokens.into_ast_source(source_kind, source_type) {
Ok(python_ast) => {
let cell_offsets = source_kind.as_ipy_notebook().map(Notebook::cell_offsets);
let notebook_index = source_kind.as_ipy_notebook().map(Notebook::index);
if use_ast {
diagnostics.extend(check_ast(
&python_ast,
@@ -161,6 +162,7 @@ pub fn check_path(
package,
source_type,
cell_offsets,
notebook_index,
));
}
if use_imports {

View File

@@ -216,12 +216,7 @@ impl Display for DisplayParseError {
colon = ":".cyan(),
)?;
} else {
write!(
f,
"{header}{colon}",
header = "Failed to parse".bold(),
colon = ":".cyan(),
)?;
write!(f, "{header}", header = "Failed to parse at ".bold())?;
}
match &self.location {
ErrorLocation::File(location) => {

View File

@@ -3,7 +3,7 @@ use ruff_macros::CacheKey;
use std::fmt::{Debug, Formatter};
use std::iter::FusedIterator;
const RULESET_SIZE: usize = 12;
const RULESET_SIZE: usize = 13;
/// A set of [`Rule`]s.
///

View File

@@ -89,7 +89,7 @@ pub(crate) fn auto_return_type(function: &ast::StmtFunctionDef) -> Option<AutoPy
// if x > 0:
// return 1
// ```
if terminal == Terminal::ConditionalReturn || terminal == Terminal::None {
if terminal.has_implicit_return() {
return_type = return_type.union(ResolvedPythonType::Atom(PythonType::None));
}

View File

@@ -263,14 +263,14 @@ auto_return_type.py:82:5: ANN201 [*] Missing return type annotation for public f
83 | match x:
84 | case [1, 2, 3]:
|
= help: Add return type annotation: `str | int`
= help: Add return type annotation: `str | int | None`
Unsafe fix
79 79 | return 1
80 80 |
81 81 |
82 |-def func(x: int):
82 |+def func(x: int) -> str | int:
82 |+def func(x: int) -> str | int | None:
83 83 | match x:
84 84 | case [1, 2, 3]:
85 85 | return 1
@@ -396,14 +396,14 @@ auto_return_type.py:137:5: ANN201 [*] Missing return type annotation for public
138 | try:
139 | return 1
|
= help: Add return type annotation: `int`
= help: Add return type annotation: `int | None`
Unsafe fix
134 134 | return 2
135 135 |
136 136 |
137 |-def func(x: int):
137 |+def func(x: int) -> int:
137 |+def func(x: int) -> int | None:
138 138 | try:
139 139 | return 1
140 140 | except:
@@ -674,4 +674,99 @@ auto_return_type.py:262:5: ANN201 [*] Missing return type annotation for public
264 264 | if x > 0:
265 265 | return 1
auto_return_type.py:269:5: ANN201 [*] Missing return type annotation for public function `func`
|
269 | def func(x: int):
| ^^^^ ANN201
270 | if x > 5:
271 | raise ValueError
|
= help: Add return type annotation: `None`
Unsafe fix
266 266 | raise ValueError
267 267 |
268 268 |
269 |-def func(x: int):
269 |+def func(x: int) -> None:
270 270 | if x > 5:
271 271 | raise ValueError
272 272 | else:
auto_return_type.py:276:5: ANN201 [*] Missing return type annotation for public function `func`
|
276 | def func(x: int):
| ^^^^ ANN201
277 | if x > 5:
278 | raise ValueError
|
= help: Add return type annotation: `None`
Unsafe fix
273 273 | pass
274 274 |
275 275 |
276 |-def func(x: int):
276 |+def func(x: int) -> None:
277 277 | if x > 5:
278 278 | raise ValueError
279 279 | elif x > 10:
auto_return_type.py:283:5: ANN201 [*] Missing return type annotation for public function `func`
|
283 | def func(x: int):
| ^^^^ ANN201
284 | if x > 5:
285 | raise ValueError
|
= help: Add return type annotation: `int | None`
Unsafe fix
280 280 | pass
281 281 |
282 282 |
283 |-def func(x: int):
283 |+def func(x: int) -> int | None:
284 284 | if x > 5:
285 285 | raise ValueError
286 286 | elif x > 10:
auto_return_type.py:290:5: ANN201 [*] Missing return type annotation for public function `func`
|
290 | def func():
| ^^^^ ANN201
291 | try:
292 | return 5
|
= help: Add return type annotation: `int`
Unsafe fix
287 287 | return 5
288 288 |
289 289 |
290 |-def func():
290 |+def func() -> int:
291 291 | try:
292 292 | return 5
293 293 | except:
auto_return_type.py:299:5: ANN201 [*] Missing return type annotation for public function `func`
|
299 | def func(x: int):
| ^^^^ ANN201
300 | match x:
301 | case [1, 2, 3]:
|
= help: Add return type annotation: `str | int`
Unsafe fix
296 296 | raise ValueError
297 297 |
298 298 |
299 |-def func(x: int):
299 |+def func(x: int) -> str | int:
300 300 | match x:
301 301 | case [1, 2, 3]:
302 302 | return 1

View File

@@ -293,7 +293,7 @@ auto_return_type.py:82:5: ANN201 [*] Missing return type annotation for public f
83 | match x:
84 | case [1, 2, 3]:
|
= help: Add return type annotation: `Union[str | int]`
= help: Add return type annotation: `Union[str | int | None]`
Unsafe fix
1 |+from typing import Union
@@ -305,7 +305,7 @@ auto_return_type.py:82:5: ANN201 [*] Missing return type annotation for public f
80 81 |
81 82 |
82 |-def func(x: int):
83 |+def func(x: int) -> Union[str | int]:
83 |+def func(x: int) -> Union[str | int | None]:
83 84 | match x:
84 85 | case [1, 2, 3]:
85 86 | return 1
@@ -446,17 +446,22 @@ auto_return_type.py:137:5: ANN201 [*] Missing return type annotation for public
138 | try:
139 | return 1
|
= help: Add return type annotation: `int`
= help: Add return type annotation: `Optional[int]`
Unsafe fix
134 134 | return 2
135 135 |
136 136 |
1 |+from typing import Optional
1 2 | def func():
2 3 | return 1
3 4 |
--------------------------------------------------------------------------------
134 135 | return 2
135 136 |
136 137 |
137 |-def func(x: int):
137 |+def func(x: int) -> int:
138 138 | try:
139 139 | return 1
140 140 | except:
138 |+def func(x: int) -> Optional[int]:
138 139 | try:
139 140 | return 1
140 141 | except:
auto_return_type.py:146:5: ANN201 [*] Missing return type annotation for public function `func`
|
@@ -755,4 +760,117 @@ auto_return_type.py:262:5: ANN201 [*] Missing return type annotation for public
264 264 | if x > 0:
265 265 | return 1
auto_return_type.py:269:5: ANN201 [*] Missing return type annotation for public function `func`
|
269 | def func(x: int):
| ^^^^ ANN201
270 | if x > 5:
271 | raise ValueError
|
= help: Add return type annotation: `None`
Unsafe fix
266 266 | raise ValueError
267 267 |
268 268 |
269 |-def func(x: int):
269 |+def func(x: int) -> None:
270 270 | if x > 5:
271 271 | raise ValueError
272 272 | else:
auto_return_type.py:276:5: ANN201 [*] Missing return type annotation for public function `func`
|
276 | def func(x: int):
| ^^^^ ANN201
277 | if x > 5:
278 | raise ValueError
|
= help: Add return type annotation: `None`
Unsafe fix
273 273 | pass
274 274 |
275 275 |
276 |-def func(x: int):
276 |+def func(x: int) -> None:
277 277 | if x > 5:
278 278 | raise ValueError
279 279 | elif x > 10:
auto_return_type.py:283:5: ANN201 [*] Missing return type annotation for public function `func`
|
283 | def func(x: int):
| ^^^^ ANN201
284 | if x > 5:
285 | raise ValueError
|
= help: Add return type annotation: `Optional[int]`
Unsafe fix
214 214 | return 1
215 215 |
216 216 |
217 |-from typing import overload
217 |+from typing import overload, Optional
218 218 |
219 219 |
220 220 | @overload
--------------------------------------------------------------------------------
280 280 | pass
281 281 |
282 282 |
283 |-def func(x: int):
283 |+def func(x: int) -> Optional[int]:
284 284 | if x > 5:
285 285 | raise ValueError
286 286 | elif x > 10:
auto_return_type.py:290:5: ANN201 [*] Missing return type annotation for public function `func`
|
290 | def func():
| ^^^^ ANN201
291 | try:
292 | return 5
|
= help: Add return type annotation: `int`
Unsafe fix
287 287 | return 5
288 288 |
289 289 |
290 |-def func():
290 |+def func() -> int:
291 291 | try:
292 292 | return 5
293 293 | except:
auto_return_type.py:299:5: ANN201 [*] Missing return type annotation for public function `func`
|
299 | def func(x: int):
| ^^^^ ANN201
300 | match x:
301 | case [1, 2, 3]:
|
= help: Add return type annotation: `Union[str | int]`
Unsafe fix
214 214 | return 1
215 215 |
216 216 |
217 |-from typing import overload
217 |+from typing import overload, Union
218 218 |
219 219 |
220 220 | @overload
--------------------------------------------------------------------------------
296 296 | raise ValueError
297 297 |
298 298 |
299 |-def func(x: int):
299 |+def func(x: int) -> Union[str | int]:
300 300 | match x:
301 301 | case [1, 2, 3]:
302 302 | return 1


@@ -36,6 +36,9 @@ mod tests {
#[test_case(Rule::SSHNoHostKeyVerification, Path::new("S507.py"))]
#[test_case(Rule::SnmpInsecureVersion, Path::new("S508.py"))]
#[test_case(Rule::SnmpWeakCryptography, Path::new("S509.py"))]
#[test_case(Rule::SslInsecureVersion, Path::new("S502.py"))]
#[test_case(Rule::SslWithBadDefaults, Path::new("S503.py"))]
#[test_case(Rule::SslWithNoVersion, Path::new("S504.py"))]
#[test_case(Rule::StartProcessWithAShell, Path::new("S605.py"))]
#[test_case(Rule::StartProcessWithNoShell, Path::new("S606.py"))]
#[test_case(Rule::StartProcessWithPartialPath, Path::new("S607.py"))]
@@ -45,6 +48,20 @@ mod tests {
#[test_case(Rule::SuspiciousEvalUsage, Path::new("S307.py"))]
#[test_case(Rule::SuspiciousURLOpenUsage, Path::new("S310.py"))]
#[test_case(Rule::SuspiciousTelnetUsage, Path::new("S312.py"))]
#[test_case(Rule::SuspiciousTelnetlibImport, Path::new("S401.py"))]
#[test_case(Rule::SuspiciousFtplibImport, Path::new("S402.py"))]
#[test_case(Rule::SuspiciousPickleImport, Path::new("S403.py"))]
#[test_case(Rule::SuspiciousSubprocessImport, Path::new("S404.py"))]
#[test_case(Rule::SuspiciousXmlEtreeImport, Path::new("S405.py"))]
#[test_case(Rule::SuspiciousXmlSaxImport, Path::new("S406.py"))]
#[test_case(Rule::SuspiciousXmlExpatImport, Path::new("S407.py"))]
#[test_case(Rule::SuspiciousXmlMinidomImport, Path::new("S408.py"))]
#[test_case(Rule::SuspiciousXmlPulldomImport, Path::new("S409.py"))]
#[test_case(Rule::SuspiciousLxmlImport, Path::new("S410.py"))]
#[test_case(Rule::SuspiciousXmlrpcImport, Path::new("S411.py"))]
#[test_case(Rule::SuspiciousHttpoxyImport, Path::new("S412.py"))]
#[test_case(Rule::SuspiciousPycryptoImport, Path::new("S413.py"))]
#[test_case(Rule::SuspiciousPyghmiImport, Path::new("S415.py"))]
#[test_case(Rule::TryExceptContinue, Path::new("S112.py"))]
#[test_case(Rule::TryExceptPass, Path::new("S110.py"))]
#[test_case(Rule::UnixCommandWildcardInjection, Path::new("S609.py"))]


@@ -20,7 +20,11 @@ pub(crate) use shell_injection::*;
pub(crate) use snmp_insecure_version::*;
pub(crate) use snmp_weak_cryptography::*;
pub(crate) use ssh_no_host_key_verification::*;
pub(crate) use ssl_insecure_version::*;
pub(crate) use ssl_with_bad_defaults::*;
pub(crate) use ssl_with_no_version::*;
pub(crate) use suspicious_function_call::*;
pub(crate) use suspicious_imports::*;
pub(crate) use tarfile_unsafe_members::*;
pub(crate) use try_except_continue::*;
pub(crate) use try_except_pass::*;
@@ -49,7 +53,11 @@ mod shell_injection;
mod snmp_insecure_version;
mod snmp_weak_cryptography;
mod ssh_no_host_key_verification;
mod ssl_insecure_version;
mod ssl_with_bad_defaults;
mod ssl_with_no_version;
mod suspicious_function_call;
mod suspicious_imports;
mod tarfile_unsafe_members;
mod try_except_continue;
mod try_except_pass;


@@ -0,0 +1,107 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast, Expr, ExprCall};
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for function calls with parameters that indicate the use of insecure
/// SSL and TLS protocol versions.
///
/// ## Why is this bad?
/// Several highly publicized exploitable flaws have been discovered in all
/// versions of SSL and early versions of TLS. The following versions are
/// considered insecure, and should be avoided:
/// - SSL v2
/// - SSL v3
/// - TLS v1
/// - TLS v1.1
///
/// This check supports detection for both Python's built-in `ssl` module and
/// the `pyOpenSSL` package.
///
/// ## Example
/// ```python
/// import ssl
///
/// ssl.wrap_socket(ssl_version=ssl.PROTOCOL_TLSv1)
/// ```
///
/// Use instead:
/// ```python
/// import ssl
///
/// ssl.wrap_socket(ssl_version=ssl.PROTOCOL_TLSv1_2)
/// ```
#[violation]
pub struct SslInsecureVersion {
protocol: String,
}
impl Violation for SslInsecureVersion {
#[derive_message_formats]
fn message(&self) -> String {
let SslInsecureVersion { protocol } = self;
format!("Call made with insecure SSL protocol: `{protocol}`")
}
}
/// S502
pub(crate) fn ssl_insecure_version(checker: &mut Checker, call: &ExprCall) {
let Some(keyword) = checker
.semantic()
.resolve_call_path(call.func.as_ref())
.and_then(|call_path| match call_path.as_slice() {
["ssl", "wrap_socket"] => Some("ssl_version"),
["OpenSSL", "SSL", "Context"] => Some("method"),
_ => None,
})
else {
return;
};
let Some(keyword) = call.arguments.find_keyword(keyword) else {
return;
};
match &keyword.value {
Expr::Name(ast::ExprName { id, .. }) => {
if is_insecure_protocol(id) {
checker.diagnostics.push(Diagnostic::new(
SslInsecureVersion {
protocol: id.to_string(),
},
keyword.range(),
));
}
}
Expr::Attribute(ast::ExprAttribute { attr, .. }) => {
if is_insecure_protocol(attr) {
checker.diagnostics.push(Diagnostic::new(
SslInsecureVersion {
protocol: attr.to_string(),
},
keyword.range(),
));
}
}
_ => {}
}
}
/// Returns `true` if the given protocol name is insecure.
fn is_insecure_protocol(name: &str) -> bool {
matches!(
name,
"PROTOCOL_SSLv2"
| "PROTOCOL_SSLv3"
| "PROTOCOL_TLSv1"
| "PROTOCOL_TLSv1_1"
| "SSLv2_METHOD"
| "SSLv23_METHOD"
| "SSLv3_METHOD"
| "TLSv1_METHOD"
| "TLSv1_1_METHOD"
)
}
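The allow-list matcher above is self-contained, so it can be exercised in isolation. A minimal sketch (the match arms are copied from the rule; the surrounding `main` is hypothetical):

```rust
/// Returns `true` if the given protocol name is insecure (as in S502/S503).
fn is_insecure_protocol(name: &str) -> bool {
    matches!(
        name,
        "PROTOCOL_SSLv2"
            | "PROTOCOL_SSLv3"
            | "PROTOCOL_TLSv1"
            | "PROTOCOL_TLSv1_1"
            | "SSLv2_METHOD"
            | "SSLv23_METHOD"
            | "SSLv3_METHOD"
            | "TLSv1_METHOD"
            | "TLSv1_1_METHOD"
    )
}

fn main() {
    // Deprecated protocol constants are flagged.
    assert!(is_insecure_protocol("PROTOCOL_TLSv1"));
    assert!(is_insecure_protocol("SSLv3_METHOD"));
    // TLS 1.2+ constants are not.
    assert!(!is_insecure_protocol("PROTOCOL_TLSv1_2"));
}
```

Note that version-pinning constants are matched by exact name, so the rule stays silent on any constant it does not recognize.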


@@ -0,0 +1,106 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast, Expr, StmtFunctionDef};
use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for function definitions with default arguments set to insecure SSL
/// and TLS protocol versions.
///
/// ## Why is this bad?
/// Several highly publicized exploitable flaws have been discovered in all
/// versions of SSL and early versions of TLS. The following versions are
/// considered insecure, and should be avoided:
/// - SSL v2
/// - SSL v3
/// - TLS v1
/// - TLS v1.1
///
/// ## Example
/// ```python
/// import ssl
///
///
/// def func(version=ssl.PROTOCOL_TLSv1):
/// ...
/// ```
///
/// Use instead:
/// ```python
/// import ssl
///
///
/// def func(version=ssl.PROTOCOL_TLSv1_2):
/// ...
/// ```
#[violation]
pub struct SslWithBadDefaults {
protocol: String,
}
impl Violation for SslWithBadDefaults {
#[derive_message_formats]
fn message(&self) -> String {
let SslWithBadDefaults { protocol } = self;
format!("Argument default set to insecure SSL protocol: `{protocol}`")
}
}
/// S503
pub(crate) fn ssl_with_bad_defaults(checker: &mut Checker, function_def: &StmtFunctionDef) {
function_def
.parameters
.posonlyargs
.iter()
.chain(
function_def
.parameters
.args
.iter()
.chain(function_def.parameters.kwonlyargs.iter()),
)
.for_each(|param| {
if let Some(default) = &param.default {
match default.as_ref() {
Expr::Name(ast::ExprName { id, range, .. }) => {
if is_insecure_protocol(id.as_str()) {
checker.diagnostics.push(Diagnostic::new(
SslWithBadDefaults {
protocol: id.to_string(),
},
*range,
));
}
}
Expr::Attribute(ast::ExprAttribute { attr, range, .. }) => {
if is_insecure_protocol(attr.as_str()) {
checker.diagnostics.push(Diagnostic::new(
SslWithBadDefaults {
protocol: attr.to_string(),
},
*range,
));
}
}
_ => {}
}
}
});
}
/// Returns `true` if the given protocol name is insecure.
fn is_insecure_protocol(name: &str) -> bool {
matches!(
name,
"PROTOCOL_SSLv2"
| "PROTOCOL_SSLv3"
| "PROTOCOL_TLSv1"
| "PROTOCOL_TLSv1_1"
| "SSLv2_METHOD"
| "SSLv23_METHOD"
| "SSLv3_METHOD"
| "TLSv1_METHOD"
| "TLSv1_1_METHOD"
)
}
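The scan order used above (positional-only parameters first, then regular, then keyword-only) can be sketched with plain iterators over hypothetical `(name, default)` pairs:

```rust
/// Abbreviated version of the S503 allow-list, for illustration only.
fn is_insecure_protocol(name: &str) -> bool {
    matches!(
        name,
        "PROTOCOL_SSLv2" | "PROTOCOL_SSLv3" | "PROTOCOL_TLSv1" | "PROTOCOL_TLSv1_1"
    )
}

fn main() {
    // Hypothetical parameter lists, as (name, default) pairs.
    let posonlyargs = [("a", Some("PROTOCOL_SSLv3"))];
    let args = [("b", None), ("c", Some("PROTOCOL_TLSv1_2"))];
    let kwonlyargs = [("d", Some("PROTOCOL_TLSv1"))];

    // Mirror the chained iteration in `ssl_with_bad_defaults`:
    // posonly -> args -> kwonly, skipping parameters without defaults.
    let flagged: Vec<&str> = posonlyargs
        .iter()
        .chain(args.iter().chain(kwonlyargs.iter()))
        .filter_map(|(_, default)| *default)
        .filter(|name| is_insecure_protocol(name))
        .collect();

    assert_eq!(flagged, ["PROTOCOL_SSLv3", "PROTOCOL_TLSv1"]);
}
```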


@@ -0,0 +1,51 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::ExprCall;
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
/// ## What it does
/// Checks for calls to `ssl.wrap_socket()` without an `ssl_version`.
///
/// ## Why is this bad?
/// Calling `ssl.wrap_socket` without an `ssl_version` uses a default protocol
/// version that maximizes compatibility, but permits the use of insecure
/// protocols.
///
/// ## Example
/// ```python
/// import ssl
///
/// ssl.wrap_socket()
/// ```
///
/// Use instead:
/// ```python
/// import ssl
///
/// ssl.wrap_socket(ssl_version=ssl.PROTOCOL_TLSv1_2)
/// ```
#[violation]
pub struct SslWithNoVersion;
impl Violation for SslWithNoVersion {
#[derive_message_formats]
fn message(&self) -> String {
format!("`ssl.wrap_socket` called without an `ssl_version`")
}
}
/// S504
pub(crate) fn ssl_with_no_version(checker: &mut Checker, call: &ExprCall) {
if checker
.semantic()
.resolve_call_path(call.func.as_ref())
.is_some_and(|call_path| matches!(call_path.as_slice(), ["ssl", "wrap_socket"]))
{
if call.arguments.find_keyword("ssl_version").is_none() {
checker
.diagnostics
.push(Diagnostic::new(SslWithNoVersion, call.range()));
}
}
}


@@ -0,0 +1,592 @@
//! Check for imports of or from suspicious modules.
//!
//! See: <https://bandit.readthedocs.io/en/latest/blacklists/blacklist_imports.html>
use ruff_diagnostics::{Diagnostic, DiagnosticKind, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast, Stmt};
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;
use crate::registry::AsRule;
/// ## What it does
/// Checks for imports of the `telnetlib` module.
///
/// ## Why is this bad?
/// Telnet is considered insecure. Instead, use SSH or another encrypted
/// protocol.
///
/// ## Example
/// ```python
/// import telnetlib
/// ```
#[violation]
pub struct SuspiciousTelnetlibImport;
impl Violation for SuspiciousTelnetlibImport {
#[derive_message_formats]
fn message(&self) -> String {
format!("`telnetlib` and related modules are considered insecure. Use SSH or another encrypted protocol.")
}
}
/// ## What it does
/// Checks for imports of the `ftplib` module.
///
/// ## Why is this bad?
/// FTP is considered insecure. Instead, use SSH, SFTP, SCP, or another
/// encrypted protocol.
///
/// ## Example
/// ```python
/// import ftplib
/// ```
#[violation]
pub struct SuspiciousFtplibImport;
impl Violation for SuspiciousFtplibImport {
#[derive_message_formats]
fn message(&self) -> String {
format!("`ftplib` and related modules are considered insecure. Use SSH, SFTP, SCP, or another encrypted protocol.")
}
}
/// ## What it does
/// Checks for imports of the `pickle`, `cPickle`, `dill`, and `shelve` modules.
///
/// ## Why is this bad?
/// It is possible to construct malicious pickle data which will execute
/// arbitrary code during unpickling. Consider possible security implications
/// associated with these modules.
///
/// ## Example
/// ```python
/// import pickle
/// ```
///
/// ## References
/// - [Python Docs](https://docs.python.org/3/library/pickle.html)
#[violation]
pub struct SuspiciousPickleImport;
impl Violation for SuspiciousPickleImport {
#[derive_message_formats]
fn message(&self) -> String {
format!("`pickle`, `cPickle`, `dill`, and `shelve` modules are possibly insecure")
}
}
/// ## What it does
/// Checks for imports of the `subprocess` module.
///
/// ## Why is this bad?
/// It is possible to inject malicious commands into subprocess calls. Consider
/// possible security implications associated with this module.
///
/// ## Example
/// ```python
/// import subprocess
/// ```
#[violation]
pub struct SuspiciousSubprocessImport;
impl Violation for SuspiciousSubprocessImport {
#[derive_message_formats]
fn message(&self) -> String {
format!("`subprocess` module is possibly insecure")
}
}
/// ## What it does
/// Checks for imports of the `xml.etree.cElementTree` and `xml.etree.ElementTree` modules.
///
/// ## Why is this bad?
/// Using various methods from these modules to parse untrusted XML data is
/// known to be vulnerable to XML attacks. Replace vulnerable imports with the
/// equivalent `defusedxml` package, or make sure `defusedxml.defuse_stdlib()` is
/// called before parsing XML data.
///
/// ## Example
/// ```python
/// import xml.etree.cElementTree
/// ```
#[violation]
pub struct SuspiciousXmlEtreeImport;
impl Violation for SuspiciousXmlEtreeImport {
#[derive_message_formats]
fn message(&self) -> String {
format!("`xml.etree` methods are vulnerable to XML attacks")
}
}
/// ## What it does
/// Checks for imports of the `xml.sax` module.
///
/// ## Why is this bad?
/// Using various methods from this module to parse untrusted XML data is
/// known to be vulnerable to XML attacks. Replace vulnerable imports with the
/// equivalent `defusedxml` package, or make sure `defusedxml.defuse_stdlib()` is
/// called before parsing XML data.
///
/// ## Example
/// ```python
/// import xml.sax
/// ```
#[violation]
pub struct SuspiciousXmlSaxImport;
impl Violation for SuspiciousXmlSaxImport {
#[derive_message_formats]
fn message(&self) -> String {
format!("`xml.sax` methods are vulnerable to XML attacks")
}
}
/// ## What it does
/// Checks for imports of the `xml.dom.expatbuilder` module.
///
/// ## Why is this bad?
/// Using various methods from this module to parse untrusted XML data is
/// known to be vulnerable to XML attacks. Replace vulnerable imports with the
/// equivalent `defusedxml` package, or make sure `defusedxml.defuse_stdlib()` is
/// called before parsing XML data.
///
/// ## Example
/// ```python
/// import xml.dom.expatbuilder
/// ```
#[violation]
pub struct SuspiciousXmlExpatImport;
impl Violation for SuspiciousXmlExpatImport {
#[derive_message_formats]
fn message(&self) -> String {
format!("`xml.dom.expatbuilder` is vulnerable to XML attacks")
}
}
/// ## What it does
/// Checks for imports of the `xml.dom.minidom` module.
///
/// ## Why is this bad?
/// Using various methods from this module to parse untrusted XML data is
/// known to be vulnerable to XML attacks. Replace vulnerable imports with the
/// equivalent `defusedxml` package, or make sure `defusedxml.defuse_stdlib()` is
/// called before parsing XML data.
///
/// ## Example
/// ```python
/// import xml.dom.minidom
/// ```
#[violation]
pub struct SuspiciousXmlMinidomImport;
impl Violation for SuspiciousXmlMinidomImport {
#[derive_message_formats]
fn message(&self) -> String {
format!("`xml.dom.minidom` is vulnerable to XML attacks")
}
}
/// ## What it does
/// Checks for imports of the `xml.dom.pulldom` module.
///
/// ## Why is this bad?
/// Using various methods from this module to parse untrusted XML data is
/// known to be vulnerable to XML attacks. Replace vulnerable imports with the
/// equivalent `defusedxml` package, or make sure `defusedxml.defuse_stdlib()` is
/// called before parsing XML data.
///
/// ## Example
/// ```python
/// import xml.dom.pulldom
/// ```
#[violation]
pub struct SuspiciousXmlPulldomImport;
impl Violation for SuspiciousXmlPulldomImport {
#[derive_message_formats]
fn message(&self) -> String {
format!("`xml.dom.pulldom` is vulnerable to XML attacks")
}
}
/// ## What it does
/// Checks for imports of the `lxml` module.
///
/// ## Why is this bad?
/// Using various methods from the `lxml` module to parse untrusted XML data is
/// known to be vulnerable to XML attacks. Replace vulnerable imports with the
/// equivalent `defusedxml` package.
///
/// ## Example
/// ```python
/// import lxml
/// ```
#[violation]
pub struct SuspiciousLxmlImport;
impl Violation for SuspiciousLxmlImport {
#[derive_message_formats]
fn message(&self) -> String {
format!("`lxml` is vulnerable to XML attacks")
}
}
/// ## What it does
/// Checks for imports of the `xmlrpc` module.
///
/// ## Why is this bad?
/// XMLRPC is a particularly dangerous XML module as it is also concerned with
/// communicating data over a network. Use the `defusedxml.xmlrpc.monkey_patch()`
/// function to monkey-patch the `xmlrpclib` module and mitigate remote XML
/// attacks.
///
/// ## Example
/// ```python
/// import xmlrpc
/// ```
#[violation]
pub struct SuspiciousXmlrpcImport;
impl Violation for SuspiciousXmlrpcImport {
#[derive_message_formats]
fn message(&self) -> String {
format!("XMLRPC is vulnerable to remote XML attacks")
}
}
/// ## What it does
/// Checks for imports of `wsgiref.handlers.CGIHandler` and
/// `twisted.web.twcgi.CGIScript`.
///
/// ## Why is this bad?
/// httpoxy is a set of vulnerabilities that affect application code running in
/// CGI or CGI-like environments. The use of CGI for web applications should be
/// avoided to prevent this class of attack.
///
/// ## Example
/// ```python
/// from wsgiref.handlers import CGIHandler
/// ```
///
/// ## References
/// - [httpoxy website](https://httpoxy.org/)
#[violation]
pub struct SuspiciousHttpoxyImport;
impl Violation for SuspiciousHttpoxyImport {
#[derive_message_formats]
fn message(&self) -> String {
format!("`httpoxy` is a set of vulnerabilities that affect application code running in CGI, or CGI-like environments. The use of CGI for web applications should be avoided")
}
}
/// ## What it does
/// Checks for imports of several unsafe cryptography modules.
///
/// ## Why is this bad?
/// The `pycrypto` library is known to have a publicly disclosed buffer
/// overflow vulnerability. It is no longer actively maintained and has been
/// deprecated in favor of the `pyca/cryptography` library.
///
/// ## Example
/// ```python
/// import Crypto.Random
/// ```
///
/// ## References
/// - [Buffer Overflow Issue](https://github.com/pycrypto/pycrypto/issues/176)
#[violation]
pub struct SuspiciousPycryptoImport;
impl Violation for SuspiciousPycryptoImport {
#[derive_message_formats]
fn message(&self) -> String {
format!(
"`pycrypto` library is known to have publicly disclosed buffer overflow vulnerability"
)
}
}
/// ## What it does
/// Checks for imports of the `pyghmi` module.
///
/// ## Why is this bad?
/// `pyghmi` is an IPMI-related module, but IPMI is considered insecure.
/// Instead, use an encrypted protocol.
///
/// ## Example
/// ```python
/// import pyghmi
/// ```
#[violation]
pub struct SuspiciousPyghmiImport;
impl Violation for SuspiciousPyghmiImport {
#[derive_message_formats]
fn message(&self) -> String {
format!("An IPMI-related module is being imported. Prefer an encrypted protocol over IPMI.")
}
}
/// S401, S402, S403, S404, S405, S406, S407, S408, S409, S410, S411, S412, S413, S415
pub(crate) fn suspicious_imports(checker: &mut Checker, stmt: &Stmt) {
match stmt {
Stmt::Import(ast::StmtImport { names, .. }) => {
for name in names {
match name.name.as_str() {
"telnetlib" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousTelnetlibImport),
name.range,
),
"ftplib" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousFtplibImport),
name.range,
),
"pickle" | "cPickle" | "dill" | "shelve" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousPickleImport),
name.range,
),
"subprocess" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousSubprocessImport),
name.range,
),
"xml.etree.cElementTree" | "xml.etree.ElementTree" => {
check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousXmlEtreeImport),
name.range,
);
}
"xml.sax" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousXmlSaxImport),
name.range,
),
"xml.dom.expatbuilder" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousXmlExpatImport),
name.range,
),
"xml.dom.minidom" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousXmlMinidomImport),
name.range,
),
"xml.dom.pulldom" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousXmlPulldomImport),
name.range,
),
"lxml" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousLxmlImport),
name.range,
),
"xmlrpc" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousXmlrpcImport),
name.range,
),
"Crypto.Cipher" | "Crypto.Hash" | "Crypto.IO" | "Crypto.Protocol"
| "Crypto.PublicKey" | "Crypto.Random" | "Crypto.Signature" | "Crypto.Util" => {
check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousPycryptoImport),
name.range,
);
}
"pyghmi" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousPyghmiImport),
name.range,
),
_ => {}
}
}
}
Stmt::ImportFrom(ast::StmtImportFrom { module, names, .. }) => {
let Some(identifier) = module else { return };
match identifier.as_str() {
"telnetlib" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousTelnetlibImport),
identifier.range(),
),
"ftplib" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousFtplibImport),
identifier.range(),
),
"pickle" | "cPickle" | "dill" | "shelve" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousPickleImport),
identifier.range(),
),
"subprocess" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousSubprocessImport),
identifier.range(),
),
"xml.etree" => {
for name in names {
if matches!(name.name.as_str(), "cElementTree" | "ElementTree") {
check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousXmlEtreeImport),
identifier.range(),
);
}
}
}
"xml.etree.cElementTree" | "xml.etree.ElementTree" => {
check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousXmlEtreeImport),
identifier.range(),
);
}
"xml" => {
for name in names {
if name.name.as_str() == "sax" {
check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousXmlSaxImport),
identifier.range(),
);
}
}
}
"xml.sax" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousXmlSaxImport),
identifier.range(),
),
"xml.dom" => {
for name in names {
match name.name.as_str() {
"expatbuilder" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousXmlExpatImport),
identifier.range(),
),
"minidom" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousXmlMinidomImport),
identifier.range(),
),
"pulldom" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousXmlPulldomImport),
identifier.range(),
),
_ => (),
}
}
}
"xml.dom.expatbuilder" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousXmlExpatImport),
identifier.range(),
),
"xml.dom.minidom" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousXmlMinidomImport),
identifier.range(),
),
"xml.dom.pulldom" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousXmlPulldomImport),
identifier.range(),
),
"lxml" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousLxmlImport),
identifier.range(),
),
"xmlrpc" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousXmlrpcImport),
identifier.range(),
),
"wsgiref.handlers" => {
for name in names {
if name.name.as_str() == "CGIHandler" {
check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousHttpoxyImport),
identifier.range(),
);
}
}
}
"twisted.web.twcgi" => {
for name in names {
if name.name.as_str() == "CGIScript" {
check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousHttpoxyImport),
identifier.range(),
);
}
}
}
"Crypto" => {
for name in names {
if matches!(
name.name.as_str(),
"Cipher"
| "Hash"
| "IO"
| "Protocol"
| "PublicKey"
| "Random"
| "Signature"
| "Util"
) {
check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousPycryptoImport),
identifier.range(),
);
}
}
}
"Crypto.Cipher" | "Crypto.Hash" | "Crypto.IO" | "Crypto.Protocol"
| "Crypto.PublicKey" | "Crypto.Random" | "Crypto.Signature" | "Crypto.Util" => {
check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousPycryptoImport),
identifier.range(),
);
}
"pyghmi" => check_and_push_diagnostic(
checker,
DiagnosticKind::from(SuspiciousPyghmiImport),
identifier.range(),
),
_ => {}
}
}
_ => panic!("Expected Stmt::Import | Stmt::ImportFrom"),
};
}
fn check_and_push_diagnostic(
checker: &mut Checker,
diagnostic_kind: DiagnosticKind,
range: TextRange,
) {
let diagnostic = Diagnostic::new::<DiagnosticKind>(diagnostic_kind, range);
if checker.enabled(diagnostic.kind.rule()) {
checker.diagnostics.push(diagnostic);
}
}
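For plain `import` statements, the dispatch above is essentially a pure map from module path to rule. A condensed sketch (the function name is illustrative; the rule codes come from the test cases registered earlier in this change):

```rust
/// Simplified mapping of a dotted module path to the flake8-bandit rule it
/// triggers on `import <module>` (the real check also handles
/// `from ... import ...` forms and per-rule enablement).
fn suspicious_import_rule(module: &str) -> Option<&'static str> {
    match module {
        "telnetlib" => Some("S401"),
        "ftplib" => Some("S402"),
        "pickle" | "cPickle" | "dill" | "shelve" => Some("S403"),
        "subprocess" => Some("S404"),
        "xml.etree.cElementTree" | "xml.etree.ElementTree" => Some("S405"),
        "xml.sax" => Some("S406"),
        "xml.dom.expatbuilder" => Some("S407"),
        "xml.dom.minidom" => Some("S408"),
        "xml.dom.pulldom" => Some("S409"),
        "lxml" => Some("S410"),
        "xmlrpc" => Some("S411"),
        "Crypto.Cipher" | "Crypto.Hash" | "Crypto.IO" | "Crypto.Protocol"
        | "Crypto.PublicKey" | "Crypto.Random" | "Crypto.Signature" | "Crypto.Util" => {
            Some("S413")
        }
        "pyghmi" => Some("S415"),
        _ => None,
    }
}

fn main() {
    assert_eq!(suspicious_import_rule("telnetlib"), Some("S401"));
    assert_eq!(suspicious_import_rule("Crypto.Hash"), Some("S413"));
    assert_eq!(suspicious_import_rule("os"), None);
}
```

Keeping the dispatch as one exhaustive `match` makes the module-to-rule mapping auditable at a glance, at the cost of repeating the table for the `ImportFrom` branch.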


@@ -0,0 +1,18 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S401.py:1:8: S401 `telnetlib` and related modules are considered insecure. Use SSH or another encrypted protocol.
|
1 | import telnetlib # S401
| ^^^^^^^^^ S401
2 | from telnetlib import Telnet # S401
|
S401.py:2:6: S401 `telnetlib` and related modules are considered insecure. Use SSH or another encrypted protocol.
|
1 | import telnetlib # S401
2 | from telnetlib import Telnet # S401
| ^^^^^^^^^ S401
|


@@ -0,0 +1,18 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S402.py:1:8: S402 `ftplib` and related modules are considered insecure. Use SSH, SFTP, SCP, or another encrypted protocol.
|
1 | import ftplib # S402
| ^^^^^^ S402
2 | from ftplib import FTP # S402
|
S402.py:2:6: S402 `ftplib` and related modules are considered insecure. Use SSH, SFTP, SCP, or another encrypted protocol.
|
1 | import ftplib # S402
2 | from ftplib import FTP # S402
| ^^^^^^ S402
|


@@ -0,0 +1,78 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S403.py:1:8: S403 `pickle`, `cPickle`, `dill`, and `shelve` modules are possibly insecure
|
1 | import dill # S403
| ^^^^ S403
2 | from dill import objects # S403
3 | import shelve
|
S403.py:2:6: S403 `pickle`, `cPickle`, `dill`, and `shelve` modules are possibly insecure
|
1 | import dill # S403
2 | from dill import objects # S403
| ^^^^ S403
3 | import shelve
4 | from shelve import open
|
S403.py:3:8: S403 `pickle`, `cPickle`, `dill`, and `shelve` modules are possibly insecure
|
1 | import dill # S403
2 | from dill import objects # S403
3 | import shelve
| ^^^^^^ S403
4 | from shelve import open
5 | import cPickle
|
S403.py:4:6: S403 `pickle`, `cPickle`, `dill`, and `shelve` modules are possibly insecure
|
2 | from dill import objects # S403
3 | import shelve
4 | from shelve import open
| ^^^^^^ S403
5 | import cPickle
6 | from cPickle import load
|
S403.py:5:8: S403 `pickle`, `cPickle`, `dill`, and `shelve` modules are possibly insecure
|
3 | import shelve
4 | from shelve import open
5 | import cPickle
| ^^^^^^^ S403
6 | from cPickle import load
7 | import pickle
|
S403.py:6:6: S403 `pickle`, `cPickle`, `dill`, and `shelve` modules are possibly insecure
|
4 | from shelve import open
5 | import cPickle
6 | from cPickle import load
| ^^^^^^^ S403
7 | import pickle
8 | from pickle import load
|
S403.py:7:8: S403 `pickle`, `cPickle`, `dill`, and `shelve` modules are possibly insecure
|
5 | import cPickle
6 | from cPickle import load
7 | import pickle
| ^^^^^^ S403
8 | from pickle import load
|
S403.py:8:6: S403 `pickle`, `cPickle`, `dill`, and `shelve` modules are possibly insecure
|
6 | from cPickle import load
7 | import pickle
8 | from pickle import load
| ^^^^^^ S403
|


@@ -0,0 +1,28 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S404.py:1:8: S404 `subprocess` module is possibly insecure
|
1 | import subprocess # S404
| ^^^^^^^^^^ S404
2 | from subprocess import Popen # S404
3 | from subprocess import Popen as pop # S404
|
S404.py:2:6: S404 `subprocess` module is possibly insecure
|
1 | import subprocess # S404
2 | from subprocess import Popen # S404
| ^^^^^^^^^^ S404
3 | from subprocess import Popen as pop # S404
|
S404.py:3:6: S404 `subprocess` module is possibly insecure
|
1 | import subprocess # S404
2 | from subprocess import Popen # S404
3 | from subprocess import Popen as pop # S404
| ^^^^^^^^^^ S404
|


@@ -0,0 +1,38 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S405.py:1:8: S405 `xml.etree` methods are vulnerable to XML attacks
|
1 | import xml.etree.cElementTree # S405
| ^^^^^^^^^^^^^^^^^^^^^^ S405
2 | from xml.etree import cElementTree # S405
3 | import xml.etree.ElementTree # S405
|
S405.py:2:6: S405 `xml.etree` methods are vulnerable to XML attacks
|
1 | import xml.etree.cElementTree # S405
2 | from xml.etree import cElementTree # S405
| ^^^^^^^^^ S405
3 | import xml.etree.ElementTree # S405
4 | from xml.etree import ElementTree # S405
|
S405.py:3:8: S405 `xml.etree` methods are vulnerable to XML attacks
|
1 | import xml.etree.cElementTree # S405
2 | from xml.etree import cElementTree # S405
3 | import xml.etree.ElementTree # S405
| ^^^^^^^^^^^^^^^^^^^^^ S405
4 | from xml.etree import ElementTree # S405
|
S405.py:4:6: S405 `xml.etree` methods are vulnerable to XML attacks
|
2 | from xml.etree import cElementTree # S405
3 | import xml.etree.ElementTree # S405
4 | from xml.etree import ElementTree # S405
| ^^^^^^^^^ S405
|


@@ -0,0 +1,28 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S406.py:1:6: S406 `xml.sax` methods are vulnerable to XML attacks
|
1 | from xml import sax # S406
| ^^^ S406
2 | import xml.sax as xmls # S406
3 | import xml.sax # S406
|
S406.py:2:8: S406 `xml.sax` methods are vulnerable to XML attacks
|
1 | from xml import sax # S406
2 | import xml.sax as xmls # S406
| ^^^^^^^^^^^^^^^ S406
3 | import xml.sax # S406
|
S406.py:3:8: S406 `xml.sax` methods are vulnerable to XML attacks
|
1 | from xml import sax # S406
2 | import xml.sax as xmls # S406
3 | import xml.sax # S406
| ^^^^^^^ S406
|


@@ -0,0 +1,18 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S407.py:1:6: S407 `xml.dom.expatbuilder` is vulnerable to XML attacks
|
1 | from xml.dom import expatbuilder # S407
| ^^^^^^^ S407
2 | import xml.dom.expatbuilder # S407
|
S407.py:2:8: S407 `xml.dom.expatbuilder` is vulnerable to XML attacks
|
1 | from xml.dom import expatbuilder # S407
2 | import xml.dom.expatbuilder # S407
| ^^^^^^^^^^^^^^^^^^^^ S407
|


@@ -0,0 +1,18 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S408.py:1:6: S408 `xml.dom.minidom` is vulnerable to XML attacks
|
1 | from xml.dom.minidom import parseString # S408
| ^^^^^^^^^^^^^^^ S408
2 | import xml.dom.minidom # S408
|
S408.py:2:8: S408 `xml.dom.minidom` is vulnerable to XML attacks
|
1 | from xml.dom.minidom import parseString # S408
2 | import xml.dom.minidom # S408
| ^^^^^^^^^^^^^^^ S408
|


@@ -0,0 +1,18 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S409.py:1:6: S409 `xml.dom.pulldom` is vulnerable to XML attacks
|
1 | from xml.dom.pulldom import parseString # S409
| ^^^^^^^^^^^^^^^ S409
2 | import xml.dom.pulldom # S409
|
S409.py:2:8: S409 `xml.dom.pulldom` is vulnerable to XML attacks
|
1 | from xml.dom.pulldom import parseString # S409
2 | import xml.dom.pulldom # S409
| ^^^^^^^^^^^^^^^ S409
|


@@ -0,0 +1,18 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S410.py:1:8: S410 `lxml` is vulnerable to XML attacks
|
1 | import lxml # S410
| ^^^^ S410
2 | from lxml import etree # S410
|
S410.py:2:6: S410 `lxml` is vulnerable to XML attacks
|
1 | import lxml # S410
2 | from lxml import etree # S410
| ^^^^ S410
|


@@ -0,0 +1,18 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S411.py:1:8: S411 XMLRPC is vulnerable to remote XML attacks
|
1 | import xmlrpc # S411
| ^^^^^^ S411
2 | from xmlrpc import server # S411
|
S411.py:2:6: S411 XMLRPC is vulnerable to remote XML attacks
|
1 | import xmlrpc # S411
2 | from xmlrpc import server # S411
| ^^^^^^ S411
|


@@ -0,0 +1,10 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S412.py:1:6: S412 `httpoxy` is a set of vulnerabilities that affect application code running in CGI, or CGI-like environments. The use of CGI for web applications should be avoided
|
1 | from twisted.web.twcgi import CGIScript # S412
| ^^^^^^^^^^^^^^^^^ S412
|


@@ -0,0 +1,38 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S413.py:1:8: S413 `pycrypto` library is known to have publicly disclosed buffer overflow vulnerability
|
1 | import Crypto.Hash # S413
| ^^^^^^^^^^^ S413
2 | from Crypto.Hash import MD2 # S413
3 | import Crypto.PublicKey # S413
|
S413.py:2:6: S413 `pycrypto` library is known to have publicly disclosed buffer overflow vulnerability
|
1 | import Crypto.Hash # S413
2 | from Crypto.Hash import MD2 # S413
| ^^^^^^^^^^^ S413
3 | import Crypto.PublicKey # S413
4 | from Crypto.PublicKey import RSA # S413
|
S413.py:3:8: S413 `pycrypto` library is known to have publicly disclosed buffer overflow vulnerability
|
1 | import Crypto.Hash # S413
2 | from Crypto.Hash import MD2 # S413
3 | import Crypto.PublicKey # S413
| ^^^^^^^^^^^^^^^^ S413
4 | from Crypto.PublicKey import RSA # S413
|
S413.py:4:6: S413 `pycrypto` library is known to have publicly disclosed buffer overflow vulnerability
|
2 | from Crypto.Hash import MD2 # S413
3 | import Crypto.PublicKey # S413
4 | from Crypto.PublicKey import RSA # S413
| ^^^^^^^^^^^^^^^^ S413
|


@@ -0,0 +1,18 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S415.py:1:8: S415 An IPMI-related module is being imported. Prefer an encrypted protocol over IPMI.
|
1 | import pyghmi # S415
| ^^^^^^ S415
2 | from pyghmi import foo # S415
|
S415.py:2:6: S415 An IPMI-related module is being imported. Prefer an encrypted protocol over IPMI.
|
1 | import pyghmi # S415
2 | from pyghmi import foo # S415
| ^^^^^^ S415
|


@@ -0,0 +1,72 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S502.py:6:13: S502 Call made with insecure SSL protocol: `PROTOCOL_SSLv3`
|
4 | from OpenSSL.SSL import Context
5 |
6 | wrap_socket(ssl_version=ssl.PROTOCOL_SSLv3) # S502
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ S502
7 | ssl.wrap_socket(ssl_version=ssl.PROTOCOL_TLSv1) # S502
8 | ssl.wrap_socket(ssl_version=ssl.PROTOCOL_SSLv2) # S502
|
S502.py:7:17: S502 Call made with insecure SSL protocol: `PROTOCOL_TLSv1`
|
6 | wrap_socket(ssl_version=ssl.PROTOCOL_SSLv3) # S502
7 | ssl.wrap_socket(ssl_version=ssl.PROTOCOL_TLSv1) # S502
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ S502
8 | ssl.wrap_socket(ssl_version=ssl.PROTOCOL_SSLv2) # S502
9 | SSL.Context(method=SSL.SSLv2_METHOD) # S502
|
S502.py:8:17: S502 Call made with insecure SSL protocol: `PROTOCOL_SSLv2`
|
6 | wrap_socket(ssl_version=ssl.PROTOCOL_SSLv3) # S502
7 | ssl.wrap_socket(ssl_version=ssl.PROTOCOL_TLSv1) # S502
8 | ssl.wrap_socket(ssl_version=ssl.PROTOCOL_SSLv2) # S502
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ S502
9 | SSL.Context(method=SSL.SSLv2_METHOD) # S502
10 | SSL.Context(method=SSL.SSLv23_METHOD) # S502
|
S502.py:9:13: S502 Call made with insecure SSL protocol: `SSLv2_METHOD`
|
7 | ssl.wrap_socket(ssl_version=ssl.PROTOCOL_TLSv1) # S502
8 | ssl.wrap_socket(ssl_version=ssl.PROTOCOL_SSLv2) # S502
9 | SSL.Context(method=SSL.SSLv2_METHOD) # S502
| ^^^^^^^^^^^^^^^^^^^^^^^ S502
10 | SSL.Context(method=SSL.SSLv23_METHOD) # S502
11 | Context(method=SSL.SSLv3_METHOD) # S502
|
S502.py:10:13: S502 Call made with insecure SSL protocol: `SSLv23_METHOD`
|
8 | ssl.wrap_socket(ssl_version=ssl.PROTOCOL_SSLv2) # S502
9 | SSL.Context(method=SSL.SSLv2_METHOD) # S502
10 | SSL.Context(method=SSL.SSLv23_METHOD) # S502
| ^^^^^^^^^^^^^^^^^^^^^^^^ S502
11 | Context(method=SSL.SSLv3_METHOD) # S502
12 | Context(method=SSL.TLSv1_METHOD) # S502
|
S502.py:11:9: S502 Call made with insecure SSL protocol: `SSLv3_METHOD`
|
9 | SSL.Context(method=SSL.SSLv2_METHOD) # S502
10 | SSL.Context(method=SSL.SSLv23_METHOD) # S502
11 | Context(method=SSL.SSLv3_METHOD) # S502
| ^^^^^^^^^^^^^^^^^^^^^^^ S502
12 | Context(method=SSL.TLSv1_METHOD) # S502
|
S502.py:12:9: S502 Call made with insecure SSL protocol: `TLSv1_METHOD`
|
10 | SSL.Context(method=SSL.SSLv23_METHOD) # S502
11 | Context(method=SSL.SSLv3_METHOD) # S502
12 | Context(method=SSL.TLSv1_METHOD) # S502
| ^^^^^^^^^^^^^^^^^^^^^^^ S502
13 |
14 | wrap_socket(ssl_version=ssl.PROTOCOL_TLS_CLIENT) # OK
|


@@ -0,0 +1,32 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S503.py:6:18: S503 Argument default set to insecure SSL protocol: `PROTOCOL_SSLv2`
|
6 | def func(version=ssl.PROTOCOL_SSLv2): # S503
| ^^^^^^^^^^^^^^^^^^ S503
7 | pass
|
S503.py:10:19: S503 Argument default set to insecure SSL protocol: `SSLv2_METHOD`
|
10 | def func(protocol=SSL.SSLv2_METHOD): # S503
| ^^^^^^^^^^^^^^^^ S503
11 | pass
|
S503.py:14:18: S503 Argument default set to insecure SSL protocol: `SSLv23_METHOD`
|
14 | def func(version=SSL.SSLv23_METHOD): # S503
| ^^^^^^^^^^^^^^^^^ S503
15 | pass
|
S503.py:18:19: S503 Argument default set to insecure SSL protocol: `PROTOCOL_TLSv1`
|
18 | def func(protocol=PROTOCOL_TLSv1): # S503
| ^^^^^^^^^^^^^^ S503
19 | pass
|


@@ -0,0 +1,22 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S504.py:4:1: S504 `ssl.wrap_socket` called without an `ssl_version`
|
2 | from ssl import wrap_socket
3 |
4 | ssl.wrap_socket() # S504
| ^^^^^^^^^^^^^^^^^ S504
5 | wrap_socket() # S504
6 | ssl.wrap_socket(ssl_version=ssl.PROTOCOL_TLSv1_2) # OK
|
S504.py:5:1: S504 `ssl.wrap_socket` called without an `ssl_version`
|
4 | ssl.wrap_socket() # S504
5 | wrap_socket() # S504
| ^^^^^^^^^^^^^ S504
6 | ssl.wrap_socket(ssl_version=ssl.PROTOCOL_TLSv1_2) # OK
|


@@ -49,6 +49,31 @@ impl Violation for LoopVariableOverridesIterator {
}
}
/// B020
pub(crate) fn loop_variable_overrides_iterator(checker: &mut Checker, target: &Expr, iter: &Expr) {
let target_names = {
let mut target_finder = NameFinder::default();
target_finder.visit_expr(target);
target_finder.names
};
let iter_names = {
let mut iter_finder = NameFinder::default();
iter_finder.visit_expr(iter);
iter_finder.names
};
for (name, expr) in target_names {
if iter_names.contains_key(name) {
checker.diagnostics.push(Diagnostic::new(
LoopVariableOverridesIterator {
name: name.to_string(),
},
expr.range(),
));
}
}
}
#[derive(Default)]
struct NameFinder<'a> {
names: FxHashMap<&'a str, &'a Expr>,
@@ -97,28 +122,3 @@ where
}
}
}
/// B020
pub(crate) fn loop_variable_overrides_iterator(checker: &mut Checker, target: &Expr, iter: &Expr) {
let target_names = {
let mut target_finder = NameFinder::default();
target_finder.visit_expr(target);
target_finder.names
};
let iter_names = {
let mut iter_finder = NameFinder::default();
iter_finder.visit_expr(iter);
iter_finder.names
};
for (name, expr) in target_names {
if iter_names.contains_key(name) {
checker.diagnostics.push(Diagnostic::new(
LoopVariableOverridesIterator {
name: name.to_string(),
},
expr.range(),
));
}
}
}


@@ -1,10 +1,9 @@
use rustc_hash::FxHashMap;
use ruff_diagnostics::{Diagnostic, Edit, Fix, FixAvailability, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast as ast;
use ruff_python_ast::helpers;
use ruff_python_ast::helpers::NameFinder;
use ruff_python_ast::visitor::Visitor;
use ruff_python_ast::{self as ast, Expr};
use ruff_python_ast::{helpers, visitor};
use ruff_text_size::Ranged;
use crate::checkers::ast::Checker;
@@ -78,42 +77,16 @@ impl Violation for UnusedLoopControlVariable {
}
}
/// Identify all `Expr::Name` nodes in an AST.
struct NameFinder<'a> {
/// A map from identifier to defining expression.
names: FxHashMap<&'a str, &'a Expr>,
}
impl NameFinder<'_> {
fn new() -> Self {
NameFinder {
names: FxHashMap::default(),
}
}
}
impl<'a, 'b> Visitor<'b> for NameFinder<'a>
where
'b: 'a,
{
fn visit_expr(&mut self, expr: &'a Expr) {
if let Expr::Name(ast::ExprName { id, .. }) = expr {
self.names.insert(id, expr);
}
visitor::walk_expr(self, expr);
}
}
/// B007
pub(crate) fn unused_loop_control_variable(checker: &mut Checker, stmt_for: &ast::StmtFor) {
let control_names = {
let mut finder = NameFinder::new();
let mut finder = NameFinder::default();
finder.visit_expr(stmt_for.target.as_ref());
finder.names
};
let used_names = {
let mut finder = NameFinder::new();
let mut finder = NameFinder::default();
for stmt in &stmt_for.body {
finder.visit_stmt(stmt);
}


@@ -242,7 +242,7 @@ pub(crate) fn trailing_commas(
.flatten()
.filter_map(|spanned @ (tok, tok_range)| match tok {
// Completely ignore comments -- they just interfere with the logic.
Tok::Comment(_) => None,
Tok::Comment => None,
// F-strings are handled as `String` token type with the complete range
// of the outermost f-string. This means that the expression inside the
// f-string is not checked for trailing commas.


@@ -1,11 +1,13 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_diagnostics::{Applicability, Diagnostic, Edit, Fix, FixAvailability, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast as ast;
use ruff_python_ast::helpers::map_subscript;
use ruff_python_ast::identifier::Identifier;
use ruff_python_semantic::SemanticModel;
use ruff_text_size::{Ranged, TextRange};
use crate::checkers::ast::Checker;
use crate::importer::ImportRequest;
/// ## What it does
/// Checks for simple `__iter__` methods that return `Generator`, and for
@@ -48,20 +50,40 @@ use crate::checkers::ast::Checker;
/// def __iter__(self) -> Iterator[str]:
/// yield from "abdefg"
/// ```
///
/// ## Fix safety
/// This rule tries hard to avoid false-positive errors, and the rule's fix
/// should always be safe for `.pyi` stub files. However, there is a slightly
/// higher chance that a false positive might be emitted by this rule when
/// applied to runtime Python (`.py` files). As such, the fix is marked as
/// unsafe for any `__iter__` or `__aiter__` method in a `.py` file that has
/// more than two statements (including docstrings) in its body.
#[violation]
pub struct GeneratorReturnFromIterMethod {
better_return_type: String,
method_name: String,
return_type: Iterator,
method: Method,
}
impl Violation for GeneratorReturnFromIterMethod {
const FIX_AVAILABILITY: FixAvailability = FixAvailability::Sometimes;
#[derive_message_formats]
fn message(&self) -> String {
let GeneratorReturnFromIterMethod {
better_return_type,
method_name,
return_type,
method,
} = self;
format!("Use `{better_return_type}` as the return value for simple `{method_name}` methods")
format!("Use `{return_type}` as the return value for simple `{method}` methods")
}
fn fix_title(&self) -> Option<String> {
let GeneratorReturnFromIterMethod {
return_type,
method,
} = self;
Some(format!(
"Convert the return annotation of your `{method}` method to `{return_type}`"
))
}
}
@@ -76,12 +98,6 @@ pub(crate) fn bad_generator_return_type(
let name = function_def.name.as_str();
let better_return_type = match name {
"__iter__" => "Iterator",
"__aiter__" => "AsyncIterator",
_ => return,
};
let semantic = checker.semantic();
if !semantic.current_scope().kind.is_class() {
@@ -106,44 +122,68 @@ pub(crate) fn bad_generator_return_type(
_ => return,
};
if !semantic
.resolve_call_path(map_subscript(returns))
.is_some_and(|call_path| {
matches!(
(name, call_path.as_slice()),
(
"__iter__",
["typing" | "typing_extensions", "Generator"]
| ["collections", "abc", "Generator"]
) | (
"__aiter__",
["typing" | "typing_extensions", "AsyncGenerator"]
| ["collections", "abc", "AsyncGenerator"]
)
)
})
{
return;
// Determine the module from which the existing annotation is imported (e.g., `typing` or
// `collections.abc`)
let (method, module, member) = {
let Some(call_path) = semantic.resolve_call_path(map_subscript(returns)) else {
return;
};
match (name, call_path.as_slice()) {
("__iter__", ["typing", "Generator"]) => {
(Method::Iter, Module::Typing, Generator::Generator)
}
("__aiter__", ["typing", "AsyncGenerator"]) => {
(Method::AIter, Module::Typing, Generator::AsyncGenerator)
}
("__iter__", ["typing_extensions", "Generator"]) => {
(Method::Iter, Module::TypingExtensions, Generator::Generator)
}
("__aiter__", ["typing_extensions", "AsyncGenerator"]) => (
Method::AIter,
Module::TypingExtensions,
Generator::AsyncGenerator,
),
("__iter__", ["collections", "abc", "Generator"]) => {
(Method::Iter, Module::CollectionsAbc, Generator::Generator)
}
("__aiter__", ["collections", "abc", "AsyncGenerator"]) => (
Method::AIter,
Module::CollectionsAbc,
Generator::AsyncGenerator,
),
_ => return,
}
};
// `Generator` allows three type parameters; `AsyncGenerator` allows two.
// If type parameters are present,
// Check that all parameters except the first one are either `typing.Any` or `None`;
// if not, don't emit the diagnostic
if let ast::Expr::Subscript(ast::ExprSubscript { slice, .. }) = returns {
let ast::Expr::Tuple(ast::ExprTuple { elts, .. }) = slice.as_ref() else {
return;
};
if matches!(
(name, &elts[..]),
("__iter__", [_, _, _]) | ("__aiter__", [_, _])
) {
if !&elts.iter().skip(1).all(|elt| is_any_or_none(elt, semantic)) {
return;
// check that all parameters except the first one are either `typing.Any` or `None`:
// - if so, collect information on the first parameter for use in the rule's autofix;
// - if not, don't emit the diagnostic
let yield_type_info = match returns {
ast::Expr::Subscript(ast::ExprSubscript { slice, .. }) => match slice.as_ref() {
ast::Expr::Tuple(slice_tuple @ ast::ExprTuple { .. }) => {
if !slice_tuple
.elts
.iter()
.skip(1)
.all(|elt| is_any_or_none(elt, semantic))
{
return;
}
let yield_type = match (name, slice_tuple.elts.as_slice()) {
("__iter__", [yield_type, _, _]) => yield_type,
("__aiter__", [yield_type, _]) => yield_type,
_ => return,
};
Some(YieldTypeInfo {
expr: yield_type,
range: slice_tuple.range,
})
}
} else {
return;
}
_ => return,
},
_ => None,
};
// For .py files (runtime Python!),
@@ -157,9 +197,7 @@ pub(crate) fn bad_generator_return_type(
ast::Stmt::Pass(_) => continue,
ast::Stmt::Return(ast::StmtReturn { value, .. }) => {
if let Some(ret_val) = value {
if yield_encountered
&& !matches!(ret_val.as_ref(), ast::Expr::NoneLiteral(_))
{
if yield_encountered && !ret_val.is_none_literal_expr() {
return;
}
}
@@ -176,16 +214,151 @@ pub(crate) fn bad_generator_return_type(
}
}
};
checker.diagnostics.push(Diagnostic::new(
let mut diagnostic = Diagnostic::new(
GeneratorReturnFromIterMethod {
better_return_type: better_return_type.to_string(),
method_name: name.to_string(),
return_type: member.to_iter(),
method,
},
function_def.identifier(),
));
);
if let Some(fix) = generate_fix(
function_def,
returns,
yield_type_info,
module,
member,
checker,
) {
diagnostic.set_fix(fix);
};
checker.diagnostics.push(diagnostic);
}
/// Returns `true` if the [`ast::Expr`] is a `None` literal or a `typing.Any` expression.
fn is_any_or_none(expr: &ast::Expr, semantic: &SemanticModel) -> bool {
semantic.match_typing_expr(expr, "Any") || matches!(expr, ast::Expr::NoneLiteral(_))
expr.is_none_literal_expr() || semantic.match_typing_expr(expr, "Any")
}
/// Generate a [`Fix`] to convert the return type annotation to `Iterator` or `AsyncIterator`.
fn generate_fix(
function_def: &ast::StmtFunctionDef,
returns: &ast::Expr,
yield_type_info: Option<YieldTypeInfo>,
module: Module,
member: Generator,
checker: &Checker,
) -> Option<Fix> {
let expr = map_subscript(returns);
let (import_edit, binding) = checker
.importer()
.get_or_import_symbol(
&ImportRequest::import_from(&module.to_string(), &member.to_iter().to_string()),
expr.start(),
checker.semantic(),
)
.ok()?;
let binding_edit = Edit::range_replacement(binding, expr.range());
let yield_edit = yield_type_info.map(|yield_type_info| {
Edit::range_replacement(
checker.generator().expr(yield_type_info.expr),
yield_type_info.range(),
)
});
// Mark as unsafe if it's a runtime Python file and the body has more than one statement in it.
let applicability = if checker.source_type.is_stub() || function_def.body.len() == 1 {
Applicability::Safe
} else {
Applicability::Unsafe
};
Some(Fix::applicable_edits(
import_edit,
std::iter::once(binding_edit).chain(yield_edit),
applicability,
))
}
#[derive(Debug)]
struct YieldTypeInfo<'a> {
expr: &'a ast::Expr,
range: TextRange,
}
impl Ranged for YieldTypeInfo<'_> {
fn range(&self) -> TextRange {
self.range
}
}
#[derive(Debug, Copy, Clone, Eq, PartialEq)]
enum Module {
Typing,
TypingExtensions,
CollectionsAbc,
}
impl std::fmt::Display for Module {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Module::Typing => write!(f, "typing"),
Module::TypingExtensions => write!(f, "typing_extensions"),
Module::CollectionsAbc => write!(f, "collections.abc"),
}
}
}
#[derive(Debug, Copy, Clone, Eq, PartialEq)]
enum Method {
Iter,
AIter,
}
impl std::fmt::Display for Method {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Method::Iter => write!(f, "__iter__"),
Method::AIter => write!(f, "__aiter__"),
}
}
}
#[derive(Debug, Copy, Clone, Eq, PartialEq)]
enum Generator {
Generator,
AsyncGenerator,
}
impl std::fmt::Display for Generator {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Generator::Generator => write!(f, "Generator"),
Generator::AsyncGenerator => write!(f, "AsyncGenerator"),
}
}
}
impl Generator {
fn to_iter(self) -> Iterator {
match self {
Generator::Generator => Iterator::Iterator,
Generator::AsyncGenerator => Iterator::AsyncIterator,
}
}
}
#[derive(Debug, Copy, Clone, Eq, PartialEq)]
enum Iterator {
Iterator,
AsyncIterator,
}
impl std::fmt::Display for Iterator {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Iterator::Iterator => write!(f, "Iterator"),
Iterator::AsyncIterator => write!(f, "AsyncIterator"),
}
}
}


@@ -1,7 +1,7 @@
use ast::Operator;
use ruff_diagnostics::{Diagnostic, Edit, Fix, FixAvailability, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast, Expr};
use ruff_python_ast::helpers::pep_604_union;
use ruff_python_ast::{self as ast, Expr, ExprContext};
use ruff_python_semantic::analyze::typing::traverse_union;
use ruff_text_size::{Ranged, TextRange};
@@ -48,61 +48,12 @@ impl Violation for UnnecessaryLiteralUnion {
}
}
fn concatenate_bin_ors(exprs: Vec<&Expr>) -> Expr {
let mut exprs = exprs.into_iter();
let first = exprs.next().unwrap();
exprs.fold((*first).clone(), |acc, expr| {
Expr::BinOp(ast::ExprBinOp {
left: Box::new(acc),
op: Operator::BitOr,
right: Box::new((*expr).clone()),
range: TextRange::default(),
})
})
}
fn make_union(subscript: &ast::ExprSubscript, exprs: Vec<&Expr>) -> Expr {
Expr::Subscript(ast::ExprSubscript {
value: subscript.value.clone(),
slice: Box::new(Expr::Tuple(ast::ExprTuple {
elts: exprs.into_iter().map(|expr| (*expr).clone()).collect(),
range: TextRange::default(),
ctx: ast::ExprContext::Load,
})),
range: TextRange::default(),
ctx: ast::ExprContext::Load,
})
}
fn make_literal_expr(subscript: Option<Expr>, exprs: Vec<&Expr>) -> Expr {
let use_subscript = if let subscript @ Some(_) = subscript {
subscript.unwrap().clone()
} else {
Expr::Name(ast::ExprName {
id: "Literal".to_string(),
range: TextRange::default(),
ctx: ast::ExprContext::Load,
})
};
Expr::Subscript(ast::ExprSubscript {
value: Box::new(use_subscript),
slice: Box::new(Expr::Tuple(ast::ExprTuple {
elts: exprs.into_iter().map(|expr| (*expr).clone()).collect(),
range: TextRange::default(),
ctx: ast::ExprContext::Load,
})),
range: TextRange::default(),
ctx: ast::ExprContext::Load,
})
}
/// PYI030
pub(crate) fn unnecessary_literal_union<'a>(checker: &mut Checker, expr: &'a Expr) {
let mut literal_exprs = Vec::new();
let mut other_exprs = Vec::new();
// for the sake of consistency and correctness, we'll use the first `Literal` subscript attribute
// to construct the fix
// For the sake of consistency, use the first `Literal` subscript to construct the fix.
let mut literal_subscript = None;
let mut total_literals = 0;
@@ -113,7 +64,7 @@ pub(crate) fn unnecessary_literal_union<'a>(checker: &mut Checker, expr: &'a Exp
total_literals += 1;
if literal_subscript.is_none() {
literal_subscript = Some(*value.clone());
literal_subscript = Some(value.as_ref());
}
// flatten already-unioned literals to later union again
@@ -147,48 +98,70 @@ pub(crate) fn unnecessary_literal_union<'a>(checker: &mut Checker, expr: &'a Exp
return;
}
// Raise a violation if more than one.
if total_literals > 1 {
let literal_members: Vec<String> = literal_exprs
.clone()
.into_iter()
.map(|expr| checker.locator().slice(expr).to_string())
.collect();
let mut diagnostic = Diagnostic::new(
UnnecessaryLiteralUnion {
members: literal_members.clone(),
},
expr.range(),
);
if checker.settings.preview.is_enabled() {
let literals =
make_literal_expr(literal_subscript, literal_exprs.into_iter().collect());
if other_exprs.is_empty() {
// if the union is only literals, we just replace the whole thing with a single literal
diagnostic.set_fix(Fix::safe_edit(Edit::range_replacement(
checker.generator().expr(&literals),
expr.range(),
)));
} else {
let mut expr_vec: Vec<&Expr> = other_exprs.clone().into_iter().collect();
expr_vec.insert(0, &literals);
let content = if let Some(subscript) = union_subscript {
checker.generator().expr(&make_union(subscript, expr_vec))
} else {
checker.generator().expr(&concatenate_bin_ors(expr_vec))
};
diagnostic.set_fix(Fix::safe_edit(Edit::range_replacement(
content,
expr.range(),
)));
}
}
checker.diagnostics.push(diagnostic);
// If there are no literal members, we don't need to do anything.
let Some(literal_subscript) = literal_subscript else {
return;
};
if total_literals == 0 || total_literals == 1 {
return;
}
let mut diagnostic = Diagnostic::new(
UnnecessaryLiteralUnion {
members: literal_exprs
.iter()
.map(|expr| checker.locator().slice(expr).to_string())
.collect(),
},
expr.range(),
);
if checker.settings.preview.is_enabled() {
let literal = Expr::Subscript(ast::ExprSubscript {
value: Box::new(literal_subscript.clone()),
slice: Box::new(Expr::Tuple(ast::ExprTuple {
elts: literal_exprs.into_iter().cloned().collect(),
range: TextRange::default(),
ctx: ExprContext::Load,
})),
range: TextRange::default(),
ctx: ExprContext::Load,
});
if other_exprs.is_empty() {
// if the union is only literals, we just replace the whole thing with a single literal
diagnostic.set_fix(Fix::safe_edit(Edit::range_replacement(
checker.generator().expr(&literal),
expr.range(),
)));
} else {
let elts: Vec<Expr> = std::iter::once(literal)
.chain(other_exprs.into_iter().cloned())
.collect();
let content = if let Some(union) = union_subscript {
checker
.generator()
.expr(&Expr::Subscript(ast::ExprSubscript {
value: union.value.clone(),
slice: Box::new(Expr::Tuple(ast::ExprTuple {
elts,
range: TextRange::default(),
ctx: ExprContext::Load,
})),
range: TextRange::default(),
ctx: ExprContext::Load,
}))
} else {
checker.generator().expr(&pep_604_union(&elts))
};
diagnostic.set_fix(Fix::safe_edit(Edit::range_replacement(
content,
expr.range(),
)));
}
}
checker.diagnostics.push(diagnostic);
}


@@ -1,6 +1,7 @@
use ast::{ExprContext, Operator};
use ast::ExprContext;
use ruff_diagnostics::{Diagnostic, Edit, Fix, FixAvailability, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::helpers::pep_604_union;
use ruff_python_ast::{self as ast, Expr};
use ruff_python_semantic::analyze::typing::traverse_union;
use ruff_text_size::{Ranged, TextRange};
@@ -50,19 +51,6 @@ impl Violation for UnnecessaryTypeUnion {
}
}
fn concatenate_bin_ors(exprs: Vec<&Expr>) -> Expr {
let mut exprs = exprs.into_iter();
let first = exprs.next().unwrap();
exprs.fold((*first).clone(), |acc, expr| {
Expr::BinOp(ast::ExprBinOp {
left: Box::new(acc),
op: Operator::BitOr,
right: Box::new((*expr).clone()),
range: TextRange::default(),
})
})
}
/// PYI055
pub(crate) fn unnecessary_type_union<'a>(checker: &mut Checker, union: &'a Expr) {
// The `|` operator isn't always safe to allow to runtime-evaluated annotations.
@@ -95,7 +83,7 @@ pub(crate) fn unnecessary_type_union<'a>(checker: &mut Checker, union: &'a Expr)
.resolve_call_path(unwrapped.value.as_ref())
.is_some_and(|call_path| matches!(call_path.as_slice(), ["" | "builtins", "type"]))
{
type_exprs.push(&unwrapped.slice);
type_exprs.push(unwrapped.slice.as_ref());
} else {
other_exprs.push(expr);
}
@@ -108,7 +96,7 @@ pub(crate) fn unnecessary_type_union<'a>(checker: &mut Checker, union: &'a Expr)
let type_members: Vec<String> = type_exprs
.clone()
.into_iter()
.map(|type_expr| checker.locator().slice(type_expr.as_ref()).to_string())
.map(|type_expr| checker.locator().slice(type_expr).to_string())
.collect();
let mut diagnostic = Diagnostic::new(
@@ -171,31 +159,25 @@ pub(crate) fn unnecessary_type_union<'a>(checker: &mut Checker, union: &'a Expr)
checker.generator().expr(&union)
}
} else {
let types = &Expr::Subscript(ast::ExprSubscript {
let elts: Vec<Expr> = type_exprs.into_iter().cloned().collect();
let types = Expr::Subscript(ast::ExprSubscript {
value: Box::new(Expr::Name(ast::ExprName {
id: "type".into(),
ctx: ExprContext::Load,
range: TextRange::default(),
})),
slice: Box::new(concatenate_bin_ors(
type_exprs
.clone()
.into_iter()
.map(std::convert::AsRef::as_ref)
.collect(),
)),
slice: Box::new(pep_604_union(&elts)),
ctx: ExprContext::Load,
range: TextRange::default(),
});
if other_exprs.is_empty() {
checker.generator().expr(types)
checker.generator().expr(&types)
} else {
let mut exprs = Vec::new();
exprs.push(types);
exprs.extend(other_exprs);
checker.generator().expr(&concatenate_bin_ors(exprs))
let elts: Vec<Expr> = std::iter::once(types)
.chain(other_exprs.into_iter().cloned())
.collect();
checker.generator().expr(&pep_604_union(&elts))
}
};


@@ -1,5 +1,6 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::helpers::map_subscript;
use ruff_python_ast::{self as ast, Expr, Stmt};
use ruff_python_semantic::Scope;
use ruff_text_size::Ranged;
@@ -243,11 +244,11 @@ pub(crate) fn unused_private_protocol(
continue;
};
if !class_def
.bases()
.iter()
.any(|base| checker.semantic().match_typing_expr(base, "Protocol"))
{
if !class_def.bases().iter().any(|base| {
checker
.semantic()
.match_typing_expr(map_subscript(base), "Protocol")
}) {
continue;
}


@@ -15,4 +15,11 @@ PYI046.py:9:7: PYI046 Private protocol `_Bar` is never used
10 | bar: int
|
PYI046.py:16:7: PYI046 Private protocol `_Baz` is never used
|
16 | class _Baz(Protocol[_T]):
| ^^^^ PYI046
17 | x: _T
|


@@ -15,4 +15,11 @@ PYI046.pyi:9:7: PYI046 Private protocol `_Bar` is never used
10 | bar: int
|
PYI046.pyi:16:7: PYI046 Private protocol `_Baz` is never used
|
16 | class _Baz(Protocol[_T]):
| ^^^^ PYI046
17 | x: _T
|


@@ -1,57 +1,164 @@
---
source: crates/ruff_linter/src/rules/flake8_pyi/mod.rs
---
PYI058.py:7:9: PYI058 Use `Iterator` as the return value for simple `__iter__` methods
PYI058.py:5:13: PYI058 [*] Use `Iterator` as the return value for simple `__iter__` methods
|
6 | class IteratorReturningSimpleGenerator1:
7 | def __iter__(self) -> Generator: # PYI058 (use `Iterator`)
| ^^^^^^^^ PYI058
8 | return (x for x in range(42))
4 | class IteratorReturningSimpleGenerator1:
5 | def __iter__(self) -> Generator:
| ^^^^^^^^ PYI058
6 | ... # PYI058 (use `Iterator`)
|
= help: Convert the return annotation of your `__iter__` method to `Iterator`
PYI058.py:11:9: PYI058 Use `Iterator` as the return value for simple `__iter__` methods
|
10 | class IteratorReturningSimpleGenerator2:
11 | def __iter__(self, /) -> collections.abc.Generator[str, Any, None]: # PYI058 (use `Iterator`)
| ^^^^^^^^ PYI058
12 | """Fully documented, because I'm a runtime function!"""
13 | yield from "abcdefg"
|
Safe fix
1 |+from collections.abc import Iterator
1 2 | def scope():
2 3 | from collections.abc import Generator
3 4 |
4 5 | class IteratorReturningSimpleGenerator1:
5 |- def __iter__(self) -> Generator:
6 |+ def __iter__(self) -> Iterator:
6 7 | ... # PYI058 (use `Iterator`)
7 8 |
8 9 |
PYI058.py:17:9: PYI058 Use `Iterator` as the return value for simple `__iter__` methods
PYI058.py:13:13: PYI058 [*] Use `Iterator` as the return value for simple `__iter__` methods
|
16 | class IteratorReturningSimpleGenerator3:
17 | def __iter__(self, /) -> collections.abc.Generator[str, None, typing.Any]: # PYI058 (use `Iterator`)
| ^^^^^^^^ PYI058
18 | yield "a"
19 | yield "b"
12 | class IteratorReturningSimpleGenerator2:
13 | def __iter__(self) -> typing.Generator:
| ^^^^^^^^ PYI058
14 | ... # PYI058 (use `Iterator`)
|
= help: Convert the return annotation of your `__iter__` method to `Iterator`
PYI058.py:24:9: PYI058 Use `AsyncIterator` as the return value for simple `__aiter__` methods
|
23 | class AsyncIteratorReturningSimpleAsyncGenerator1:
24 | def __aiter__(self) -> typing.AsyncGenerator: pass # PYI058 (Use `AsyncIterator`)
| ^^^^^^^^^ PYI058
25 |
26 | class AsyncIteratorReturningSimpleAsyncGenerator2:
|
Safe fix
10 10 | import typing
11 11 |
12 12 | class IteratorReturningSimpleGenerator2:
13 |- def __iter__(self) -> typing.Generator:
13 |+ def __iter__(self) -> typing.Iterator:
14 14 | ... # PYI058 (use `Iterator`)
15 15 |
16 16 |
PYI058.py:27:9: PYI058 Use `AsyncIterator` as the return value for simple `__aiter__` methods
PYI058.py:21:13: PYI058 [*] Use `Iterator` as the return value for simple `__iter__` methods
|
26 | class AsyncIteratorReturningSimpleAsyncGenerator2:
27 | def __aiter__(self, /) -> collections.abc.AsyncGenerator[str, Any]: ... # PYI058 (Use `AsyncIterator`)
| ^^^^^^^^^ PYI058
28 |
29 | class AsyncIteratorReturningSimpleAsyncGenerator3:
20 | class IteratorReturningSimpleGenerator3:
21 | def __iter__(self) -> collections.abc.Generator:
| ^^^^^^^^ PYI058
22 | ... # PYI058 (use `Iterator`)
|
= help: Convert the return annotation of your `__iter__` method to `Iterator`
PYI058.py:30:9: PYI058 Use `AsyncIterator` as the return value for simple `__aiter__` methods
Safe fix
18 18 | import collections.abc
19 19 |
20 20 | class IteratorReturningSimpleGenerator3:
21 |- def __iter__(self) -> collections.abc.Generator:
21 |+ def __iter__(self) -> collections.abc.Iterator:
22 22 | ... # PYI058 (use `Iterator`)
23 23 |
24 24 |
PYI058.py:30:13: PYI058 [*] Use `Iterator` as the return value for simple `__iter__` methods
|
29 | class AsyncIteratorReturningSimpleAsyncGenerator3:
30 | def __aiter__(self, /) -> collections.abc.AsyncGenerator[str, None]: pass # PYI058 (Use `AsyncIterator`)
| ^^^^^^^^^ PYI058
31 |
32 | class CorrectIterator:
29 | class IteratorReturningSimpleGenerator4:
30 | def __iter__(self, /) -> collections.abc.Generator[str, Any, None]:
| ^^^^^^^^ PYI058
31 | ... # PYI058 (use `Iterator`)
|
= help: Convert the return annotation of your `__iter__` method to `Iterator`
Safe fix
27 27 | from typing import Any
28 28 |
29 29 | class IteratorReturningSimpleGenerator4:
30 |- def __iter__(self, /) -> collections.abc.Generator[str, Any, None]:
30 |+ def __iter__(self, /) -> collections.abc.Iterator[str]:
31 31 | ... # PYI058 (use `Iterator`)
32 32 |
33 33 |
PYI058.py:39:13: PYI058 [*] Use `Iterator` as the return value for simple `__iter__` methods
|
38 | class IteratorReturningSimpleGenerator5:
39 | def __iter__(self, /) -> collections.abc.Generator[str, None, typing.Any]:
| ^^^^^^^^ PYI058
40 | ... # PYI058 (use `Iterator`)
|
= help: Convert the return annotation of your `__iter__` method to `Iterator`
Safe fix
36 36 | import typing
37 37 |
38 38 | class IteratorReturningSimpleGenerator5:
39 |- def __iter__(self, /) -> collections.abc.Generator[str, None, typing.Any]:
39 |+ def __iter__(self, /) -> collections.abc.Iterator[str]:
40 40 | ... # PYI058 (use `Iterator`)
41 41 |
42 42 |
PYI058.py:47:13: PYI058 [*] Use `Iterator` as the return value for simple `__iter__` methods
|
46 | class IteratorReturningSimpleGenerator6:
47 | def __iter__(self, /) -> Generator[str, None, None]:
| ^^^^^^^^ PYI058
48 | ... # PYI058 (use `Iterator`)
|
= help: Convert the return annotation of your `__iter__` method to `Iterator`
Safe fix
1 |+from collections.abc import Iterator
1 2 | def scope():
2 3 | from collections.abc import Generator
3 4 |
--------------------------------------------------------------------------------
44 45 | from collections.abc import Generator
45 46 |
46 47 | class IteratorReturningSimpleGenerator6:
47 |- def __iter__(self, /) -> Generator[str, None, None]:
48 |+ def __iter__(self, /) -> Iterator[str]:
48 49 | ... # PYI058 (use `Iterator`)
49 50 |
50 51 |
PYI058.py:55:13: PYI058 [*] Use `AsyncIterator` as the return value for simple `__aiter__` methods
|
54 | class AsyncIteratorReturningSimpleAsyncGenerator1:
55 | def __aiter__(
| ^^^^^^^^^ PYI058
56 | self,
57 | ) -> typing_extensions.AsyncGenerator:
|
= help: Convert the return annotation of your `__aiter__` method to `AsyncIterator`
Safe fix
54 54 | class AsyncIteratorReturningSimpleAsyncGenerator1:
55 55 | def __aiter__(
56 56 | self,
57 |- ) -> typing_extensions.AsyncGenerator:
57 |+ ) -> typing_extensions.AsyncIterator:
58 58 | ... # PYI058 (Use `AsyncIterator`)
59 59 |
60 60 |
PYI058.py:73:13: PYI058 [*] Use `AsyncIterator` as the return value for simple `__aiter__` methods
|
72 | class AsyncIteratorReturningSimpleAsyncGenerator3:
73 | def __aiter__(self, /) -> collections.abc.AsyncGenerator[str, None]:
| ^^^^^^^^^ PYI058
74 | ... # PYI058 (Use `AsyncIterator`)
|
= help: Convert the return annotation of your `__aiter__` method to `AsyncIterator`
Safe fix
70 70 | import collections.abc
71 71 |
72 72 | class AsyncIteratorReturningSimpleAsyncGenerator3:
73 |- def __aiter__(self, /) -> collections.abc.AsyncGenerator[str, None]:
73 |+ def __aiter__(self, /) -> collections.abc.AsyncIterator[str]:
74 74 | ... # PYI058 (Use `AsyncIterator`)
75 75 |
76 76 |

View File

@@ -1,58 +1,190 @@
---
source: crates/ruff_linter/src/rules/flake8_pyi/mod.rs
---
PYI058.pyi:7:9: PYI058 Use `Iterator` as the return value for simple `__iter__` methods
PYI058.pyi:5:13: PYI058 [*] Use `Iterator` as the return value for simple `__iter__` methods
|
6 | class IteratorReturningSimpleGenerator1:
7 | def __iter__(self) -> Generator: ... # PYI058 (use `Iterator`)
| ^^^^^^^^ PYI058
8 |
9 | class IteratorReturningSimpleGenerator2:
4 | class IteratorReturningSimpleGenerator1:
5 | def __iter__(self) -> Generator: ... # PYI058 (use `Iterator`)
| ^^^^^^^^ PYI058
6 |
7 | def scope():
|
= help: Convert the return annotation of your `__iter__` method to `Iterator`
PYI058.pyi:10:9: PYI058 Use `Iterator` as the return value for simple `__iter__` methods
|
9 | class IteratorReturningSimpleGenerator2:
10 | def __iter__(self, /) -> collections.abc.Generator[str, Any, None]: ... # PYI058 (use `Iterator`)
| ^^^^^^^^ PYI058
11 |
12 | class IteratorReturningSimpleGenerator3:
|
Safe fix
1 |+from collections.abc import Iterator
1 2 | def scope():
2 3 | from collections.abc import Generator
3 4 |
4 5 | class IteratorReturningSimpleGenerator1:
5 |- def __iter__(self) -> Generator: ... # PYI058 (use `Iterator`)
6 |+ def __iter__(self) -> Iterator: ... # PYI058 (use `Iterator`)
6 7 |
7 8 | def scope():
8 9 | import typing
PYI058.pyi:13:9: PYI058 Use `Iterator` as the return value for simple `__iter__` methods
PYI058.pyi:11:13: PYI058 [*] Use `Iterator` as the return value for simple `__iter__` methods
|
12 | class IteratorReturningSimpleGenerator3:
13 | def __iter__(self, /) -> collections.abc.Generator[str, None, typing.Any]: ... # PYI058 (use `Iterator`)
| ^^^^^^^^ PYI058
14 |
15 | class AsyncIteratorReturningSimpleAsyncGenerator1:
10 | class IteratorReturningSimpleGenerator2:
11 | def __iter__(self) -> typing.Generator: ... # PYI058 (use `Iterator`)
| ^^^^^^^^ PYI058
12 |
13 | def scope():
|
= help: Convert the return annotation of your `__iter__` method to `Iterator`
PYI058.pyi:16:9: PYI058 Use `AsyncIterator` as the return value for simple `__aiter__` methods
|
15 | class AsyncIteratorReturningSimpleAsyncGenerator1:
16 | def __aiter__(self) -> typing.AsyncGenerator: ... # PYI058 (Use `AsyncIterator`)
| ^^^^^^^^^ PYI058
17 |
18 | class AsyncIteratorReturningSimpleAsyncGenerator2:
|
Safe fix
8 8 | import typing
9 9 |
10 10 | class IteratorReturningSimpleGenerator2:
11 |- def __iter__(self) -> typing.Generator: ... # PYI058 (use `Iterator`)
11 |+ def __iter__(self) -> typing.Iterator: ... # PYI058 (use `Iterator`)
12 12 |
13 13 | def scope():
14 14 | import collections.abc
PYI058.pyi:19:9: PYI058 Use `AsyncIterator` as the return value for simple `__aiter__` methods
PYI058.pyi:17:13: PYI058 [*] Use `Iterator` as the return value for simple `__iter__` methods
|
18 | class AsyncIteratorReturningSimpleAsyncGenerator2:
19 | def __aiter__(self, /) -> collections.abc.AsyncGenerator[str, Any]: ... # PYI058 (Use `AsyncIterator`)
| ^^^^^^^^^ PYI058
20 |
21 | class AsyncIteratorReturningSimpleAsyncGenerator3:
16 | class IteratorReturningSimpleGenerator3:
17 | def __iter__(self) -> collections.abc.Generator: ... # PYI058 (use `Iterator`)
| ^^^^^^^^ PYI058
18 |
19 | def scope():
|
= help: Convert the return annotation of your `__iter__` method to `Iterator`
PYI058.pyi:22:9: PYI058 Use `AsyncIterator` as the return value for simple `__aiter__` methods
Safe fix
14 14 | import collections.abc
15 15 |
16 16 | class IteratorReturningSimpleGenerator3:
17 |- def __iter__(self) -> collections.abc.Generator: ... # PYI058 (use `Iterator`)
17 |+ def __iter__(self) -> collections.abc.Iterator: ... # PYI058 (use `Iterator`)
18 18 |
19 19 | def scope():
20 20 | import collections.abc
PYI058.pyi:24:13: PYI058 [*] Use `Iterator` as the return value for simple `__iter__` methods
|
21 | class AsyncIteratorReturningSimpleAsyncGenerator3:
22 | def __aiter__(self, /) -> collections.abc.AsyncGenerator[str, None]: ... # PYI058 (Use `AsyncIterator`)
| ^^^^^^^^^ PYI058
23 |
24 | class CorrectIterator:
23 | class IteratorReturningSimpleGenerator4:
24 | def __iter__(self, /) -> collections.abc.Generator[str, Any, None]: ... # PYI058 (use `Iterator`)
| ^^^^^^^^ PYI058
25 |
26 | def scope():
|
= help: Convert the return annotation of your `__iter__` method to `Iterator`
Safe fix
21 21 | from typing import Any
22 22 |
23 23 | class IteratorReturningSimpleGenerator4:
24 |- def __iter__(self, /) -> collections.abc.Generator[str, Any, None]: ... # PYI058 (use `Iterator`)
24 |+ def __iter__(self, /) -> collections.abc.Iterator[str]: ... # PYI058 (use `Iterator`)
25 25 |
26 26 | def scope():
27 27 | import collections.abc
PYI058.pyi:31:13: PYI058 [*] Use `Iterator` as the return value for simple `__iter__` methods
|
30 | class IteratorReturningSimpleGenerator5:
31 | def __iter__(self, /) -> collections.abc.Generator[str, None, typing.Any]: ... # PYI058 (use `Iterator`)
| ^^^^^^^^ PYI058
32 |
33 | def scope():
|
= help: Convert the return annotation of your `__iter__` method to `Iterator`
Safe fix
28 28 | import typing
29 29 |
30 30 | class IteratorReturningSimpleGenerator5:
31 |- def __iter__(self, /) -> collections.abc.Generator[str, None, typing.Any]: ... # PYI058 (use `Iterator`)
31 |+ def __iter__(self, /) -> collections.abc.Iterator[str]: ... # PYI058 (use `Iterator`)
32 32 |
33 33 | def scope():
34 34 | from collections.abc import Generator
PYI058.pyi:37:13: PYI058 [*] Use `Iterator` as the return value for simple `__iter__` methods
|
36 | class IteratorReturningSimpleGenerator6:
37 | def __iter__(self, /) -> Generator[str, None, None]: ... # PYI058 (use `Iterator`)
| ^^^^^^^^ PYI058
38 |
39 | def scope():
|
= help: Convert the return annotation of your `__iter__` method to `Iterator`
Safe fix
1 |+from collections.abc import Iterator
1 2 | def scope():
2 3 | from collections.abc import Generator
3 4 |
--------------------------------------------------------------------------------
34 35 | from collections.abc import Generator
35 36 |
36 37 | class IteratorReturningSimpleGenerator6:
37 |- def __iter__(self, /) -> Generator[str, None, None]: ... # PYI058 (use `Iterator`)
38 |+ def __iter__(self, /) -> Iterator[str]: ... # PYI058 (use `Iterator`)
38 39 |
39 40 | def scope():
40 41 | import typing_extensions
PYI058.pyi:43:13: PYI058 [*] Use `AsyncIterator` as the return value for simple `__aiter__` methods
|
42 | class AsyncIteratorReturningSimpleAsyncGenerator1:
43 | def __aiter__(self,) -> typing_extensions.AsyncGenerator: ... # PYI058 (Use `AsyncIterator`)
| ^^^^^^^^^ PYI058
44 |
45 | def scope():
|
= help: Convert the return annotation of your `__aiter__` method to `AsyncIterator`
Safe fix
40 40 | import typing_extensions
41 41 |
42 42 | class AsyncIteratorReturningSimpleAsyncGenerator1:
43 |- def __aiter__(self,) -> typing_extensions.AsyncGenerator: ... # PYI058 (Use `AsyncIterator`)
43 |+ def __aiter__(self,) -> typing_extensions.AsyncIterator: ... # PYI058 (Use `AsyncIterator`)
44 44 |
45 45 | def scope():
46 46 | import collections.abc
PYI058.pyi:49:13: PYI058 [*] Use `AsyncIterator` as the return value for simple `__aiter__` methods
|
48 | class AsyncIteratorReturningSimpleAsyncGenerator3:
49 | def __aiter__(self, /) -> collections.abc.AsyncGenerator[str, None]:
| ^^^^^^^^^ PYI058
50 | ... # PYI058 (Use `AsyncIterator`)
|
= help: Convert the return annotation of your `__aiter__` method to `AsyncIterator`
Safe fix
46 46 | import collections.abc
47 47 |
48 48 | class AsyncIteratorReturningSimpleAsyncGenerator3:
49 |- def __aiter__(self, /) -> collections.abc.AsyncGenerator[str, None]:
49 |+ def __aiter__(self, /) -> collections.abc.AsyncIterator[str]:
50 50 | ... # PYI058 (Use `AsyncIterator`)
51 51 |
52 52 | def scope():
PYI058.pyi:56:13: PYI058 [*] Use `AsyncIterator` as the return value for simple `__aiter__` methods
|
55 | class AsyncIteratorReturningSimpleAsyncGenerator3:
56 | def __aiter__(self, /) -> collections.abc.AsyncGenerator[str, None]: ... # PYI058 (Use `AsyncIterator`)
| ^^^^^^^^^ PYI058
57 |
58 | def scope():
|
= help: Convert the return annotation of your `__aiter__` method to `AsyncIterator`
Safe fix
53 53 | import collections.abc
54 54 |
55 55 | class AsyncIteratorReturningSimpleAsyncGenerator3:
56 |- def __aiter__(self, /) -> collections.abc.AsyncGenerator[str, None]: ... # PYI058 (Use `AsyncIterator`)
56 |+ def __aiter__(self, /) -> collections.abc.AsyncIterator[str]: ... # PYI058 (Use `AsyncIterator`)
57 57 |
58 58 | def scope():
59 59 | from typing import Iterator

View File

@@ -461,7 +461,7 @@ pub(crate) fn check_string_quotes(
// range to the sequence.
sequence.push(fstring_range_builder.finish());
}
Tok::Comment(..) | Tok::NonLogicalNewline => continue,
Tok::Comment | Tok::NonLogicalNewline => continue,
_ => {
// Otherwise, consume the sequence.
if !sequence.is_empty() {

View File

@@ -56,8 +56,10 @@ impl Violation for RelativeImports {
#[derive_message_formats]
fn message(&self) -> String {
match self.strictness {
Strictness::Parents => format!("Relative imports from parent modules are banned"),
Strictness::All => format!("Relative imports are banned"),
Strictness::Parents => {
format!("Prefer absolute imports over relative imports from parent modules")
}
Strictness::All => format!("Prefer absolute imports over relative imports"),
}
}
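The message rewording above keys on the rule's `Strictness`. A sketch of the resulting behavior as a plain function (standing in for ruff's `#[derive_message_formats]` impl, which is not reproduced here):

```rust
// Sketch of the reworded TID252 messages, keyed on strictness.
#[derive(Clone, Copy)]
enum Strictness {
    Parents,
    All,
}

fn message(strictness: Strictness) -> String {
    match strictness {
        Strictness::Parents => {
            "Prefer absolute imports over relative imports from parent modules".to_string()
        }
        Strictness::All => "Prefer absolute imports over relative imports".to_string(),
    }
}

fn main() {
    // Both variants now lead with the preferred alternative rather than
    // stating that relative imports "are banned".
    assert!(message(Strictness::All).starts_with("Prefer absolute imports"));
    assert!(message(Strictness::Parents).ends_with("parent modules"));
    println!("{}", message(Strictness::Parents));
}
```

This matches the updated TID252 snapshot output in the diffs that follow.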

View File

@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/flake8_tidy_imports/mod.rs
---
TID252.py:7:1: TID252 Relative imports are banned
TID252.py:7:1: TID252 Prefer absolute imports over relative imports
|
6 | # TID252
7 | from . import sibling
@@ -11,7 +11,7 @@ TID252.py:7:1: TID252 Relative imports are banned
|
= help: Replace relative imports with absolute imports
TID252.py:8:1: TID252 Relative imports are banned
TID252.py:8:1: TID252 Prefer absolute imports over relative imports
|
6 | # TID252
7 | from . import sibling
@@ -22,7 +22,7 @@ TID252.py:8:1: TID252 Relative imports are banned
|
= help: Replace relative imports with absolute imports
TID252.py:9:1: TID252 Relative imports are banned
TID252.py:9:1: TID252 Prefer absolute imports over relative imports
|
7 | from . import sibling
8 | from .sibling import example
@@ -33,7 +33,7 @@ TID252.py:9:1: TID252 Relative imports are banned
|
= help: Replace relative imports with absolute imports
TID252.py:10:1: TID252 Relative imports are banned
TID252.py:10:1: TID252 Prefer absolute imports over relative imports
|
8 | from .sibling import example
9 | from .. import parent
@@ -44,7 +44,7 @@ TID252.py:10:1: TID252 Relative imports are banned
|
= help: Replace relative imports with absolute imports
TID252.py:11:1: TID252 Relative imports are banned
TID252.py:11:1: TID252 Prefer absolute imports over relative imports
|
9 | from .. import parent
10 | from ..parent import example
@@ -55,7 +55,7 @@ TID252.py:11:1: TID252 Relative imports are banned
|
= help: Replace relative imports with absolute imports
TID252.py:12:1: TID252 Relative imports are banned
TID252.py:12:1: TID252 Prefer absolute imports over relative imports
|
10 | from ..parent import example
11 | from ... import grandparent
@@ -66,7 +66,7 @@ TID252.py:12:1: TID252 Relative imports are banned
|
= help: Replace relative imports with absolute imports
TID252.py:13:1: TID252 Relative imports are banned
TID252.py:13:1: TID252 Prefer absolute imports over relative imports
|
11 | from ... import grandparent
12 | from ...grandparent import example
@@ -77,7 +77,7 @@ TID252.py:13:1: TID252 Relative imports are banned
|
= help: Replace relative imports with absolute imports
TID252.py:14:1: TID252 Relative imports are banned
TID252.py:14:1: TID252 Prefer absolute imports over relative imports
|
12 | from ...grandparent import example
13 | from .parent import hello
@@ -90,7 +90,7 @@ TID252.py:14:1: TID252 Relative imports are banned
|
= help: Replace relative imports with absolute imports
TID252.py:17:1: TID252 Relative imports are banned
TID252.py:17:1: TID252 Prefer absolute imports over relative imports
|
15 | parent import \
16 | hello_world
@@ -104,7 +104,7 @@ TID252.py:17:1: TID252 Relative imports are banned
|
= help: Replace relative imports with absolute imports
TID252.py:21:1: TID252 Relative imports are banned
TID252.py:21:1: TID252 Prefer absolute imports over relative imports
|
19 | import \
20 | world_hello
@@ -115,7 +115,7 @@ TID252.py:21:1: TID252 Relative imports are banned
|
= help: Replace relative imports with absolute imports
TID252.py:22:1: TID252 Relative imports are banned
TID252.py:22:1: TID252 Prefer absolute imports over relative imports
|
20 | world_hello
21 | from ..... import ultragrantparent
@@ -126,7 +126,7 @@ TID252.py:22:1: TID252 Relative imports are banned
|
= help: Replace relative imports with absolute imports
TID252.py:23:1: TID252 Relative imports are banned
TID252.py:23:1: TID252 Prefer absolute imports over relative imports
|
21 | from ..... import ultragrantparent
22 | from ...... import ultragrantparent
@@ -137,7 +137,7 @@ TID252.py:23:1: TID252 Relative imports are banned
|
= help: Replace relative imports with absolute imports
TID252.py:24:1: TID252 Relative imports are banned
TID252.py:24:1: TID252 Prefer absolute imports over relative imports
|
22 | from ...... import ultragrantparent
23 | from ....... import ultragrantparent
@@ -148,7 +148,7 @@ TID252.py:24:1: TID252 Relative imports are banned
|
= help: Replace relative imports with absolute imports
TID252.py:25:1: TID252 Relative imports are banned
TID252.py:25:1: TID252 Prefer absolute imports over relative imports
|
23 | from ....... import ultragrantparent
24 | from ......... import ultragrantparent
@@ -159,7 +159,7 @@ TID252.py:25:1: TID252 Relative imports are banned
|
= help: Replace relative imports with absolute imports
TID252.py:26:1: TID252 Relative imports are banned
TID252.py:26:1: TID252 Prefer absolute imports over relative imports
|
24 | from ......... import ultragrantparent
25 | from ........................... import ultragrantparent
@@ -170,7 +170,7 @@ TID252.py:26:1: TID252 Relative imports are banned
|
= help: Replace relative imports with absolute imports
TID252.py:27:1: TID252 Relative imports are banned
TID252.py:27:1: TID252 Prefer absolute imports over relative imports
|
25 | from ........................... import ultragrantparent
26 | from .....parent import ultragrantparent
@@ -180,7 +180,7 @@ TID252.py:27:1: TID252 Relative imports are banned
|
= help: Replace relative imports with absolute imports
TID252.py:28:1: TID252 Relative imports are banned
TID252.py:28:1: TID252 Prefer absolute imports over relative imports
|
26 | from .....parent import ultragrantparent
27 | from .........parent import ultragrantparent

View File

@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/flake8_tidy_imports/mod.rs
---
TID252.py:9:1: TID252 Relative imports from parent modules are banned
TID252.py:9:1: TID252 Prefer absolute imports over relative imports from parent modules
|
7 | from . import sibling
8 | from .sibling import example
@@ -12,7 +12,7 @@ TID252.py:9:1: TID252 Relative imports from parent modules are banned
|
= help: Replace relative imports from parent modules with absolute imports
TID252.py:10:1: TID252 Relative imports from parent modules are banned
TID252.py:10:1: TID252 Prefer absolute imports over relative imports from parent modules
|
8 | from .sibling import example
9 | from .. import parent
@@ -23,7 +23,7 @@ TID252.py:10:1: TID252 Relative imports from parent modules are banned
|
= help: Replace relative imports from parent modules with absolute imports
TID252.py:11:1: TID252 Relative imports from parent modules are banned
TID252.py:11:1: TID252 Prefer absolute imports over relative imports from parent modules
|
9 | from .. import parent
10 | from ..parent import example
@@ -34,7 +34,7 @@ TID252.py:11:1: TID252 Relative imports from parent modules are banned
|
= help: Replace relative imports from parent modules with absolute imports
TID252.py:12:1: TID252 Relative imports from parent modules are banned
TID252.py:12:1: TID252 Prefer absolute imports over relative imports from parent modules
|
10 | from ..parent import example
11 | from ... import grandparent
@@ -45,7 +45,7 @@ TID252.py:12:1: TID252 Relative imports from parent modules are banned
|
= help: Replace relative imports from parent modules with absolute imports
TID252.py:17:1: TID252 Relative imports from parent modules are banned
TID252.py:17:1: TID252 Prefer absolute imports over relative imports from parent modules
|
15 | parent import \
16 | hello_world
@@ -59,7 +59,7 @@ TID252.py:17:1: TID252 Relative imports from parent modules are banned
|
= help: Replace relative imports from parent modules with absolute imports
TID252.py:21:1: TID252 Relative imports from parent modules are banned
TID252.py:21:1: TID252 Prefer absolute imports over relative imports from parent modules
|
19 | import \
20 | world_hello
@@ -70,7 +70,7 @@ TID252.py:21:1: TID252 Relative imports from parent modules are banned
|
= help: Replace relative imports from parent modules with absolute imports
TID252.py:22:1: TID252 Relative imports from parent modules are banned
TID252.py:22:1: TID252 Prefer absolute imports over relative imports from parent modules
|
20 | world_hello
21 | from ..... import ultragrantparent
@@ -81,7 +81,7 @@ TID252.py:22:1: TID252 Relative imports from parent modules are banned
|
= help: Replace relative imports from parent modules with absolute imports
TID252.py:23:1: TID252 Relative imports from parent modules are banned
TID252.py:23:1: TID252 Prefer absolute imports over relative imports from parent modules
|
21 | from ..... import ultragrantparent
22 | from ...... import ultragrantparent
@@ -92,7 +92,7 @@ TID252.py:23:1: TID252 Relative imports from parent modules are banned
|
= help: Replace relative imports from parent modules with absolute imports
TID252.py:24:1: TID252 Relative imports from parent modules are banned
TID252.py:24:1: TID252 Prefer absolute imports over relative imports from parent modules
|
22 | from ...... import ultragrantparent
23 | from ....... import ultragrantparent
@@ -103,7 +103,7 @@ TID252.py:24:1: TID252 Relative imports from parent modules are banned
|
= help: Replace relative imports from parent modules with absolute imports
TID252.py:25:1: TID252 Relative imports from parent modules are banned
TID252.py:25:1: TID252 Prefer absolute imports over relative imports from parent modules
|
23 | from ....... import ultragrantparent
24 | from ......... import ultragrantparent
@@ -114,7 +114,7 @@ TID252.py:25:1: TID252 Relative imports from parent modules are banned
|
= help: Replace relative imports from parent modules with absolute imports
TID252.py:26:1: TID252 Relative imports from parent modules are banned
TID252.py:26:1: TID252 Prefer absolute imports over relative imports from parent modules
|
24 | from ......... import ultragrantparent
25 | from ........................... import ultragrantparent
@@ -125,7 +125,7 @@ TID252.py:26:1: TID252 Relative imports from parent modules are banned
|
= help: Replace relative imports from parent modules with absolute imports
TID252.py:27:1: TID252 Relative imports from parent modules are banned
TID252.py:27:1: TID252 Prefer absolute imports over relative imports from parent modules
|
25 | from ........................... import ultragrantparent
26 | from .....parent import ultragrantparent
@@ -135,7 +135,7 @@ TID252.py:27:1: TID252 Relative imports from parent modules are banned
|
= help: Replace relative imports from parent modules with absolute imports
TID252.py:28:1: TID252 Relative imports from parent modules are banned
TID252.py:28:1: TID252 Prefer absolute imports over relative imports from parent modules
|
26 | from .....parent import ultragrantparent
27 | from .........parent import ultragrantparent

View File

@@ -1,7 +1,7 @@
---
source: crates/ruff_linter/src/rules/flake8_tidy_imports/mod.rs
---
application.py:5:1: TID252 Relative imports from parent modules are banned
application.py:5:1: TID252 Prefer absolute imports over relative imports from parent modules
|
3 | import attrs
4 |
@@ -12,7 +12,7 @@ application.py:5:1: TID252 Relative imports from parent modules are banned
|
= help: Replace relative imports from parent modules with absolute imports
application.py:6:1: TID252 [*] Relative imports from parent modules are banned
application.py:6:1: TID252 [*] Prefer absolute imports over relative imports from parent modules
|
5 | from ....import unknown
6 | from ..protocol import commands, definitions, responses
@@ -32,7 +32,7 @@ application.py:6:1: TID252 [*] Relative imports from parent modules are banned
8 8 | from .. import server
9 9 | from . import logger, models
application.py:6:1: TID252 [*] Relative imports from parent modules are banned
application.py:6:1: TID252 [*] Prefer absolute imports over relative imports from parent modules
|
5 | from ....import unknown
6 | from ..protocol import commands, definitions, responses
@@ -52,7 +52,7 @@ application.py:6:1: TID252 [*] Relative imports from parent modules are banned
8 8 | from .. import server
9 9 | from . import logger, models
application.py:6:1: TID252 [*] Relative imports from parent modules are banned
application.py:6:1: TID252 [*] Prefer absolute imports over relative imports from parent modules
|
5 | from ....import unknown
6 | from ..protocol import commands, definitions, responses
@@ -72,7 +72,7 @@ application.py:6:1: TID252 [*] Relative imports from parent modules are banned
8 8 | from .. import server
9 9 | from . import logger, models
application.py:7:1: TID252 [*] Relative imports from parent modules are banned
application.py:7:1: TID252 [*] Prefer absolute imports over relative imports from parent modules
|
5 | from ....import unknown
6 | from ..protocol import commands, definitions, responses
@@ -93,7 +93,7 @@ application.py:7:1: TID252 [*] Relative imports from parent modules are banned
9 9 | from . import logger, models
10 10 | from ..protocol.UpperCaseModule import some_function
application.py:8:1: TID252 [*] Relative imports from parent modules are banned
application.py:8:1: TID252 [*] Prefer absolute imports over relative imports from parent modules
|
6 | from ..protocol import commands, definitions, responses
7 | from ..server import example
@@ -113,7 +113,7 @@ application.py:8:1: TID252 [*] Relative imports from parent modules are banned
9 9 | from . import logger, models
10 10 | from ..protocol.UpperCaseModule import some_function
application.py:10:1: TID252 [*] Relative imports from parent modules are banned
application.py:10:1: TID252 [*] Prefer absolute imports over relative imports from parent modules
|
8 | from .. import server
9 | from . import logger, models

View File

@@ -26,7 +26,7 @@ pub(super) fn trailing_comma(
if count == 1 {
if matches!(
tok,
Tok::NonLogicalNewline | Tok::Indent | Tok::Dedent | Tok::Comment(_)
Tok::NonLogicalNewline | Tok::Indent | Tok::Dedent | Tok::Comment
) {
continue;
} else if matches!(tok, Tok::Comma) {

View File

@@ -239,7 +239,7 @@ pub(crate) fn compound_statements(
semi = Some((range.start(), range.end()));
allow_ellipsis = false;
}
Tok::Comment(..) | Tok::Indent | Tok::Dedent | Tok::NonLogicalNewline => {}
Tok::Comment | Tok::Indent | Tok::Dedent | Tok::NonLogicalNewline => {}
_ => {
if let Some((start, end)) = semi {
diagnostics.push(Diagnostic::new(
@@ -347,7 +347,7 @@ fn has_non_trivia_tokens_till<'a>(
}
if !matches!(
tok,
Tok::Newline | Tok::Comment(_) | Tok::EndOfFile | Tok::NonLogicalNewline
Tok::Newline | Tok::Comment | Tok::EndOfFile | Tok::NonLogicalNewline
) {
return true;
}

View File

@@ -65,6 +65,7 @@ impl Violation for MultipleImportsOnOneLine {
/// ```
///
/// [PEP 8]: https://peps.python.org/pep-0008/#imports
/// [preview]: https://docs.astral.sh/ruff/preview/
#[violation]
pub struct ModuleImportNotAtTopOfFile {
source_type: PySourceType,

View File

@@ -200,42 +200,40 @@ pub(crate) fn literal_comparisons(checker: &mut Checker, compare: &ast::ExprComp
continue;
}
let Some(op) = EqCmpOp::try_from(*op) else {
continue;
};
if checker.enabled(Rule::NoneComparison) && next.is_none_literal_expr() {
match op {
EqCmpOp::Eq => {
let diagnostic = Diagnostic::new(NoneComparison(op), next.range());
bad_ops.insert(index, CmpOp::Is);
diagnostics.push(diagnostic);
}
EqCmpOp::NotEq => {
let diagnostic = Diagnostic::new(NoneComparison(op), next.range());
bad_ops.insert(index, CmpOp::IsNot);
diagnostics.push(diagnostic);
}
}
}
if checker.enabled(Rule::TrueFalseComparison) {
if let Expr::BooleanLiteral(ast::ExprBooleanLiteral { value, .. }) = next {
if let Some(op) = EqCmpOp::try_from(*op) {
if checker.enabled(Rule::NoneComparison) && next.is_none_literal_expr() {
match op {
EqCmpOp::Eq => {
let diagnostic =
Diagnostic::new(TrueFalseComparison(*value, op), next.range());
let diagnostic = Diagnostic::new(NoneComparison(op), next.range());
bad_ops.insert(index, CmpOp::Is);
diagnostics.push(diagnostic);
}
EqCmpOp::NotEq => {
let diagnostic =
Diagnostic::new(TrueFalseComparison(*value, op), next.range());
let diagnostic = Diagnostic::new(NoneComparison(op), next.range());
bad_ops.insert(index, CmpOp::IsNot);
diagnostics.push(diagnostic);
}
}
}
if checker.enabled(Rule::TrueFalseComparison) {
if let Expr::BooleanLiteral(ast::ExprBooleanLiteral { value, .. }) = next {
match op {
EqCmpOp::Eq => {
let diagnostic =
Diagnostic::new(TrueFalseComparison(*value, op), next.range());
bad_ops.insert(index, CmpOp::Is);
diagnostics.push(diagnostic);
}
EqCmpOp::NotEq => {
let diagnostic =
Diagnostic::new(TrueFalseComparison(*value, op), next.range());
bad_ops.insert(index, CmpOp::IsNot);
diagnostics.push(diagnostic);
}
}
}
}
}
comparator = next;
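The refactor above hoists `EqCmpOp::try_from` out of the nested `if let` into a `let ... else { continue }`, so both rule checks sit one level shallower. A self-contained sketch of the pattern, with a hypothetical `try_from` standing in for `EqCmpOp::try_from`:

```rust
// Sketch of the let-else hoist: bail out of the loop iteration once,
// instead of nesting every subsequent check under `if let Some(op)`.
fn try_from(op: &str) -> Option<&'static str> {
    match op {
        "==" => Some("Eq"),
        "!=" => Some("NotEq"),
        _ => None,
    }
}

fn classify(ops: &[&str]) -> Vec<&'static str> {
    let mut out = Vec::new();
    for raw in ops {
        // Non-equality operators are skipped up front; everything after
        // this line can assume `op` is a valid equality comparison.
        let Some(op) = try_from(raw) else {
            continue;
        };
        out.push(op);
    }
    out
}

fn main() {
    assert_eq!(classify(&["==", "<", "!="]), vec!["Eq", "NotEq"]);
    println!("ok");
}
```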

View File

@@ -48,9 +48,6 @@ pub(crate) fn no_newline_at_end_of_file(
}
if !source.ends_with(['\n', '\r']) {
// Note: if `lines.last()` is `None`, then `contents` is empty (and so we don't
// want to raise W292 anyway).
// Both locations are at the end of the file (and thus the same).
let range = TextRange::empty(locator.contents().text_len());
let mut diagnostic = Diagnostic::new(MissingNewlineAtEndOfFile, range);
@@ -60,5 +57,6 @@ pub(crate) fn no_newline_at_end_of_file(
)));
return Some(diagnostic);
}
None
}

View File

@@ -54,6 +54,7 @@ mod tests {
#[test_case(Rule::UnusedImport, Path::new("F401_19.py"))]
#[test_case(Rule::UnusedImport, Path::new("F401_20.py"))]
#[test_case(Rule::ImportShadowedByLoopVar, Path::new("F402.py"))]
#[test_case(Rule::ImportShadowedByLoopVar, Path::new("F402.ipynb"))]
#[test_case(Rule::UndefinedLocalWithImportStar, Path::new("F403.py"))]
#[test_case(Rule::LateFutureImport, Path::new("F404_0.py"))]
#[test_case(Rule::LateFutureImport, Path::new("F404_1.py"))]

View File

@@ -1,6 +1,6 @@
use ruff_diagnostics::Violation;
use ruff_macros::{derive_message_formats, violation};
use ruff_source_file::OneIndexed;
use ruff_source_file::SourceRow;
/// ## What it does
/// Checks for import bindings that are shadowed by loop variables.
@@ -32,14 +32,14 @@ use ruff_source_file::OneIndexed;
#[violation]
pub struct ImportShadowedByLoopVar {
pub(crate) name: String,
pub(crate) line: OneIndexed,
pub(crate) row: SourceRow,
}
impl Violation for ImportShadowedByLoopVar {
#[derive_message_formats]
fn message(&self) -> String {
let ImportShadowedByLoopVar { name, line } = self;
format!("Import `{name}` from line {line} shadowed by loop variable")
let ImportShadowedByLoopVar { name, row } = self;
format!("Import `{name}` from {row} shadowed by loop variable")
}
}
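The switch from `OneIndexed` to `SourceRow` lets the F402 message describe a location inside a notebook cell as well as a plain line. A hypothetical stand-in for `ruff_source_file::SourceRow` (its real definition is not shown in this diff; the `Display` format below is inferred from the `F402.ipynb` snapshot output further down):

```rust
use std::fmt;

// Hypothetical stand-in for ruff_source_file::SourceRow: a plain line
// number for scripts, or a (cell, line) pair for notebooks.
enum SourceRow {
    Line(usize),
    Cell { cell: usize, line: usize },
}

impl fmt::Display for SourceRow {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            SourceRow::Line(line) => write!(f, "line {line}"),
            SourceRow::Cell { cell, line } => write!(f, "cell {cell}, line {line}"),
        }
    }
}

fn main() {
    // Mirrors the first diagnostic in the F402.ipynb snapshot below.
    let msg = format!(
        "Import `os` from {} shadowed by loop variable",
        SourceRow::Cell { cell: 1, line: 1 }
    );
    assert_eq!(msg, "Import `os` from cell 1, line 1 shadowed by loop variable");
    println!("{msg}");
}
```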

View File

@@ -1,6 +1,6 @@
use ruff_diagnostics::Violation;
use ruff_macros::{derive_message_formats, violation};
use ruff_source_file::OneIndexed;
use ruff_source_file::SourceRow;
/// ## What it does
/// Checks for variable definitions that redefine (or "shadow") unused
@@ -25,13 +25,13 @@ use ruff_source_file::OneIndexed;
#[violation]
pub struct RedefinedWhileUnused {
pub name: String,
pub line: OneIndexed,
pub row: SourceRow,
}
impl Violation for RedefinedWhileUnused {
#[derive_message_formats]
fn message(&self) -> String {
let RedefinedWhileUnused { name, line } = self;
format!("Redefinition of unused `{name}` from line {line}")
let RedefinedWhileUnused { name, row } = self;
format!("Redefinition of unused `{name}` from {row}")
}
}

View File

@@ -0,0 +1,23 @@
---
source: crates/ruff_linter/src/rules/pyflakes/mod.rs
---
F402.ipynb:3:5: F402 Import `os` from cell 1, line 1 shadowed by loop variable
|
1 | import os
2 | import os.path as path
3 | for os in range(3):
| ^^ F402
4 | pass
5 | for path in range(3):
|
F402.ipynb:5:5: F402 Import `path` from cell 1, line 2 shadowed by loop variable
|
3 | for os in range(3):
4 | pass
5 | for path in range(3):
| ^^^^ F402
6 | pass
|

Some files were not shown because too many files have changed in this diff.