Compare commits: 10 commits
zb/recursi...dhruv/toke
| SHA1 |
|---|
| 2b47794278 |
| e9d5ca2fe1 |
| 07057c6a35 |
| 3af851109f |
| c0c065f1ca |
| e837888c37 |
| 79861da8f6 |
| 035ac75fae |
| b5cc384bb1 |
| 78ee6441a7 |
.gitattributes (vendored): 2 changes

@@ -2,8 +2,6 @@
 crates/ruff_linter/resources/test/fixtures/isort/line_ending_crlf.py text eol=crlf
 crates/ruff_linter/resources/test/fixtures/pycodestyle/W605_1.py text eol=crlf
-crates/ruff_linter/resources/test/fixtures/pycodestyle/W391_2.py text eol=crlf
-crates/ruff_linter/resources/test/fixtures/pycodestyle/W391_3.py text eol=crlf
 crates/ruff_python_formatter/resources/test/fixtures/ruff/docstring_code_examples_crlf.py text eol=crlf
 crates/ruff_python_formatter/tests/snapshots/format@docstring_code_examples_crlf.py.snap text eol=crlf
.github/workflows/release.yaml (vendored): 11 changes

@@ -28,6 +28,7 @@ env:
   CARGO_NET_RETRY: 10
   CARGO_TERM_COLOR: always
   RUSTUP_MAX_RETRIES: 10
+  MATURIN_VERSION: "1.4.0"
 
 jobs:
   sdist:
@@ -44,6 +45,7 @@ jobs:
       - name: "Build sdist"
        uses: PyO3/maturin-action@v1
        with:
+          maturin-version: ${{ env.MATURIN_VERSION }}
          command: sdist
          args: --out dist
       - name: "Test sdist"
@@ -72,6 +74,7 @@ jobs:
       - name: "Build wheels - x86_64"
        uses: PyO3/maturin-action@v1
        with:
+          maturin-version: ${{ env.MATURIN_VERSION }}
          target: x86_64
          args: --release --locked --out dist
       - name: "Test wheel - x86_64"
@@ -112,6 +115,7 @@ jobs:
       - name: "Build wheels - universal2"
        uses: PyO3/maturin-action@v1
        with:
+          maturin-version: ${{ env.MATURIN_VERSION }}
          args: --release --locked --target universal2-apple-darwin --out dist
       - name: "Test wheel - universal2"
        run: |
@@ -160,6 +164,7 @@ jobs:
       - name: "Build wheels"
        uses: PyO3/maturin-action@v1
        with:
+          maturin-version: ${{ env.MATURIN_VERSION }}
          target: ${{ matrix.platform.target }}
          args: --release --locked --out dist
       - name: "Test wheel"
@@ -208,6 +213,7 @@ jobs:
       - name: "Build wheels"
        uses: PyO3/maturin-action@v1
        with:
+          maturin-version: ${{ env.MATURIN_VERSION }}
          target: ${{ matrix.target }}
          manylinux: auto
          args: --release --locked --out dist
@@ -270,6 +276,7 @@ jobs:
       - name: "Build wheels"
        uses: PyO3/maturin-action@v1
        with:
+          maturin-version: ${{ env.MATURIN_VERSION }}
          target: ${{ matrix.platform.target }}
          manylinux: auto
          docker-options: ${{ matrix.platform.maturin_docker_options }}
@@ -326,6 +333,7 @@ jobs:
       - name: "Build wheels"
        uses: PyO3/maturin-action@v1
        with:
+          maturin-version: ${{ env.MATURIN_VERSION }}
          target: ${{ matrix.target }}
          manylinux: musllinux_1_2
          args: --release --locked --out dist
@@ -381,6 +389,7 @@ jobs:
       - name: "Build wheels"
        uses: PyO3/maturin-action@v1
        with:
+          maturin-version: ${{ env.MATURIN_VERSION }}
          target: ${{ matrix.platform.target }}
          manylinux: musllinux_1_2
          args: --release --locked --out dist
@@ -517,7 +526,7 @@ jobs:
          path: binaries
          merge-multiple: true
       - name: "Publish to GitHub"
-       uses: softprops/action-gh-release@v2
+       uses: softprops/action-gh-release@v1
        with:
          draft: true
          files: binaries/*
CHANGELOG.md: 20 changes

@@ -1,25 +1,5 @@
 # Changelog
 
-## 0.3.2
-
-### Preview features
-
-- Improve single-`with` item formatting for Python 3.8 or older ([#10276](https://github.com/astral-sh/ruff/pull/10276))
-
-### Rule changes
-
-- \[`pyupgrade`\] Allow fixes for f-string rule regardless of line length (`UP032`) ([#10263](https://github.com/astral-sh/ruff/pull/10263))
-- \[`pycodestyle`\] Include actual conditions in E712 diagnostics ([#10254](https://github.com/astral-sh/ruff/pull/10254))
-
-### Bug fixes
-
-- Fix trailing kwargs end of line comment after slash ([#10297](https://github.com/astral-sh/ruff/pull/10297))
-- Fix unstable `with` items formatting ([#10274](https://github.com/astral-sh/ruff/pull/10274))
-- Avoid repeating function calls in f-string conversions ([#10265](https://github.com/astral-sh/ruff/pull/10265))
-- Fix E203 false positive for slices in format strings ([#10280](https://github.com/astral-sh/ruff/pull/10280))
-- Fix incorrect `Parameter` range for `*args` and `**kwargs` ([#10283](https://github.com/astral-sh/ruff/pull/10283))
-- Treat `typing.Annotated` subscripts as type definitions ([#10285](https://github.com/astral-sh/ruff/pull/10285))
-
 ## 0.3.1
 
 ### Preview features

CONTRIBUTING.md:

@@ -329,13 +329,13 @@ even patch releases may contain [non-backwards-compatible changes](https://semve
 
 ### Creating a new release
 
-1. Install `uv`: `curl -LsSf https://astral.sh/uv/install.sh | sh`
-1. Run `./scripts/release/bump.sh`; this command will:
-   - Generate a temporary virtual environment with `rooster`
+We use an experimental in-house tool for managing releases.
+
+1. Install `rooster`: `pip install git+https://github.com/zanieb/rooster@main`
+1. Run `rooster release`; this command will:
    - Generate a changelog entry in `CHANGELOG.md`
    - Update versions in `pyproject.toml` and `Cargo.toml`
    - Update references to versions in the `README.md` and documentation
-   - Display contributors for the release
 1. The changelog should then be editorialized for consistency
    - Often labels will be missing from pull requests they will need to be manually organized into the proper section
    - Changes should be edited to be user-facing descriptions, avoiding internal details
@@ -359,7 +359,7 @@ even patch releases may contain [non-backwards-compatible changes](https://semve
 1. Open the draft release in the GitHub release section
 1. Copy the changelog for the release into the GitHub release
    - See previous releases for formatting of section headers
-1. Append the contributors from the `bump.sh` script
+1. Generate the contributor list with `rooster contributors` and add to the release notes
 1. If needed, [update the schemastore](https://github.com/astral-sh/ruff/blob/main/scripts/update_schemastore.py).
    1. One can determine if an update is needed when
      `git diff old-version-tag new-version-tag -- ruff.schema.json` returns a non-empty diff.
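The release checklist above is mostly imperative commands; only the last step (deciding whether the schemastore needs an update) has any logic to it. A minimal sketch, with hypothetical helper names (the commands themselves are the ones the docs list):

```python
# Hypothetical helpers; the command lines mirror the documented release steps.
RELEASE_STEPS = [
    ["pip", "install", "git+https://github.com/zanieb/rooster@main"],
    ["rooster", "release"],
    ["rooster", "contributors"],
]


def schemastore_update_needed(diff_output: str) -> bool:
    """Per the docs: an update is needed when
    `git diff old-version-tag new-version-tag -- ruff.schema.json`
    returns a non-empty diff."""
    return bool(diff_output.strip())
```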
Cargo.lock (generated): 191 changes

@@ -270,9 +270,9 @@ dependencies = [
 
 [[package]]
 name = "chrono"
-version = "0.4.35"
+version = "0.4.34"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8eaf5903dcbc0a39312feb77df2ff4c76387d591b9fc7b04a238dcf8bb62639a"
+checksum = "5bc015644b92d5890fab7489e49d21f879d5c990186827d42ec511919404f38b"
 dependencies = [
  "android-tzdata",
  "iana-time-zone",
@@ -309,9 +309,9 @@ dependencies = [
 
 [[package]]
 name = "clap"
-version = "4.5.2"
+version = "4.5.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b230ab84b0ffdf890d5a10abdbc8b83ae1c4918275daea1ab8801f71536b2651"
+checksum = "c918d541ef2913577a0f9566e9ce27cb35b6df072075769e0b26cb5a554520da"
 dependencies = [
  "clap_builder",
  "clap_derive",
@@ -319,9 +319,9 @@ dependencies = [
 
 [[package]]
 name = "clap_builder"
-version = "4.5.2"
+version = "4.5.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "ae129e2e766ae0ec03484e609954119f123cc1fe650337e155d03b022f24f7b4"
+checksum = "9f3e7391dad68afb0c2ede1bf619f579a3dc9c2ec67f089baa397123a2f3d1eb"
 dependencies = [
  "anstream",
  "anstyle",
@@ -528,19 +528,6 @@ dependencies = [
  "itertools 0.10.5",
 ]
 
-[[package]]
-name = "crossbeam"
-version = "0.8.4"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "1137cd7e7fc0fb5d3c5a8678be38ec56e819125d8d7907411fe24ccb943faca8"
-dependencies = [
- "crossbeam-channel",
- "crossbeam-deque",
- "crossbeam-epoch",
- "crossbeam-queue",
- "crossbeam-utils",
-]
-
 [[package]]
 name = "crossbeam-channel"
 version = "0.5.12"
@@ -569,15 +556,6 @@ dependencies = [
  "crossbeam-utils",
 ]
 
-[[package]]
-name = "crossbeam-queue"
-version = "0.3.11"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "df0346b5d5e76ac2fe4e327c5fd1118d6be7c51dfb18f9b7922923f287471e35"
-dependencies = [
- "crossbeam-utils",
-]
-
 [[package]]
 name = "crossbeam-utils"
 version = "0.8.19"
@@ -1177,17 +1155,11 @@ version = "1.0.10"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "b1a46d1a171d865aa5f83f92695765caa047a9b4cbae2cbf37dbd613a793fd4c"
 
-[[package]]
-name = "jod-thread"
-version = "0.1.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8b23360e99b8717f20aaa4598f5a6541efbe30630039fbc7706cf954a87947ae"
-
 [[package]]
 name = "js-sys"
-version = "0.3.69"
+version = "0.3.68"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "29c15563dc2726973df627357ce0c9ddddbea194836909d655df6a75d2cf296d"
+checksum = "406cda4b368d531c842222cf9d2600a9a4acce8d29423695379c6868a143a9ee"
 dependencies = [
  "wasm-bindgen",
 ]
@@ -1355,31 +1327,6 @@ version = "0.4.21"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "90ed8c1e510134f979dbc4f070f87d4313098b704861a105fe34231c70a3901c"
 
-[[package]]
-name = "lsp-server"
-version = "0.7.6"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "248f65b78f6db5d8e1b1604b4098a28b43d21a8eb1deeca22b1c421b276c7095"
-dependencies = [
- "crossbeam-channel",
- "log",
- "serde",
- "serde_json",
-]
-
-[[package]]
-name = "lsp-types"
-version = "0.95.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "158c1911354ef73e8fe42da6b10c0484cb65c7f1007f28022e847706c1ab6984"
-dependencies = [
- "bitflags 1.3.2",
- "serde",
- "serde_json",
- "serde_repr",
- "url",
-]
-
 [[package]]
 name = "matchers"
 version = "0.1.0"
@@ -2003,7 +1950,7 @@ dependencies = [
 
 [[package]]
 name = "ruff"
-version = "0.3.2"
+version = "0.3.1"
 dependencies = [
  "anyhow",
  "argfile",
@@ -2035,7 +1982,6 @@ dependencies = [
  "ruff_notebook",
  "ruff_python_ast",
  "ruff_python_formatter",
- "ruff_server",
  "ruff_source_file",
  "ruff_text_size",
  "ruff_workspace",
@@ -2050,8 +1996,6 @@ dependencies = [
  "tikv-jemallocator",
  "toml",
  "tracing",
- "tracing-subscriber",
- "tracing-tree",
  "walkdir",
  "wild",
 ]
@@ -2167,7 +2111,7 @@ dependencies = [
 
 [[package]]
 name = "ruff_linter"
-version = "0.3.2"
+version = "0.3.1"
 dependencies = [
  "aho-corasick",
  "annotate-snippets 0.9.2",
@@ -2352,9 +2296,11 @@ dependencies = [
 name = "ruff_python_parser"
 version = "0.0.0"
 dependencies = [
  "annotate-snippets 0.9.2",
  "anyhow",
  "bitflags 2.4.2",
+ "bstr",
+ "drop_bomb",
  "insta",
  "is-macro",
  "itertools 0.12.1",
@@ -2362,6 +2308,7 @@ dependencies = [
  "lalrpop-util",
+ "memchr",
  "ruff_python_ast",
  "ruff_source_file",
  "ruff_text_size",
  "rustc-hash",
  "static_assertions",
@@ -2416,38 +2363,9 @@ dependencies = [
  "unicode-ident",
 ]
 
-[[package]]
-name = "ruff_server"
-version = "0.2.2"
-dependencies = [
- "anyhow",
- "crossbeam",
- "insta",
- "jod-thread",
- "libc",
- "lsp-server",
- "lsp-types",
- "ruff_diagnostics",
- "ruff_formatter",
- "ruff_linter",
- "ruff_python_ast",
- "ruff_python_codegen",
- "ruff_python_formatter",
- "ruff_python_index",
- "ruff_python_parser",
- "ruff_source_file",
- "ruff_text_size",
- "ruff_workspace",
- "rustc-hash",
- "serde",
- "serde_json",
- "similar",
- "tracing",
-]
-
 [[package]]
 name = "ruff_shrinking"
-version = "0.3.2"
+version = "0.3.1"
 dependencies = [
  "anyhow",
  "clap",
@@ -2716,17 +2634,6 @@ dependencies = [
  "serde",
 ]
 
-[[package]]
-name = "serde_repr"
-version = "0.1.18"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "0b2e6b945e9d3df726b65d6ee24060aff8e3533d431f677a9695db04eff9dfdb"
-dependencies = [
- "proc-macro2",
- "quote",
- "syn 2.0.52",
-]
-
 [[package]]
 name = "serde_spanned"
 version = "0.6.5"
@@ -3050,6 +2957,22 @@ dependencies = [
  "tikv-jemalloc-sys",
 ]
 
+[[package]]
+name = "time"
+version = "0.3.20"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "cd0cbfecb4d19b5ea75bb31ad904eb5b9fa13f21079c3b92017ebdf4999a5890"
+dependencies = [
+ "serde",
+ "time-core",
+]
+
+[[package]]
+name = "time-core"
+version = "0.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2e153e1f1acaef8acc537e68b44906d2db6436e2b35ac2c6b42640fff91f00fd"
+
 [[package]]
 name = "tiny-keccak"
 version = "2.0.2"
@@ -3163,17 +3086,6 @@ dependencies = [
  "tracing-subscriber",
 ]
 
-[[package]]
-name = "tracing-log"
-version = "0.1.4"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f751112709b4e791d8ce53e32c4ed2d353565a795ce84da2285393f41557bdf2"
-dependencies = [
- "log",
- "once_cell",
- "tracing-core",
-]
-
 [[package]]
 name = "tracing-log"
 version = "0.2.0"
@@ -3200,19 +3112,7 @@ dependencies = [
  "thread_local",
  "tracing",
  "tracing-core",
- "tracing-log 0.2.0",
+ "tracing-log",
 ]
 
 [[package]]
 name = "tracing-tree"
 version = "0.2.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "2ec6adcab41b1391b08a308cc6302b79f8095d1673f6947c2dc65ffb028b0b2d"
 dependencies = [
  "nu-ansi-term",
  "tracing-core",
- "tracing-log 0.1.4",
+ "tracing-log",
  "tracing-subscriber",
 ]
@@ -3298,9 +3198,9 @@ checksum = "f962df74c8c05a667b5ee8bcf162993134c104e96440b663c8daa176dc772d8c"
 
 [[package]]
 name = "unicode_names2"
-version = "1.2.2"
+version = "1.2.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "addeebf294df7922a1164f729fb27ebbbcea99cc32b3bf08afab62757f707677"
+checksum = "ac64ef2f016dc69dfa8283394a70b057066eb054d5fcb6b9eb17bd2ec5097211"
 dependencies = [
  "phf",
  "unicode_names2_generator",
@@ -3308,14 +3208,15 @@ dependencies = [
 
 [[package]]
 name = "unicode_names2_generator"
-version = "1.2.2"
+version = "1.2.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f444b8bba042fe3c1251ffaca35c603f2dc2ccc08d595c65a8c4f76f3e8426c0"
+checksum = "013f6a731e80f3930de580e55ba41dfa846de4e0fdee4a701f97989cb1597d6a"
 dependencies = [
  "getopts",
  "log",
  "phf_codegen",
  "rand",
+ "time",
 ]
@@ -3454,9 +3355,9 @@ checksum = "9c8d87e72b64a3b4db28d11ce29237c246188f4f51057d65a7eab63b7987e423"
 
 [[package]]
 name = "wasm-bindgen"
-version = "0.2.92"
+version = "0.2.91"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4be2531df63900aeb2bca0daaaddec08491ee64ceecbee5076636a3b026795a8"
+checksum = "c1e124130aee3fb58c5bdd6b639a0509486b0338acaaae0c84a5124b0f588b7f"
 dependencies = [
  "cfg-if",
  "wasm-bindgen-macro",
@@ -3464,9 +3365,9 @@ dependencies = [
 
 [[package]]
 name = "wasm-bindgen-backend"
-version = "0.2.92"
+version = "0.2.91"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "614d787b966d3989fa7bb98a654e369c762374fd3213d212cfc0251257e747da"
+checksum = "c9e7e1900c352b609c8488ad12639a311045f40a35491fb69ba8c12f758af70b"
 dependencies = [
  "bumpalo",
  "log",
@@ -3491,9 +3392,9 @@ dependencies = [
 
 [[package]]
 name = "wasm-bindgen-macro"
-version = "0.2.92"
+version = "0.2.91"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a1f8823de937b71b9460c0c34e25f3da88250760bec0ebac694b49997550d726"
+checksum = "b30af9e2d358182b5c7449424f017eba305ed32a7010509ede96cdc4696c46ed"
 dependencies = [
  "quote",
  "wasm-bindgen-macro-support",
@@ -3501,9 +3402,9 @@ dependencies = [
 
 [[package]]
 name = "wasm-bindgen-macro-support"
-version = "0.2.92"
+version = "0.2.91"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e94f17b526d0a461a191c78ea52bbce64071ed5c04c9ffe424dcb38f74171bb7"
+checksum = "642f325be6301eb8107a83d12a8ac6c1e1c54345a7ef1a9261962dfefda09e66"
 dependencies = [
  "proc-macro2",
  "quote",
@@ -3514,9 +3415,9 @@ dependencies = [
 
 [[package]]
 name = "wasm-bindgen-shared"
-version = "0.2.92"
+version = "0.2.91"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "af190c94f2773fdb3729c55b007a722abb5384da03bc0986df4c289bf5567e96"
+checksum = "4f186bd2dcf04330886ce82d6f33dd75a7bfcf69ecf5763b89fcde53b6ac9838"
 
 [[package]]
 name = "wasm-bindgen-test"
Cargo.toml: 16 changes

@@ -21,8 +21,8 @@ bincode = { version = "1.3.3" }
 bitflags = { version = "2.4.1" }
 bstr = { version = "1.9.1" }
 cachedir = { version = "0.3.1" }
-chrono = { version = "0.4.35", default-features = false, features = ["clock"] }
-clap = { version = "4.5.2", features = ["derive"] }
+chrono = { version = "0.4.34", default-features = false, features = ["clock"] }
+clap = { version = "4.5.1", features = ["derive"] }
 clap_complete_command = { version = "0.5.1" }
 clearscreen = { version = "2.0.0" }
 codspeed-criterion-compat = { version = "2.4.0", default-features = false }
@@ -32,7 +32,6 @@ console_error_panic_hook = { version = "0.1.7" }
 console_log = { version = "1.0.0" }
 countme = { version = "3.0.1" }
 criterion = { version = "0.5.1", default-features = false }
-crossbeam = { version = "0.8.4" }
 dirs = { version = "5.0.0" }
 drop_bomb = { version = "0.1.5" }
 env_logger = { version = "0.10.1" }
@@ -52,15 +51,11 @@ insta-cmd = { version = "0.4.0" }
 is-macro = { version = "0.3.5" }
 is-wsl = { version = "0.4.0" }
 itertools = { version = "0.12.1" }
-jod-thread = { version = "0.1.2" }
-js-sys = { version = "0.3.69" }
+js-sys = { version = "0.3.67" }
 lalrpop-util = { version = "0.20.0", default-features = false }
 lexical-parse-float = { version = "0.8.0", features = ["format"] }
 libc = { version = "0.2.153" }
 libcst = { version = "1.1.0", default-features = false }
 log = { version = "0.4.17" }
-lsp-server = { version = "0.7.6" }
-lsp-types = { version = "0.95.0", features = ["proposed"] }
 memchr = { version = "2.7.1" }
 mimalloc = { version = "0.1.39" }
 natord = { version = "1.0.9" }
@@ -102,17 +97,16 @@ toml = { version = "0.8.9" }
 tracing = { version = "0.1.40" }
 tracing-indicatif = { version = "0.3.6" }
 tracing-subscriber = { version = "0.3.18", features = ["env-filter"] }
 tracing-tree = { version = "0.2.4" }
 typed-arena = { version = "2.0.2" }
 unic-ucd-category = { version = "0.9" }
 unicode-ident = { version = "1.0.12" }
 unicode-width = { version = "0.1.11" }
-unicode_names2 = { version = "1.2.2" }
+unicode_names2 = { version = "1.2.1" }
 ureq = { version = "2.9.6" }
 url = { version = "2.5.0" }
 uuid = { version = "1.6.1", features = ["v4", "fast-rng", "macro-diagnostics", "js"] }
 walkdir = { version = "2.3.2" }
-wasm-bindgen = { version = "0.2.92" }
+wasm-bindgen = { version = "0.2.84" }
 wasm-bindgen-test = { version = "0.3.40" }
 wild = { version = "2" }
README.md:

@@ -151,7 +151,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
 ```yaml
 - repo: https://github.com/astral-sh/ruff-pre-commit
   # Ruff version.
-  rev: v0.3.2
+  rev: v0.3.1
   hooks:
     # Run the linter.
     - id: ruff
crates/ruff/Cargo.toml:

@@ -1,6 +1,6 @@
 [package]
 name = "ruff"
-version = "0.3.2"
+version = "0.3.1"
 publish = false
 authors = { workspace = true }
 edition = { workspace = true }
@@ -20,7 +20,6 @@ ruff_macros = { path = "../ruff_macros" }
 ruff_notebook = { path = "../ruff_notebook" }
 ruff_python_ast = { path = "../ruff_python_ast" }
 ruff_python_formatter = { path = "../ruff_python_formatter" }
-ruff_server = { path = "../ruff_server" }
 ruff_source_file = { path = "../ruff_source_file" }
 ruff_text_size = { path = "../ruff_text_size" }
 ruff_workspace = { path = "../ruff_workspace" }
@@ -53,8 +52,6 @@ tempfile = { workspace = true }
 thiserror = { workspace = true }
 toml = { workspace = true }
 tracing = { workspace = true, features = ["log"] }
-tracing-subscriber = { workspace = true, features = ["registry"]}
-tracing-tree = { workspace = true }
 walkdir = { workspace = true }
 wild = { workspace = true }
crates/ruff/src/args.rs:

@@ -126,8 +126,6 @@ pub enum Command {
     GenerateShellCompletion { shell: clap_complete_command::Shell },
     /// Run the Ruff formatter on the given files or directories.
     Format(FormatCommand),
-    /// Run the language server.
-    Server(ServerCommand),
     /// Display Ruff's version
     Version {
         #[arg(long, value_enum, default_value = "text")]
@@ -496,9 +494,6 @@ pub struct FormatCommand {
     pub range: Option<FormatRange>,
 }
 
-#[derive(Clone, Debug, clap::Parser)]
-pub struct ServerCommand;
-
 #[derive(Debug, Clone, Copy, clap::ValueEnum)]
 pub enum HelpFormat {
     Text,
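For illustration, the subcommand wiring this hunk touches can be sketched with Python's argparse (an analogy only; the actual CLI is built with Rust's clap, and `build_parser` is a hypothetical name):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Mirrors the Command enum above: check/format subcommands plus the
    # `server` variant that one side of the diff carries.
    parser = argparse.ArgumentParser(prog="ruff")
    sub = parser.add_subparsers(dest="command", required=True)
    sub.add_parser("check", help="Run Ruff on the given files or directories.")
    sub.add_parser("format", help="Run the Ruff formatter on the given files or directories.")
    sub.add_parser("server", help="Run the language server.")
    return parser
```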
crates/ruff/src/commands/mod.rs:

@@ -7,7 +7,6 @@ pub(crate) mod format;
 pub(crate) mod format_stdin;
 pub(crate) mod linter;
 pub(crate) mod rule;
-pub(crate) mod server;
 pub(crate) mod show_files;
 pub(crate) mod show_settings;
 pub(crate) mod version;
crates/ruff/src/commands/server.rs (deleted):

@@ -1,69 +0,0 @@
-use crate::ExitStatus;
-use anyhow::Result;
-use ruff_linter::logging::LogLevel;
-use ruff_server::Server;
-use tracing::{level_filters::LevelFilter, metadata::Level, subscriber::Interest, Metadata};
-use tracing_subscriber::{
-    layer::{Context, Filter, SubscriberExt},
-    Layer, Registry,
-};
-use tracing_tree::time::Uptime;
-
-pub(crate) fn run_server(log_level: LogLevel) -> Result<ExitStatus> {
-    let trace_level = if log_level == LogLevel::Verbose {
-        Level::TRACE
-    } else {
-        Level::DEBUG
-    };
-
-    let subscriber = Registry::default().with(
-        tracing_tree::HierarchicalLayer::default()
-            .with_indent_lines(true)
-            .with_indent_amount(2)
-            .with_bracketed_fields(true)
-            .with_targets(true)
-            .with_writer(|| Box::new(std::io::stderr()))
-            .with_timer(Uptime::default())
-            .with_filter(LoggingFilter { trace_level }),
-    );
-
-    tracing::subscriber::set_global_default(subscriber)?;
-
-    let server = Server::new()?;
-
-    server.run().map(|()| ExitStatus::Success)
-}
-
-struct LoggingFilter {
-    trace_level: Level,
-}
-
-impl LoggingFilter {
-    fn is_enabled(&self, meta: &Metadata<'_>) -> bool {
-        let filter = if meta.target().starts_with("ruff") {
-            self.trace_level
-        } else {
-            Level::INFO
-        };
-
-        meta.level() <= &filter
-    }
-}
-
-impl<S> Filter<S> for LoggingFilter {
-    fn enabled(&self, meta: &Metadata<'_>, _cx: &Context<'_, S>) -> bool {
-        self.is_enabled(meta)
-    }
-
-    fn callsite_enabled(&self, meta: &'static Metadata<'static>) -> Interest {
-        if self.is_enabled(meta) {
-            Interest::always()
-        } else {
-            Interest::never()
-        }
-    }
-
-    fn max_level_hint(&self) -> Option<LevelFilter> {
-        Some(LevelFilter::from_level(self.trace_level))
-    }
-}
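The `LoggingFilter` in the file above lets events whose target starts with `ruff` through at the chosen verbosity, while capping everything else at INFO. The same idea in Python's `logging` terms (an analogy, not part of the Rust code; `TargetFilter` is a hypothetical name):

```python
import logging


class TargetFilter(logging.Filter):
    """Records from loggers named "ruff*" pass at the verbose level;
    all other records must be INFO or higher."""

    def __init__(self, verbose_level: int = logging.DEBUG) -> None:
        super().__init__()
        self.verbose_level = verbose_level

    def filter(self, record: logging.LogRecord) -> bool:
        threshold = self.verbose_level if record.name.startswith("ruff") else logging.INFO
        return record.levelno >= threshold
```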
@@ -7,7 +7,7 @@ use std::process::ExitCode;
 use std::sync::mpsc::channel;
 
 use anyhow::Result;
-use args::{GlobalConfigArgs, ServerCommand};
+use args::GlobalConfigArgs;
 use clap::CommandFactory;
 use colored::Colorize;
 use log::warn;
@@ -190,7 +190,6 @@ pub fn run(
         }
         Command::Check(args) => check(args, global_options),
         Command::Format(args) => format(args, global_options),
-        Command::Server(args) => server(args, global_options.log_level()),
     }
 }
@@ -204,12 +203,6 @@ fn format(args: FormatCommand, global_options: GlobalConfigArgs) -> Result<ExitS
     }
 }
 
-#[allow(clippy::needless_pass_by_value)] // TODO: remove once we start taking arguments from here
-fn server(args: ServerCommand, log_level: LogLevel) -> Result<ExitStatus> {
-    let ServerCommand {} = args;
-    commands::server::run_server(log_level)
-}
-
 pub fn check(args: CheckCommand, global_options: GlobalConfigArgs) -> Result<ExitStatus> {
     let (cli, config_arguments) = args.partition(global_options)?;
@@ -523,7 +523,7 @@ from module import =
 ----- stdout -----
 
 ----- stderr -----
-error: Failed to parse main.py:2:20: Unexpected token '='
+error: Failed to parse main.py:2:20: Unexpected token =
 "###);
 
 Ok(())
@@ -728,11 +728,11 @@ fn stdin_parse_error() {
 success: false
 exit_code: 1
 ----- stdout -----
--:1:17: E999 SyntaxError: Unexpected token '='
+-:1:17: E999 SyntaxError: Unexpected token =
 Found 1 error.
 
 ----- stderr -----
-error: Failed to parse at 1:17: Unexpected token '='
+error: Failed to parse at 1:17: Unexpected token =
 "###);
 }
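All the snapshot changes above reduce to one message tweak: one side wraps the offending token in quotes, the other does not. A sketch of the two formats (hypothetical helper, not the real error formatter):

```python
def format_parse_error(location: str, token: str, quoted: bool) -> str:
    # quoted=True renders the token as '=', quoted=False renders it bare.
    tok = f"'{token}'" if quoted else token
    return f"Failed to parse {location}: Unexpected token {tok}"
```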
@@ -52,6 +52,7 @@ file_resolver.exclude = [
 file_resolver.extend_exclude = [
 	"crates/ruff_linter/resources/",
 	"crates/ruff_python_formatter/resources/",
+	"crates/ruff_python_parser/resources/",
 ]
 file_resolver.force_exclude = false
 file_resolver.include = [
@@ -241,22 +242,7 @@ linter.flake8_gettext.functions_names = [
 	ngettext,
 ]
 linter.flake8_implicit_str_concat.allow_multiline = true
-linter.flake8_import_conventions.aliases = {
-	altair = alt,
-	holoviews = hv,
-	matplotlib = mpl,
-	matplotlib.pyplot = plt,
-	networkx = nx,
-	numpy = np,
-	pandas = pd,
-	panel = pn,
-	plotly.express = px,
-	polars = pl,
-	pyarrow = pa,
-	seaborn = sns,
-	tensorflow = tf,
-	tkinter = tk,
-}
+linter.flake8_import_conventions.aliases = {"matplotlib": "mpl", "matplotlib.pyplot": "plt", "pandas": "pd", "seaborn": "sns", "tensorflow": "tf", "networkx": "nx", "plotly.express": "px", "polars": "pl", "numpy": "np", "panel": "pn", "pyarrow": "pa", "altair": "alt", "tkinter": "tk", "holoviews": "hv"}
 linter.flake8_import_conventions.banned_aliases = {}
 linter.flake8_import_conventions.banned_from = []
 linter.flake8_pytest_style.fixture_parentheses = true
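The aliases change above swaps a sorted multi-line `key = value` listing for a single-line JSON-style map. The inline rendering can be sketched as (hypothetical helper, not the actual settings printer):

```python
def render_inline(mapping: dict[str, str]) -> str:
    # {"matplotlib": "mpl", ...} on one line, preserving insertion order.
    body = ", ".join(f'"{key}": "{value}"' for key, value in mapping.items())
    return "{" + body + "}"
```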
@@ -545,10 +545,6 @@ impl PrintedRange {
         &self.code
     }
 
-    pub fn into_code(self) -> String {
-        self.code
-    }
-
     /// The range the formatted code corresponds to in the source document.
     pub fn source_range(&self) -> TextRange {
         self.source_range
crates/ruff_linter/Cargo.toml:

@@ -1,6 +1,6 @@
 [package]
 name = "ruff_linter"
-version = "0.3.2"
+version = "0.3.1"
 publish = false
 authors = { workspace = true }
 edition = { workspace = true }
Deleted fixture:

@@ -1,22 +0,0 @@
-import os
-import random
-
-import a_lib
-
-# OK
-random.SystemRandom()
-
-# Errors
-random.Random()
-random.random()
-random.randrange()
-random.randint()
-random.choice()
-random.choices()
-random.uniform()
-random.triangular()
-random.randbytes()
-
-# Unrelated
-os.urandom()
-a_lib.random()
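The deleted fixture exercises the rule against using `random` for security-sensitive values: every call on the `random` module is flagged except `random.SystemRandom()`, while `os.urandom()` and unrelated modules pass. The accepted alternatives can be sketched as (hypothetical helper name):

```python
import random
import secrets

# The default `random` generator is deterministic and predictable; for tokens
# and keys use `secrets` or random.SystemRandom(), which the fixture marks OK.
def secure_token(nbytes: int = 16) -> str:
    return secrets.token_hex(nbytes)


system_rng = random.SystemRandom()  # backed by OS entropy, not flagged
```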
@@ -1,47 +1,52 @@
import crypt
import hashlib
from hashlib import new as hashlib_new
from hashlib import sha1 as hashlib_sha1

# Errors
# Invalid

hashlib.new('md5')

hashlib.new('md4', b'test')

hashlib.new(name='md5', data=b'test')

hashlib.new('MD4', data=b'test')

hashlib.new('sha1')

hashlib.new('sha1', data=b'test')

hashlib.new('sha', data=b'test')

hashlib.new(name='SHA', data=b'test')

hashlib.sha(data=b'test')

hashlib.md5()

hashlib_new('sha1')

hashlib_sha1('sha1')

# usedforsecurity arg only available in Python 3.9+
hashlib.new('sha1', usedforsecurity=True)

crypt.crypt("test", salt=crypt.METHOD_CRYPT)
crypt.crypt("test", salt=crypt.METHOD_MD5)
crypt.crypt("test", salt=crypt.METHOD_BLOWFISH)
crypt.crypt("test", crypt.METHOD_BLOWFISH)
# Valid

crypt.mksalt(crypt.METHOD_CRYPT)
crypt.mksalt(crypt.METHOD_MD5)
crypt.mksalt(crypt.METHOD_BLOWFISH)

# OK
hashlib.new('sha256')

hashlib.new('SHA512')

hashlib.sha256(data=b'test')

# usedforsecurity arg only available in Python 3.9+
hashlib_new(name='sha1', usedforsecurity=False)

# usedforsecurity arg only available in Python 3.9+
hashlib_sha1(name='sha1', usedforsecurity=False)

# usedforsecurity arg only available in Python 3.9+
hashlib.md4(usedforsecurity=False)

# usedforsecurity arg only available in Python 3.9+
hashlib.new(name='sha256', usedforsecurity=False)

crypt.crypt("test")
crypt.crypt("test", salt=crypt.METHOD_SHA256)
crypt.crypt("test", salt=crypt.METHOD_SHA512)

crypt.mksalt()
crypt.mksalt(crypt.METHOD_SHA256)
crypt.mksalt(crypt.METHOD_SHA512)

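The S324 fixture above separates broken digests (`md5`, `md4`, `sha1`) from acceptable ones, and shows the Python 3.9+ `usedforsecurity` escape hatch. A quick illustration of the compliant spellings, noting that names passed to `hashlib.new()` are case-insensitive:

```python
import hashlib

# SHA-256 is not flagged; hashlib.new("sha256") and hashlib.sha256() agree.
h1 = hashlib.new("sha256", b"test")
h2 = hashlib.sha256(b"test")
assert h1.hexdigest() == h2.hexdigest()
# Well-known SHA-256 digest of b"test" begins with 9f86d081.
assert h1.hexdigest().startswith("9f86d081")
```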
@@ -1,5 +1,4 @@
import os
import subprocess

import commands
import popen2
@@ -17,8 +16,6 @@ popen2.Popen3("true")
popen2.Popen4("true")
commands.getoutput("true")
commands.getstatusoutput("true")
subprocess.getoutput("true")
subprocess.getstatusoutput("true")


# Check command argument looks unsafe.

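The hunk above touches the shell-injection fixtures; `subprocess.getoutput` and `getstatusoutput` always run through the shell. As a hedged sketch (not part of this diff), the safer list-argument form avoids shell interpolation of attacker-controlled strings:

```python
import subprocess
import sys

# Passing a list (and leaving shell=False, the default) means no shell
# parses the arguments, so metacharacters in them are inert.
result = subprocess.run(
    [sys.executable, "-c", "print('ok')"],
    capture_output=True,
    text=True,
    check=True,
)
assert result.stdout.strip() == "ok"
```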
@@ -1,34 +0,0 @@
from django.contrib.auth.models import User

# Errors
User.objects.filter(username='admin').extra(dict(could_be='insecure'))
User.objects.filter(username='admin').extra(select=dict(could_be='insecure'))
User.objects.filter(username='admin').extra(select={'test': '%secure' % 'nos'})
User.objects.filter(username='admin').extra(select={'test': '{}secure'.format('nos')})
User.objects.filter(username='admin').extra(where=['%secure' % 'nos'])
User.objects.filter(username='admin').extra(where=['{}secure'.format('no')])

query = '"username") AS "username", * FROM "auth_user" WHERE 1=1 OR "username"=? --'
User.objects.filter(username='admin').extra(select={'test': query})

where_var = ['1=1) OR 1=1 AND (1=1']
User.objects.filter(username='admin').extra(where=where_var)

where_str = '1=1) OR 1=1 AND (1=1'
User.objects.filter(username='admin').extra(where=[where_str])

tables_var = ['django_content_type" WHERE "auth_user"."username"="admin']
User.objects.all().extra(tables=tables_var).distinct()

tables_str = 'django_content_type" WHERE "auth_user"."username"="admin'
User.objects.all().extra(tables=[tables_str]).distinct()

# OK
User.objects.filter(username='admin').extra(
    select={'test': 'secure'},
    where=['secure'],
    tables=['secure']
)
User.objects.filter(username='admin').extra({'test': 'secure'})
User.objects.filter(username='admin').extra(select={'test': 'secure'})
User.objects.filter(username='admin').extra(where=['secure'])
@@ -14,6 +14,9 @@ reversed(sorted(x, reverse=not x))
reversed(sorted(i for i in range(42)))
reversed(sorted((i for i in range(42)), reverse=True))

# Regression test for: https://github.com/astral-sh/ruff/issues/10335
reversed(sorted([1, 2, 3], reverse=False or True))
reversed(sorted([1, 2, 3], reverse=(False or True)))

def reversed(*args, **kwargs):
    return None


reversed(sorted(x, reverse=True))

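The fixture above drives the rule that rewrites `reversed(sorted(...))` into a single `sorted(..., reverse=...)` call. The equivalence the fix relies on can be checked directly:

```python
data = [3, 1, 2]

# reversed(sorted(x)) yields the same sequence as sorted(x, reverse=True)
assert list(reversed(sorted(data))) == sorted(data, reverse=True)

# Flipping an explicit reverse flag composes the same way.
assert list(reversed(sorted(data, reverse=True))) == sorted(data, reverse=False)
```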
@@ -40,7 +40,4 @@ f"\'normal\' {f"\'nested\' {"other"} 'single quotes'"} normal" # Q004

# Make sure we do not unescape quotes
this_is_fine = "This is an \\'escaped\\' quote"
this_should_raise_Q004 = "This is an \\\'escaped\\\' quote with an extra backslash" # Q004

# Invalid escapes in bytestrings are also triggered:
x = b"\xe7\xeb\x0c\xa1\x1b\x83tN\xce=x\xe9\xbe\x01\xb9\x13B_\xba\xe7\x0c2\xce\'rm\x0e\xcd\xe9.\xf8\xd2" # Q004
this_should_raise_Q004 = "This is an \\\'escaped\\\' quote with an extra backslash"

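Q004 (exercised above) flags unnecessary escapes: inside a double-quoted string, `\'` denotes the same character as a bare `'`. A quick demonstration:

```python
# The escaped and unescaped forms are the identical one-character string.
assert "\'" == "'"
assert len("\'") == 1

# A literal backslash before the quote requires doubling the backslash;
# that two-character form is what the fixture's `\\'` spellings produce.
assert "\\'" == "\\" + "'"
assert len("\\'") == 2
```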
@@ -153,9 +153,3 @@ ham[lower +1 :, "columnname"]

#: E203:1:13
ham[lower + 1 :, "columnname"]

#: Okay
f"{ham[lower +1 :, "columnname"]}"

#: E203:1:13
f"{ham[lower + 1 :, "columnname"]}"

@@ -1 +0,0 @@
a = (1 or)
@@ -1,88 +0,0 @@
a = 2 + 2

a = (2 + 2)

a = 2 + \
    3 \
    + 4

a = (3 -\
    2 + \
    7)

z = 5 + \
    (3 -\
    2 + \
    7) + \
    4

b = [2 +
     2]

b = [
    2 + 4 + 5 + \
    44 \
    - 5
]

c = (True and
     False \
     or False \
     and True \
     )

c = (True and
     False)

d = True and \
    False or \
    False \
    and not True


s = {
    'x': 2 + \
        2
}


s = {
    'x': 2 +
        2
}


x = {2 + 4 \
    + 3}

y = (
    2 + 2  # \
    + 3  # \
    + 4 \
    + 3
)


x = """
(\\
)
"""


("""hello \
""")

("hello \
")


x = "abc" \
    "xyz"

x = ("abc" \
     "xyz")


def foo():
    x = (a + \
         2)
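The deleted fixture above exercises E502 (redundant backslash): inside parentheses, brackets, or braces, line continuation is implicit, so an explicit trailing `\` is redundant there. A minimal before/after sketch:

```python
# Outside brackets, the backslash is required to continue the line:
a = 2 + \
    3

# Inside brackets, implicit continuation applies and the backslash
# would be redundant:
b = (2 +
     3)

assert a == b == 5
```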
@@ -1,14 +0,0 @@
# Unix style
def foo() -> None:
    pass


def bar() -> None:
    pass



if __name__ == '__main__':
    foo()
    bar()

@@ -1,13 +0,0 @@
# Unix style
def foo() -> None:
    pass


def bar() -> None:
    pass



if __name__ == '__main__':
    foo()
    bar()
@@ -1,17 +0,0 @@
# Windows style
def foo() -> None:
    pass


def bar() -> None:
    pass



if __name__ == '__main__':
    foo()
    bar()




@@ -1,13 +0,0 @@
# Windows style
def foo() -> None:
    pass


def bar() -> None:
    pass



if __name__ == '__main__':
    foo()
    bar()
@@ -1,5 +0,0 @@
# This is fine
def foo():
    pass

# Some comment
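The four fixtures above (Unix- and Windows-style line endings) drive W391, which fires when a file ends with extra blank lines. A rough Python approximation of the condition, assuming Unix newlines and that exactly one trailing newline is the accepted form (this is an illustration, not ruff's implementation):

```python
def too_many_newlines_at_end(source: str) -> bool:
    # Count trailing newline characters; more than one means the file
    # ends with at least one blank line.
    trailing = len(source) - len(source.rstrip("\n"))
    return trailing > 1

assert too_many_newlines_at_end("foo()\nbar()\n\n\n")
assert not too_many_newlines_at_end("foo()\nbar()\n")
```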
@@ -1,7 +0,0 @@
"""Test: ensure that we treat strings in `typing.Annotation` as type definitions."""

from pathlib import Path
from re import RegexFlag
from typing import Annotated

p: Annotated["Path", int] = 1
@@ -1,16 +0,0 @@
"""Test case: strings used within calls within type annotations."""

from typing import Callable

import bpy
from mypy_extensions import VarArg

class LightShow(bpy.types.Operator):
    label = "Create Character"
    name = "lightshow.letter_creation"

    filepath: bpy.props.StringProperty(subtype="FILE_PATH")  # OK


def f(x: Callable[[VarArg("os")], None]):  # F821
    pass
@@ -1,44 +0,0 @@
"""Tests for constructs allowed in `.pyi` stub files but not at runtime"""

from typing import Optional, TypeAlias, Union

__version__: str
__author__: str

# Forward references:
MaybeCStr: TypeAlias = Optional[CStr]  # valid in a `.pyi` stub file, not in a `.py` runtime file
MaybeCStr2: TypeAlias = Optional["CStr"]  # always okay
CStr: TypeAlias = Union[C, str]  # valid in a `.pyi` stub file, not in a `.py` runtime file
CStr2: TypeAlias = Union["C", str]  # always okay

# References to a class from inside the class:
class C:
    other: C = ...  # valid in a `.pyi` stub file, not in a `.py` runtime file
    other2: "C" = ...  # always okay
    def from_str(self, s: str) -> C: ...  # valid in a `.pyi` stub file, not in a `.py` runtime file
    def from_str2(self, s: str) -> "C": ...  # always okay

# Circular references:
class A:
    foo: B  # valid in a `.pyi` stub file, not in a `.py` runtime file
    foo2: "B"  # always okay
    bar: dict[str, B]  # valid in a `.pyi` stub file, not in a `.py` runtime file
    bar2: dict[str, "A"]  # always okay

class B:
    foo: A  # always okay
    bar: dict[str, A]  # always okay

class Leaf: ...
class Tree(list[Tree | Leaf]): ...  # valid in a `.pyi` stub file, not in a `.py` runtime file
class Tree2(list["Tree | Leaf"]): ...  # always okay

# Annotations are treated as assignments in .pyi files, but not in .py files
class MyClass:
    foo: int
    bar = foo  # valid in a `.pyi` stub file, not in a `.py` runtime file
    bar = "foo"  # always okay

baz: MyClass
eggs = baz  # valid in a `.pyi` stub file, not in a `.py` runtime file
eggs = "baz"  # always okay
@@ -1,44 +0,0 @@
"""Tests for constructs allowed in `.pyi` stub files but not at runtime"""

from typing import Optional, TypeAlias, Union

__version__: str
__author__: str

# Forward references:
MaybeCStr: TypeAlias = Optional[CStr]  # valid in a `.pyi` stub file, not in a `.py` runtime file
MaybeCStr2: TypeAlias = Optional["CStr"]  # always okay
CStr: TypeAlias = Union[C, str]  # valid in a `.pyi` stub file, not in a `.py` runtime file
CStr2: TypeAlias = Union["C", str]  # always okay

# References to a class from inside the class:
class C:
    other: C = ...  # valid in a `.pyi` stub file, not in a `.py` runtime file
    other2: "C" = ...  # always okay
    def from_str(self, s: str) -> C: ...  # valid in a `.pyi` stub file, not in a `.py` runtime file
    def from_str2(self, s: str) -> "C": ...  # always okay

# Circular references:
class A:
    foo: B  # valid in a `.pyi` stub file, not in a `.py` runtime file
    foo2: "B"  # always okay
    bar: dict[str, B]  # valid in a `.pyi` stub file, not in a `.py` runtime file
    bar2: dict[str, "A"]  # always okay

class B:
    foo: A  # always okay
    bar: dict[str, A]  # always okay

class Leaf: ...
class Tree(list[Tree | Leaf]): ...  # valid in a `.pyi` stub file, not in a `.py` runtime file
class Tree2(list["Tree | Leaf"]): ...  # always okay

# Annotations are treated as assignments in .pyi files, but not in .py files
class MyClass:
    foo: int
    bar = foo  # valid in a `.pyi` stub file, not in a `.py` runtime file
    bar = "foo"  # always okay

baz: MyClass
eggs = baz  # valid in a `.pyi` stub file, not in a `.py` runtime file
eggs = "baz"  # always okay
@@ -1,48 +0,0 @@
"""Tests for constructs allowed when `__future__` annotations are enabled but not otherwise"""
from __future__ import annotations

from typing import Optional, TypeAlias, Union

__version__: str
__author__: str

# References to a class from inside the class:
class C:
    other: C = ...  # valid when `__future__.annotations` are enabled
    other2: "C" = ...  # always okay
    def from_str(self, s: str) -> C: ...  # valid when `__future__.annotations` are enabled
    def from_str2(self, s: str) -> "C": ...  # always okay

# Circular references:
class A:
    foo: B  # valid when `__future__.annotations` are enabled
    foo2: "B"  # always okay
    bar: dict[str, B]  # valid when `__future__.annotations` are enabled
    bar2: dict[str, "A"]  # always okay

class B:
    foo: A  # always okay
    bar: dict[str, A]  # always okay

# Annotations are treated as assignments in .pyi files, but not in .py files
class MyClass:
    foo: int
    bar = foo  # Still invalid even when `__future__.annotations` are enabled
    bar = "foo"  # always okay

baz: MyClass
eggs = baz  # Still invalid even when `__future__.annotations` are enabled
eggs = "baz"  # always okay

# Forward references:
MaybeDStr: TypeAlias = Optional[DStr]  # Still invalid even when `__future__.annotations` are enabled
MaybeDStr2: TypeAlias = Optional["DStr"]  # always okay
DStr: TypeAlias = Union[D, str]  # Still invalid even when `__future__.annotations` are enabled
DStr2: TypeAlias = Union["D", str]  # always okay

class D: ...

# More circular references
class Leaf: ...
class Tree(list[Tree | Leaf]): ...  # Still invalid even when `__future__.annotations` are enabled
class Tree2(list["Tree | Leaf"]): ...  # always okay
@@ -1,10 +0,0 @@
"""Test: inner class annotation."""

class RandomClass:
    def bad_func(self) -> InnerClass: ...  # F821
    def good_func(self) -> OuterClass.InnerClass: ...  # Okay

class OuterClass:
    class InnerClass: ...

    def good_func(self) -> InnerClass: ...  # Okay
@@ -1,4 +0,0 @@
a = 1
b: int  # Considered a binding in a `.pyi` stub file, not in a `.py` runtime file

__all__ = ["a", "b", "c"]  # c is flagged as missing; b is not
@@ -54,15 +54,3 @@ class StudentE(StudentD):

    def setup(self):
        pass


class StudentF(object):
    __slots__ = ("name", "__dict__")

    def __init__(self, name, middle_name):
        self.name = name
        self.middle_name = middle_name  # [assigning-non-slot]
        self.setup()

    def setup(self):
        pass

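The `StudentF` fixture above shows why including `"__dict__"` in `__slots__` defeats the check: it restores the per-instance dictionary, so attributes outside the slot list can be assigned again. Demonstration:

```python
class Strict:
    __slots__ = ("name",)

class Loose:
    __slots__ = ("name", "__dict__")

strict = Strict()
try:
    strict.middle_name = "x"  # no __dict__: assignment outside __slots__ fails
    raised = False
except AttributeError:
    raised = True
assert raised

loose = Loose()
loose.middle_name = "x"  # "__dict__" in __slots__ re-enables this
assert loose.middle_name == "x"
```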
@@ -427,7 +427,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
            pyupgrade::rules::format_literals(checker, call, &summary);
        }
        if checker.enabled(Rule::FString) {
            pyupgrade::rules::f_strings(checker, call, &summary);
            pyupgrade::rules::f_strings(checker, call, &summary, value);
        }
    }
}
@@ -632,9 +632,6 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
            ]) {
                flake8_bandit::rules::shell_injection(checker, call);
            }
            if checker.enabled(Rule::DjangoExtra) {
                flake8_bandit::rules::django_extra(checker, call);
            }
            if checker.enabled(Rule::DjangoRawSql) {
                flake8_bandit::rules::django_raw_sql(checker, call);
            }

@@ -9,7 +9,7 @@ use crate::rules::{flake8_builtins, pep8_naming, pycodestyle};
pub(crate) fn parameter(parameter: &Parameter, checker: &mut Checker) {
    if checker.enabled(Rule::AmbiguousVariableName) {
        if let Some(diagnostic) =
            pycodestyle::rules::ambiguous_variable_name(&parameter.name, parameter.name.range())
            pycodestyle::rules::ambiguous_variable_name(&parameter.name, parameter.range())
        {
            checker.diagnostics.push(diagnostic);
        }

@@ -938,7 +938,6 @@ impl<'a> Visitor<'a> for Checker<'a> {
            && !self.semantic.in_deferred_type_definition()
            && self.semantic.in_type_definition()
            && self.semantic.future_annotations()
            && (self.semantic.in_typing_only_annotation() || self.source_type.is_stub())
        {
            if let Expr::StringLiteral(ast::ExprStringLiteral { value, .. }) = expr {
                self.visit.string_type_definitions.push((
@@ -1349,7 +1348,7 @@ impl<'a> Visitor<'a> for Checker<'a> {
        {
            let mut iter = elts.iter();
            if let Some(expr) = iter.next() {
                self.visit_type_definition(expr);
                self.visit_expr(expr);
            }
            for expr in iter {
                self.visit_non_type_definition(expr);
@@ -1840,13 +1839,11 @@ impl<'a> Checker<'a> {
            flags.insert(BindingFlags::UNPACKED_ASSIGNMENT);
        }

        // Match the left-hand side of an annotated assignment without a value,
        // like `x` in `x: int`. N.B. In stub files, these should be viewed
        // as assignments on par with statements such as `x: int = 5`.
        // Match the left-hand side of an annotated assignment, like `x` in `x: int`.
        if matches!(
            parent,
            Stmt::AnnAssign(ast::StmtAnnAssign { value: None, .. })
        ) && !(self.semantic.in_annotation() || self.source_type.is_stub())
        ) && !self.semantic.in_annotation()
        {
            self.add_binding(id, expr.range(), BindingKind::Annotation, flags);
            return;

@@ -1,7 +1,6 @@
use crate::line_width::IndentWidth;
use ruff_diagnostics::Diagnostic;
use ruff_python_codegen::Stylist;
use ruff_python_index::Indexer;
use ruff_python_parser::lexer::LexResult;
use ruff_python_parser::TokenKind;
use ruff_source_file::Locator;
@@ -10,8 +9,8 @@ use ruff_text_size::{Ranged, TextRange};
use crate::registry::AsRule;
use crate::rules::pycodestyle::rules::logical_lines::{
    extraneous_whitespace, indentation, missing_whitespace, missing_whitespace_after_keyword,
    missing_whitespace_around_operator, redundant_backslash, space_after_comma,
    space_around_operator, whitespace_around_keywords, whitespace_around_named_parameter_equals,
    missing_whitespace_around_operator, space_after_comma, space_around_operator,
    whitespace_around_keywords, whitespace_around_named_parameter_equals,
    whitespace_before_comment, whitespace_before_parameters, LogicalLines, TokenFlags,
};
use crate::settings::LinterSettings;
@@ -36,7 +35,6 @@ pub(crate) fn expand_indent(line: &str, indent_width: IndentWidth) -> usize {
pub(crate) fn check_logical_lines(
    tokens: &[LexResult],
    locator: &Locator,
    indexer: &Indexer,
    stylist: &Stylist,
    settings: &LinterSettings,
) -> Vec<Diagnostic> {
@@ -75,7 +73,6 @@ pub(crate) fn check_logical_lines(

        if line.flags().contains(TokenFlags::BRACKET) {
            whitespace_before_parameters(&line, &mut context);
            redundant_backslash(&line, locator, indexer, &mut context);
        }

        // Extract the indentation level.

@@ -203,10 +203,6 @@ pub(crate) fn check_tokens(
        flake8_fixme::rules::todos(&mut diagnostics, &todo_comments);
    }

    if settings.rules.enabled(Rule::TooManyNewlinesAtEndOfFile) {
        pycodestyle::rules::too_many_newlines_at_end_of_file(&mut diagnostics, tokens);
    }

    diagnostics.retain(|diagnostic| settings.rules.enabled(diagnostic.kind.rule()));

    diagnostics

@@ -146,7 +146,6 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
        (Pycodestyle, "E401") => (RuleGroup::Stable, rules::pycodestyle::rules::MultipleImportsOnOneLine),
        (Pycodestyle, "E402") => (RuleGroup::Stable, rules::pycodestyle::rules::ModuleImportNotAtTopOfFile),
        (Pycodestyle, "E501") => (RuleGroup::Stable, rules::pycodestyle::rules::LineTooLong),
        (Pycodestyle, "E502") => (RuleGroup::Preview, rules::pycodestyle::rules::logical_lines::RedundantBackslash),
        (Pycodestyle, "E701") => (RuleGroup::Stable, rules::pycodestyle::rules::MultipleStatementsOnOneLineColon),
        (Pycodestyle, "E702") => (RuleGroup::Stable, rules::pycodestyle::rules::MultipleStatementsOnOneLineSemicolon),
        (Pycodestyle, "E703") => (RuleGroup::Stable, rules::pycodestyle::rules::UselessSemicolon),
@@ -168,7 +167,6 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
        (Pycodestyle, "W291") => (RuleGroup::Stable, rules::pycodestyle::rules::TrailingWhitespace),
        (Pycodestyle, "W292") => (RuleGroup::Stable, rules::pycodestyle::rules::MissingNewlineAtEndOfFile),
        (Pycodestyle, "W293") => (RuleGroup::Stable, rules::pycodestyle::rules::BlankLineWithWhitespace),
        (Pycodestyle, "W391") => (RuleGroup::Preview, rules::pycodestyle::rules::TooManyNewlinesAtEndOfFile),
        (Pycodestyle, "W505") => (RuleGroup::Stable, rules::pycodestyle::rules::DocLineTooLong),
        (Pycodestyle, "W605") => (RuleGroup::Stable, rules::pycodestyle::rules::InvalidEscapeSequence),

@@ -682,7 +680,6 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
        (Flake8Bandit, "607") => (RuleGroup::Stable, rules::flake8_bandit::rules::StartProcessWithPartialPath),
        (Flake8Bandit, "608") => (RuleGroup::Stable, rules::flake8_bandit::rules::HardcodedSQLExpression),
        (Flake8Bandit, "609") => (RuleGroup::Stable, rules::flake8_bandit::rules::UnixCommandWildcardInjection),
        (Flake8Bandit, "610") => (RuleGroup::Preview, rules::flake8_bandit::rules::DjangoExtra),
        (Flake8Bandit, "611") => (RuleGroup::Stable, rules::flake8_bandit::rules::DjangoRawSql),
        (Flake8Bandit, "612") => (RuleGroup::Stable, rules::flake8_bandit::rules::LoggingConfigInsecureListen),
        (Flake8Bandit, "701") => (RuleGroup::Stable, rules::flake8_bandit::rules::Jinja2AutoescapeFalse),

@@ -1,6 +1,5 @@
use libcst_native::{
    Expression, LeftParen, Name, ParenthesizableWhitespace, ParenthesizedNode, RightParen,
    SimpleWhitespace, UnaryOperation,
    Expression, Name, ParenthesizableWhitespace, SimpleWhitespace, UnaryOperation,
};

/// Return a [`ParenthesizableWhitespace`] containing a single space.
@@ -25,7 +24,6 @@ pub(crate) fn negate<'a>(expression: &Expression<'a>) -> Expression<'a> {
        }
    }

    // If the expression is `True` or `False`, return the opposite.
    if let Expression::Name(ref expression) = expression {
        match expression.value {
            "True" => {
@@ -46,32 +44,11 @@ pub(crate) fn negate<'a>(expression: &Expression<'a>) -> Expression<'a> {
        }
    }

    // If the expression is higher precedence than the unary `not`, we need to wrap it in
    // parentheses.
    //
    // For example: given `a and b`, we need to return `not (a and b)`, rather than `not a and b`.
    //
    // See: <https://docs.python.org/3/reference/expressions.html#operator-precedence>
    let needs_parens = matches!(
        expression,
        Expression::BooleanOperation(_)
            | Expression::IfExp(_)
            | Expression::Lambda(_)
            | Expression::NamedExpr(_)
    );
    let has_parens = !expression.lpar().is_empty() && !expression.rpar().is_empty();
    // Otherwise, wrap in a `not` operator.
    Expression::UnaryOperation(Box::new(UnaryOperation {
        operator: libcst_native::UnaryOp::Not {
            whitespace_after: space(),
        },
        expression: Box::new(if needs_parens && !has_parens {
            expression
                .clone()
                .with_parens(LeftParen::default(), RightParen::default())
        } else {
            expression.clone()
        }),
        expression: Box::new(expression.clone()),
        lpar: vec![],
        rpar: vec![],
    }))

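The comment in the hunk above states the parenthesization rule the code guards: `not` binds tighter than `and`/`or`, so negating `a and b` without parentheses changes the meaning. In Python terms:

```python
a, b = True, False

# Without parentheses, `not` applies only to `a`:
assert (not a and b) is False

# Negating the whole conjunction requires parentheses:
assert (not (a and b)) is True
```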
@@ -131,7 +131,10 @@ fn extract_noqa_line_for(lxr: &[LexResult], locator: &Locator, indexer: &Indexer

            // For multi-line strings, we expect `noqa` directives on the last line of the
            // string.
            Tok::String { kind, .. } if kind.is_triple_quoted() => {
            Tok::String {
                triple_quoted: true,
                ..
            } => {
                if locator.contains_line_break(*range) {
                    string_mappings.push(TextRange::new(
                        locator.line_start(range.start()),

@@ -418,6 +418,29 @@ pub(crate) fn fits(
    all_lines_fit(fix, node, locator, line_length.value() as usize, tab_size)
}

/// Returns `true` if the fix fits within the maximum configured line length, or produces lines that
/// are shorter than the maximum length of the existing AST node.
pub(crate) fn fits_or_shrinks(
    fix: &str,
    node: AnyNodeRef,
    locator: &Locator,
    line_length: LineLength,
    tab_size: IndentWidth,
) -> bool {
    // Use the larger of the line length limit, or the longest line in the existing AST node.
    let line_length = std::iter::once(line_length.value() as usize)
        .chain(
            locator
                .slice(locator.lines_range(node.range()))
                .universal_newlines()
                .map(|line| LineWidthBuilder::new(tab_size).add_str(&line).get()),
        )
        .max()
        .unwrap_or(line_length.value() as usize);

    all_lines_fit(fix, node, locator, line_length, tab_size)
}

/// Returns `true` if all lines in the fix are shorter than the given line length.
fn all_lines_fit(
    fix: &str,

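`fits_or_shrinks` above takes the larger of the configured limit and the longest line already in the node, so a fix is accepted as long as it does not make the worst line worse. A hedged Python sketch of that max computation (the names here are illustrative, not ruff's API):

```python
def effective_limit(configured: int, existing_lines: list[str]) -> int:
    # Use the larger of the configured limit or the longest existing line.
    return max([configured, *(len(line) for line in existing_lines)])

def fix_fits(fix_lines: list[str], configured: int, existing_lines: list[str]) -> bool:
    limit = effective_limit(configured, existing_lines)
    return all(len(line) <= limit for line in fix_lines)

# An 80-character limit, but the node already contains a 100-character line:
existing = ["x" * 100]
assert fix_fits(["y" * 95], 80, existing)      # no longer than the existing worst line
assert not fix_fits(["y" * 101], 80, existing)  # would make things worse
```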
@@ -4,7 +4,6 @@
//! and subject to change drastically.
//!
//! [Ruff]: https://github.com/astral-sh/ruff
#![recursion_limit = "256"]

#[cfg(feature = "clap")]
pub use registry::clap_completion::RuleParser;

@@ -132,7 +132,7 @@ pub fn check_path(
        .any(|rule_code| rule_code.lint_source().is_logical_lines())
    {
        diagnostics.extend(crate::checkers::logical_lines::check_logical_lines(
            &tokens, locator, indexer, stylist, settings,
            &tokens, locator, stylist, settings,
        ));
    }


@@ -194,7 +194,7 @@ impl DisplayParseError {
        // Translate the byte offset to a location in the originating source.
        let location =
            if let Some(jupyter_index) = source_kind.as_ipy_notebook().map(Notebook::index) {
                let source_location = source_code.source_location(error.offset);
                let source_location = source_code.source_location(error.location.start());

                ErrorLocation::Cell(
                    jupyter_index
@@ -208,7 +208,7 @@ impl DisplayParseError {
                    },
                )
            } else {
                ErrorLocation::File(source_code.source_location(error.offset))
                ErrorLocation::File(source_code.source_location(error.location.start()))
            };

        Self {
@@ -275,27 +275,7 @@ impl<'a> DisplayParseErrorType<'a> {

impl Display for DisplayParseErrorType<'_> {
    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
        match self.0 {
            ParseErrorType::Eof => write!(f, "Expected token but reached end of file."),
            ParseErrorType::ExtraToken(ref tok) => write!(
                f,
                "Got extraneous token: {tok}",
                tok = TruncateAtNewline(&tok)
            ),
            ParseErrorType::InvalidToken => write!(f, "Got invalid token"),
            ParseErrorType::UnrecognizedToken(ref tok, ref expected) => {
                if let Some(expected) = expected.as_ref() {
                    write!(
                        f,
                        "Expected '{expected}', but got {tok}",
                        tok = TruncateAtNewline(&tok)
                    )
                } else {
                    write!(f, "Unexpected token {tok}", tok = TruncateAtNewline(&tok))
                }
            }
            ParseErrorType::Lexical(ref error) => write!(f, "{error}"),
        }
        write!(f, "{}", TruncateAtNewline(&self.0))
    }
}


@@ -300,7 +300,6 @@ impl Rule {
            | Rule::SingleLineImplicitStringConcatenation
            | Rule::TabIndentation
            | Rule::TooManyBlankLines
            | Rule::TooManyNewlinesAtEndOfFile
            | Rule::TrailingCommaOnBareTuple
            | Rule::TypeCommentInStub
            | Rule::UselessSemicolon
@@ -328,7 +327,6 @@ impl Rule {
            | Rule::NoSpaceAfterBlockComment
            | Rule::NoSpaceAfterInlineComment
            | Rule::OverIndented
            | Rule::RedundantBackslash
            | Rule::TabAfterComma
            | Rule::TabAfterKeyword
            | Rule::TabAfterOperator

@@ -48,7 +48,6 @@ mod tests {
    #[test_case(Rule::SuspiciousEvalUsage, Path::new("S307.py"))]
    #[test_case(Rule::SuspiciousMarkSafeUsage, Path::new("S308.py"))]
    #[test_case(Rule::SuspiciousURLOpenUsage, Path::new("S310.py"))]
    #[test_case(Rule::SuspiciousNonCryptographicRandomUsage, Path::new("S311.py"))]
    #[test_case(Rule::SuspiciousTelnetUsage, Path::new("S312.py"))]
    #[test_case(Rule::SuspiciousTelnetlibImport, Path::new("S401.py"))]
    #[test_case(Rule::SuspiciousFtplibImport, Path::new("S402.py"))]
@@ -69,7 +68,6 @@ mod tests {
    #[test_case(Rule::UnixCommandWildcardInjection, Path::new("S609.py"))]
    #[test_case(Rule::UnsafeYAMLLoad, Path::new("S506.py"))]
    #[test_case(Rule::WeakCryptographicKey, Path::new("S505.py"))]
    #[test_case(Rule::DjangoExtra, Path::new("S610.py"))]
    #[test_case(Rule::DjangoRawSql, Path::new("S611.py"))]
    #[test_case(Rule::TarfileUnsafeMembers, Path::new("S202.py"))]
    fn rules(rule_code: Rule, path: &Path) -> Result<()> {

@@ -1,81 +0,0 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast, Expr, ExprAttribute, ExprDict, ExprList};
use ruff_text_size::Ranged;

use crate::checkers::ast::Checker;

/// ## What it does
/// Checks for uses of Django's `extra` function.
///
/// ## Why is this bad?
/// Django's `extra` function can be used to execute arbitrary SQL queries,
/// which can in turn lead to SQL injection vulnerabilities.
///
/// ## Example
/// ```python
/// from django.contrib.auth.models import User
///
/// User.objects.all().extra(select={"test": "%secure" % "nos"})
/// ```
///
/// ## References
/// - [Django documentation: SQL injection protection](https://docs.djangoproject.com/en/dev/topics/security/#sql-injection-protection)
/// - [Common Weakness Enumeration: CWE-89](https://cwe.mitre.org/data/definitions/89.html)
#[violation]
pub struct DjangoExtra;

impl Violation for DjangoExtra {
    #[derive_message_formats]
    fn message(&self) -> String {
        format!("Use of Django `extra` can lead to SQL injection vulnerabilities")
    }
}

/// S610
pub(crate) fn django_extra(checker: &mut Checker, call: &ast::ExprCall) {
    let Expr::Attribute(ExprAttribute { attr, .. }) = call.func.as_ref() else {
        return;
    };

    if attr.as_str() != "extra" {
        return;
    }

    if is_call_insecure(call) {
        checker
            .diagnostics
            .push(Diagnostic::new(DjangoExtra, call.arguments.range()));
    }
}

fn is_call_insecure(call: &ast::ExprCall) -> bool {
    for (argument_name, position) in [("select", 0), ("where", 1), ("tables", 3)] {
        if let Some(argument) = call.arguments.find_argument(argument_name, position) {
            match argument_name {
                "select" => match argument {
                    Expr::Dict(ExprDict { keys, values, .. }) => {
                        if !keys.iter().flatten().all(Expr::is_string_literal_expr) {
                            return true;
                        }
                        if !values.iter().all(Expr::is_string_literal_expr) {
                            return true;
                        }
                    }
                    _ => return true,
                },
                "where" | "tables" => match argument {
                    Expr::List(ExprList { elts, .. }) => {
                        if !elts.iter().all(Expr::is_string_literal_expr) {
                            return true;
                        }
                    }
                    _ => return true,
                },
                _ => (),
            }
        }
    }

    false
}
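The deleted rule above flags `.extra()` calls whose `select`, `where`, or `tables` arguments are not plain string literals. As a hedged sketch, the same heuristic can be approximated in Python with the stdlib `ast` module (this is a simplified illustration handling keyword arguments only, not ruff's code, which also matches positional arguments):

```python
import ast

def extra_call_is_insecure(source: str) -> bool:
    """Return True if any `.extra(...)` call passes non-literal select/where/tables."""
    for node in ast.walk(ast.parse(source)):
        if not (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and node.func.attr == "extra"
        ):
            continue
        for kw in node.keywords:
            if kw.arg == "select":
                # `select` must be a dict of string-literal keys and values.
                if not (
                    isinstance(kw.value, ast.Dict)
                    and all(
                        isinstance(k, ast.Constant) and isinstance(k.value, str)
                        for k in kw.value.keys
                    )
                    and all(
                        isinstance(v, ast.Constant) and isinstance(v.value, str)
                        for v in kw.value.values
                    )
                ):
                    return True
            elif kw.arg in ("where", "tables"):
                # `where`/`tables` must be lists of string literals.
                if not (
                    isinstance(kw.value, ast.List)
                    and all(
                        isinstance(e, ast.Constant) and isinstance(e.value, str)
                        for e in kw.value.elts
                    )
                ):
                    return True
    return False

assert extra_call_is_insecure("User.objects.extra(where=['%secure' % 'nos'])")
assert not extra_call_is_insecure("User.objects.extra(where=['secure'])")
```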
@@ -9,8 +9,7 @@ use crate::checkers::ast::Checker;
use super::super::helpers::string_literal;

/// ## What it does
/// Checks for uses of weak or broken cryptographic hash functions in
/// `hashlib` and `crypt` libraries.
/// Checks for uses of weak or broken cryptographic hash functions.
///
/// ## Why is this bad?
/// Weak or broken cryptographic hash functions may be susceptible to
@@ -44,134 +43,68 @@ use super::super::helpers::string_literal;
///
/// ## References
/// - [Python documentation: `hashlib` — Secure hashes and message digests](https://docs.python.org/3/library/hashlib.html)
/// - [Python documentation: `crypt` — Function to check Unix passwords](https://docs.python.org/3/library/crypt.html)
/// - [Common Weakness Enumeration: CWE-327](https://cwe.mitre.org/data/definitions/327.html)
/// - [Common Weakness Enumeration: CWE-328](https://cwe.mitre.org/data/definitions/328.html)
/// - [Common Weakness Enumeration: CWE-916](https://cwe.mitre.org/data/definitions/916.html)
#[violation]
pub struct HashlibInsecureHashFunction {
    library: String,
    string: String,
}

impl Violation for HashlibInsecureHashFunction {
    #[derive_message_formats]
    fn message(&self) -> String {
        let HashlibInsecureHashFunction { library, string } = self;
        format!("Probable use of insecure hash functions in `{library}`: `{string}`")
        let HashlibInsecureHashFunction { string } = self;
        format!("Probable use of insecure hash functions in `hashlib`: `{string}`")
    }
}

/// S324
pub(crate) fn hashlib_insecure_hash_functions(checker: &mut Checker, call: &ast::ExprCall) {
    if let Some(weak_hash_call) = checker
    if let Some(hashlib_call) = checker
        .semantic()
        .resolve_qualified_name(&call.func)
        .and_then(|qualified_name| match qualified_name.segments() {
            ["hashlib", "new"] => Some(WeakHashCall::Hashlib {
                call: HashlibCall::New,
            }),
            ["hashlib", "md4"] => Some(WeakHashCall::Hashlib {
                call: HashlibCall::WeakHash("md4"),
            }),
            ["hashlib", "md5"] => Some(WeakHashCall::Hashlib {
                call: HashlibCall::WeakHash("md5"),
            }),
            ["hashlib", "sha"] => Some(WeakHashCall::Hashlib {
                call: HashlibCall::WeakHash("sha"),
            }),
            ["hashlib", "sha1"] => Some(WeakHashCall::Hashlib {
                call: HashlibCall::WeakHash("sha1"),
            }),
            ["crypt", "crypt" | "mksalt"] => Some(WeakHashCall::Crypt),
            ["hashlib", "new"] => Some(HashlibCall::New),
            ["hashlib", "md4"] => Some(HashlibCall::WeakHash("md4")),
            ["hashlib", "md5"] => Some(HashlibCall::WeakHash("md5")),
            ["hashlib", "sha"] => Some(HashlibCall::WeakHash("sha")),
            ["hashlib", "sha1"] => Some(HashlibCall::WeakHash("sha1")),
            _ => None,
        })
    {
        match weak_hash_call {
            WeakHashCall::Hashlib { call: hashlib_call } => {
                detect_insecure_hashlib_calls(checker, call, hashlib_call);
            }
            WeakHashCall::Crypt => detect_insecure_crypt_calls(checker, call),
        if !is_used_for_security(&call.arguments) {
            return;
        }
    }
}

fn detect_insecure_hashlib_calls(
    checker: &mut Checker,
    call: &ast::ExprCall,
    hashlib_call: HashlibCall,
) {
    if !is_used_for_security(&call.arguments) {
        return;
    }

    match hashlib_call {
        HashlibCall::New => {
            let Some(name_arg) = call.arguments.find_argument("name", 0) else {
                return;
            };
            let Some(hash_func_name) = string_literal(name_arg) else {
                return;
            };

            // `hashlib.new` accepts both lowercase and uppercase names for hash
            // functions.
            if matches!(
                hash_func_name,
                "md4" | "md5" | "sha" | "sha1" | "MD4" | "MD5" | "SHA" | "SHA1"
            ) {
        match hashlib_call {
            HashlibCall::New => {
                if let Some(name_arg) = call.arguments.find_argument("name", 0) {
                    if let Some(hash_func_name) = string_literal(name_arg) {
                        // `hashlib.new` accepts both lowercase and uppercase names for hash
                        // functions.
                        if matches!(
                            hash_func_name,
                            "md4" | "md5" | "sha" | "sha1" | "MD4" | "MD5" | "SHA" | "SHA1"
                        ) {
                            checker.diagnostics.push(Diagnostic::new(
                                HashlibInsecureHashFunction {
                                    string: hash_func_name.to_string(),
                                },
                                name_arg.range(),
                            ));
                        }
                    }
                }
            }
            HashlibCall::WeakHash(func_name) => {
                checker.diagnostics.push(Diagnostic::new(
                    HashlibInsecureHashFunction {
                        library: "hashlib".to_string(),
                        string: hash_func_name.to_string(),
                        string: (*func_name).to_string(),
                    },
                    name_arg.range(),
                    call.func.range(),
                ));
            }
        }
        HashlibCall::WeakHash(func_name) => {
            checker.diagnostics.push(Diagnostic::new(
                HashlibInsecureHashFunction {
                    library: "hashlib".to_string(),
                    string: (*func_name).to_string(),
                },
                call.func.range(),
            ));
        }
    }
}

fn detect_insecure_crypt_calls(checker: &mut Checker, call: &ast::ExprCall) {
    let Some(method) = checker
        .semantic()
        .resolve_qualified_name(&call.func)
        .and_then(|qualified_name| match qualified_name.segments() {
            ["crypt", "crypt"] => Some(("salt", 1)),
            ["crypt", "mksalt"] => Some(("method", 0)),
            _ => None,
        })
        .and_then(|(argument_name, position)| {
            call.arguments.find_argument(argument_name, position)
        })
    else {
        return;
    };

    let Some(qualified_name) = checker.semantic().resolve_qualified_name(method) else {
        return;
    };

    if matches!(
        qualified_name.segments(),
        ["crypt", "METHOD_CRYPT" | "METHOD_MD5" | "METHOD_BLOWFISH"]
    ) {
        checker.diagnostics.push(Diagnostic::new(
            HashlibInsecureHashFunction {
                library: "crypt".to_string(),
                string: qualified_name.to_string(),
            },
            method.range(),
        ));
    }
}

@@ -181,13 +114,7 @@ fn is_used_for_security(arguments: &Arguments) -> bool {
        .map_or(true, |keyword| !is_const_false(&keyword.value))
}

#[derive(Debug, Copy, Clone)]
enum WeakHashCall {
    Hashlib { call: HashlibCall },
    Crypt,
}

#[derive(Debug, Copy, Clone)]
#[derive(Debug)]
enum HashlibCall {
    New,
    WeakHash(&'static str),

@@ -1,6 +1,5 @@
pub(crate) use assert_used::*;
pub(crate) use bad_file_permissions::*;
pub(crate) use django_extra::*;
pub(crate) use django_raw_sql::*;
pub(crate) use exec_used::*;
pub(crate) use flask_debug_true::*;
@@ -34,7 +33,6 @@ pub(crate) use weak_cryptographic_key::*;

mod assert_used;
mod bad_file_permissions;
mod django_extra;
mod django_raw_sql;
mod exec_used;
mod flask_debug_true;

@@ -433,7 +433,6 @@ fn get_call_kind(func: &Expr, semantic: &SemanticModel) -> Option<CallKind> {
            "Popen" | "call" | "check_call" | "check_output" | "run" => {
                Some(CallKind::Subprocess)
            }
            "getoutput" | "getstatusoutput" => Some(CallKind::Shell),
            _ => None,
        },
        "popen2" => match submodule {

@@ -867,7 +867,7 @@ pub(crate) fn suspicious_function_call(checker: &mut Checker, call: &ExprCall) {
        ["urllib", "request", "URLopener" | "FancyURLopener"] |
        ["six", "moves", "urllib", "request", "URLopener" | "FancyURLopener"] => Some(SuspiciousURLOpenUsage.into()),
        // NonCryptographicRandom
        ["random", "Random" | "random" | "randrange" | "randint" | "choice" | "choices" | "uniform" | "triangular" | "randbytes"] => Some(SuspiciousNonCryptographicRandomUsage.into()),
        ["random", "random" | "randrange" | "randint" | "choice" | "choices" | "uniform" | "triangular"] => Some(SuspiciousNonCryptographicRandomUsage.into()),
        // UnverifiedContext
        ["ssl", "_create_unverified_context"] => Some(SuspiciousUnverifiedContextUsage.into()),
        // XMLCElementTree

@@ -1,90 +0,0 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S311.py:10:1: S311 Standard pseudo-random generators are not suitable for cryptographic purposes
|
9 | # Errors
10 | random.Random()
| ^^^^^^^^^^^^^^^ S311
11 | random.random()
12 | random.randrange()
|

S311.py:11:1: S311 Standard pseudo-random generators are not suitable for cryptographic purposes
|
9 | # Errors
10 | random.Random()
11 | random.random()
| ^^^^^^^^^^^^^^^ S311
12 | random.randrange()
13 | random.randint()
|

S311.py:12:1: S311 Standard pseudo-random generators are not suitable for cryptographic purposes
|
10 | random.Random()
11 | random.random()
12 | random.randrange()
| ^^^^^^^^^^^^^^^^^^ S311
13 | random.randint()
14 | random.choice()
|

S311.py:13:1: S311 Standard pseudo-random generators are not suitable for cryptographic purposes
|
11 | random.random()
12 | random.randrange()
13 | random.randint()
| ^^^^^^^^^^^^^^^^ S311
14 | random.choice()
15 | random.choices()
|

S311.py:14:1: S311 Standard pseudo-random generators are not suitable for cryptographic purposes
|
12 | random.randrange()
13 | random.randint()
14 | random.choice()
| ^^^^^^^^^^^^^^^ S311
15 | random.choices()
16 | random.uniform()
|

S311.py:15:1: S311 Standard pseudo-random generators are not suitable for cryptographic purposes
|
13 | random.randint()
14 | random.choice()
15 | random.choices()
| ^^^^^^^^^^^^^^^^ S311
16 | random.uniform()
17 | random.triangular()
|

S311.py:16:1: S311 Standard pseudo-random generators are not suitable for cryptographic purposes
|
14 | random.choice()
15 | random.choices()
16 | random.uniform()
| ^^^^^^^^^^^^^^^^ S311
17 | random.triangular()
18 | random.randbytes()
|

S311.py:17:1: S311 Standard pseudo-random generators are not suitable for cryptographic purposes
|
15 | random.choices()
16 | random.uniform()
17 | random.triangular()
| ^^^^^^^^^^^^^^^^^^^ S311
18 | random.randbytes()
|

S311.py:18:1: S311 Standard pseudo-random generators are not suitable for cryptographic purposes
|
16 | random.uniform()
17 | random.triangular()
18 | random.randbytes()
| ^^^^^^^^^^^^^^^^^^ S311
19 |
20 | # Unrelated
|
@@ -3,195 +3,131 @@ source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S324.py:7:13: S324 Probable use of insecure hash functions in `hashlib`: `md5`
|
6 | # Errors
5 | # Invalid
6 |
7 | hashlib.new('md5')
| ^^^^^ S324
8 | hashlib.new('md4', b'test')
9 | hashlib.new(name='md5', data=b'test')
8 |
9 | hashlib.new('md4', b'test')
|

S324.py:8:13: S324 Probable use of insecure hash functions in `hashlib`: `md4`
|
6 | # Errors
7 | hashlib.new('md5')
8 | hashlib.new('md4', b'test')
| ^^^^^ S324
9 | hashlib.new(name='md5', data=b'test')
10 | hashlib.new('MD4', data=b'test')
|

S324.py:9:18: S324 Probable use of insecure hash functions in `hashlib`: `md5`
S324.py:9:13: S324 Probable use of insecure hash functions in `hashlib`: `md4`
|
7 | hashlib.new('md5')
8 | hashlib.new('md4', b'test')
9 | hashlib.new(name='md5', data=b'test')
| ^^^^^ S324
10 | hashlib.new('MD4', data=b'test')
11 | hashlib.new('sha1')
|

S324.py:10:13: S324 Probable use of insecure hash functions in `hashlib`: `MD4`
|
8 | hashlib.new('md4', b'test')
9 | hashlib.new(name='md5', data=b'test')
10 | hashlib.new('MD4', data=b'test')
8 |
9 | hashlib.new('md4', b'test')
| ^^^^^ S324
11 | hashlib.new('sha1')
12 | hashlib.new('sha1', data=b'test')
10 |
11 | hashlib.new(name='md5', data=b'test')
|

S324.py:11:13: S324 Probable use of insecure hash functions in `hashlib`: `sha1`
S324.py:11:18: S324 Probable use of insecure hash functions in `hashlib`: `md5`
|
9 | hashlib.new(name='md5', data=b'test')
10 | hashlib.new('MD4', data=b'test')
11 | hashlib.new('sha1')
| ^^^^^^ S324
12 | hashlib.new('sha1', data=b'test')
13 | hashlib.new('sha', data=b'test')
|

S324.py:12:13: S324 Probable use of insecure hash functions in `hashlib`: `sha1`
|
10 | hashlib.new('MD4', data=b'test')
11 | hashlib.new('sha1')
12 | hashlib.new('sha1', data=b'test')
| ^^^^^^ S324
13 | hashlib.new('sha', data=b'test')
14 | hashlib.new(name='SHA', data=b'test')
|

S324.py:13:13: S324 Probable use of insecure hash functions in `hashlib`: `sha`
|
11 | hashlib.new('sha1')
12 | hashlib.new('sha1', data=b'test')
13 | hashlib.new('sha', data=b'test')
| ^^^^^ S324
14 | hashlib.new(name='SHA', data=b'test')
15 | hashlib.sha(data=b'test')
|

S324.py:14:18: S324 Probable use of insecure hash functions in `hashlib`: `SHA`
|
12 | hashlib.new('sha1', data=b'test')
13 | hashlib.new('sha', data=b'test')
14 | hashlib.new(name='SHA', data=b'test')
9 | hashlib.new('md4', b'test')
10 |
11 | hashlib.new(name='md5', data=b'test')
| ^^^^^ S324
15 | hashlib.sha(data=b'test')
16 | hashlib.md5()
12 |
13 | hashlib.new('MD4', data=b'test')
|

S324.py:15:1: S324 Probable use of insecure hash functions in `hashlib`: `sha`
S324.py:13:13: S324 Probable use of insecure hash functions in `hashlib`: `MD4`
|
13 | hashlib.new('sha', data=b'test')
14 | hashlib.new(name='SHA', data=b'test')
15 | hashlib.sha(data=b'test')
| ^^^^^^^^^^^ S324
16 | hashlib.md5()
17 | hashlib_new('sha1')
11 | hashlib.new(name='md5', data=b'test')
12 |
13 | hashlib.new('MD4', data=b'test')
| ^^^^^ S324
14 |
15 | hashlib.new('sha1')
|

S324.py:16:1: S324 Probable use of insecure hash functions in `hashlib`: `md5`
S324.py:15:13: S324 Probable use of insecure hash functions in `hashlib`: `sha1`
|
14 | hashlib.new(name='SHA', data=b'test')
15 | hashlib.sha(data=b'test')
16 | hashlib.md5()
| ^^^^^^^^^^^ S324
17 | hashlib_new('sha1')
18 | hashlib_sha1('sha1')
13 | hashlib.new('MD4', data=b'test')
14 |
15 | hashlib.new('sha1')
| ^^^^^^ S324
16 |
17 | hashlib.new('sha1', data=b'test')
|

S324.py:17:13: S324 Probable use of insecure hash functions in `hashlib`: `sha1`
|
15 | hashlib.sha(data=b'test')
16 | hashlib.md5()
17 | hashlib_new('sha1')
15 | hashlib.new('sha1')
16 |
17 | hashlib.new('sha1', data=b'test')
| ^^^^^^ S324
18 | hashlib_sha1('sha1')
19 | # usedforsecurity arg only available in Python 3.9+
18 |
19 | hashlib.new('sha', data=b'test')
|

S324.py:18:1: S324 Probable use of insecure hash functions in `hashlib`: `sha1`
S324.py:19:13: S324 Probable use of insecure hash functions in `hashlib`: `sha`
|
16 | hashlib.md5()
17 | hashlib_new('sha1')
18 | hashlib_sha1('sha1')
17 | hashlib.new('sha1', data=b'test')
18 |
19 | hashlib.new('sha', data=b'test')
| ^^^^^ S324
20 |
21 | hashlib.new(name='SHA', data=b'test')
|

S324.py:21:18: S324 Probable use of insecure hash functions in `hashlib`: `SHA`
|
19 | hashlib.new('sha', data=b'test')
20 |
21 | hashlib.new(name='SHA', data=b'test')
| ^^^^^ S324
22 |
23 | hashlib.sha(data=b'test')
|

S324.py:23:1: S324 Probable use of insecure hash functions in `hashlib`: `sha`
|
21 | hashlib.new(name='SHA', data=b'test')
22 |
23 | hashlib.sha(data=b'test')
| ^^^^^^^^^^^ S324
24 |
25 | hashlib.md5()
|

S324.py:25:1: S324 Probable use of insecure hash functions in `hashlib`: `md5`
|
23 | hashlib.sha(data=b'test')
24 |
25 | hashlib.md5()
| ^^^^^^^^^^^ S324
26 |
27 | hashlib_new('sha1')
|

S324.py:27:13: S324 Probable use of insecure hash functions in `hashlib`: `sha1`
|
25 | hashlib.md5()
26 |
27 | hashlib_new('sha1')
| ^^^^^^ S324
28 |
29 | hashlib_sha1('sha1')
|

S324.py:29:1: S324 Probable use of insecure hash functions in `hashlib`: `sha1`
|
27 | hashlib_new('sha1')
28 |
29 | hashlib_sha1('sha1')
| ^^^^^^^^^^^^ S324
19 | # usedforsecurity arg only available in Python 3.9+
20 | hashlib.new('sha1', usedforsecurity=True)
|

S324.py:20:13: S324 Probable use of insecure hash functions in `hashlib`: `sha1`
|
18 | hashlib_sha1('sha1')
19 | # usedforsecurity arg only available in Python 3.9+
20 | hashlib.new('sha1', usedforsecurity=True)
| ^^^^^^ S324
21 |
22 | crypt.crypt("test", salt=crypt.METHOD_CRYPT)
|

S324.py:22:26: S324 Probable use of insecure hash functions in `crypt`: `crypt.METHOD_CRYPT`
|
20 | hashlib.new('sha1', usedforsecurity=True)
21 |
22 | crypt.crypt("test", salt=crypt.METHOD_CRYPT)
| ^^^^^^^^^^^^^^^^^^ S324
23 | crypt.crypt("test", salt=crypt.METHOD_MD5)
24 | crypt.crypt("test", salt=crypt.METHOD_BLOWFISH)
|

S324.py:23:26: S324 Probable use of insecure hash functions in `crypt`: `crypt.METHOD_MD5`
|
22 | crypt.crypt("test", salt=crypt.METHOD_CRYPT)
23 | crypt.crypt("test", salt=crypt.METHOD_MD5)
| ^^^^^^^^^^^^^^^^ S324
24 | crypt.crypt("test", salt=crypt.METHOD_BLOWFISH)
25 | crypt.crypt("test", crypt.METHOD_BLOWFISH)
|

S324.py:24:26: S324 Probable use of insecure hash functions in `crypt`: `crypt.METHOD_BLOWFISH`
|
22 | crypt.crypt("test", salt=crypt.METHOD_CRYPT)
23 | crypt.crypt("test", salt=crypt.METHOD_MD5)
24 | crypt.crypt("test", salt=crypt.METHOD_BLOWFISH)
| ^^^^^^^^^^^^^^^^^^^^^ S324
25 | crypt.crypt("test", crypt.METHOD_BLOWFISH)
|

S324.py:25:21: S324 Probable use of insecure hash functions in `crypt`: `crypt.METHOD_BLOWFISH`
|
23 | crypt.crypt("test", salt=crypt.METHOD_MD5)
24 | crypt.crypt("test", salt=crypt.METHOD_BLOWFISH)
25 | crypt.crypt("test", crypt.METHOD_BLOWFISH)
| ^^^^^^^^^^^^^^^^^^^^^ S324
26 |
27 | crypt.mksalt(crypt.METHOD_CRYPT)
|

S324.py:27:14: S324 Probable use of insecure hash functions in `crypt`: `crypt.METHOD_CRYPT`
|
25 | crypt.crypt("test", crypt.METHOD_BLOWFISH)
26 |
27 | crypt.mksalt(crypt.METHOD_CRYPT)
| ^^^^^^^^^^^^^^^^^^ S324
28 | crypt.mksalt(crypt.METHOD_MD5)
29 | crypt.mksalt(crypt.METHOD_BLOWFISH)
|

S324.py:28:14: S324 Probable use of insecure hash functions in `crypt`: `crypt.METHOD_MD5`
|
27 | crypt.mksalt(crypt.METHOD_CRYPT)
28 | crypt.mksalt(crypt.METHOD_MD5)
| ^^^^^^^^^^^^^^^^ S324
29 | crypt.mksalt(crypt.METHOD_BLOWFISH)
|

S324.py:29:14: S324 Probable use of insecure hash functions in `crypt`: `crypt.METHOD_BLOWFISH`
|
27 | crypt.mksalt(crypt.METHOD_CRYPT)
28 | crypt.mksalt(crypt.METHOD_MD5)
29 | crypt.mksalt(crypt.METHOD_BLOWFISH)
| ^^^^^^^^^^^^^^^^^^^^^ S324
30 |
31 | # OK
31 | # usedforsecurity arg only available in Python 3.9+
|

S324.py:32:13: S324 Probable use of insecure hash functions in `hashlib`: `sha1`
|
31 | # usedforsecurity arg only available in Python 3.9+
32 | hashlib.new('sha1', usedforsecurity=True)
| ^^^^^^ S324
33 |
34 | # Valid
|

@@ -1,165 +1,147 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S605.py:8:11: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
S605.py:7:11: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
6 | # Check all shell functions.
7 | os.system("true")
| ^^^^^^ S605
8 | os.popen("true")
9 | os.popen2("true")
|

S605.py:8:10: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
7 | # Check all shell functions.
8 | os.system("true")
| ^^^^^^ S605
9 | os.popen("true")
10 | os.popen2("true")
6 | # Check all shell functions.
7 | os.system("true")
8 | os.popen("true")
| ^^^^^^ S605
9 | os.popen2("true")
10 | os.popen3("true")
|

S605.py:9:10: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
S605.py:9:11: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
7 | # Check all shell functions.
8 | os.system("true")
9 | os.popen("true")
| ^^^^^^ S605
10 | os.popen2("true")
11 | os.popen3("true")
7 | os.system("true")
8 | os.popen("true")
9 | os.popen2("true")
| ^^^^^^ S605
10 | os.popen3("true")
11 | os.popen4("true")
|

S605.py:10:11: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
8 | os.system("true")
9 | os.popen("true")
10 | os.popen2("true")
8 | os.popen("true")
9 | os.popen2("true")
10 | os.popen3("true")
| ^^^^^^ S605
11 | os.popen3("true")
12 | os.popen4("true")
11 | os.popen4("true")
12 | popen2.popen2("true")
|

S605.py:11:11: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
9 | os.popen("true")
10 | os.popen2("true")
11 | os.popen3("true")
9 | os.popen2("true")
10 | os.popen3("true")
11 | os.popen4("true")
| ^^^^^^ S605
12 | os.popen4("true")
13 | popen2.popen2("true")
12 | popen2.popen2("true")
13 | popen2.popen3("true")
|

S605.py:12:11: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
S605.py:12:15: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
10 | os.popen2("true")
11 | os.popen3("true")
12 | os.popen4("true")
| ^^^^^^ S605
13 | popen2.popen2("true")
14 | popen2.popen3("true")
10 | os.popen3("true")
11 | os.popen4("true")
12 | popen2.popen2("true")
| ^^^^^^ S605
13 | popen2.popen3("true")
14 | popen2.popen4("true")
|

S605.py:13:15: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
11 | os.popen3("true")
12 | os.popen4("true")
13 | popen2.popen2("true")
11 | os.popen4("true")
12 | popen2.popen2("true")
13 | popen2.popen3("true")
| ^^^^^^ S605
14 | popen2.popen3("true")
15 | popen2.popen4("true")
14 | popen2.popen4("true")
15 | popen2.Popen3("true")
|

S605.py:14:15: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
12 | os.popen4("true")
13 | popen2.popen2("true")
14 | popen2.popen3("true")
12 | popen2.popen2("true")
13 | popen2.popen3("true")
14 | popen2.popen4("true")
| ^^^^^^ S605
15 | popen2.popen4("true")
16 | popen2.Popen3("true")
15 | popen2.Popen3("true")
16 | popen2.Popen4("true")
|

S605.py:15:15: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
13 | popen2.popen2("true")
14 | popen2.popen3("true")
15 | popen2.popen4("true")
13 | popen2.popen3("true")
14 | popen2.popen4("true")
15 | popen2.Popen3("true")
| ^^^^^^ S605
16 | popen2.Popen3("true")
17 | popen2.Popen4("true")
16 | popen2.Popen4("true")
17 | commands.getoutput("true")
|

S605.py:16:15: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
14 | popen2.popen3("true")
15 | popen2.popen4("true")
16 | popen2.Popen3("true")
14 | popen2.popen4("true")
15 | popen2.Popen3("true")
16 | popen2.Popen4("true")
| ^^^^^^ S605
17 | popen2.Popen4("true")
18 | commands.getoutput("true")
17 | commands.getoutput("true")
18 | commands.getstatusoutput("true")
|

S605.py:17:15: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
S605.py:17:20: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
15 | popen2.popen4("true")
16 | popen2.Popen3("true")
17 | popen2.Popen4("true")
| ^^^^^^ S605
18 | commands.getoutput("true")
19 | commands.getstatusoutput("true")
|

S605.py:18:20: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
16 | popen2.Popen3("true")
17 | popen2.Popen4("true")
18 | commands.getoutput("true")
15 | popen2.Popen3("true")
16 | popen2.Popen4("true")
17 | commands.getoutput("true")
| ^^^^^^ S605
19 | commands.getstatusoutput("true")
20 | subprocess.getoutput("true")
18 | commands.getstatusoutput("true")
|

S605.py:19:26: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
S605.py:18:26: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
17 | popen2.Popen4("true")
18 | commands.getoutput("true")
19 | commands.getstatusoutput("true")
16 | popen2.Popen4("true")
17 | commands.getoutput("true")
18 | commands.getstatusoutput("true")
| ^^^^^^ S605
20 | subprocess.getoutput("true")
21 | subprocess.getstatusoutput("true")
|

S605.py:20:22: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
S605.py:23:11: S605 Starting a process with a shell, possible injection detected
|
18 | commands.getoutput("true")
19 | commands.getstatusoutput("true")
20 | subprocess.getoutput("true")
| ^^^^^^ S605
21 | subprocess.getstatusoutput("true")
|

S605.py:21:28: S605 Starting a process with a shell: seems safe, but may be changed in the future; consider rewriting without `shell`
|
19 | commands.getstatusoutput("true")
20 | subprocess.getoutput("true")
21 | subprocess.getstatusoutput("true")
| ^^^^^^ S605
|

S605.py:26:11: S605 Starting a process with a shell, possible injection detected
|
24 | # Check command argument looks unsafe.
25 | var_string = "true"
26 | os.system(var_string)
21 | # Check command argument looks unsafe.
22 | var_string = "true"
23 | os.system(var_string)
| ^^^^^^^^^^ S605
27 | os.system([var_string])
28 | os.system([var_string, ""])
24 | os.system([var_string])
25 | os.system([var_string, ""])
|

S605.py:27:11: S605 Starting a process with a shell, possible injection detected
S605.py:24:11: S605 Starting a process with a shell, possible injection detected
|
25 | var_string = "true"
26 | os.system(var_string)
27 | os.system([var_string])
22 | var_string = "true"
23 | os.system(var_string)
24 | os.system([var_string])
| ^^^^^^^^^^^^ S605
28 | os.system([var_string, ""])
25 | os.system([var_string, ""])
|

S605.py:28:11: S605 Starting a process with a shell, possible injection detected
S605.py:25:11: S605 Starting a process with a shell, possible injection detected
|
26 | os.system(var_string)
27 | os.system([var_string])
28 | os.system([var_string, ""])
23 | os.system(var_string)
24 | os.system([var_string])
25 | os.system([var_string, ""])
| ^^^^^^^^^^^^^^^^ S605
|

@@ -1,105 +0,0 @@
---
source: crates/ruff_linter/src/rules/flake8_bandit/mod.rs
---
S610.py:4:44: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
3 | # Errors
4 | User.objects.filter(username='admin').extra(dict(could_be='insecure'))
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ S610
5 | User.objects.filter(username='admin').extra(select=dict(could_be='insecure'))
6 | User.objects.filter(username='admin').extra(select={'test': '%secure' % 'nos'})
|

S610.py:5:44: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
3 | # Errors
4 | User.objects.filter(username='admin').extra(dict(could_be='insecure'))
5 | User.objects.filter(username='admin').extra(select=dict(could_be='insecure'))
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ S610
6 | User.objects.filter(username='admin').extra(select={'test': '%secure' % 'nos'})
7 | User.objects.filter(username='admin').extra(select={'test': '{}secure'.format('nos')})
|

S610.py:6:44: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
4 | User.objects.filter(username='admin').extra(dict(could_be='insecure'))
5 | User.objects.filter(username='admin').extra(select=dict(could_be='insecure'))
6 | User.objects.filter(username='admin').extra(select={'test': '%secure' % 'nos'})
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ S610
7 | User.objects.filter(username='admin').extra(select={'test': '{}secure'.format('nos')})
8 | User.objects.filter(username='admin').extra(where=['%secure' % 'nos'])
|

S610.py:7:44: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
5 | User.objects.filter(username='admin').extra(select=dict(could_be='insecure'))
6 | User.objects.filter(username='admin').extra(select={'test': '%secure' % 'nos'})
7 | User.objects.filter(username='admin').extra(select={'test': '{}secure'.format('nos')})
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ S610
8 | User.objects.filter(username='admin').extra(where=['%secure' % 'nos'])
9 | User.objects.filter(username='admin').extra(where=['{}secure'.format('no')])
|

S610.py:8:44: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
6 | User.objects.filter(username='admin').extra(select={'test': '%secure' % 'nos'})
7 | User.objects.filter(username='admin').extra(select={'test': '{}secure'.format('nos')})
8 | User.objects.filter(username='admin').extra(where=['%secure' % 'nos'])
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ S610
9 | User.objects.filter(username='admin').extra(where=['{}secure'.format('no')])
|

S610.py:9:44: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
7 | User.objects.filter(username='admin').extra(select={'test': '{}secure'.format('nos')})
8 | User.objects.filter(username='admin').extra(where=['%secure' % 'nos'])
9 | User.objects.filter(username='admin').extra(where=['{}secure'.format('no')])
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ S610
10 |
11 | query = '"username") AS "username", * FROM "auth_user" WHERE 1=1 OR "username"=? --'
|

S610.py:12:44: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
11 | query = '"username") AS "username", * FROM "auth_user" WHERE 1=1 OR "username"=? --'
12 | User.objects.filter(username='admin').extra(select={'test': query})
| ^^^^^^^^^^^^^^^^^^^^^^^^ S610
13 |
14 | where_var = ['1=1) OR 1=1 AND (1=1']
|

S610.py:15:44: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
14 | where_var = ['1=1) OR 1=1 AND (1=1']
15 | User.objects.filter(username='admin').extra(where=where_var)
| ^^^^^^^^^^^^^^^^^ S610
16 |
17 | where_str = '1=1) OR 1=1 AND (1=1'
|

S610.py:18:44: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
||||
|
|
||||
17 | where_str = '1=1) OR 1=1 AND (1=1'
|
||||
18 | User.objects.filter(username='admin').extra(where=[where_str])
|
||||
| ^^^^^^^^^^^^^^^^^^^ S610
|
||||
19 |
|
||||
20 | tables_var = ['django_content_type" WHERE "auth_user"."username"="admin']
|
||||
|
|
||||
|
||||
S610.py:21:25: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
||||
|
|
||||
20 | tables_var = ['django_content_type" WHERE "auth_user"."username"="admin']
|
||||
21 | User.objects.all().extra(tables=tables_var).distinct()
|
||||
| ^^^^^^^^^^^^^^^^^^^ S610
|
||||
22 |
|
||||
23 | tables_str = 'django_content_type" WHERE "auth_user"."username"="admin'
|
||||
|
|
||||
|
||||
S610.py:24:25: S610 Use of Django `extra` can lead to SQL injection vulnerabilities
|
||||
|
|
||||
23 | tables_str = 'django_content_type" WHERE "auth_user"."username"="admin'
|
||||
24 | User.objects.all().extra(tables=[tables_str]).distinct()
|
||||
| ^^^^^^^^^^^^^^^^^^^^^ S610
|
||||
25 |
|
||||
26 | # OK
|
||||
|
|
||||
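For context on what S610 flags above: building the `select`/`where`/`tables` fragments with `%` interpolation or `str.format` splices raw input directly into SQL text. A minimal, framework-free sketch (plain Python, no Django required; the variable names here are illustrative, not from the fixture) of why interpolation is injectable while a placeholder-plus-params pair is not:

```python
# Hypothetical attacker-controlled value (illustration only).
user_input = "admin' OR '1'='1"

# String interpolation bakes the payload into the SQL text itself --
# the pattern S610 flags in `.extra(select=...)` / `.extra(where=...)`.
unsafe_clause = "username = '%s'" % user_input
assert "OR '1'='1" in unsafe_clause

# Keeping a placeholder in the SQL and passing the value separately
# (e.g. Django's `.extra(where=[...], params=[...])`) leaves the
# payload as data, never as SQL syntax.
safe_clause = "username = %s"
params = [user_input]
assert "OR" not in safe_clause
assert params == ["admin' OR '1'='1"]
```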
@@ -73,7 +73,7 @@ pub(crate) fn builtin_argument_shadowing(checker: &mut Checker, parameter: &Para
             BuiltinArgumentShadowing {
                 name: parameter.name.to_string(),
             },
-            parameter.name.range(),
+            parameter.range(),
         ));
     }
 }
@@ -243,7 +243,7 @@ pub(crate) fn trailing_commas(
         // F-strings are handled as `String` token type with the complete range
         // of the outermost f-string. This means that the expression inside the
         // f-string is not checked for trailing commas.
-        Tok::FStringStart(_) => {
+        Tok::FStringStart => {
            fstrings = fstrings.saturating_add(1);
            None
        }
@@ -205,7 +205,7 @@ C413.py:14:1: C413 [*] Unnecessary `reversed` call around `sorted()`
    14 |+sorted((i for i in range(42)), reverse=True)
 15 15 | reversed(sorted((i for i in range(42)), reverse=True))
 16 16 |
-17 17 | # Regression test for: https://github.com/astral-sh/ruff/issues/10335
+17 17 |

 C413.py:15:1: C413 [*] Unnecessary `reversed` call around `sorted()`
@@ -213,8 +213,6 @@ C413.py:15:1: C413 [*] Unnecessary `reversed` call around `sorted()`
    |
 14 | reversed(sorted(i for i in range(42)))
 15 | reversed(sorted((i for i in range(42)), reverse=True))
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ C413
-16 |
-17 | # Regression test for: https://github.com/astral-sh/ruff/issues/10335
    |

 = help: Remove unnecessary `reversed` call

@@ -225,38 +223,7 @@ C413.py:15:1: C413 [*] Unnecessary `reversed` call around `sorted()`
 15    |-reversed(sorted((i for i in range(42)), reverse=True))
    15 |+sorted((i for i in range(42)), reverse=False)
 16 16 |
-17 17 | # Regression test for: https://github.com/astral-sh/ruff/issues/10335
-18 18 | reversed(sorted([1, 2, 3], reverse=False or True))
+17 17 |
+18 18 | def reversed(*args, **kwargs):
-
-C413.py:18:1: C413 [*] Unnecessary `reversed` call around `sorted()`
-   |
-17 | # Regression test for: https://github.com/astral-sh/ruff/issues/10335
-18 | reversed(sorted([1, 2, 3], reverse=False or True))
-   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ C413
-19 | reversed(sorted([1, 2, 3], reverse=(False or True)))
-   |
-   = help: Remove unnecessary `reversed` call
-
-ℹ Unsafe fix
-15 15 | reversed(sorted((i for i in range(42)), reverse=True))
-16 16 |
-17 17 | # Regression test for: https://github.com/astral-sh/ruff/issues/10335
-18    |-reversed(sorted([1, 2, 3], reverse=False or True))
-   18 |+sorted([1, 2, 3], reverse=not (False or True))
-19 19 | reversed(sorted([1, 2, 3], reverse=(False or True)))
-
-C413.py:19:1: C413 [*] Unnecessary `reversed` call around `sorted()`
-   |
-17 | # Regression test for: https://github.com/astral-sh/ruff/issues/10335
-18 | reversed(sorted([1, 2, 3], reverse=False or True))
-19 | reversed(sorted([1, 2, 3], reverse=(False or True)))
-   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ C413
-   |
-   = help: Remove unnecessary `reversed` call
-
-ℹ Unsafe fix
-16 16 |
-17 17 | # Regression test for: https://github.com/astral-sh/ruff/issues/10335
-18 18 | reversed(sorted([1, 2, 3], reverse=False or True))
-19    |-reversed(sorted([1, 2, 3], reverse=(False or True)))
-   19 |+sorted([1, 2, 3], reverse=not (False or True))
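The C413 fixes in the snapshot above rely on a simple identity: reversing a sort is the same as sorting with the opposite `reverse` flag. A quick sketch of the rewrite the rule applies:

```python
data = [3, 1, 2]

# `reversed(sorted(x))` is equivalent to `sorted(x, reverse=True)` ...
assert list(reversed(sorted(data))) == sorted(data, reverse=True)

# ... and in general `reversed(sorted(x, reverse=flag))` flips the flag,
# which is why the fix rewrites `reverse=False or True` into
# `reverse=not (False or True)`.
for flag in (False, True):
    assert list(reversed(sorted(data, reverse=flag))) == sorted(data, reverse=not flag)
```

The fix is marked unsafe because `reversed(...)` returns an iterator while `sorted(...)` returns a list, so code that depends on the exact return type could change behavior.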
@@ -110,7 +110,7 @@ pub(crate) fn implicit(
     {
         let (a_range, b_range) = match (a_tok, b_tok) {
             (Tok::String { .. }, Tok::String { .. }) => (*a_range, *b_range),
-            (Tok::String { .. }, Tok::FStringStart(_)) => {
+            (Tok::String { .. }, Tok::FStringStart) => {
                 match indexer.fstring_ranges().innermost(b_range.start()) {
                     Some(b_range) => (*a_range, b_range),
                     None => continue,
@@ -122,7 +122,7 @@ pub(crate) fn implicit(
                     None => continue,
                 }
             }
-            (Tok::FStringEnd, Tok::FStringStart(_)) => {
+            (Tok::FStringEnd, Tok::FStringStart) => {
                 match (
                     indexer.fstring_ranges().innermost(a_range.start()),
                     indexer.fstring_ranges().innermost(b_range.start()),
@@ -11,7 +11,7 @@ mod tests {
     use crate::assert_messages;
     use crate::registry::Rule;
-    use crate::rules::flake8_import_conventions::settings::{default_aliases, BannedAliases};
+    use crate::rules::flake8_import_conventions::settings::default_aliases;
     use crate::settings::LinterSettings;
     use crate::test::test_path;
@@ -57,20 +57,17 @@ mod tests {
|
||||
banned_aliases: FxHashMap::from_iter([
|
||||
(
|
||||
"typing".to_string(),
|
||||
BannedAliases::from_iter(["t".to_string(), "ty".to_string()]),
|
||||
vec!["t".to_string(), "ty".to_string()],
|
||||
),
|
||||
(
|
||||
"numpy".to_string(),
|
||||
BannedAliases::from_iter(["nmp".to_string(), "npy".to_string()]),
|
||||
vec!["nmp".to_string(), "npy".to_string()],
|
||||
),
|
||||
(
|
||||
"tensorflow.keras.backend".to_string(),
|
||||
BannedAliases::from_iter(["K".to_string()]),
|
||||
),
|
||||
(
|
||||
"torch.nn.functional".to_string(),
|
||||
BannedAliases::from_iter(["F".to_string()]),
|
||||
vec!["K".to_string()],
|
||||
),
|
||||
("torch.nn.functional".to_string(), vec!["F".to_string()]),
|
||||
]),
|
||||
banned_from: FxHashSet::default(),
|
||||
},
|
||||
|
||||
@@ -1,12 +1,10 @@
-use ruff_python_ast::Stmt;
 use rustc_hash::FxHashMap;

 use ruff_diagnostics::{Diagnostic, Violation};
 use ruff_macros::{derive_message_formats, violation};
+use ruff_python_ast::Stmt;
 use ruff_text_size::Ranged;

-use crate::rules::flake8_import_conventions::settings::BannedAliases;
-
 /// ## What it does
 /// Checks for imports that use non-standard naming conventions, like
 /// `import tensorflow.keras.backend as K`.
@@ -51,7 +49,7 @@ pub(crate) fn banned_import_alias(
     stmt: &Stmt,
     name: &str,
     asname: &str,
-    banned_conventions: &FxHashMap<String, BannedAliases>,
+    banned_conventions: &FxHashMap<String, Vec<String>>,
 ) -> Option<Diagnostic> {
     if let Some(banned_aliases) = banned_conventions.get(name) {
         if banned_aliases
@@ -1,13 +1,10 @@
 //! Settings for import conventions.

+use rustc_hash::{FxHashMap, FxHashSet};
 use std::fmt::{Display, Formatter};

-use rustc_hash::{FxHashMap, FxHashSet};
 use serde::{Deserialize, Serialize};

-use ruff_macros::CacheKey;
-
 use crate::display_settings;
+use ruff_macros::CacheKey;

 const CONVENTIONAL_ALIASES: &[(&str, &str)] = &[
     ("altair", "alt"),
@@ -26,41 +23,10 @@ const CONVENTIONAL_ALIASES: &[(&str, &str)] = &[
     ("pyarrow", "pa"),
 ];

-#[derive(Debug, Default, Clone, PartialEq, Eq, Serialize, Deserialize, CacheKey)]
-#[serde(deny_unknown_fields, rename_all = "kebab-case")]
-#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
-pub struct BannedAliases(Vec<String>);
-
-impl Display for BannedAliases {
-    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
-        write!(f, "[")?;
-        for (i, alias) in self.0.iter().enumerate() {
-            if i > 0 {
-                write!(f, ", ")?;
-            }
-            write!(f, "{alias}")?;
-        }
-        write!(f, "]")
-    }
-}
-
-impl BannedAliases {
-    /// Returns an iterator over the banned aliases.
-    pub fn iter(&self) -> impl Iterator<Item = &str> {
-        self.0.iter().map(String::as_str)
-    }
-}
-
-impl FromIterator<String> for BannedAliases {
-    fn from_iter<I: IntoIterator<Item = String>>(iter: I) -> Self {
-        Self(iter.into_iter().collect())
-    }
-}
-
 #[derive(Debug, CacheKey)]
 pub struct Settings {
     pub aliases: FxHashMap<String, String>,
-    pub banned_aliases: FxHashMap<String, BannedAliases>,
+    pub banned_aliases: FxHashMap<String, Vec<String>>,
     pub banned_from: FxHashSet<String>,
 }
@@ -87,9 +53,9 @@ impl Display for Settings {
             formatter = f,
             namespace = "linter.flake8_import_conventions",
             fields = [
-                self.aliases | map,
-                self.banned_aliases | map,
-                self.banned_from | set,
+                self.aliases | debug,
+                self.banned_aliases | debug,
+                self.banned_from | array,
             ]
         }
         Ok(())
@@ -77,6 +77,8 @@ fn is_empty_or_null_fstring_element(element: &ast::FStringElement) -> bool {
         ast::FStringElement::Expression(ast::FStringExpressionElement { expression, .. }) => {
             is_empty_or_null_string(expression)
         }
+        #[allow(deprecated)]
+        ast::FStringElement::Invalid(_) => false,
     }
 }
@@ -413,3 +413,5 @@ PT018.py:65:5: PT018 [*] Assertion should be broken down into multiple parts
 70 72 |
+71 73 | assert (not self.find_graph_output(node.output[0]) or
+72 74 |         self.find_graph_input(node.input[0]))
@@ -1,5 +1,6 @@
 use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Edit, Fix};
 use ruff_macros::{derive_message_formats, violation};
+use ruff_python_ast::str::{is_triple_quote, leading_quote};
 use ruff_python_parser::lexer::LexResult;
 use ruff_python_parser::Tok;
 use ruff_source_file::Locator;
@@ -157,7 +158,7 @@ pub(crate) fn avoidable_escaped_quote(
         // ```python
         // f'"foo" {'nested'}"
         // ```
-        if matches!(tok, Tok::String { .. } | Tok::FStringStart(_)) {
+        if matches!(tok, Tok::String { .. } | Tok::FStringStart) {
             if let Some(fstring_context) = fstrings.last_mut() {
                 fstring_context.ignore_escaped_quotes();
                 continue;
@@ -169,13 +170,16 @@ pub(crate) fn avoidable_escaped_quote(
             Tok::String {
                 value: string_contents,
                 kind,
+                triple_quoted,
             } => {
-                if kind.is_raw_string() || kind.is_triple_quoted() {
+                if kind.is_raw() || *triple_quoted {
                     continue;
                 }

                 // Check if we're using the preferred quotation style.
-                if Quote::from(kind.quote_style()) != quotes_settings.inline_quotes {
+                if !leading_quote(locator.slice(tok_range)).is_some_and(|text| {
+                    contains_quote(text, quotes_settings.inline_quotes.as_char())
+                }) {
                     continue;
                 }
@@ -188,7 +192,7 @@ pub(crate) fn avoidable_escaped_quote(
                 let mut diagnostic = Diagnostic::new(AvoidableEscapedQuote, tok_range);
                 let fixed_contents = format!(
                     "{prefix}{quote}{value}{quote}",
-                    prefix = kind.prefix_str(),
+                    prefix = kind.as_str(),
                     quote = quotes_settings.inline_quotes.opposite().as_char(),
                     value = unescape_string(
                         string_contents,
@@ -202,11 +206,12 @@ pub(crate) fn avoidable_escaped_quote(
                     diagnostics.push(diagnostic);
                 }
             }
-            Tok::FStringStart(kind) => {
+            Tok::FStringStart => {
+                let text = locator.slice(tok_range);
                 // Check for escaped quote only if we're using the preferred quotation
                 // style and it isn't a triple-quoted f-string.
-                let check_for_escaped_quote = !kind.is_triple_quoted()
-                    && Quote::from(kind.quote_style()) == quotes_settings.inline_quotes;
+                let check_for_escaped_quote = !is_triple_quote(text)
+                    && contains_quote(text, quotes_settings.inline_quotes.as_char());
                 fstrings.push(FStringContext::new(
                     check_for_escaped_quote,
                     tok_range,
@@ -215,8 +220,9 @@ pub(crate) fn avoidable_escaped_quote(
             }
             Tok::FStringMiddle {
                 value: string_contents,
-                kind,
-            } if !kind.is_raw_string() => {
+                is_raw,
+                triple_quoted: _,
+            } if !is_raw => {
                 let Some(context) = fstrings.last_mut() else {
                     continue;
                 };
@@ -309,12 +315,17 @@ pub(crate) fn unnecessary_escaped_quote(
             Tok::String {
                 value: string_contents,
                 kind,
+                triple_quoted,
             } => {
-                if kind.is_raw_string() || kind.is_triple_quoted() {
+                if kind.is_raw() || *triple_quoted {
                     continue;
                 }

-                let leading = kind.quote_style();
+                let leading = match leading_quote(locator.slice(tok_range)) {
+                    Some("\"") => Quote::Double,
+                    Some("'") => Quote::Single,
+                    _ => continue,
+                };
                 if !contains_escaped_quote(string_contents, leading.opposite().as_char()) {
                     continue;
                 }
@@ -322,7 +333,7 @@ pub(crate) fn unnecessary_escaped_quote(
                 let mut diagnostic = Diagnostic::new(UnnecessaryEscapedQuote, tok_range);
                 let fixed_contents = format!(
                     "{prefix}{quote}{value}{quote}",
-                    prefix = kind.prefix_str(),
+                    prefix = kind.as_str(),
                     quote = leading.as_char(),
                     value = unescape_string(string_contents, leading.opposite().as_char())
                 );
@@ -332,11 +343,16 @@ pub(crate) fn unnecessary_escaped_quote(
                 )));
                 diagnostics.push(diagnostic);
             }
-            Tok::FStringStart(kind) => {
+            Tok::FStringStart => {
+                let text = locator.slice(tok_range);
                 // Check for escaped quote only if we're using the preferred quotation
                 // style and it isn't a triple-quoted f-string.
-                let check_for_escaped_quote = !kind.is_triple_quoted();
-                let quote_style = Quote::from(kind.quote_style());
+                let check_for_escaped_quote = !is_triple_quote(text);
+                let quote_style = if contains_quote(text, Quote::Single.as_char()) {
+                    Quote::Single
+                } else {
+                    Quote::Double
+                };
                 fstrings.push(FStringContext::new(
                     check_for_escaped_quote,
                     tok_range,
@@ -345,8 +361,9 @@ pub(crate) fn unnecessary_escaped_quote(
             }
             Tok::FStringMiddle {
                 value: string_contents,
-                kind,
-            } if !kind.is_raw_string() => {
+                is_raw,
+                triple_quoted: _,
+            } if !is_raw => {
                 let Some(context) = fstrings.last_mut() else {
                     continue;
                 };
@@ -383,7 +383,7 @@ struct FStringRangeBuilder {
 impl FStringRangeBuilder {
     fn visit_token(&mut self, token: &Tok, range: TextRange) {
         match token {
-            Tok::FStringStart(_) => {
+            Tok::FStringStart => {
                 if self.nesting == 0 {
                     self.start_location = range.start();
                 }
@@ -22,15 +22,6 @@ impl Default for Quote {
     }
 }

-impl From<ruff_python_ast::str::QuoteStyle> for Quote {
-    fn from(value: ruff_python_ast::str::QuoteStyle) -> Self {
-        match value {
-            ruff_python_ast::str::QuoteStyle::Double => Self::Double,
-            ruff_python_ast::str::QuoteStyle::Single => Self::Single,
-        }
-    }
-}
-
 #[derive(Debug, CacheKey)]
 pub struct Settings {
     pub inline_quotes: Quote,
@@ -326,10 +326,8 @@ singles_escaped_unnecessary.py:43:26: Q004 [*] Unnecessary escape on inner quote
    |
 41 | # Make sure we do not unescape quotes
 42 | this_is_fine = "This is an \\'escaped\\' quote"
-43 | this_should_raise_Q004 = "This is an \\\'escaped\\\' quote with an extra backslash"  # Q004
+43 | this_should_raise_Q004 = "This is an \\\'escaped\\\' quote with an extra backslash"
    |                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
-44 |
-45 | # Invalid escapes in bytestrings are also triggered:
    |

 = help: Remove backslash

@@ -337,23 +335,7 @@ singles_escaped_unnecessary.py:43:26: Q004 [*] Unnecessary escape on inner quote
 40 40 |
 41 41 | # Make sure we do not unescape quotes
 42 42 | this_is_fine = "This is an \\'escaped\\' quote"
-43    |-this_should_raise_Q004 = "This is an \\\'escaped\\\' quote with an extra backslash"  # Q004
-43    |+this_should_raise_Q004 = "This is an \\'escaped\\' quote with an extra backslash"  # Q004
-44 44 |
-45 45 | # Invalid escapes in bytestrings are also triggered:
-46 46 | x = b"\xe7\xeb\x0c\xa1\x1b\x83tN\xce=x\xe9\xbe\x01\xb9\x13B_\xba\xe7\x0c2\xce\'rm\x0e\xcd\xe9.\xf8\xd2"  # Q004
+43    |-this_should_raise_Q004 = "This is an \\\'escaped\\\' quote with an extra backslash"
+43    |+this_should_raise_Q004 = "This is an \\'escaped\\' quote with an extra backslash"
-
-singles_escaped_unnecessary.py:46:5: Q004 [*] Unnecessary escape on inner quote character
-   |
-45 | # Invalid escapes in bytestrings are also triggered:
-46 | x = b"\xe7\xeb\x0c\xa1\x1b\x83tN\xce=x\xe9\xbe\x01\xb9\x13B_\xba\xe7\x0c2\xce\'rm\x0e\xcd\xe9.\xf8\xd2"  # Q004
-   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Q004
-   |
-   = help: Remove backslash
-
-ℹ Safe fix
-43 43 | this_should_raise_Q004 = "This is an \\\'escaped\\\' quote with an extra backslash"  # Q004
-44 44 |
-45 45 | # Invalid escapes in bytestrings are also triggered:
-46    |-x = b"\xe7\xeb\x0c\xa1\x1b\x83tN\xce=x\xe9\xbe\x01\xb9\x13B_\xba\xe7\x0c2\xce\'rm\x0e\xcd\xe9.\xf8\xd2"  # Q004
-   46 |+x = b"\xe7\xeb\x0c\xa1\x1b\x83tN\xce=x\xe9\xbe\x01\xb9\x13B_\xba\xe7\x0c2\xce'rm\x0e\xcd\xe9.\xf8\xd2"  # Q004
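The Q004 cases above turn on how Python treats escapes: `\'` inside a double-quoted string (or `\"` inside a single-quoted one) is an unnecessary escape, since the backslash adds nothing to the resulting value. A small sketch:

```python
# Inside double quotes, escaping a single quote is a no-op:
# "\'" and "'" produce the same one-character string.
assert "\'" == "'"
assert len("\'") == 1

# `\\'` is different: a literal backslash followed by a quote. That is
# why the fix shown above only drops the redundant third backslash,
# turning `\\\'` into `\\'` without changing the string's value.
assert "\\'" == "\\" + "'"
assert "\\\'" == "\\'"
```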
@@ -1,4 +1,3 @@
-use ast::{StringLiteralFlags, StringLiteralPrefix};
 use ruff_python_ast::{self as ast, Arguments, Expr};
 use ruff_text_size::Ranged;

@@ -219,13 +218,7 @@ fn check_os_environ_subscript(checker: &mut Checker, expr: &Expr) {
     );
     let node = ast::StringLiteral {
         value: capital_env_var.into_boxed_str(),
-        flags: StringLiteralFlags::default().with_prefix({
-            if env_var.is_unicode() {
-                StringLiteralPrefix::UString
-            } else {
-                StringLiteralPrefix::None
-            }
-        }),
+        unicode: env_var.is_unicode(),
         ..ast::StringLiteral::default()
     };
     let new_env_var = node.into();
@@ -13,12 +13,6 @@ pub struct ApiBan {
     pub msg: String,
 }

-impl Display for ApiBan {
-    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
-        write!(f, "{}", self.msg)
-    }
-}
-
 #[derive(Debug, Copy, Clone, PartialEq, Eq, Serialize, Deserialize, CacheKey, Default)]
 #[serde(deny_unknown_fields, rename_all = "kebab-case")]
 #[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
@@ -53,7 +47,7 @@ impl Display for Settings {
             namespace = "linter.flake8_tidy_imports",
             fields = [
                 self.ban_relative_imports,
-                self.banned_api | map,
+                self.banned_api | debug,
                 self.banned_module_level_imports | array,
             ]
         }
@@ -1,4 +1,3 @@
-use ast::FStringFlags;
 use itertools::Itertools;

 use crate::fix::edits::pad;
@@ -98,7 +97,6 @@ fn build_fstring(joiner: &str, joinees: &[Expr]) -> Option<Expr> {
     let node = ast::FString {
         elements: f_string_elements,
         range: TextRange::default(),
-        flags: FStringFlags::default(),
     };
     Some(node.into())
 }
@@ -278,7 +278,7 @@ mod tests {
     use std::path::Path;

     use anyhow::Result;
-    use rustc_hash::{FxHashMap, FxHashSet};
+    use rustc_hash::FxHashMap;
     use test_case::test_case;

     use ruff_text_size::Ranged;
@@ -495,7 +495,7 @@ mod tests {
         Path::new("isort").join(path).as_path(),
         &LinterSettings {
             isort: super::settings::Settings {
-                force_to_top: FxHashSet::from_iter([
+                force_to_top: BTreeSet::from([
                     "z".to_string(),
                     "lib1".to_string(),
                     "lib3".to_string(),
@@ -575,10 +575,9 @@ mod tests {
         &LinterSettings {
             isort: super::settings::Settings {
                 force_single_line: true,
-                single_line_exclusions: FxHashSet::from_iter([
-                    "os".to_string(),
-                    "logging.handlers".to_string(),
-                ]),
+                single_line_exclusions: vec!["os".to_string(), "logging.handlers".to_string()]
+                    .into_iter()
+                    .collect::<BTreeSet<_>>(),
                 ..super::settings::Settings::default()
             },
             src: vec![test_resource_path("fixtures/isort")],
@@ -637,7 +636,7 @@ mod tests {
         &LinterSettings {
             isort: super::settings::Settings {
                 order_by_type: true,
-                classes: FxHashSet::from_iter([
+                classes: BTreeSet::from([
                     "SVC".to_string(),
                     "SELU".to_string(),
                     "N_CLASS".to_string(),
@@ -665,7 +664,7 @@ mod tests {
         &LinterSettings {
             isort: super::settings::Settings {
                 order_by_type: true,
-                constants: FxHashSet::from_iter([
+                constants: BTreeSet::from([
                     "Const".to_string(),
                     "constant".to_string(),
                     "First".to_string(),
@@ -695,7 +694,7 @@ mod tests {
         &LinterSettings {
             isort: super::settings::Settings {
                 order_by_type: true,
-                variables: FxHashSet::from_iter([
+                variables: BTreeSet::from([
                     "VAR".to_string(),
                     "Variable".to_string(),
                     "MyVar".to_string(),
@@ -722,7 +721,7 @@ mod tests {
         &LinterSettings {
             isort: super::settings::Settings {
                 force_sort_within_sections: true,
-                force_to_top: FxHashSet::from_iter(["z".to_string()]),
+                force_to_top: BTreeSet::from(["z".to_string()]),
                 ..super::settings::Settings::default()
             },
             src: vec![test_resource_path("fixtures/isort")],
@@ -772,7 +771,7 @@ mod tests {
         &LinterSettings {
             src: vec![test_resource_path("fixtures/isort")],
             isort: super::settings::Settings {
-                required_imports: BTreeSet::from_iter([
+                required_imports: BTreeSet::from([
                     "from __future__ import annotations".to_string()
                 ]),
                 ..super::settings::Settings::default()
@@ -802,7 +801,7 @@ mod tests {
         &LinterSettings {
             src: vec![test_resource_path("fixtures/isort")],
             isort: super::settings::Settings {
-                required_imports: BTreeSet::from_iter([
+                required_imports: BTreeSet::from([
                     "from __future__ import annotations as _annotations".to_string(),
                 ]),
                 ..super::settings::Settings::default()
@@ -825,7 +824,7 @@ mod tests {
         &LinterSettings {
             src: vec![test_resource_path("fixtures/isort")],
             isort: super::settings::Settings {
-                required_imports: BTreeSet::from_iter([
+                required_imports: BTreeSet::from([
                     "from __future__ import annotations".to_string(),
                     "from __future__ import generator_stop".to_string(),
                 ]),
@@ -849,7 +848,7 @@ mod tests {
         &LinterSettings {
             src: vec![test_resource_path("fixtures/isort")],
             isort: super::settings::Settings {
-                required_imports: BTreeSet::from_iter(["from __future__ import annotations, \
+                required_imports: BTreeSet::from(["from __future__ import annotations, \
                                                        generator_stop"
                     .to_string()]),
                 ..super::settings::Settings::default()
@@ -872,7 +871,7 @@ mod tests {
         &LinterSettings {
             src: vec![test_resource_path("fixtures/isort")],
             isort: super::settings::Settings {
-                required_imports: BTreeSet::from_iter(["import os".to_string()]),
+                required_imports: BTreeSet::from(["import os".to_string()]),
                 ..super::settings::Settings::default()
             },
             ..LinterSettings::for_rule(Rule::MissingRequiredImport)
@@ -1003,7 +1002,7 @@ mod tests {
         Path::new("isort").join(path).as_path(),
         &LinterSettings {
             isort: super::settings::Settings {
-                no_lines_before: FxHashSet::from_iter([
+                no_lines_before: BTreeSet::from([
                     ImportSection::Known(ImportType::Future),
                     ImportSection::Known(ImportType::StandardLibrary),
                     ImportSection::Known(ImportType::ThirdParty),
@@ -1031,7 +1030,7 @@ mod tests {
         Path::new("isort").join(path).as_path(),
         &LinterSettings {
             isort: super::settings::Settings {
-                no_lines_before: FxHashSet::from_iter([
+                no_lines_before: BTreeSet::from([
                     ImportSection::Known(ImportType::StandardLibrary),
                     ImportSection::Known(ImportType::LocalFolder),
                 ]),
@@ -5,13 +5,12 @@ use std::error::Error;
 use std::fmt;
 use std::fmt::{Display, Formatter};

-use rustc_hash::FxHashSet;
 use serde::{Deserialize, Serialize};
 use strum::IntoEnumIterator;

-use crate::display_settings;
 use ruff_macros::CacheKey;

+use crate::display_settings;
 use crate::rules::isort::categorize::KnownModules;
 use crate::rules::isort::ImportType;
@@ -53,17 +52,17 @@ pub struct Settings {
     pub force_sort_within_sections: bool,
     pub case_sensitive: bool,
     pub force_wrap_aliases: bool,
-    pub force_to_top: FxHashSet<String>,
+    pub force_to_top: BTreeSet<String>,
     pub known_modules: KnownModules,
     pub detect_same_package: bool,
     pub order_by_type: bool,
     pub relative_imports_order: RelativeImportsOrder,
-    pub single_line_exclusions: FxHashSet<String>,
+    pub single_line_exclusions: BTreeSet<String>,
     pub split_on_trailing_comma: bool,
-    pub classes: FxHashSet<String>,
-    pub constants: FxHashSet<String>,
-    pub variables: FxHashSet<String>,
-    pub no_lines_before: FxHashSet<ImportSection>,
+    pub classes: BTreeSet<String>,
+    pub constants: BTreeSet<String>,
+    pub variables: BTreeSet<String>,
+    pub no_lines_before: BTreeSet<ImportSection>,
     pub lines_after_imports: isize,
     pub lines_between_types: usize,
     pub forced_separate: Vec<String>,
@@ -78,23 +77,23 @@ pub struct Settings {
 impl Default for Settings {
     fn default() -> Self {
         Self {
-            required_imports: BTreeSet::default(),
+            required_imports: BTreeSet::new(),
             combine_as_imports: false,
             force_single_line: false,
             force_sort_within_sections: false,
             detect_same_package: true,
             case_sensitive: false,
             force_wrap_aliases: false,
-            force_to_top: FxHashSet::default(),
+            force_to_top: BTreeSet::new(),
             known_modules: KnownModules::default(),
             order_by_type: true,
             relative_imports_order: RelativeImportsOrder::default(),
-            single_line_exclusions: FxHashSet::default(),
+            single_line_exclusions: BTreeSet::new(),
             split_on_trailing_comma: true,
-            classes: FxHashSet::default(),
-            constants: FxHashSet::default(),
-            variables: FxHashSet::default(),
-            no_lines_before: FxHashSet::default(),
+            classes: BTreeSet::new(),
+            constants: BTreeSet::new(),
+            variables: BTreeSet::new(),
+            no_lines_before: BTreeSet::new(),
             lines_after_imports: -1,
             lines_between_types: 0,
             forced_separate: Vec::new(),
@@ -114,23 +113,23 @@ impl Display for Settings {
             formatter = f,
             namespace = "linter.isort",
             fields = [
-                self.required_imports | set,
+                self.required_imports | array,
                 self.combine_as_imports,
                 self.force_single_line,
                 self.force_sort_within_sections,
                 self.detect_same_package,
                 self.case_sensitive,
                 self.force_wrap_aliases,
-                self.force_to_top | set,
+                self.force_to_top | array,
                 self.known_modules,
                 self.order_by_type,
                 self.relative_imports_order,
-                self.single_line_exclusions | set,
+                self.single_line_exclusions | array,
                 self.split_on_trailing_comma,
-                self.classes | set,
-                self.constants | set,
-                self.variables | set,
-                self.no_lines_before | set,
+                self.classes | array,
+                self.constants | array,
+                self.variables | array,
+                self.no_lines_before | array,
                 self.lines_after_imports,
                 self.lines_between_types,
                 self.forced_separate | array,
@@ -156,7 +155,7 @@ pub enum SettingsError {
     InvalidUserDefinedSection(glob::PatternError),
 }

-impl Display for SettingsError {
+impl fmt::Display for SettingsError {
     fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
         match self {
             SettingsError::InvalidKnownThirdParty(err) => {
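A note on the `FxHashSet` to `BTreeSet` swap running through the isort settings above: a hash set iterates in an unspecified order, while a B-tree set iterates in sorted key order, which makes anything derived from iteration (display output, cache keys) deterministic. Assuming that is the motivation (the diff itself does not state it), the same distinction can be sketched in Python:

```python
names = {"z", "lib1", "lib3"}

# Hash-based set iteration order is an implementation detail (with
# string hash randomization it can vary between runs), so output
# derived from it is not reproducible across runs.
# Sorting first -- which is what a BTreeSet does implicitly -- is stable:
assert sorted(names) == ["lib1", "lib3", "z"]
assert ",".join(sorted(names)) == "lib1,lib3,z"
```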
@@ -14,5 +14,3 @@ bom_unsorted.py:1:1: I001 [*] Import block is un-sorted or un-formatted
 2    |-import bar
    1 |+import bar
    2 |+import foo
-
-
@@ -71,12 +71,6 @@ mod tests {
     #[test_case(Rule::IsLiteral, Path::new("constant_literals.py"))]
     #[test_case(Rule::TypeComparison, Path::new("E721.py"))]
     #[test_case(Rule::ModuleImportNotAtTopOfFile, Path::new("E402_2.py"))]
-    #[test_case(Rule::RedundantBackslash, Path::new("E502.py"))]
-    #[test_case(Rule::TooManyNewlinesAtEndOfFile, Path::new("W391_0.py"))]
-    #[test_case(Rule::TooManyNewlinesAtEndOfFile, Path::new("W391_1.py"))]
-    #[test_case(Rule::TooManyNewlinesAtEndOfFile, Path::new("W391_2.py"))]
-    #[test_case(Rule::TooManyNewlinesAtEndOfFile, Path::new("W391_3.py"))]
-    #[test_case(Rule::TooManyNewlinesAtEndOfFile, Path::new("W391_4.py"))]
     fn preview_rules(rule_code: Rule, path: &Path) -> Result<()> {
         let snapshot = format!(
             "preview__{}_{}",
@@ -154,23 +148,6 @@ mod tests {
         Ok(())
     }

-    /// Tests the compatibility of E2 rules (E202, E225 and E275) on syntactically incorrect code.
-    #[test]
-    fn white_space_syntax_error_compatibility() -> Result<()> {
-        let diagnostics = test_path(
-            Path::new("pycodestyle").join("E2_syntax_error.py"),
-            &settings::LinterSettings {
-                ..settings::LinterSettings::for_rules([
-                    Rule::MissingWhitespaceAroundOperator,
-                    Rule::MissingWhitespaceAfterKeyword,
-                    Rule::WhitespaceBeforeCloseBracket,
-                ])
-            },
-        )?;
-        assert_messages!(diagnostics);
-        Ok(())
-    }
-
     #[test_case(Rule::BlankLineBetweenMethods, Path::new("E30.py"))]
     #[test_case(Rule::BlankLinesTopLevel, Path::new("E30.py"))]
     #[test_case(Rule::TooManyBlankLines, Path::new("E30.py"))]
@@ -81,7 +81,7 @@ pub(crate) fn syntax_error(
|
||||
parse_error: &ParseError,
|
||||
locator: &Locator,
|
||||
) {
|
||||
let rest = locator.after(parse_error.offset);
|
||||
let rest = locator.after(parse_error.location.start());
|
||||
|
||||
// Try to create a non-empty range so that the diagnostic can print a caret at the
|
||||
// right position. This requires that we retrieve the next character, if any, and take its length
|
||||
@@ -95,6 +95,6 @@ pub(crate) fn syntax_error(
|
||||
SyntaxError {
|
||||
message: format!("{}", DisplayParseErrorType::new(&parse_error.error)),
|
||||
},
|
||||
TextRange::at(parse_error.offset, len),
|
||||
TextRange::at(parse_error.location.start(), len),
|
||||
));
|
||||
}
|
||||
|
||||
@@ -3,7 +3,7 @@ use memchr::memchr_iter;
use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_index::Indexer;
use ruff_python_parser::Tok;
use ruff_python_parser::{StringKind, Tok};
use ruff_source_file::Locator;
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};

@@ -66,21 +66,21 @@ pub(crate) fn invalid_escape_sequence(
    token: &Tok,
    token_range: TextRange,
) {
    let (token_source_code, string_start_location, kind) = match token {
        Tok::FStringMiddle { value, kind } => {
            if kind.is_raw_string() {
    let (token_source_code, string_start_location) = match token {
        Tok::FStringMiddle { value, is_raw, .. } => {
            if *is_raw {
                return;
            }
            let Some(range) = indexer.fstring_ranges().innermost(token_range.start()) else {
                return;
            };
            (&**value, range.start(), kind)
            (&**value, range.start())
        }
        Tok::String { kind, .. } => {
            if kind.is_raw_string() {
            if kind.is_raw() {
                return;
            }
            (locator.slice(token_range), token_range.start(), kind)
            (locator.slice(token_range), token_range.start())
        }
        _ => return,
    };
@@ -207,7 +207,13 @@ pub(crate) fn invalid_escape_sequence(
        invalid_escape_char.range(),
    );

    if kind.is_u_string() {
    if matches!(
        token,
        Tok::String {
            kind: StringKind::Unicode,
            ..
        }
    ) {
        // Replace the Unicode prefix with `r`.
        diagnostic.set_fix(Fix::safe_edit(Edit::replacement(
            "r".to_string(),

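The hunk above reworks how `invalid_escape_sequence` (W605) skips raw strings before scanning for unrecognized escapes. As a loose string-level sketch of the idea behind the check (an approximation for illustration only; Ruff works on lexer tokens, handles prefixes, f-strings, and fix generation, and `RECOGNIZED` below is a simplified assumption about Python's escape table):

```python
import re

# Escape characters that Python string literals recognize; a backslash
# followed by anything else is an "invalid escape sequence" (W605).
RECOGNIZED = "\\'\"abfnrtv01234567xNuU\n"


def invalid_escape_offsets(text: str) -> list[int]:
    """Return offsets of backslashes that start an unrecognized escape."""
    return [
        m.start()
        for m in re.finditer(r"\\(.)", text, flags=re.DOTALL)
        if m.group(1) not in RECOGNIZED
    ]


print(invalid_escape_offsets(r"regex: \d+ and newline: \n"))  # [7]
```

A raw string never reaches this scan in the real rule, which is exactly what the `is_raw` / `is_raw_string` checks in the diff guard.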
@@ -9,7 +9,6 @@ use ruff_text_size::Ranged;

use crate::checkers::ast::Checker;
use crate::codes::Rule;
use crate::fix::snippet::SourceCodeSnippet;

#[derive(Debug, PartialEq, Eq, Copy, Clone)]
enum EqCmpOp {
@@ -103,52 +102,39 @@ impl AlwaysFixableViolation for NoneComparison {
///
/// [PEP 8]: https://peps.python.org/pep-0008/#programming-recommendations
#[violation]
pub struct TrueFalseComparison {
    value: bool,
    op: EqCmpOp,
    cond: Option<SourceCodeSnippet>,
}
pub struct TrueFalseComparison(bool, EqCmpOp);

impl AlwaysFixableViolation for TrueFalseComparison {
    #[derive_message_formats]
    fn message(&self) -> String {
        let TrueFalseComparison { value, op, cond } = self;
        let Some(cond) = cond else {
            return "Avoid equality comparisons to `True` or `False`".to_string();
        };
        let cond = cond.truncated_display();
        let TrueFalseComparison(value, op) = self;
        match (value, op) {
            (true, EqCmpOp::Eq) => {
                format!("Avoid equality comparisons to `True`; use `if {cond}:` for truth checks")
                format!("Avoid equality comparisons to `True`; use `if cond:` for truth checks")
            }
            (true, EqCmpOp::NotEq) => {
                format!(
                    "Avoid inequality comparisons to `True`; use `if not {cond}:` for false checks"
                    "Avoid inequality comparisons to `True`; use `if not cond:` for false checks"
                )
            }
            (false, EqCmpOp::Eq) => {
                format!(
                    "Avoid equality comparisons to `False`; use `if not {cond}:` for false checks"
                    "Avoid equality comparisons to `False`; use `if not cond:` for false checks"
                )
            }
            (false, EqCmpOp::NotEq) => {
                format!(
                    "Avoid inequality comparisons to `False`; use `if {cond}:` for truth checks"
                )
                format!("Avoid inequality comparisons to `False`; use `if cond:` for truth checks")
            }
        }
    }

    fn fix_title(&self) -> String {
        let TrueFalseComparison { value, op, cond } = self;
        let Some(cond) = cond.as_ref().and_then(|cond| cond.full_display()) else {
            return "Replace comparison".to_string();
        };
        let TrueFalseComparison(value, op) = self;
        match (value, op) {
            (true, EqCmpOp::Eq) => format!("Replace with `{cond}`"),
            (true, EqCmpOp::NotEq) => format!("Replace with `not {cond}`"),
            (false, EqCmpOp::Eq) => format!("Replace with `not {cond}`"),
            (false, EqCmpOp::NotEq) => format!("Replace with `{cond}`"),
            (true, EqCmpOp::Eq) => "Replace with `cond`".to_string(),
            (true, EqCmpOp::NotEq) => "Replace with `not cond`".to_string(),
            (false, EqCmpOp::Eq) => "Replace with `not cond`".to_string(),
            (false, EqCmpOp::NotEq) => "Replace with `cond`".to_string(),
        }
    }
}
@@ -192,35 +178,17 @@ pub(crate) fn literal_comparisons(checker: &mut Checker, compare: &ast::ExprComp
    if let Expr::BooleanLiteral(ast::ExprBooleanLiteral { value, .. }) = comparator {
        match op {
            EqCmpOp::Eq => {
                let cond = if compare.ops.len() == 1 {
                    Some(SourceCodeSnippet::from_str(checker.locator().slice(next)))
                } else {
                    None
                };
                let diagnostic = Diagnostic::new(
                    TrueFalseComparison {
                        value: *value,
                        op,
                        cond,
                    },
                    compare.range(),
                    TrueFalseComparison(*value, op),
                    comparator.range(),
                );
                bad_ops.insert(0, CmpOp::Is);
                diagnostics.push(diagnostic);
            }
            EqCmpOp::NotEq => {
                let cond = if compare.ops.len() == 1 {
                    Some(SourceCodeSnippet::from_str(checker.locator().slice(next)))
                } else {
                    None
                };
                let diagnostic = Diagnostic::new(
                    TrueFalseComparison {
                        value: *value,
                        op,
                        cond,
                    },
                    compare.range(),
                    TrueFalseComparison(*value, op),
                    comparator.range(),
                );
                bad_ops.insert(0, CmpOp::IsNot);
                diagnostics.push(diagnostic);
@@ -263,40 +231,14 @@ pub(crate) fn literal_comparisons(checker: &mut Checker, compare: &ast::ExprComp
    if let Expr::BooleanLiteral(ast::ExprBooleanLiteral { value, .. }) = next {
        match op {
            EqCmpOp::Eq => {
                let cond = if compare.ops.len() == 1 {
                    Some(SourceCodeSnippet::from_str(
                        checker.locator().slice(comparator),
                    ))
                } else {
                    None
                };
                let diagnostic = Diagnostic::new(
                    TrueFalseComparison {
                        value: *value,
                        op,
                        cond,
                    },
                    compare.range(),
                );
                let diagnostic =
                    Diagnostic::new(TrueFalseComparison(*value, op), next.range());
                bad_ops.insert(index, CmpOp::Is);
                diagnostics.push(diagnostic);
            }
            EqCmpOp::NotEq => {
                let cond = if compare.ops.len() == 1 {
                    Some(SourceCodeSnippet::from_str(
                        checker.locator().slice(comparator),
                    ))
                } else {
                    None
                };
                let diagnostic = Diagnostic::new(
                    TrueFalseComparison {
                        value: *value,
                        op,
                        cond,
                    },
                    compare.range(),
                );
                let diagnostic =
                    Diagnostic::new(TrueFalseComparison(*value, op), next.range());
                bad_ops.insert(index, CmpOp::IsNot);
                diagnostics.push(diagnostic);
            }

@@ -137,16 +137,16 @@ pub(crate) fn extraneous_whitespace(line: &LogicalLine, context: &mut LogicalLin
        match kind {
            TokenKind::FStringStart => fstrings += 1,
            TokenKind::FStringEnd => fstrings = fstrings.saturating_sub(1),
            TokenKind::Lsqb => {
            TokenKind::Lsqb if fstrings == 0 => {
                brackets.push(kind);
            }
            TokenKind::Rsqb => {
            TokenKind::Rsqb if fstrings == 0 => {
                brackets.pop();
            }
            TokenKind::Lbrace => {
            TokenKind::Lbrace if fstrings == 0 => {
                brackets.push(kind);
            }
            TokenKind::Rbrace => {
            TokenKind::Rbrace if fstrings == 0 => {
                brackets.pop();
            }
            _ => {}

@@ -59,13 +59,7 @@ pub(crate) fn missing_whitespace_after_keyword(
        || tok0_kind == TokenKind::Yield && tok1_kind == TokenKind::Rpar
        || matches!(
            tok1_kind,
            TokenKind::Colon
                | TokenKind::Newline
                | TokenKind::NonLogicalNewline
                // In the event of a syntax error, do not attempt to add a whitespace.
                | TokenKind::Rpar
                | TokenKind::Rsqb
                | TokenKind::Rbrace
            TokenKind::Colon | TokenKind::Newline | TokenKind::NonLogicalNewline
        ))
        && tok0.end() == tok1.start()
    {

@@ -211,21 +211,6 @@ pub(crate) fn missing_whitespace_around_operator(
        } else {
            NeedsSpace::No
        }
    } else if tokens.peek().is_some_and(|token| {
        matches!(
            token.kind(),
            TokenKind::Rpar | TokenKind::Rsqb | TokenKind::Rbrace
        )
    }) {
        // There should not be a closing bracket directly after a token, as it is a syntax
        // error. For example:
        // ```
        // 1+)
        // ```
        //
        // However, allow it in order to prevent entering an infinite loop in which E225 adds a
        // space only for E202 to remove it.
        NeedsSpace::No
    } else if is_whitespace_needed(kind) {
        NeedsSpace::Yes
    } else {

@@ -3,7 +3,6 @@ pub(crate) use indentation::*;
pub(crate) use missing_whitespace::*;
pub(crate) use missing_whitespace_after_keyword::*;
pub(crate) use missing_whitespace_around_operator::*;
pub(crate) use redundant_backslash::*;
pub(crate) use space_around_operator::*;
pub(crate) use whitespace_around_keywords::*;
pub(crate) use whitespace_around_named_parameter_equals::*;
@@ -26,7 +25,6 @@ mod indentation;
mod missing_whitespace;
mod missing_whitespace_after_keyword;
mod missing_whitespace_around_operator;
mod redundant_backslash;
mod space_around_operator;
mod whitespace_around_keywords;
mod whitespace_around_named_parameter_equals;

@@ -1,92 +0,0 @@
use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_index::Indexer;
use ruff_python_parser::TokenKind;
use ruff_source_file::Locator;
use ruff_text_size::{Ranged, TextRange, TextSize};

use crate::checkers::logical_lines::LogicalLinesContext;

use super::LogicalLine;

/// ## What it does
/// Checks for redundant backslashes between brackets.
///
/// ## Why is this bad?
/// Explicit line joins using a backslash are redundant between brackets.
///
/// ## Example
/// ```python
/// x = (2 + \
///     2)
/// ```
///
/// Use instead:
/// ```python
/// x = (2 +
///     2)
/// ```
///
/// [PEP 8]: https://peps.python.org/pep-0008/#maximum-line-length
#[violation]
pub struct RedundantBackslash;

impl AlwaysFixableViolation for RedundantBackslash {
    #[derive_message_formats]
    fn message(&self) -> String {
        format!("Redundant backslash")
    }

    fn fix_title(&self) -> String {
        "Remove redundant backslash".to_string()
    }
}

/// E502
pub(crate) fn redundant_backslash(
    line: &LogicalLine,
    locator: &Locator,
    indexer: &Indexer,
    context: &mut LogicalLinesContext,
) {
    let mut parens = 0;
    let continuation_lines = indexer.continuation_line_starts();
    let mut start_index = 0;

    for token in line.tokens() {
        match token.kind() {
            TokenKind::Lpar | TokenKind::Lsqb | TokenKind::Lbrace => {
                if parens == 0 {
                    let start = locator.line_start(token.start());
                    start_index = continuation_lines
                        .binary_search(&start)
                        .map_or_else(|err_index| err_index, |ok_index| ok_index);
                }
                parens += 1;
            }
            TokenKind::Rpar | TokenKind::Rsqb | TokenKind::Rbrace => {
                parens -= 1;
                if parens == 0 {
                    let end = locator.line_start(token.start());
                    let end_index = continuation_lines
                        .binary_search(&end)
                        .map_or_else(|err_index| err_index, |ok_index| ok_index);
                    for continuation_line in &continuation_lines[start_index..end_index] {
                        let backslash_end = locator.line_end(*continuation_line);
                        let backslash_start = backslash_end - TextSize::new(1);
                        let mut diagnostic = Diagnostic::new(
                            RedundantBackslash,
                            TextRange::new(backslash_start, backslash_end),
                        );
                        diagnostic.set_fix(Fix::safe_edit(Edit::deletion(
                            backslash_start,
                            backslash_end,
                        )));
                        context.push_diagnostic(diagnostic);
                    }
                }
            }
            _ => continue,
        }
    }
}
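The `redundant_backslash` rule above (E502) flags explicit line joins inside brackets. A minimal plain-Python illustration of why the backslash is redundant there, independent of Ruff:

```python
# Inside (), [] or {}, Python continues lines implicitly, so a trailing
# backslash adds nothing; E502 flags it and the fix deletes it.
with_backslash = (3 - \
                  2 + 7)
without_backslash = (3 -
                     2 + 7)

assert with_backslash == without_backslash == 8
print(with_backslash)  # 8
```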
@@ -42,7 +42,7 @@ pub(crate) fn no_newline_at_end_of_file(
) -> Option<Diagnostic> {
    let source = locator.contents();

    // Ignore empty and BOM only files.
    // Ignore empty and BOM only files
    if source.is_empty() || source == "\u{feff}" {
        return None;
    }

@@ -17,7 +17,6 @@ pub(crate) use module_import_not_at_top_of_file::*;
pub(crate) use multiple_imports_on_one_line::*;
pub(crate) use not_tests::*;
pub(crate) use tab_indentation::*;
pub(crate) use too_many_newlines_at_end_of_file::*;
pub(crate) use trailing_whitespace::*;
pub(crate) use type_comparison::*;

@@ -40,6 +39,5 @@ mod module_import_not_at_top_of_file;
mod multiple_imports_on_one_line;
mod not_tests;
mod tab_indentation;
mod too_many_newlines_at_end_of_file;
mod trailing_whitespace;
mod type_comparison;

@@ -1,99 +0,0 @@
use ruff_diagnostics::{AlwaysFixableViolation, Diagnostic, Edit, Fix};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_parser::lexer::LexResult;
use ruff_python_parser::Tok;
use ruff_text_size::{TextRange, TextSize};

/// ## What it does
/// Checks for files with multiple trailing blank lines.
///
/// ## Why is this bad?
/// Trailing blank lines in a file are superfluous.
///
/// However, the last line of the file should end with a newline.
///
/// ## Example
/// ```python
/// spam(1)\n\n\n
/// ```
///
/// Use instead:
/// ```python
/// spam(1)\n
/// ```
#[violation]
pub struct TooManyNewlinesAtEndOfFile {
    num_trailing_newlines: u32,
}

impl AlwaysFixableViolation for TooManyNewlinesAtEndOfFile {
    #[derive_message_formats]
    fn message(&self) -> String {
        let TooManyNewlinesAtEndOfFile {
            num_trailing_newlines,
        } = self;

        // We expect a single trailing newline; so two trailing newlines is one too many, three
        // trailing newlines is two too many, etc.
        if *num_trailing_newlines > 2 {
            format!("Too many newlines at end of file")
        } else {
            format!("Extra newline at end of file")
        }
    }

    fn fix_title(&self) -> String {
        let TooManyNewlinesAtEndOfFile {
            num_trailing_newlines,
        } = self;
        if *num_trailing_newlines > 2 {
            "Remove trailing newlines".to_string()
        } else {
            "Remove trailing newline".to_string()
        }
    }
}

/// W391
pub(crate) fn too_many_newlines_at_end_of_file(
    diagnostics: &mut Vec<Diagnostic>,
    lxr: &[LexResult],
) {
    let mut num_trailing_newlines = 0u32;
    let mut start: Option<TextSize> = None;
    let mut end: Option<TextSize> = None;

    // Count the number of trailing newlines.
    for (tok, range) in lxr.iter().rev().flatten() {
        match tok {
            Tok::NonLogicalNewline | Tok::Newline => {
                if num_trailing_newlines == 0 {
                    end = Some(range.end());
                }
                start = Some(range.end());
                num_trailing_newlines += 1;
            }
            Tok::Dedent => continue,
            _ => {
                break;
            }
        }
    }

    if num_trailing_newlines == 0 || num_trailing_newlines == 1 {
        return;
    }

    let range = match (start, end) {
        (Some(start), Some(end)) => TextRange::new(start, end),
        _ => return,
    };
    let mut diagnostic = Diagnostic::new(
        TooManyNewlinesAtEndOfFile {
            num_trailing_newlines,
        },
        range,
    );
    diagnostic.set_fix(Fix::safe_edit(Edit::range_deletion(range)));
    diagnostics.push(diagnostic);
}
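The W391 implementation above counts trailing newline tokens by walking the token stream in reverse. A loose string-level sketch of the same counting (an approximation for illustration; the rule itself works on lexer tokens, not raw text, so it can skip `Dedent` tokens and compute fix ranges):

```python
def count_trailing_newlines(source: str) -> int:
    # Count consecutive newline characters at the end of the file; a count
    # greater than one means W391 fires (one trailing newline is expected).
    count = 0
    for ch in reversed(source):
        if ch == "\n":
            count += 1
        else:
            break
    return count


print(count_trailing_newlines("spam(1)\n\n\n"))  # 3
```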
@@ -251,8 +251,6 @@ E20.py:155:14: E203 [*] Whitespace before ':'
154 | #: E203:1:13
155 | ham[lower + 1 :, "columnname"]
| ^^ E203
156 |
157 | #: Okay
|
= help: Remove whitespace before ':'

@@ -262,21 +260,3 @@ E20.py:155:14: E203 [*] Whitespace before ':'
154 154 | #: E203:1:13
155 |-ham[lower + 1 :, "columnname"]
155 |+ham[lower + 1:, "columnname"]
156 156 |
157 157 | #: Okay
158 158 | f"{ham[lower +1 :, "columnname"]}"

E20.py:161:17: E203 [*] Whitespace before ':'
|
160 | #: E203:1:13
161 | f"{ham[lower + 1 :, "columnname"]}"
| ^^ E203
|
= help: Remove whitespace before ':'

ℹ Safe fix
158 158 | f"{ham[lower +1 :, "columnname"]}"
159 159 |
160 160 | #: E203:1:13
161 |-f"{ham[lower + 1 :, "columnname"]}"
161 |+f"{ham[lower + 1:, "columnname"]}"

@@ -1,15 +1,15 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
---
E712.py:2:4: E712 [*] Avoid equality comparisons to `True`; use `if res:` for truth checks
E712.py:2:11: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for truth checks
|
1 | #: E712
2 | if res == True:
| ^^^^^^^^^^^ E712
| ^^^^ E712
3 | pass
4 | #: E712
|
= help: Replace with `res`
= help: Replace with `cond`

ℹ Unsafe fix
1 1 | #: E712
@@ -19,16 +19,16 @@ E712.py:2:4: E712 [*] Avoid equality comparisons to `True`; use `if res:` for tr
4 4 | #: E712
5 5 | if res != False:

E712.py:5:4: E712 [*] Avoid inequality comparisons to `False`; use `if res:` for truth checks
E712.py:5:11: E712 [*] Avoid inequality comparisons to `False`; use `if cond:` for truth checks
|
3 | pass
4 | #: E712
5 | if res != False:
| ^^^^^^^^^^^^ E712
| ^^^^^ E712
6 | pass
7 | #: E712
|
= help: Replace with `res`
= help: Replace with `cond`

ℹ Unsafe fix
2 2 | if res == True:
@@ -40,16 +40,16 @@ E712.py:5:4: E712 [*] Avoid inequality comparisons to `False`; use `if res:` for
7 7 | #: E712
8 8 | if True != res:

E712.py:8:4: E712 [*] Avoid inequality comparisons to `True`; use `if not res:` for false checks
E712.py:8:4: E712 [*] Avoid inequality comparisons to `True`; use `if not cond:` for false checks
|
6 | pass
7 | #: E712
8 | if True != res:
| ^^^^^^^^^^^ E712
| ^^^^ E712
9 | pass
10 | #: E712
|
= help: Replace with `not res`
= help: Replace with `not cond`

ℹ Unsafe fix
5 5 | if res != False:
@@ -61,16 +61,16 @@ E712.py:8:4: E712 [*] Avoid inequality comparisons to `True`; use `if not res:`
10 10 | #: E712
11 11 | if False == res:

E712.py:11:4: E712 [*] Avoid equality comparisons to `False`; use `if not res:` for false checks
E712.py:11:4: E712 [*] Avoid equality comparisons to `False`; use `if not cond:` for false checks
|
9 | pass
10 | #: E712
11 | if False == res:
| ^^^^^^^^^^^^ E712
| ^^^^^ E712
12 | pass
13 | #: E712
|
= help: Replace with `not res`
= help: Replace with `not cond`

ℹ Unsafe fix
8 8 | if True != res:
@@ -82,16 +82,16 @@ E712.py:11:4: E712 [*] Avoid equality comparisons to `False`; use `if not res:`
13 13 | #: E712
14 14 | if res[1] == True:

E712.py:14:4: E712 [*] Avoid equality comparisons to `True`; use `if res[1]:` for truth checks
E712.py:14:14: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for truth checks
|
12 | pass
13 | #: E712
14 | if res[1] == True:
| ^^^^^^^^^^^^^^ E712
| ^^^^ E712
15 | pass
16 | #: E712
|
= help: Replace with `res[1]`
= help: Replace with `cond`

ℹ Unsafe fix
11 11 | if False == res:
@@ -103,16 +103,16 @@ E712.py:14:4: E712 [*] Avoid equality comparisons to `True`; use `if res[1]:` fo
16 16 | #: E712
17 17 | if res[1] != False:

E712.py:17:4: E712 [*] Avoid inequality comparisons to `False`; use `if res[1]:` for truth checks
E712.py:17:14: E712 [*] Avoid inequality comparisons to `False`; use `if cond:` for truth checks
|
15 | pass
16 | #: E712
17 | if res[1] != False:
| ^^^^^^^^^^^^^^^ E712
| ^^^^^ E712
18 | pass
19 | #: E712
|
= help: Replace with `res[1]`
= help: Replace with `cond`

ℹ Unsafe fix
14 14 | if res[1] == True:
@@ -124,12 +124,12 @@ E712.py:17:4: E712 [*] Avoid inequality comparisons to `False`; use `if res[1]:`
19 19 | #: E712
20 20 | var = 1 if cond == True else -1 if cond == False else cond

E712.py:20:12: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for truth checks
E712.py:20:20: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for truth checks
|
18 | pass
19 | #: E712
20 | var = 1 if cond == True else -1 if cond == False else cond
| ^^^^^^^^^^^^ E712
| ^^^^ E712
21 | #: E712
22 | if (True) == TrueElement or x == TrueElement:
|
@@ -145,12 +145,12 @@ E712.py:20:12: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for
22 22 | if (True) == TrueElement or x == TrueElement:
23 23 | pass

E712.py:20:36: E712 [*] Avoid equality comparisons to `False`; use `if not cond:` for false checks
E712.py:20:44: E712 [*] Avoid equality comparisons to `False`; use `if not cond:` for false checks
|
18 | pass
19 | #: E712
20 | var = 1 if cond == True else -1 if cond == False else cond
| ^^^^^^^^^^^^^ E712
| ^^^^^ E712
21 | #: E712
22 | if (True) == TrueElement or x == TrueElement:
|
@@ -166,15 +166,15 @@ E712.py:20:36: E712 [*] Avoid equality comparisons to `False`; use `if not cond:
22 22 | if (True) == TrueElement or x == TrueElement:
23 23 | pass

E712.py:22:4: E712 [*] Avoid equality comparisons to `True`; use `if TrueElement:` for truth checks
E712.py:22:5: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for truth checks
|
20 | var = 1 if cond == True else -1 if cond == False else cond
21 | #: E712
22 | if (True) == TrueElement or x == TrueElement:
| ^^^^^^^^^^^^^^^^^^^^^ E712
| ^^^^ E712
23 | pass
|
= help: Replace with `TrueElement`
= help: Replace with `cond`

ℹ Unsafe fix
19 19 | #: E712
@@ -186,15 +186,15 @@ E712.py:22:4: E712 [*] Avoid equality comparisons to `True`; use `if TrueElement
24 24 |
25 25 | if res == True != False:

E712.py:25:4: E712 [*] Avoid equality comparisons to `True` or `False`
E712.py:25:11: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for truth checks
|
23 | pass
24 |
25 | if res == True != False:
| ^^^^^^^^^^^^^^^^^^^^ E712
| ^^^^ E712
26 | pass
|
= help: Replace comparison
= help: Replace with `cond`

ℹ Unsafe fix
22 22 | if (True) == TrueElement or x == TrueElement:
@@ -206,15 +206,15 @@ E712.py:25:4: E712 [*] Avoid equality comparisons to `True` or `False`
27 27 |
28 28 | if(True) == TrueElement or x == TrueElement:

E712.py:25:4: E712 [*] Avoid equality comparisons to `True` or `False`
E712.py:25:19: E712 [*] Avoid inequality comparisons to `False`; use `if cond:` for truth checks
|
23 | pass
24 |
25 | if res == True != False:
| ^^^^^^^^^^^^^^^^^^^^ E712
| ^^^^^ E712
26 | pass
|
= help: Replace comparison
= help: Replace with `cond`

ℹ Unsafe fix
22 22 | if (True) == TrueElement or x == TrueElement:
@@ -226,15 +226,15 @@ E712.py:25:4: E712 [*] Avoid equality comparisons to `True` or `False`
27 27 |
28 28 | if(True) == TrueElement or x == TrueElement:

E712.py:28:3: E712 [*] Avoid equality comparisons to `True`; use `if TrueElement:` for truth checks
E712.py:28:4: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for truth checks
|
26 | pass
27 |
28 | if(True) == TrueElement or x == TrueElement:
| ^^^^^^^^^^^^^^^^^^^^^ E712
| ^^^^ E712
29 | pass
|
= help: Replace with `TrueElement`
= help: Replace with `cond`

ℹ Unsafe fix
25 25 | if res == True != False:
@@ -246,15 +246,15 @@ E712.py:28:3: E712 [*] Avoid equality comparisons to `True`; use `if TrueElement
30 30 |
31 31 | if (yield i) == True:

E712.py:31:4: E712 [*] Avoid equality comparisons to `True`; use `if yield i:` for truth checks
E712.py:31:17: E712 [*] Avoid equality comparisons to `True`; use `if cond:` for truth checks
|
29 | pass
30 |
31 | if (yield i) == True:
| ^^^^^^^^^^^^^^^^^ E712
| ^^^^ E712
32 | print("even")
|
= help: Replace with `yield i`
= help: Replace with `cond`

ℹ Unsafe fix
28 28 | if(True) == TrueElement or x == TrueElement:
@@ -265,3 +265,5 @@ E712.py:31:4: E712 [*] Avoid equality comparisons to `True`; use `if yield i:` f
32 32 | print("even")
33 33 |
34 34 | #: Okay

@@ -8,5 +8,3 @@ E999.py:3:1: E999 SyntaxError: unindent does not match any outer indentation lev
| ^ E999
4 |
|

@@ -106,16 +106,16 @@ constant_literals.py:12:4: F632 [*] Use `==` to compare constant literals
14 14 | if False == None: # E711, E712 (fix)
15 15 | pass

constant_literals.py:14:4: E712 [*] Avoid equality comparisons to `False`; use `if not None:` for false checks
constant_literals.py:14:4: E712 [*] Avoid equality comparisons to `False`; use `if not cond:` for false checks
|
12 | if False is "abc": # F632 (fix, but leaves behind unfixable E712)
13 | pass
14 | if False == None: # E711, E712 (fix)
| ^^^^^^^^^^^^^ E712
| ^^^^^ E712
15 | pass
16 | if None == False: # E711, E712 (fix)
|
= help: Replace with `not None`
= help: Replace with `not cond`

ℹ Unsafe fix
11 11 | pass
@@ -168,15 +168,15 @@ constant_literals.py:16:4: E711 [*] Comparison to `None` should be `cond is None
18 18 |
19 19 | named_var = []

constant_literals.py:16:4: E712 [*] Avoid equality comparisons to `False`; use `if not None:` for false checks
constant_literals.py:16:12: E712 [*] Avoid equality comparisons to `False`; use `if not cond:` for false checks
|
14 | if False == None: # E711, E712 (fix)
15 | pass
16 | if None == False: # E711, E712 (fix)
| ^^^^^^^^^^^^^ E712
| ^^^^^ E712
17 | pass
|
= help: Replace with `not None`
= help: Replace with `not cond`

ℹ Unsafe fix
13 13 | pass
@@ -187,3 +187,5 @@ constant_literals.py:16:4: E712 [*] Avoid equality comparisons to `False`; use `
17 17 | pass
18 18 |
19 19 | named_var = []

@@ -1,281 +0,0 @@
|
||||
---
|
||||
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
|
||||
---
|
||||
E502.py:9:9: E502 [*] Redundant backslash
|
||||
|
|
||||
7 | + 4
|
||||
8 |
|
||||
9 | a = (3 -\
|
||||
| ^ E502
|
||||
10 | 2 + \
|
||||
11 | 7)
|
||||
|
|
||||
= help: Remove redundant backslash
|
||||
|
||||
ℹ Safe fix
|
||||
6 6 | 3 \
|
||||
7 7 | + 4
|
||||
8 8 |
|
||||
9 |-a = (3 -\
|
||||
9 |+a = (3 -
|
||||
10 10 | 2 + \
|
||||
11 11 | 7)
|
||||
12 12 |
|
||||
|
||||
E502.py:10:11: E502 [*] Redundant backslash
|
||||
|
|
||||
9 | a = (3 -\
|
||||
10 | 2 + \
|
||||
| ^ E502
|
||||
11 | 7)
|
||||
|
|
||||
= help: Remove redundant backslash
|
||||
|
||||
ℹ Safe fix
|
||||
7 7 | + 4
|
||||
8 8 |
|
||||
9 9 | a = (3 -\
|
||||
10 |- 2 + \
|
||||
10 |+ 2 +
|
||||
11 11 | 7)
|
||||
12 12 |
|
||||
13 13 | z = 5 + \
|
||||
|
||||
E502.py:14:9: E502 [*] Redundant backslash
   |
13 | z = 5 + \
14 | (3 -\
   |     ^ E502
15 | 2 + \
16 | 7) + \
   |
   = help: Remove redundant backslash

ℹ Safe fix
11 11 | 7)
12 12 |
13 13 | z = 5 + \
14   |- (3 -\
  14 |+ (3 -
15 15 | 2 + \
16 16 | 7) + \
17 17 | 4

E502.py:15:11: E502 [*] Redundant backslash
   |
13 | z = 5 + \
14 | (3 -\
15 | 2 + \
   |     ^ E502
16 | 7) + \
17 | 4
   |
   = help: Remove redundant backslash

ℹ Safe fix
12 12 |
13 13 | z = 5 + \
14 14 | (3 -\
15   |- 2 + \
  15 |+ 2 +
16 16 | 7) + \
17 17 | 4
18 18 |

E502.py:23:17: E502 [*] Redundant backslash
   |
22 | b = [
23 | 2 + 4 + 5 + \
   |             ^ E502
24 | 44 \
25 | - 5
   |
   = help: Remove redundant backslash

ℹ Safe fix
20 20 | 2]
21 21 |
22 22 | b = [
23   |- 2 + 4 + 5 + \
  23 |+ 2 + 4 + 5 +
24 24 | 44 \
25 25 | - 5
26 26 | ]

E502.py:24:8: E502 [*] Redundant backslash
   |
22 | b = [
23 | 2 + 4 + 5 + \
24 | 44 \
   |    ^ E502
25 | - 5
26 | ]
   |
   = help: Remove redundant backslash

ℹ Safe fix
21 21 |
22 22 | b = [
23 23 | 2 + 4 + 5 + \
24   |- 44 \
  24 |+ 44
25 25 | - 5
26 26 | ]
27 27 |

E502.py:29:11: E502 [*] Redundant backslash
   |
28 | c = (True and
29 | False \
   |       ^ E502
30 | or False \
31 | and True \
   |
   = help: Remove redundant backslash

ℹ Safe fix
26 26 | ]
27 27 |
28 28 | c = (True and
29   |- False \
  29 |+ False
30 30 | or False \
31 31 | and True \
32 32 | )

E502.py:30:14: E502 [*] Redundant backslash
   |
28 | c = (True and
29 | False \
30 | or False \
   |          ^ E502
31 | and True \
32 | )
   |
   = help: Remove redundant backslash

ℹ Safe fix
27 27 |
28 28 | c = (True and
29 29 | False \
30   |- or False \
  30 |+ or False
31 31 | and True \
32 32 | )
33 33 |

E502.py:31:14: E502 [*] Redundant backslash
   |
29 | False \
30 | or False \
31 | and True \
   |          ^ E502
32 | )
   |
   = help: Remove redundant backslash

ℹ Safe fix
28 28 | c = (True and
29 29 | False \
30 30 | or False \
31   |- and True \
  31 |+ and True
32 32 | )
33 33 |
34 34 | c = (True and

E502.py:44:14: E502 [*] Redundant backslash
   |
43 | s = {
44 | 'x': 2 + \
   |          ^ E502
45 | 2
46 | }
   |
   = help: Remove redundant backslash

ℹ Safe fix
41 41 |
42 42 |
43 43 | s = {
44   |- 'x': 2 + \
  44 |+ 'x': 2 +
45 45 | 2
46 46 | }
47 47 |

E502.py:55:12: E502 [*] Redundant backslash
   |
55 | x = {2 + 4 \
   |            ^ E502
56 | + 3}
   |
   = help: Remove redundant backslash

ℹ Safe fix
52 52 | }
53 53 |
54 54 |
55   |-x = {2 + 4 \
  55 |+x = {2 + 4
56 56 | + 3}
57 57 |
58 58 | y = (

E502.py:61:9: E502 [*] Redundant backslash
   |
59 | 2 + 2 # \
60 | + 3 # \
61 | + 4 \
   |     ^ E502
62 | + 3
63 | )
   |
   = help: Remove redundant backslash

ℹ Safe fix
58 58 | y = (
59 59 | 2 + 2 # \
60 60 | + 3 # \
61   |- + 4 \
  61 |+ + 4
62 62 | + 3
63 63 | )
64 64 |

E502.py:82:12: E502 [*] Redundant backslash
   |
80 | "xyz"
81 |
82 | x = ("abc" \
   |            ^ E502
83 | "xyz")
   |
   = help: Remove redundant backslash

ℹ Safe fix
79 79 | x = "abc" \
80 80 | "xyz"
81 81 |
82   |-x = ("abc" \
  82 |+x = ("abc"
83 83 | "xyz")
84 84 |
85 85 |

E502.py:87:14: E502 [*] Redundant backslash
   |
86 | def foo():
87 |     x = (a + \
   |              ^ E502
88 | 2)
   |
   = help: Remove redundant backslash

ℹ Safe fix
84 84 |
85 85 |
86 86 | def foo():
87   |-    x = (a + \
  87 |+    x = (a +
88 88 | 2)
@@ -1,17 +0,0 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
---
W391_0.py:14:1: W391 [*] Extra newline at end of file
   |
12 |     foo()
13 |     bar()
14 |
   | ^ W391
   |
   = help: Remove trailing newline

ℹ Safe fix
11 11 | if __name__ == '__main__':
12 12 |     foo()
13 13 |     bar()
14   |-
@@ -1,4 +0,0 @@
---
source: crates/ruff_linter/src/rules/pycodestyle/mod.rs
---

Some files were not shown because too many files have changed in this diff.