Compare commits


66 Commits

Author SHA1 Message Date
Charlie Marsh
dbd60b2cf5 Bump version to 0.0.281 (#6195) 2023-07-31 13:21:43 -04:00
Charlie Marsh
7eb2ba47cc Add empty line after import block (#6200)
## Summary

Ensures that, given:

```python
import os
x = 1
```

We format like:

```python
import os

x = 1
```
2023-07-31 12:01:45 -04:00
Dhruv Manilawala
cb34e6d322 Avoid parenthesizing comprehension element (#6198)
## Summary

This PR adds a new precedence level for the comprehension element, which fixes
the generator so that it no longer adds parentheses around the comprehension
element every time.

The new precedence level is `COMPREHENSION_ELEMENT` and it should occur after
the `NAMED_EXPR` precedence level because named expressions are always parenthesized.

This matches the behavior of Python's `ast.unparse` and was tested with the
following snippet:

```python
import ast

code = ""
ast.unparse(ast.parse(code))
```
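
For illustration, two concrete round-trips (inputs mine; the actual test
string is elided above):

```python
import ast

# A conditional expression needs no parentheses as a comprehension element...
print(ast.unparse(ast.parse("[a if b else c for x in y]")))
# [a if b else c for x in y]

# ...while a named expression is always parenthesized.
print(ast.unparse(ast.parse("[(z := x) for x in y]")))
# [(z := x) for x in y]
```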

## Test Plan

Add a bunch of test cases for all the valid nodes at that position.

fixes: #5777
2023-07-31 20:56:42 +05:30
Harutaka Kawamura
0274de1fff Preserve backslash in raw string literal (#6152) 2023-07-31 12:48:17 +00:00
konsti
a540933bc9 Print log when formatter ecosystem checks fail (#6187)
**Summary** Print the errors when the formatter ecosystem checks fail. I'm
not happy that we currently collect the log in the first place, but this is
the less invasive change, and we need it to unblock reviewing #6152.

**Test Plan**
https://github.com/astral-sh/ruff/actions/runs/5713112075/job/15477879403?pr=6188
2023-07-31 14:45:38 +02:00
Micha Reiser
311a1f9ec4 Remove len from JoinCommaSeparatedBuilder (#6185) 2023-07-31 12:19:47 +00:00
Luc Khai Hai
b95fc6d162 Format bytes string (#6166)

## Summary

Format bytes string

Closes #6064

## Test Plan

Added a fixture based on string's one
2023-07-31 10:46:40 +02:00
Charlie Marsh
de898c52eb Avoid falsely marking non-submodules as submodule aliases (#6182)
## Summary

We have some code to ensure that if an aliased import is used, any
submodules should be marked as used too. This comment says it best:

```rust
// If the name of a submodule import is the same as an alias of another import, and the
// alias is used, then the submodule import should be marked as used too.
//
// For example, mark `pyarrow.csv` as used in:
//
// ```python
// import pyarrow as pa
// import pyarrow.csv
// print(pa.csv.read_csv("test.csv"))
// ```
```

However, it looks like when we go to look up `pyarrow` (of `import
pyarrow as pa`), we aren't checking to ensure the resolved binding is
_actually_ an import. This was causing us to attribute `print(rm.ANY)`
to `def requests_mock` here:

```python
import requests_mock as rm

def requests_mock(requests_mock: rm.Mocker):
    print(rm.ANY)
```

Closes https://github.com/astral-sh/ruff/issues/6180.
2023-07-30 22:16:25 +00:00
Charlie Marsh
76741cac77 Add global and nonlocal formatting (#6170)
## Summary

Adds `global` and `nonlocal` formatting, without the "deviation from
black" outlined in the linked issue, which I'll do separately.

See: https://github.com/astral-sh/ruff/issues/4798.

## Test Plan

Added a fixture in the Ruff-specific directory since the Black fixtures
don't seem to cover this.
2023-07-29 14:39:42 +00:00
Charlie Marsh
5d9814d84d Remove parentheses around some walrus operators (#6173)
## Summary

Closes https://github.com/astral-sh/ruff/issues/5781

## Test Plan

Added cases to
`crates/ruff_python_formatter/resources/test/fixtures/ruff/expression/named_expr.py`
one-by-one and adjusted the condition as needed.
2023-07-29 10:06:26 -04:00
Micha Reiser
1d7ad30188 CI: Update formatter dependencies (#6168) 2023-07-29 15:24:24 +02:00
Charlie Marsh
4231ed2fc3 Skip partial duplicates when applying multi-edit fixes (#6144)
## Summary

Right now, if we have two fixes that have an overlapping edit, but not an
_identical_ set of edits, they'll conflict, causing us to do another linter
traversal. Here, I've enabled the fixer to support partially overlapping
edits, which (as an example) lets us greatly reduce the number of iterations
required in the test suite.

The most common case here is that in which a bunch of fixes need to import
some symbol, and then use that symbol, but in different ways. In that case,
all fixes will share a common edit (to import the symbol) but deviate in some
way. With this change, we can do all of those edits in one pass, as sketched
below.
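
A rough illustration of the approach (a Python sketch under assumed
representations; Ruff's fixer is implemented in Rust and differs in detail):

```python
# Apply a batch of fixes in one pass: deduplicate identical edits across
# fixes, and defer any fix whose remaining edits overlap an applied range.
Edit = tuple[int, int, str]  # (start, end, replacement), a hypothetical shape

def apply_fixes(source: str, fixes: list[list[Edit]]) -> str:
    applied: set[Edit] = set()
    ranges: list[tuple[int, int]] = []
    for fix in fixes:
        new_edits = [e for e in fix if e not in applied]
        if any(s < e2 and s2 < e for (s, e, _) in new_edits for (s2, e2) in ranges):
            continue  # a true conflict: leave this fix for the next pass
        ranges.extend((s, e) for (s, e, _) in new_edits)
        applied.update(new_edits)
    for start, end, text in sorted(applied, reverse=True):
        source = source[:start] + text + source[end:]  # apply back-to-front
    return source
```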

Note that the simplest way to enable this was to store sorted edits on
`Fix`. We don't allow modifying the edits on `Fix` once it's
constructed, so this is an easy change, and allows us to avoid a bunch
of clones and traversals later on.

Closes #5800.
2023-07-29 12:11:57 +00:00
Charlie Marsh
badbfb2d3e Skip BOM when determining Locator's line starts (#6159)
## Summary

If a file has a BOM, the import sorter _always_ reports the imports as
unsorted. The acute issue is that we detect that the line has leading
content (before the imports), which we always consider a violation.
Rather than fixing that one site, this PR instead makes `.line_start`
BOM-aware.
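
A minimal sketch of the idea (illustrative names, not Ruff's actual API): the
first line start simply skips past a leading BOM.

```python
BOM = "\ufeff"

def line_starts(source: str) -> list[int]:
    # The first line "starts" after the BOM, if one is present.
    starts = [len(BOM) if source.startswith(BOM) else 0]
    for i, ch in enumerate(source):
        if ch == "\n":
            starts.append(i + 1)
    return starts
```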

Fixes https://github.com/astral-sh/ruff/issues/6155.
2023-07-29 11:47:13 +00:00
Dhruv Manilawala
44bdf20221 [pep8-naming]: New config option extend-ignore-names (#6169)
## Summary

This PR adds a new config option for the `pep8-naming` plugin called
`extend-ignore-names`, which extends the default values of the
`ignore-names` option.

resolves: #6050
2023-07-29 17:11:04 +05:30
Dhruv Manilawala
3c99fbf808 Implement --diff for Jupyter Notebooks (#6149)
## Summary

Implement `--diff` for Jupyter Notebooks

## Test Plan

1. Use `crates/ruff/resources/test/fixtures/jupyter/isort.ipynb` as a test
   case, adding a markdown cell in between the code cells to check that the
   diff outputs the correct cell index.
2. Run the command:
   `cargo run --bin ruff --package ruff_cli -- check --no-cache --isolated --select=ALL crates/ruff/resources/test/fixtures/jupyter/isort.ipynb --fix --diff`

<details><summary>Example output:</summary>
<p>

```diff
--- /Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 0
+++ /Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 0
@@ -1,3 +0,0 @@
-from pathlib import Path
-import random
-import math
--- /Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 4
+++ /Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 4
@@ -1,5 +1,3 @@
-from typing import Any
-import collections
 # Newline should be added here
 def foo():
     pass

--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 8
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 8
@@ -1,8 +1,7 @@
 import pprint
 import tempfile
 
-from IPython import display
 import matplotlib.pyplot as plt
-
 import tensorflow as tf
-import tensorflow_datasets as tfds
+import tensorflow_datasets as tfds
+from IPython import display
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 10
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 10
@@ -1,5 +1,4 @@
 import tensorflow_models as tfm
 
 # These are not in the tfm public API for v2.9. They will be available in v2.10
-from official.vision.serving import export_saved_model_lib
-import official.core.train_lib
+from official.vision.serving import export_saved_model_lib
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 13
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 13
@@ -1,5 +1,5 @@
-exp_config = tfm.core.exp_factory.get_exp_config('resnet_imagenet')
-tfds_name = 'cifar10'
+exp_config = tfm.core.exp_factory.get_exp_config("resnet_imagenet")
+tfds_name = "cifar10"
 ds,ds_info = tfds.load(
 tfds_name,
 with_info=True)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 15
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 15
@@ -6,12 +6,12 @@
 # Configure training and testing data
 batch_size = 128
 
-exp_config.task.train_data.input_path = ''
+exp_config.task.train_data.input_path = ""
 exp_config.task.train_data.tfds_name = tfds_name
-exp_config.task.train_data.tfds_split = 'train'
+exp_config.task.train_data.tfds_split = "train"
 exp_config.task.train_data.global_batch_size = batch_size
 
-exp_config.task.validation_data.input_path = ''
+exp_config.task.validation_data.input_path = ""
 exp_config.task.validation_data.tfds_name = tfds_name
-exp_config.task.validation_data.tfds_split = 'test'
+exp_config.task.validation_data.tfds_split = "test"
 exp_config.task.validation_data.global_batch_size = batch_size
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 17
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 17
@@ -1,16 +1,16 @@
 logical_device_names = [logical_device.name for logical_device in tf.config.list_logical_devices()]
 
-if 'GPU' in ''.join(logical_device_names):
-  print('This may be broken in Colab.')
-  device = 'GPU'
-elif 'TPU' in ''.join(logical_device_names):
-  print('This may be broken in Colab.')
-  device = 'TPU'
+if "GPU" in "".join(logical_device_names):
+  print("This may be broken in Colab.")
+  device = "GPU"
+elif "TPU" in "".join(logical_device_names):
+  print("This may be broken in Colab.")
+  device = "TPU"
 else:
-  print('Running on CPU is slow, so only train for a few steps.')
-  device = 'CPU'
+  print("Running on CPU is slow, so only train for a few steps.")
+  device = "CPU"
 
-if device=='CPU':
+if device=="CPU":
   train_steps = 20
   exp_config.trainer.steps_per_loop = 5
 else:
@@ -20,9 +20,9 @@
 exp_config.trainer.summary_interval = 100
 exp_config.trainer.checkpoint_interval = train_steps
 exp_config.trainer.validation_interval = 1000
-exp_config.trainer.validation_steps =  ds_info.splits['test'].num_examples // batch_size
+exp_config.trainer.validation_steps =  ds_info.splits["test"].num_examples // batch_size
 exp_config.trainer.train_steps = train_steps
-exp_config.trainer.optimizer_config.learning_rate.type = 'cosine'
+exp_config.trainer.optimizer_config.learning_rate.type = "cosine"
 exp_config.trainer.optimizer_config.learning_rate.cosine.decay_steps = train_steps
 exp_config.trainer.optimizer_config.learning_rate.cosine.initial_learning_rate = 0.1
 exp_config.trainer.optimizer_config.warmup.linear.warmup_steps = 100
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 21
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 21
@@ -1,14 +1,14 @@
 logical_device_names = [logical_device.name for logical_device in tf.config.list_logical_devices()]
 
 if exp_config.runtime.mixed_precision_dtype == tf.float16:
-    tf.keras.mixed_precision.set_global_policy('mixed_float16')
+    tf.keras.mixed_precision.set_global_policy("mixed_float16")
 
-if 'GPU' in ''.join(logical_device_names):
+if "GPU" in "".join(logical_device_names):
   distribution_strategy = tf.distribute.MirroredStrategy()
-elif 'TPU' in ''.join(logical_device_names):
+elif "TPU" in "".join(logical_device_names):
   tf.tpu.experimental.initialize_tpu_system()
-  tpu = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='/device:TPU_SYSTEM:0')
+  tpu = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="/device:TPU_SYSTEM:0")
   distribution_strategy = tf.distribute.experimental.TPUStrategy(tpu)
 else:
-  print('Warning: this will be really slow.')
+  print("Warning: this will be really slow.")
   distribution_strategy = tf.distribute.OneDeviceStrategy(logical_device_names[0])
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 23
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 23
@@ -1,5 +1,3 @@
 with distribution_strategy.scope():
   model_dir = tempfile.mkdtemp()
   task = tfm.core.task_factory.get_task(exp_config.task, logging_dir=model_dir)
-
-#  tf.keras.utils.plot_model(task.build_model(), show_shapes=True)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 24
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 24
@@ -1,4 +1,4 @@
 for images, labels in task.build_inputs(exp_config.task.train_data).take(1):
   print()
-  print(f'images.shape: {str(images.shape):16}  images.dtype: {images.dtype!r}')
-  print(f'labels.shape: {str(labels.shape):16}  labels.dtype: {labels.dtype!r}')
+  print(f"images.shape: {images.shape!s:16}  images.dtype: {images.dtype!r}")
+  print(f"labels.shape: {labels.shape!s:16}  labels.dtype: {labels.dtype!r}")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 27
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 27
@@ -1 +1 @@
-plt.hist(images.numpy().flatten());
+plt.hist(images.numpy().flatten())
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 29
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 29
@@ -1,2 +1,2 @@
-label_info = ds_info.features['label']
+label_info = ds_info.features["label"]
 label_info.int2str(1)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 31
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 31
@@ -10,9 +10,6 @@
     if predictions is None:
       plt.title(label_info.int2str(labels[i]))
     else:
-      if labels[i] == predictions[i]:
-        color = 'g'
-      else:
-        color = 'r'
+      color = "g" if labels[i] == predictions[i] else "r"
       plt.title(label_info.int2str(predictions[i]), color=color)
     plt.axis("off")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 35
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 35
@@ -1,3 +1,3 @@
-plt.figure(figsize=(10, 10));
+plt.figure(figsize=(10, 10))
 for images, labels in task.build_inputs(exp_config.task.validation_data).take(1):
   show_batch(images, labels)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 37
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 37
@@ -1,7 +1,7 @@
 model, eval_logs = tfm.core.train_lib.run_experiment(
     distribution_strategy=distribution_strategy,
     task=task,
-    mode='train_and_eval',
+    mode="train_and_eval",
     params=exp_config,
     model_dir=model_dir,
     run_post_eval=True)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 38
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 38
@@ -1 +0,0 @@
-#  tf.keras.utils.plot_model(model, show_shapes=True)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 40
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 40
@@ -1,4 +1,4 @@
 for key, value in eval_logs.items():
     if isinstance(value, tf.Tensor):
       value = value.numpy()
-    print(f'{key:20}: {value:.3f}')
+    print(f"{key:20}: {value:.3f}")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 42
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 42
@@ -4,5 +4,5 @@
 
 show_batch(images, labels, tf.cast(predictions, tf.int32))
 
-if device=='CPU':
-  plt.suptitle('The model was only trained for a few steps, it is not expected to do well.')
+if device=="CPU":
+  plt.suptitle("The model was only trained for a few steps, it is not expected to do well.")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 45
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 45
@@ -1,8 +1,8 @@
 # Saving and exporting the trained model
 export_saved_model_lib.export_inference_graph(
-    input_type='image_tensor',
+    input_type="image_tensor",
     batch_size=1,
     input_image_size=[32, 32],
     params=exp_config,
     checkpoint_path=tf.train.latest_checkpoint(model_dir),
-    export_dir='./export/')
+    export_dir="./export/")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 47
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 47
@@ -1,3 +1,3 @@
 # Importing SavedModel
-imported = tf.saved_model.load('./export/')
-model_fn = imported.signatures['serving_default']
+imported = tf.saved_model.load("./export/")
+model_fn = imported.signatures["serving_default"]
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 49
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 49
@@ -1,10 +1,10 @@
 plt.figure(figsize=(10, 10))
-for data in tfds.load('cifar10', split='test').batch(12).take(1):
+for data in tfds.load("cifar10", split="test").batch(12).take(1):
   predictions = []
-  for image in data['image']:
-    index = tf.argmax(model_fn(image[tf.newaxis, ...])['logits'], axis=1)[0]
+  for image in data["image"]:
+    index = tf.argmax(model_fn(image[tf.newaxis, ...])["logits"], axis=1)[0]
     predictions.append(index)
-  show_batch(data['image'], data['label'], predictions)
+  show_batch(data["image"], data["label"], predictions)
 
-  if device=='CPU':
-    plt.suptitle('The model was only trained for a few steps, it is not expected to do better than random.')
+  if device=="CPU":
+    plt.suptitle("The model was only trained for a few steps, it is not expected to do better than random.")

Would fix 61 errors.
```

</p>
</details> 

resolves: #4727
2023-07-29 04:22:56 +00:00
Charlie Marsh
4802c7c7d8 Avoid key-in-dict violations for self accesses (#6165)
Closes https://github.com/astral-sh/ruff/issues/6163.
2023-07-29 03:35:26 +00:00
Charlie Marsh
646ff6497c Ignore end-of-line file exemption comments (#6160)
## Summary

This PR protects against code like:

```python
from typing import Optional

import bar  # ruff: noqa
import baz

class Foo:
    x: Optional[str] = None
```

In which the user wrote `# ruff: noqa` to ignore a specific error, not
realizing that it was a file-level exemption that thus turned off all
lint rules.

Specifically, if a `# ruff: noqa` directive is not at the start of a
line, we now ignore it and warn, since this is almost certainly a
mistake.
2023-07-29 00:40:32 +00:00
Victor Hugo Gomes
e0d5c7564f [flake8-pyi] Implement PYI049 (#6136)
## Summary

Checks for the presence of unused private `typing.TypedDict`
definitions.
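
For illustration, a hypothetical case this rule would flag (example mine,
based on the rule's description):

```python
from typing import TypedDict

class _Config(TypedDict):  # PYI049: private TypedDict is never used
    verbose: bool
```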

ref #848 

## Test Plan

Snapshots and manual runs of flake8
2023-07-29 00:34:36 +00:00
Victor Hugo Gomes
7838d8c8af Implement PYI047 (#6134)
## Summary

Checks for the presence of unused private `typing.TypeAlias`
definitions.
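
A hypothetical example of what this rule flags (mine, based on the
description):

```python
from typing import TypeAlias

_IntList: TypeAlias = list[int]  # PYI047: private TypeAlias is never used
```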

ref #848 

## Test Plan

Snapshots and manual runs of flake8
2023-07-29 00:21:29 +00:00
Zanie Blue
047c211837 Add semantic analysis of type aliases and parameters (#6109)
Requires https://github.com/astral-sh/RustPython-Parser/pull/42
Related https://github.com/PyCQA/pyflakes/pull/778
[PEP-695](https://peps.python.org/pep-0695)
Part of #5062 


## Summary

Adds a scope for type parameters, a type parameter binding kind, and
checker visitation of type parameters in type alias statements, function
definitions, and class definitions.

A few changes were necessary to ensure correctness following the
insertion of a new scope between function and class scopes and their
parent.
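
For reference, the PEP 695 constructs that introduce these type-parameter
scopes (Python 3.12+ syntax):

```python
type Alias[T] = list[T]          # type alias statement

def first[T](xs: list[T]) -> T:  # generic function
    return xs[0]

class Box[T]:                    # generic class
    value: T
```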

## Test Plan

Undefined name snapshots.

The unused-type-parameter rule will be added as a follow-up.
2023-07-28 17:06:37 -05:00
Charlie Marsh
134d447d4c Avoid refactoring x[:1]-like slices in RUF015 (#6150)
## Summary

Right now, `RUF015` will try to rewrite `x[:1]` as `[next(x)]`. This isn't
equivalent if, for example, `x` is empty: slicing like `x[:1]` is forgiving,
but `next` raises `StopIteration`. For me this is a little too much of a
deviation to be comfortable with, and most of the value in this rule is the
`x[0]` to `next(x)` conversion anyway.
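
Concretely (example mine):

```python
x: list[int] = []
print(x[:1])  # [] (slicing an empty sequence is forgiving)
try:
    next(iter(x))
except StopIteration:
    print("next() raises on an empty iterable")
```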

Closes https://github.com/astral-sh/ruff/issues/6148.
2023-07-28 09:38:13 -04:00
Charlie Marsh
cd4147423c Skip PERF203 violations for multi-statement loops (#6145)
Closes https://github.com/astral-sh/ruff/issues/5858.
2023-07-28 04:55:55 +00:00
Charlie Marsh
d15436458f Only run unused private type rules over finalized bindings (#6142)
## Summary

In #6134 and #6136, we see some false positives for "shadowed" class
definitions. For example, here, the first definition is flagged as
unused, since from the perspective of the semantic model (which doesn't
understand branching), it appears to be immediately shadowed in the
`else`, and thus never used:

```python
if sys.version_info >= (3, 11):
    class _RootLoggerConfiguration(TypedDict, total=False):
        level: _Level
        filters: Sequence[str | _FilterType]
        handlers: Sequence[str]

else:
    class _RootLoggerConfiguration(TypedDict, total=False):
        level: _Level
        filters: Sequence[str]
        handlers: Sequence[str]
```

Instead of looking at _all_ bindings, we should instead look at the "live"
bindings, similar to how other rules (like unused-variable detection) are
structured. We thus move the rule from `bindings.rs` (which iterates over
_all_ bindings, regardless of whether they're shadowed) to
`deferred_scopes.rs`, which iterates over all "live" bindings once a scope
has been fully analyzed.

## Test Plan

`cargo test`
2023-07-28 02:16:09 +00:00
Charlie Marsh
0bc3edf6c9 Add documentation and test cases for redefinition (#6135) 2023-07-28 00:01:42 +00:00
Aarni Koskela
3d54d31cd9 Implement E241 and E242 (tab/multiple ws after commas) (#6094)
## Summary

This PR implements pycodestyle's E241 (multiple spaces after comma) and E242
(tab after comma) lints.

These are marked as nursery rules like many other pycodestyle rules.
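
For illustration (examples mine, following pycodestyle's definitions):

```python
a = (1,  2)  # E241: multiple spaces after comma
b = (1,	2)  # E242: tab after comma
```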

Refs #2402

## Test Plan

E24.py copied from pycodestyle.
2023-07-27 18:58:41 +00:00
Tom Kuson
1418ee62f8 Add more documentation to the flake8-bandit rules (#6128)
## Summary

Completes the documentation for the ruleset, apart from four rules that
contain contradictions and need more thought on how to document them.
Related to #2646.

## Test Plan

`python scripts/test_docs_formatted.py`
2023-07-27 18:57:45 +00:00
Harutaka Kawamura
bf987f80f4 Add PT017 and PT019 docs (#6115) 2023-07-27 18:56:34 +00:00
rembridge
bb08eea5cc missing-whitespace-around-operators comment (#6106)
**Summary**

Updated doc comments for `missing_whitespace_around_operator.rs`. Online
docs also benefit from this update.

**Test Plan**

Checked docs via
[mkdocs](389fe13c93/CONTRIBUTING.md (L267-L296))
2023-07-27 14:52:43 -04:00
Tom Kuson
d16216a2c2 Add documentation to the flynt rules (#6130)
## Summary

Completes the documentation for the one and only (current) rule in the
`flynt` ruleset. Related to #2646.

## Test Plan

`python scripts/test_docs_formatted.py`
2023-07-27 14:32:59 -04:00
Jelle van der Waa
0853004f41 [pylint] Implement eq-without-hash rule (PLW1641) (#5955)
Implement
https://pylint.pycqa.org/en/latest/user_guide/messages/warning/eq-without-hash.html
Issue https://github.com/astral-sh/ruff/issues/970

It's not enabled by default in pylint, so I guess it shouldn't be in Ruff
either?
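
For illustration, the kind of class this rule targets (example mine):

```python
class Point:
    def __init__(self, x: int) -> None:
        self.x = x

    def __eq__(self, other: object) -> bool:
        return isinstance(other, Point) and self.x == other.x
    # Defining __eq__ without __hash__ sets __hash__ to None, making
    # instances unhashable; that is what eq-without-hash warns about.
```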
2023-07-27 18:28:44 +00:00
Harutaka Kawamura
fb5bbe30c7 Update SIM115 to cover pathlib.Path.open (#6118) 2023-07-27 14:20:52 -04:00
Charlie Marsh
dd706c7a35 Fix E211 documentation (#6133) 2023-07-27 17:19:33 +00:00
Charlie Marsh
e15b9c5572 Cache name resolutions in the semantic model (#6047)
## Summary

This PR stores the mapping from `ExprName` node to resolved `BindingId`,
which lets us skip scope lookups in `resolve_call_path`. It's enabled by
#6045, since that PR ensures that when we analyze a node (and thus call
`resolve_call_path`), we'll have already visited its `ExprName`
elements.

In more detail: imagine that we're traversing over `foo.bar()`. When we
read `foo`, it will be an `ExprName`, which we'll then resolve to a
binding via `handle_node_load`. With this change, we then store that
binding in a map. Later, if we call `collect_call_path` on `foo.bar`,
we'll identify `foo` (the "head" of the attribute) and grab the resolved
binding in that map. _Almost_ all names are now resolved in advance,
though it's not a strict requirement, and some rules break that pattern
(e.g., if we're analyzing arguments, and they need to inspect their
annotations, which are visited in a deferred manner).
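
A minimal sketch of the caching idea (illustrative Python, not Ruff's actual
Rust implementation):

```python
class SemanticModel:
    def __init__(self) -> None:
        self.bindings: dict[str, int] = {}  # name -> BindingId (simplified)
        self.resolved: dict[int, int] = {}  # name-node id -> BindingId

    def handle_node_load(self, node_id: int, name: str) -> None:
        # The scope lookup happens once, at visit time...
        binding_id = self.bindings.get(name)
        if binding_id is not None:
            self.resolved[node_id] = binding_id

    def resolve_call_path(self, head_node_id: int, name: str) -> int | None:
        # ...and later lookups reuse the cached resolution for the "head"
        # of an attribute chain like `foo.bar`.
        if head_node_id in self.resolved:
            return self.resolved[head_node_id]
        return self.bindings.get(name)  # fallback for deferred contexts
```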

This improves performance by 4-6% on the all-rules benchmark. It looks
like it hurts performance (1-2% drop) in the default-rules benchmark,
presumably because those rules don't call `resolve_call_path` nearly as
much, and so we're paying for these extra writes.

Here's the benchmark data:

```
linter/default-rules/numpy/globals.py
                        time:   [67.270 µs 67.380 µs 67.489 µs]
                        thrpt:  [43.720 MiB/s 43.792 MiB/s 43.863 MiB/s]
                 change:
                        time:   [+0.4747% +0.7752% +1.0626%] (p = 0.00 < 0.05)
                        thrpt:  [-1.0514% -0.7693% -0.4724%]
                        Change within noise threshold.
Found 1 outliers among 100 measurements (1.00%)
  1 (1.00%) high severe
linter/default-rules/pydantic/types.py
                        time:   [1.4067 ms 1.4105 ms 1.4146 ms]
                        thrpt:  [18.028 MiB/s 18.081 MiB/s 18.129 MiB/s]
                 change:
                        time:   [+1.3152% +1.6953% +2.0414%] (p = 0.00 < 0.05)
                        thrpt:  [-2.0006% -1.6671% -1.2981%]
                        Performance has regressed.
linter/default-rules/numpy/ctypeslib.py
                        time:   [637.67 µs 638.96 µs 640.28 µs]
                        thrpt:  [26.006 MiB/s 26.060 MiB/s 26.113 MiB/s]
                 change:
                        time:   [+1.5859% +1.8109% +2.0353%] (p = 0.00 < 0.05)
                        thrpt:  [-1.9947% -1.7787% -1.5611%]
                        Performance has regressed.
linter/default-rules/large/dataset.py
                        time:   [3.2289 ms 3.2336 ms 3.2383 ms]
                        thrpt:  [12.563 MiB/s 12.581 MiB/s 12.599 MiB/s]
                 change:
                        time:   [+0.8029% +0.9898% +1.1740%] (p = 0.00 < 0.05)
                        thrpt:  [-1.1604% -0.9801% -0.7965%]
                        Change within noise threshold.

linter/all-rules/numpy/globals.py
                        time:   [134.05 µs 134.15 µs 134.26 µs]
                        thrpt:  [21.977 MiB/s 21.995 MiB/s 22.012 MiB/s]
                 change:
                        time:   [-4.4571% -4.1175% -3.8268%] (p = 0.00 < 0.05)
                        thrpt:  [+3.9791% +4.2943% +4.6651%]
                        Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
  2 (2.00%) low mild
  3 (3.00%) high mild
  3 (3.00%) high severe
linter/all-rules/pydantic/types.py
                        time:   [2.5627 ms 2.5669 ms 2.5720 ms]
                        thrpt:  [9.9158 MiB/s 9.9354 MiB/s 9.9516 MiB/s]
                 change:
                        time:   [-5.8304% -5.6374% -5.4452%] (p = 0.00 < 0.05)
                        thrpt:  [+5.7587% +5.9742% +6.1914%]
                        Performance has improved.
Found 7 outliers among 100 measurements (7.00%)
  6 (6.00%) high mild
  1 (1.00%) high severe
linter/all-rules/numpy/ctypeslib.py
                        time:   [1.3949 ms 1.3956 ms 1.3964 ms]
                        thrpt:  [11.925 MiB/s 11.931 MiB/s 11.937 MiB/s]
                 change:
                        time:   [-6.2496% -6.0856% -5.9293%] (p = 0.00 < 0.05)
                        thrpt:  [+6.3030% +6.4799% +6.6662%]
                        Performance has improved.
Found 7 outliers among 100 measurements (7.00%)
  3 (3.00%) high mild
  4 (4.00%) high severe
linter/all-rules/large/dataset.py
                        time:   [5.5951 ms 5.6019 ms 5.6093 ms]
                        thrpt:  [7.2527 MiB/s 7.2623 MiB/s 7.2711 MiB/s]
                 change:
                        time:   [-5.1781% -4.9783% -4.8070%] (p = 0.00 < 0.05)
                        thrpt:  [+5.0497% +5.2391% +5.4608%]
                        Performance has improved.
```

Still playing with this (the concepts need better names, documentation,
etc.), but opening up for feedback.
2023-07-27 13:01:56 -04:00
qdegraaf
0638a26347 Add AnyExpressionYield to consolidate ExprYield and ExprYieldFrom (#6127)
Co-authored-by: Micha Reiser <micha@reiser.io>
2023-07-27 16:01:16 +00:00
konsti
2a65e6fc38 Explain check_docs_formatted.py error message (#6125)
## Summary

This is an error message only change to lead an implementor of a new
rule that has an unformatted or invalid bad example to the
right code.

## Test Plan

n/a
2023-07-27 10:22:13 -04:00
Charlie Marsh
13af91299d Avoid walking past root when resolving imports (#6126)
## Summary

Noticed in #5954: we walk _past_ the root rather than stopping _at_ the
root when attempting to traverse along the parent path. It's effectively
an off-by-one bug.
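
A sketch of the corrected traversal (illustrative Python; the helper is
hypothetical):

```python
from pathlib import Path

def ancestors_up_to(path: Path, root: Path):
    """Yield `path` and its parents, stopping at `root` (inclusive)."""
    for candidate in [path, *path.parents]:
        yield candidate
        if candidate == root:
            return  # stop *at* the root instead of walking past it
```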
2023-07-27 10:22:13 -04:00
konsti
d317af442f Fix windows test warnings (#6124)
See
https://github.com/astral-sh/ruff/actions/runs/5679922286/job/15392998698.
These didn't fail CI because we run clippy on Linux only.
2023-07-27 10:22:13 -04:00
Micha Reiser
6bf6646c5d Respect indent when measuring with MeasureMode::AllLines (#6120) 2023-07-27 10:22:13 -04:00
konsti
9574ff3dc7 Unbreak main (#6123)
This fixes `main`, which broke due to two merges.
2023-07-27 10:22:13 -04:00
konsti
06d9ff9577 Don't format trailing comma for lambda arguments (#5946)
**Summary** Lambda arguments don't have parentheses, so they shouldn't get a
magic trailing comma either. This fixes some unstable formatting.
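
For illustration (example mine): a trailing comma after lambda parameters
must not force them to expand, since there are no parentheses to split
across lines.

```python
# Valid Python; the trailing comma must be dropped, not treated as "magic":
f = lambda a, b,: a + b
```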

**Test Plan** Added a regression test.

89 instances of unstable formatting remain (down from 145).

```
$ cargo run --bin ruff_dev --release -- format-dev --stability-check --error-file formatter-ecosystem-errors.txt --multi-project target/checkouts > formatter-ecosystem-progress.txt
$ rg "Unstable formatting" target/formatter-ecosystem-errors.txt | wc -l
89
```

Closes #5892
2023-07-27 10:22:13 -04:00
Micha Reiser
40f54375cb Pull in RustPython parser (#6099) 2023-07-27 09:29:11 +00:00
Victor Hugo Gomes
86539c1fc5 [flake8-pyi] Implement PYI046 (#6098)
## Summary
Checks for the presence of unused private `typing.Protocol` definitions.
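
A hypothetical example of what this rule flags (mine, based on the
description):

```python
from typing import Protocol

class _Reader(Protocol):  # PYI046: private Protocol is never used
    def read(self) -> bytes: ...
```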

ref #848 

## Test Plan

Snapshots and manual runs of flake8.
2023-07-27 02:34:56 +00:00
rembridge
d04367a042 call-datetime-without-tzinfo comment (#6105)
## Summary

Updated doc comment for `call_datetime_without_tzinfo.rs`. Online docs
also benefit from this update.

## Test Plan

Checked docs via
[mkdocs](389fe13c93/CONTRIBUTING.md (L267-L296))
2023-07-26 23:21:03 +00:00
Simon Brugman
ffdd653c54 [flake8-use-pathlib] Implement glob (PTH207) (#5939)
While working on the previous lints for `flake8-use-pathlib`, I discovered
that usage of `glob.glob` is
[widespread](https://grep.app/search?current=7&q=glob.glob%28&filter%5Blang%5D%5B0%5D=Python).
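
For illustration (example mine, per the rule's description):

```python
import glob
from pathlib import Path

files = glob.glob("*.py")          # PTH207: prefer pathlib
files = list(Path().glob("*.py"))  # pathlib equivalent
```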
2023-07-26 23:15:05 +00:00
rembridge
132f07c27b whitespace-before-parameters comment (#6103) 2023-07-26 23:01:47 +00:00
Victor Hugo Gomes
c0dbcb3434 [flake8-pyi] Implement PYI018 (#6018)
## Summary

Check for unused private `TypeVar`. See [original
implementation](2a86db8271/pyi.py (L1958)).

```
$ flake8 --select Y018 crates/ruff/resources/test/fixtures/flake8_pyi/PYI018.pyi

crates/ruff/resources/test/fixtures/flake8_pyi/PYI018.pyi:4:1: Y018 TypeVar "_T" is not used
crates/ruff/resources/test/fixtures/flake8_pyi/PYI018.pyi:5:1: Y018 TypeVar "_P" is not used
```

```
$ ./target/debug/ruff --select PYI018 crates/ruff/resources/test/fixtures/flake8_pyi/PYI018.pyi --no-cache

crates/ruff/resources/test/fixtures/flake8_pyi/PYI018.pyi:4:1: PYI018 TypeVar `_T` is never used
crates/ruff/resources/test/fixtures/flake8_pyi/PYI018.pyi:5:1: PYI018 TypeVar `_P` is never used
Found 2 errors.
```
In the file `unused_private_type_declaration.rs`, I'm planning to add other
rules similar to `PYI018`, such as `PYI046`, `PYI047`, and `PYI049`.

ref #848

## Test Plan

Snapshots and manual runs of flake8.
2023-07-26 22:56:15 +00:00
Victor Hugo Gomes
788643f718 Add "--select E402" to example snippet in CONTRIBUTING.md (#6108)
## Summary
In Ruff, only a subset of rules is enabled by default. This change aims to
clarify that when adding a new rule, you must explicitly pass
`--select name_of_rule` to ensure the rule gets executed.

This was talked about on Discord a while back.

## Test Plan
Checked docs via mkdocs
2023-07-26 22:48:53 +00:00
Charlie Marsh
64a186272f Move utf8-encoding-declaration to token-based rules (#6110)
Closes #5979.
2023-07-26 22:42:37 +00:00
Charlie Marsh
8113615534 Add some additional documentation around import categorization (#6107)
Closes https://github.com/astral-sh/ruff/issues/5529.
2023-07-26 22:39:01 +00:00
konsti
ecf4058e52 Fix cargo test -p ruff (#6104) 2023-07-26 22:44:53 +02:00
Zanie Blue
2d2673f613 Add comment regarding class scope short circuit (#6101) 2023-07-26 14:55:05 -05:00
Harutaka Kawamura
564304eba2 Add PT001 documentation (#6023) 2023-07-26 18:05:25 +00:00
Harutaka Kawamura
5b8fc753ec Add PT024 documentation (#6026) 2023-07-26 13:48:37 -04:00
konsti
13f9a16e33 Rewrite placement logic (#6040)
## Summary
This is a rewrite of the main comment placement logic. `place_comment`
now has three parts:

- place own line comments
  - between branches
  - after a branch
- place end-of-line comments
  - after colon
  - after a branch
- place comments for specific nodes (that include module level comments)

The rewrite fixed three bugs: `class A: # trailing comment` comments now
stay end-of-line, `try: # comment` remains end-of-line, and deeply indented
try-else-finally comments remain with the right nested statement.

It will be much easier to support more nodes with alternative branches,
since this is abstracted away by `is_node_with_body` and the first/last-child
helpers. Adding new node types can now be done by adding an entry to the
`place_comment` match. The code went from 1526 lines before #6033 to
1213 lines now.

I think it's easier to just read the new `placement.rs` than to review the
diff.

## Test Plan

The existing fixtures stay the same or improve, plus new fixtures for the
bug fixes.
2023-07-26 16:21:23 +00:00
Micha Reiser
2cf00fee96 Remove parser dependency from ruff-python-ast (#6096) 2023-07-26 17:47:22 +02:00
Harutaka Kawamura
99127243f4 Raise PTH201 for Path("") (#6095) 2023-07-26 09:22:46 -04:00
Harutaka Kawamura
77396c6f92 Fix SIM102 to handle indented elif (#6072)

## Summary


The `SIM102` auto-fix fails if `elif` is indented like this:

## Example

```python
def f():
    # SIM102
    if a:
        pass
    elif b:
        if c:
            d
```

```
> cargo run -p ruff_cli -- check --select SIM102 --fix a.py
...
error: Failed to fix nested if: Failed to extract statement from source
a.py:5:5: SIM102 Use a single `if` statement instead of nested `if` statements
Found 1 error.
```

## Test Plan


New test
2023-07-26 14:37:32 +02:00
Micha Reiser
16e1737d1b Use cursor based lexer (#6012) 2023-07-26 11:32:26 +02:00
Dhruv Manilawala
025fa4eba8 Integrate the new Jupyter AST nodes in Ruff (#6086)
## Summary

This PR adds the implementation for the new Jupyter AST nodes, i.e.,
`ExprLineMagic` and `StmtLineMagic`.

## Test Plan

Add test cases for `unparse` containing magic commands

resolves: #6087
2023-07-26 08:20:30 +00:00
Micha Reiser
1fdadee59c playground: Persist source and panel (#6071) 2023-07-26 07:55:59 +02:00
Charlie Marsh
c8ee357613 Remove relative import handling from BindingKind::Import case (#6084)
## Summary

Only `ImportFrom` imports can be relative; this is just unused.
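
For reference (examples mine):

```python
import ast

ast.parse("from . import foo")  # fine: relative imports use `from`
try:
    ast.parse("import .foo")
except SyntaxError:
    print("a plain `import` can never be relative")
```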
2023-07-26 00:17:41 -04:00
Harutaka Kawamura
96d2ca0bda Allow pytest.raises body to contain a single func or class definition (#6083) 2023-07-25 23:45:57 -04:00
Harutaka Kawamura
62f821daaa Avoid raising PT012 for simple with statements (#6081) 2023-07-26 01:43:31 +00:00
Noah Jenner
9dfe484472 Modify PyPA classifiers and Shields.io badge URLs (#6082)
## Summary

Updated `pyproject.toml` classifiers from `"Development Status :: 4 -
Beta"` to `"Development Status :: 5 - Production/Stable"` to reflect the
transition from Beta to Full Release.
Updated the `README.md` to use `.com/astral-sh/ruff/...` instead of
`.com/charliermarsh/ruff/...` in Shields.io badges to reflect the
transition to a company.

## Test Plan

Utilized the official PyPA classifiers list (located at
https://pypi.org/classifiers/). Previewed the markdown file in different
browsers on GitHub to ensure all badges and logos still render properly.
2023-07-26 01:25:46 +00:00
Tom Kuson
da33c26238 Ignore explicit-string-concatenation on single line (#6028)
## Summary

Ignore `explicit-string-concatenation` on single line.
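
For illustration (examples mine, based on the described change):

```python
x = "Hello, " + "world"  # on one line: now ignored

y = (
    "Hello, "
    + "world"  # concatenation spanning lines: still flagged
)
```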

Closes #5332.

## Test Plan

`cargo test`
2023-07-25 19:20:29 -04:00
rembridge
8c80bfa7da tab indentation comment (#6079)
## Summary

Updated doc comment for `tab_indentation.rs`. Online docs also benefit
from this update.

## Test Plan

Checked docs via
[mkdocs](389fe13c93/CONTRIBUTING.md (L267-L296))
2023-07-25 23:14:43 +00:00
977 changed files with 114670 additions and 9310 deletions


@@ -50,6 +50,10 @@ jobs:
- crates/ruff_formatter/**
- crates/ruff_python_trivia/**
- crates/ruff_python_ast/**
- crates/ruff_source_file/**
- crates/ruff_python_index/**
- crates/ruff_text_size/**
- crates/ruff_python_parser/**
cargo-fmt:

File diff suppressed because it is too large.


@@ -122,9 +122,9 @@ At time of writing, the repository includes the following crates:
intermediate representation. The backend for `ruff_python_formatter`.
- `crates/ruff_index`: library crate inspired by `rustc_index`.
- `crates/ruff_macros`: proc macro crate containing macros used by Ruff.
- `crates/ruff_python_ast`: library crate containing Python-specific AST types and utilities. Note
that the AST schema itself is defined in the
[rustpython-ast](https://github.com/astral-sh/RustPython-Parser) crate.
- `crates/ruff_python_ast`: library crate containing Python-specific AST types and utilities.
- `crates/ruff_python_codegen`: library crate containing utilities for generating Python source code.
- `crates/ruff_python_formatter`: library crate implementing the Python formatter. Emits an
intermediate representation for each node, which `ruff_formatter` prints based on the configured
line length.
@@ -135,8 +135,7 @@ At time of writing, the repository includes the following crates:
the names of all built-in exceptions and which standard library types are immutable.
- `crates/ruff_python_trivia`: library crate containing Python-specific trivia utilities (e.g.,
for analyzing indentation, newlines, etc.).
- `crates/ruff_rustpython`: library crate containing `RustPython`-specific utilities.
- `crates/ruff_textwrap`: library crate to indent and dedent Python source code.
- `crates/ruff_python_parser`: library crate containing the Python parser.
- `crates/ruff_wasm`: library crate for exposing Ruff as a WebAssembly module. Powers the
[Ruff Playground](https://play.ruff.rs/).
@@ -224,9 +223,12 @@ Once you've completed the code for the rule itself, you can define tests with th
For example, if you're adding a new rule named `E402`, you would run:
```shell
cargo run -p ruff_cli -- check crates/ruff/resources/test/fixtures/pycodestyle/E402.py --no-cache
cargo run -p ruff_cli -- check crates/ruff/resources/test/fixtures/pycodestyle/E402.py --no-cache --select E402
```
**Note:** Only a subset of rules are enabled by default. When testing a new rule, ensure that
you activate it by adding `--select ${rule_code}` to the command.
1. Add the test to the relevant `crates/ruff/src/rules/[linter]/mod.rs` file. If you're contributing
a rule to a pre-existing set, you should be able to find a similar example to pattern-match
against. If you're adding a new linter, you'll need to create a new `mod.rs` file (see,
@@ -313,7 +315,7 @@ even patch releases may contain [non-backwards-compatible changes](https://semve
1. Run the release workflow with the version number (without starting `v`) as input. Make sure
main has your merged PR as last commit
1. The release workflow will do the following:
1. Build all the assets. If this fails (even though we tested in step 4), we havent tagged or
1. Build all the assets. If this fails (even though we tested in step 4), we haven't tagged or
uploaded anything, you can restart after pushing a fix.
1. Upload to PyPI.
1. Create and push the Git tag (as extracted from `pyproject.toml`). We create the Git tag only
@@ -726,9 +728,9 @@ diagnostics, then our current compilation pipeline proceeds as follows:
1. **File discovery**: Given paths like `foo/`, locate all Python files in any specified subdirectories, taking into account our hierarchical settings system and any `exclude` options.
1. **Package resolution**: Determine the package root for every file by traversing over its parent directories and looking for `__init__.py` files.
1. **Package resolution**: Determine the "package root" for every file by traversing over its parent directories and looking for `__init__.py` files.
1. **Cache initialization**: For every package root, initialize an empty cache.
1. **Cache initialization**: For every "package root", initialize an empty cache.
1. **Analysis**: For every file, in parallel:
@@ -736,7 +738,7 @@ diagnostics, then our current compilation pipeline proceeds as follows:
1. **Tokenization**: Run the lexer over the file to generate a token stream.
1. **Indexing**: Extract metadata from the token stream, such as: comment ranges, `# noqa` locations, `# isort: off` locations, doc lines, etc.
1. **Indexing**: Extract metadata from the token stream, such as: comment ranges, `# noqa` locations, `# isort: off` locations, "doc lines", etc.
1. **Token-based rule evaluation**: Run any lint rules that are based on the contents of the token stream (e.g., commented-out code).
@@ -746,9 +748,9 @@ diagnostics, then our current compilation pipeline proceeds as follows:
1. **Parsing**: Run the parser over the token stream to produce an AST. (This consumes the token stream, so anything that relies on the token stream needs to happen before parsing.)
1. **AST-based rule evaluation**: Run any lint rules that are based on the AST. This includes the vast majority of lint rules. As part of this step, we also build the semantic model for the current file as we traverse over the AST. Some lint rules are evaluated eagerly, as we iterate over the AST, while others are evaluated in a deferred manner (e.g., unused imports, since we cant determine whether an import is unused until weve finished analyzing the entire file), after weve finished the initial traversal.
1. **AST-based rule evaluation**: Run any lint rules that are based on the AST. This includes the vast majority of lint rules. As part of this step, we also build the semantic model for the current file as we traverse over the AST. Some lint rules are evaluated eagerly, as we iterate over the AST, while others are evaluated in a deferred manner (e.g., unused imports, since we can't determine whether an import is unused until we've finished analyzing the entire file), after we've finished the initial traversal.
1. **Import-based rule evaluation**: Run any lint rules that are based on the modules imports (e.g., import sorting). These could, in theory, be included in the AST-based rule evaluation phase — theyre just separated for simplicity.
1. **Import-based rule evaluation**: Run any lint rules that are based on the module's imports (e.g., import sorting). These could, in theory, be included in the AST-based rule evaluation phase — they're just separated for simplicity.
1. **Physical line-based rule evaluation**: Run any lint rules that are based on physical lines (e.g., line-length).
@@ -757,3 +759,116 @@ diagnostics, then our current compilation pipeline proceeds as follows:
1. **Cache write**: Write the generated diagnostics to the package cache using the file as a key.
1. **Reporting**: Print diagnostics in the specified format (text, JSON, etc.), to the specified output channel (stdout, a file, etc.).
### Import Categorization
To understand Ruff's import categorization system, we first need to define two concepts:
- "Project root": The directory containing the `pyproject.toml`, `ruff.toml`, or `.ruff.toml` file,
discovered by identifying the "closest" such directory for each Python file. (If you're running
via `ruff --config /path/to/pyproject.toml`, then the current working directory is used as the
"project root".)
- "Package root": The top-most directory defining the Python package that includes a given Python
file. To find the package root for a given Python file, traverse up its parent directories until
you reach a parent directory that doesn't contain an `__init__.py` file (and isn't marked as
a [namespace package](https://beta.ruff.rs/docs/settings/#namespace-packages)); take the directory
just before that, i.e., the first directory in the package.
For example, given:
```text
my_project
├── pyproject.toml
└── src
└── foo
├── __init__.py
└── bar
├── __init__.py
└── baz.py
```
Then when analyzing `baz.py`, the project root would be the top-level directory (`./my_project`),
and the package root would be `./my_project/src/foo`.
#### Project root
The project root does not have a significant impact beyond that all relative paths within the loaded
configuration file are resolved relative to the project root.
For example, to indicate that `bar` above is a namespace package (it isn't, but let's run with it),
the `pyproject.toml` would list `namespace-packages = ["./src/bar"]`, which would resolve
to `my_project/src/bar`.
The same logic applies when providing a configuration file via `--config`. In that case, the
_current working directory_ is used as the project root, and so all paths in that configuration file
are resolved relative to the current working directory. (As a general rule, we want to avoid relying
on the current working directory as much as possible, to ensure that Ruff exhibits the same behavior
regardless of where and how you invoke it — but that's hard to avoid in this case.)
Additionally, if a `pyproject.toml` file _extends_ another configuration file, Ruff will still use
the directory containing that `pyproject.toml` file as the project root. For example, if
`./my_project/pyproject.toml` contains:
```toml
[tool.ruff]
extend = "/path/to/pyproject.toml"
```
Then Ruff will use `./my_project` as the project root, even though the configuration file extends
`/path/to/pyproject.toml`. As such, if the configuration file at `/path/to/pyproject.toml` contains
any relative paths, they will be resolved relative to `./my_project`.
If a project uses nested configuration files, then Ruff would detect multiple project roots, one for
each configuration file.
#### Package root
The package root is used to determine a file's "module path". Consider, again, `baz.py`. In that
case, `./my_project/src/foo` was identified as the package root, so the module path for `baz.py`
would resolve to `foo.bar.baz` — as computed by taking the relative path from the package root
(inclusive of the root itself). The module path can be thought of as "the path you would use to
import the module" (e.g., `import foo.bar.baz`).
The package root and module path are used to, e.g., convert relative to absolute imports, and for
import categorization, as described below.
#### Import categorization
When sorting and formatting import blocks, Ruff categorizes every import into one of five
categories:
1. **"Future"**: the import is a `__future__` import. That's easy: just look at the name of the
imported module!
1. **"Standard library"**: the import comes from the Python standard library (e.g., `import os`).
This is easy too: we include a list of all known standard library modules in Ruff itself, so it's
a simple lookup.
1. **"Local folder"**: the import is a relative import (e.g., `from .foo import bar`). This is easy
too: just check if the import includes a `level` (i.e., a dot-prefix).
1. **"First party"**: the import is part of the current project. (More on this below.)
1. **"Third party"**: everything else.
The real challenge lies in determining whether an import is first-party — everything else is either
trivial, or (as in the case of third-party) merely defined as "not first-party".
There are three ways in which an import can be categorized as "first-party":
1. **Explicit settings**: the import is marked as such via the `known-first-party` setting. (This
should generally be seen as an escape hatch.)
1. **Same-package**: the imported module is in the same package as the current file. This gets back
to the importance of the "package root" and the file's "module path". Imagine that we're
analyzing `baz.py` above. If `baz.py` contains any imports that appear to come from the `foo`
package (e.g., `from foo import bar` or `import foo.bar`), they'll be classified as first-party
automatically. This check is as simple as comparing the first segment of the current file's
module path to the first segment of the import.
1. **Source roots**: Ruff supports a [`src`](https://beta.ruff.rs/docs/settings/#src) setting, which
sets the directories to scan when identifying first-party imports. The algorithm is
straightforward: given an import, like `import foo`, iterate over the directories enumerated in
the `src` setting and, for each directory, check for the existence of a subdirectory `foo` or a
file `foo.py`.
By default, `src` is set to the project root. In the above example, we'd want to set
`src = ["./src"]` to ensure that we locate `./my_project/src/foo` and thus categorize `import foo`
as first-party in `baz.py`. In practice, for this limited example, setting `src = ["./src"]` is
unnecessary, as all imports within `./my_project/src/foo` would be categorized as first-party via
the same-package heuristic; but if your project contains multiple packages, you'll want to set `src`
explicitly.

Cargo.lock (generated)

@@ -133,6 +133,15 @@ dependencies = [
"os_str_bytes",
]
[[package]]
name = "ascii-canvas"
version = "3.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8824ecca2e851cec16968d54a01dd372ef8f95b244fb84b84e70128be347c3c6"
dependencies = [
"term",
]
[[package]]
name = "assert_cmd"
version = "2.0.11"
@@ -169,6 +178,21 @@ dependencies = [
"serde",
]
[[package]]
name = "bit-set"
version = "0.5.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0700ddab506f33b20a03b13996eccd309a48e5ff77d0d95926aa0210fb4e95f1"
dependencies = [
"bit-vec",
]
[[package]]
name = "bit-vec"
version = "0.6.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "349f9b6a179ed607305526ca489b34ad0a41aed5f7980fa90eb03160b69598fb"
[[package]]
name = "bitflags"
version = "1.3.2"
@@ -609,6 +633,16 @@ dependencies = [
"dirs-sys 0.4.1",
]
[[package]]
name = "dirs-next"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b98cf8ebf19c3d1b223e151f99a4f9f0690dca41414773390fc824184ac833e1"
dependencies = [
"cfg-if",
"dirs-sys-next",
]
[[package]]
name = "dirs-sys"
version = "0.3.7"
@@ -632,6 +666,17 @@ dependencies = [
"windows-sys 0.48.0",
]
[[package]]
name = "dirs-sys-next"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4ebda144c4fe02d1f7ea1a7d9641b6fc6b580adcfa024ae48797ecdeb6825b4d"
dependencies = [
"libc",
"redox_users",
"winapi",
]
[[package]]
name = "doc-comment"
version = "0.3.3"
@@ -656,6 +701,15 @@ version = "1.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7fcaabb2fef8c910e7f4c7ce9f67a1283a1715879a7c230ca9d6d1ae31f16d91"
[[package]]
name = "ena"
version = "0.14.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c533630cf40e9caa44bd91aadc88a75d75a4c3a12b4cfde353cbed41daa1e1f1"
dependencies = [
"log",
]
[[package]]
name = "encode_unicode"
version = "0.3.6"
@@ -732,9 +786,15 @@ dependencies = [
"windows-sys 0.48.0",
]
[[package]]
name = "fixedbitset"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flake8-to-ruff"
version = "0.0.280"
version = "0.0.281"
dependencies = [
"anyhow",
"clap",
@@ -1099,6 +1159,28 @@ dependencies = [
"libc",
]
[[package]]
name = "lalrpop"
version = "0.20.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "da4081d44f4611b66c6dd725e6de3169f9f63905421e8626fcb86b6a898998b8"
dependencies = [
"ascii-canvas",
"bit-set",
"diff",
"ena",
"is-terminal",
"itertools",
"lalrpop-util",
"petgraph",
"regex",
"regex-syntax 0.7.3",
"string_cache",
"term",
"tiny-keccak",
"unicode-xid",
]
[[package]]
name = "lalrpop-util"
version = "0.20.0"
@@ -1199,6 +1281,16 @@ version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "09fc20d2ca12cb9f044c93e3bd6d32d523e6e2ec3db4f7b2939cd99026ecd3f0"
[[package]]
name = "lock_api"
version = "0.4.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c1cc9717a20b1bb222f333e6a92fd32f7d8a18ddc5a3191a11af45dcbf4dcd16"
dependencies = [
"autocfg",
"scopeguard",
]
[[package]]
name = "log"
version = "0.4.19"
@@ -1277,6 +1369,12 @@ version = "1.0.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "308d96db8debc727c3fd9744aac51751243420e46edf401010908da7f8d5e57c"
[[package]]
name = "new_debug_unreachable"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e4a24736216ec316047a1fc4252e27dabb04218aa4a3f37c6e7ddbf1f9782b54"
[[package]]
name = "nextest-workspace-hack"
version = "0.1.0"
@@ -1421,6 +1519,29 @@ version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b15813163c1d831bf4a13c3610c05c0d03b39feb07f7e09fa234dac9b15aaf39"
[[package]]
name = "parking_lot"
version = "0.12.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3742b2c103b9f06bc9fff0a37ff4912935851bee6d36f3c02bcc755bcfec228f"
dependencies = [
"lock_api",
"parking_lot_core",
]
[[package]]
name = "parking_lot_core"
version = "0.9.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "93f00c865fe7cabf650081affecd3871070f26767e7b2070a3ffae14c654b447"
dependencies = [
"cfg-if",
"libc",
"redox_syscall 0.3.5",
"smallvec",
"windows-targets 0.48.1",
]
[[package]]
name = "paste"
version = "1.0.13"
@@ -1512,13 +1633,23 @@ version = "2.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9b2a4787296e9989611394c33f193f676704af1686e70b8f8033ab5ba9a35a94"
[[package]]
name = "petgraph"
version = "0.6.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4dd7d28ee937e54fe3080c91faa1c3a46c06de6252988a7f4592ba2310ef22a4"
dependencies = [
"fixedbitset",
"indexmap 1.9.3",
]
[[package]]
name = "phf"
version = "0.11.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ade2d8b8f33c7333b51bcf0428d37e217e9f32192ae4772156f65063b8ce03dc"
dependencies = [
"phf_shared",
"phf_shared 0.11.2",
]
[[package]]
@@ -1528,7 +1659,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e8d39688d359e6b34654d328e262234662d16cc0f60ec8dcbe5e718709342a5a"
dependencies = [
"phf_generator",
"phf_shared",
"phf_shared 0.11.2",
]
[[package]]
@@ -1537,10 +1668,19 @@ version = "0.11.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "48e4cc64c2ad9ebe670cb8fd69dd50ae301650392e81c05f9bfcb2d5bdbc24b0"
dependencies = [
"phf_shared",
"phf_shared 0.11.2",
"rand",
]
[[package]]
name = "phf_shared"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b6796ad771acdc0123d2a88dc428b5e38ef24456743ddb1744ed628f9815c096"
dependencies = [
"siphasher",
]
[[package]]
name = "phf_shared"
version = "0.11.2"
@@ -1601,6 +1741,18 @@ version = "1.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "767eb9f07d4a5ebcb39bbf2d452058a93c011373abf6832e24194a1c3f004794"
[[package]]
name = "ppv-lite86"
version = "0.2.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5b40af805b3121feab8a3c29f04d8ad262fa8e0561883e7653e024ae4479e6de"
[[package]]
name = "precomputed-hash"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "925383efa346730478fb4838dbe9137d2a47675ad789c546d150a6e1dd4ab31c"
[[package]]
name = "predicates"
version = "3.0.3"
@@ -1725,6 +1877,18 @@ version = "0.8.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "34af8d1a0e25924bc5b7c43c079c942339d8f0a8b57c39049bef581b46327404"
dependencies = [
"libc",
"rand_chacha",
"rand_core",
]
[[package]]
name = "rand_chacha"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e6c10a63a0fa32252be49d21e7709d4d4baf8d231c2dbce1eaa8141b9b127d88"
dependencies = [
"ppv-lite86",
"rand_core",
]
@@ -1733,6 +1897,9 @@ name = "rand_core"
version = "0.6.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ec0be4795e2f6a28069bec0b5ff3e2ac9bafc99e6a9a7dc3547996c5c816922c"
dependencies = [
"getrandom",
]
[[package]]
name = "rayon"
@@ -1868,7 +2035,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.0.280"
version = "0.0.281"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
@@ -1905,15 +2072,16 @@ dependencies = [
"ruff_index",
"ruff_macros",
"ruff_python_ast",
"ruff_python_codegen",
"ruff_python_index",
"ruff_python_literal",
"ruff_python_parser",
"ruff_python_semantic",
"ruff_python_stdlib",
"ruff_python_trivia",
"ruff_rustpython",
"ruff_source_file",
"ruff_text_size",
"ruff_textwrap",
"rustc-hash",
"rustpython-format",
"rustpython-parser",
"schemars",
"semver",
"serde",
@@ -1944,7 +2112,7 @@ dependencies = [
"ruff",
"ruff_python_ast",
"ruff_python_formatter",
"rustpython-parser",
"ruff_python_parser",
"serde",
"serde_json",
"tikv-jemallocator",
@@ -1966,7 +2134,7 @@ dependencies = [
[[package]]
name = "ruff_cli"
version = "0.0.280"
version = "0.0.281"
dependencies = [
"annotate-snippets 0.9.1",
"anyhow",
@@ -1999,8 +2167,9 @@ dependencies = [
"ruff_python_ast",
"ruff_python_formatter",
"ruff_python_stdlib",
"ruff_python_trivia",
"ruff_source_file",
"ruff_text_size",
"ruff_textwrap",
"rustc-hash",
"serde",
"serde_json",
@@ -2034,11 +2203,13 @@ dependencies = [
"ruff_cli",
"ruff_diagnostics",
"ruff_formatter",
"ruff_python_ast",
"ruff_python_codegen",
"ruff_python_formatter",
"ruff_python_literal",
"ruff_python_parser",
"ruff_python_stdlib",
"ruff_textwrap",
"rustpython-format",
"rustpython-parser",
"ruff_python_trivia",
"schemars",
"serde",
"serde_json",
@@ -2089,7 +2260,7 @@ dependencies = [
"itertools",
"proc-macro2",
"quote",
"ruff_textwrap",
"ruff_python_trivia",
"syn 2.0.23",
]
@@ -2097,26 +2268,33 @@ dependencies = [
name = "ruff_python_ast"
version = "0.0.0"
dependencies = [
"anyhow",
"bitflags 2.3.3",
"insta",
"is-macro",
"itertools",
"log",
"memchr",
"num-bigint",
"num-traits",
"once_cell",
"ruff_python_parser",
"ruff_python_trivia",
"ruff_source_file",
"ruff_text_size",
"rustc-hash",
"rustpython-ast",
"rustpython-literal",
"rustpython-parser",
"serde",
"smallvec",
]
[[package]]
name = "ruff_python_codegen"
version = "0.0.0"
dependencies = [
"once_cell",
"ruff_python_ast",
"ruff_python_literal",
"ruff_python_parser",
"ruff_source_file",
]
[[package]]
name = "ruff_python_formatter"
version = "0.0.0"
@@ -2131,10 +2309,12 @@ dependencies = [
"once_cell",
"ruff_formatter",
"ruff_python_ast",
"ruff_python_index",
"ruff_python_parser",
"ruff_python_trivia",
"ruff_source_file",
"ruff_text_size",
"rustc-hash",
"rustpython-parser",
"serde",
"serde_json",
"similar",
@@ -2142,6 +2322,55 @@ dependencies = [
"thiserror",
]
[[package]]
name = "ruff_python_index"
version = "0.0.0"
dependencies = [
"itertools",
"ruff_python_ast",
"ruff_python_parser",
"ruff_python_trivia",
"ruff_source_file",
"ruff_text_size",
]
[[package]]
name = "ruff_python_literal"
version = "0.0.0"
dependencies = [
"bitflags 2.3.3",
"hexf-parse",
"is-macro",
"itertools",
"lexical-parse-float",
"num-bigint",
"num-traits",
"rand",
"unic-ucd-category",
]
[[package]]
name = "ruff_python_parser"
version = "0.0.0"
dependencies = [
"anyhow",
"insta",
"is-macro",
"itertools",
"lalrpop",
"lalrpop-util",
"num-bigint",
"num-traits",
"ruff_python_ast",
"ruff_text_size",
"rustc-hash",
"static_assertions",
"tiny-keccak",
"unic-emoji-char",
"unic-ucd-ident",
"unicode_names2",
]
[[package]]
name = "ruff_python_resolver"
version = "0.0.0"
@@ -2162,9 +2391,9 @@ dependencies = [
"ruff_index",
"ruff_python_ast",
"ruff_python_stdlib",
"ruff_source_file",
"ruff_text_size",
"rustc-hash",
"rustpython-parser",
"smallvec",
]
@@ -2178,19 +2407,14 @@ version = "0.0.0"
dependencies = [
"insta",
"memchr",
"ruff_python_ast",
"ruff_python_parser",
"ruff_source_file",
"ruff_text_size",
"smallvec",
"unic-ucd-ident",
]
[[package]]
name = "ruff_rustpython"
version = "0.0.0"
dependencies = [
"anyhow",
"rustpython-parser",
]
[[package]]
name = "ruff_shrinking"
version = "0.1.0"
@@ -2200,28 +2424,32 @@ dependencies = [
"fs-err",
"regex",
"ruff_python_ast",
"ruff_rustpython",
"rustpython-ast",
"ruff_python_parser",
"ruff_text_size",
"shlex",
"tracing",
"tracing-subscriber",
]
[[package]]
name = "ruff_text_size"
name = "ruff_source_file"
version = "0.0.0"
source = "git+https://github.com/astral-sh/RustPython-Parser.git?rev=4d03b9b5b212fc869e4cfda151414438186a7779#4d03b9b5b212fc869e4cfda151414438186a7779"
dependencies = [
"schemars",
"insta",
"memchr",
"once_cell",
"ruff_text_size",
"serde",
]
[[package]]
name = "ruff_textwrap"
name = "ruff_text_size"
version = "0.0.0"
dependencies = [
"ruff_python_trivia",
"ruff_text_size",
"schemars",
"serde",
"serde_test",
"static_assertions",
]
[[package]]
@@ -2235,9 +2463,11 @@ dependencies = [
"ruff",
"ruff_diagnostics",
"ruff_python_ast",
"ruff_python_codegen",
"ruff_python_formatter",
"ruff_rustpython",
"rustpython-parser",
"ruff_python_index",
"ruff_python_parser",
"ruff_source_file",
"serde",
"serde-wasm-bindgen",
"wasm-bindgen",
@@ -2309,74 +2539,6 @@ dependencies = [
"untrusted",
]
[[package]]
name = "rustpython-ast"
version = "0.2.0"
source = "git+https://github.com/astral-sh/RustPython-Parser.git?rev=4d03b9b5b212fc869e4cfda151414438186a7779#4d03b9b5b212fc869e4cfda151414438186a7779"
dependencies = [
"is-macro",
"num-bigint",
"rustpython-parser-core",
"static_assertions",
]
[[package]]
name = "rustpython-format"
version = "0.2.0"
source = "git+https://github.com/astral-sh/RustPython-Parser.git?rev=4d03b9b5b212fc869e4cfda151414438186a7779#4d03b9b5b212fc869e4cfda151414438186a7779"
dependencies = [
"bitflags 2.3.3",
"itertools",
"num-bigint",
"num-traits",
"rustpython-literal",
]
[[package]]
name = "rustpython-literal"
version = "0.2.0"
source = "git+https://github.com/astral-sh/RustPython-Parser.git?rev=4d03b9b5b212fc869e4cfda151414438186a7779#4d03b9b5b212fc869e4cfda151414438186a7779"
dependencies = [
"hexf-parse",
"is-macro",
"lexical-parse-float",
"num-traits",
"unic-ucd-category",
]
[[package]]
name = "rustpython-parser"
version = "0.2.0"
source = "git+https://github.com/astral-sh/RustPython-Parser.git?rev=4d03b9b5b212fc869e4cfda151414438186a7779#4d03b9b5b212fc869e4cfda151414438186a7779"
dependencies = [
"anyhow",
"is-macro",
"itertools",
"lalrpop-util",
"log",
"num-bigint",
"num-traits",
"phf",
"phf_codegen",
"rustc-hash",
"rustpython-ast",
"rustpython-parser-core",
"tiny-keccak",
"unic-emoji-char",
"unic-ucd-ident",
"unicode_names2",
]
[[package]]
name = "rustpython-parser-core"
version = "0.2.0"
source = "git+https://github.com/astral-sh/RustPython-Parser.git?rev=4d03b9b5b212fc869e4cfda151414438186a7779#4d03b9b5b212fc869e4cfda151414438186a7779"
dependencies = [
"is-macro",
"memchr",
"ruff_text_size",
]
[[package]]
name = "rustversion"
version = "1.0.13"
@@ -2512,6 +2674,15 @@ dependencies = [
"serde",
]
[[package]]
name = "serde_test"
version = "1.0.176"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5a2f49ace1498612d14f7e0b8245519584db8299541dfe31a06374a828d620ab"
dependencies = [
"serde",
]
[[package]]
name = "serde_with"
version = "3.0.0"
@@ -2594,6 +2765,19 @@ version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a2eb9349b6444b326872e140eb1cf5e7c522154d69e7a0ffb0fb81c06b37543f"
[[package]]
name = "string_cache"
version = "0.8.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f91138e76242f575eb1d3b38b4f1362f10d3a43f47d182a5b359af488a02293b"
dependencies = [
"new_debug_unreachable",
"once_cell",
"parking_lot",
"phf_shared 0.10.0",
"precomputed-hash",
]
[[package]]
name = "strsim"
version = "0.10.0"
@@ -2667,6 +2851,17 @@ dependencies = [
"windows-sys 0.48.0",
]
[[package]]
name = "term"
version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c59df8ac95d96ff9bede18eb7300b0fda5e5d8d90960e76f8e14ae765eedbf1f"
dependencies = [
"dirs-next",
"rustversion",
"winapi",
]
[[package]]
name = "termcolor"
version = "1.2.0"
@@ -3046,6 +3241,12 @@ version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c0edd1e5b14653f783770bce4a4dabb4a5108a5370a5f5d8cfe8710c361f6c8b"
[[package]]
name = "unicode-xid"
version = "0.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f962df74c8c05a667b5ee8bcf162993134c104e96440b663c8daa176dc772d8c"
[[package]]
name = "unicode_names2"
version = "0.6.0"


@@ -51,12 +51,6 @@ wsl = { version = "0.1.0" }
# v1.0.1
libcst = { git = "https://github.com/Instagram/LibCST.git", rev = "3cacca1a1029f05707e50703b49fe3dd860aa839", default-features = false }
ruff_text_size = { git = "https://github.com/astral-sh/RustPython-Parser.git", rev = "4d03b9b5b212fc869e4cfda151414438186a7779" }
rustpython-ast = { git = "https://github.com/astral-sh/RustPython-Parser.git", rev = "4d03b9b5b212fc869e4cfda151414438186a7779" , default-features = false, features = ["num-bigint"]}
rustpython-format = { git = "https://github.com/astral-sh/RustPython-Parser.git", rev = "4d03b9b5b212fc869e4cfda151414438186a7779", default-features = false, features = ["num-bigint"] }
rustpython-literal = { git = "https://github.com/astral-sh/RustPython-Parser.git", rev = "4d03b9b5b212fc869e4cfda151414438186a7779", default-features = false }
rustpython-parser = { git = "https://github.com/astral-sh/RustPython-Parser.git", rev = "4d03b9b5b212fc869e4cfda151414438186a7779" , default-features = false, features = ["full-lexer", "num-bigint"] }
[profile.release]
lto = "fat"
codegen-units = 1
@@ -69,7 +63,7 @@ opt-level = 3
# Reduce complexity of a parser function that would trigger a locals limit in a wasm tool.
# https://github.com/bytecodealliance/wasm-tools/blob/b5c3d98e40590512a3b12470ef358d5c7b983b15/crates/wasmparser/src/limits.rs#L29
[profile.dev.package.rustpython-parser]
[profile.dev.package.ruff_python_parser]
opt-level = 1
# Use the `--profile release-debug` flag to show symbols in release mode.


@@ -2,7 +2,7 @@
# Ruff
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
[![image](https://img.shields.io/pypi/v/ruff.svg)](https://pypi.python.org/pypi/ruff)
[![image](https://img.shields.io/pypi/l/ruff.svg)](https://pypi.python.org/pypi/ruff)
[![image](https://img.shields.io/pypi/pyversions/ruff.svg)](https://pypi.python.org/pypi/ruff)
@@ -140,7 +140,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com) hook:
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.0.280
rev: v0.0.281
hooks:
- id: ruff
```
@@ -424,13 +424,13 @@ Ruff is used by a number of major open-source projects and companies, including:
If you're using Ruff, consider adding the Ruff badge to your project's `README.md`:
```md
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
```
...or `README.rst`:
```rst
.. image:: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json
.. image:: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json
:target: https://github.com/astral-sh/ruff
:alt: Ruff
```
@@ -438,7 +438,7 @@ If you're using Ruff, consider adding the Ruff badge to your project's `README.md`:
...or, as HTML:
```html
<a href="https://github.com/astral-sh/ruff"><img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json" alt="Ruff" style="max-width:100%;"></a>
<a href="https://github.com/astral-sh/ruff"><img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json" alt="Ruff" style="max-width:100%;"></a>
```
## License
@@ -447,6 +447,6 @@ MIT
<div align="center">
<a target="_blank" href="https://astral.sh" style="background:none">
<img src="https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/svg/Astral.svg">
<img src="https://raw.githubusercontent.com/astral-sh/ruff/main/assets/svg/Astral.svg">
</a>
</div>


@@ -1,6 +1,6 @@
[package]
name = "flake8-to-ruff"
version = "0.0.280"
version = "0.0.281"
description = """
Convert Flake8 configuration files to Ruff configuration files.
"""


@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.0.280"
version = "0.0.281"
publish = false
authors = { workspace = true }
edition = { workspace = true }
@@ -19,13 +19,16 @@ ruff_cache = { path = "../ruff_cache" }
ruff_diagnostics = { path = "../ruff_diagnostics", features = ["serde"] }
ruff_index = { path = "../ruff_index" }
ruff_macros = { path = "../ruff_macros" }
ruff_python_trivia = { path = "../ruff_python_trivia" }
ruff_python_ast = { path = "../ruff_python_ast", features = ["serde"] }
ruff_python_codegen = { path = "../ruff_python_codegen" }
ruff_python_index = { path = "../ruff_python_index" }
ruff_python_literal = { path = "../ruff_python_literal" }
ruff_python_semantic = { path = "../ruff_python_semantic" }
ruff_python_stdlib = { path = "../ruff_python_stdlib" }
ruff_rustpython = { path = "../ruff_rustpython" }
ruff_text_size = { workspace = true }
ruff_textwrap = { path = "../ruff_textwrap" }
ruff_python_trivia = { path = "../ruff_python_trivia" }
ruff_python_parser = { path = "../ruff_python_parser" }
ruff_source_file = { path = "../ruff_source_file", features = ["serde"] }
ruff_text_size = { path = "../ruff_text_size" }
annotate-snippets = { version = "0.9.1", features = ["color"] }
anyhow = { workspace = true }
@@ -59,8 +62,8 @@ quick-junit = { version = "0.3.2" }
regex = { workspace = true }
result-like = { version = "0.4.6" }
rustc-hash = { workspace = true }
rustpython-format = { workspace = true }
rustpython-parser = { workspace = true }
schemars = { workspace = true, optional = true }
semver = { version = "1.0.16" }
serde = { workspace = true }


@@ -50,3 +50,12 @@ _ = """a""" "b"
_ = 'a' "b"
_ = rf"a" rf"b"
# Single-line explicit concatenation should be ignored.
_ = "abc" + "def" + "ghi"
_ = foo + "abc" + "def"
_ = "abc" + foo + "def"
_ = "abc" + "def" + foo
_ = foo + bar + "abc"
_ = "abc" + foo + bar
_ = foo + "abc" + bar


@@ -0,0 +1,12 @@
import typing
from typing import TypeVar
_T = typing.TypeVar("_T")
_P = TypeVar("_P")
# OK
_UsedTypeVar = TypeVar("_UsedTypeVar")
def func(arg: _UsedTypeVar) -> _UsedTypeVar: ...
_A, _B = TypeVar("_A"), TypeVar("_B")
_C = _D = TypeVar("_C")


@@ -0,0 +1,12 @@
import typing
from typing import TypeVar
_T = typing.TypeVar("_T")
_P = TypeVar("_P")
# OK
_UsedTypeVar = TypeVar("_UsedTypeVar")
def func(arg: _UsedTypeVar) -> _UsedTypeVar: ...
_A, _B = TypeVar("_A"), TypeVar("_B")
_C = _D = TypeVar("_C")


@@ -0,0 +1,18 @@
import typing
from typing import Protocol
class _Foo(Protocol):
    bar: int

class _Bar(typing.Protocol):
    bar: int

# OK
class _UsedPrivateProtocol(Protocol):
    bar: int

def uses__UsedPrivateProtocol(arg: _UsedPrivateProtocol) -> None: ...


@@ -0,0 +1,18 @@
import typing
from typing import Protocol
class _Foo(object, Protocol):
    bar: int

class _Bar(typing.Protocol):
    bar: int

# OK
class _UsedPrivateProtocol(Protocol):
    bar: int

def uses__UsedPrivateProtocol(arg: _UsedPrivateProtocol) -> None: ...


@@ -0,0 +1,22 @@
import typing
import sys
from typing import TypeAlias
_UnusedPrivateTypeAlias: TypeAlias = int | None
_T: typing.TypeAlias = str
# OK
_UsedPrivateTypeAlias: TypeAlias = int | None
def func(arg: _UsedPrivateTypeAlias) -> _UsedPrivateTypeAlias:
    ...

if sys.version_info > (3, 9):
    _PrivateTypeAlias: TypeAlias = str | None
else:
    _PrivateTypeAlias: TypeAlias = float | None
def func2(arg: _PrivateTypeAlias) -> None: ...


@@ -0,0 +1,22 @@
import typing
import sys
from typing import TypeAlias
_UnusedPrivateTypeAlias: TypeAlias = int | None
_T: typing.TypeAlias = str
# OK
_UsedPrivateTypeAlias: TypeAlias = int | None
def func(arg: _UsedPrivateTypeAlias) -> _UsedPrivateTypeAlias:
    ...

if sys.version_info > (3, 9):
    _PrivateTypeAlias: TypeAlias = str | None
else:
    _PrivateTypeAlias: TypeAlias = float | None
def func2(arg: _PrivateTypeAlias) -> None: ...


@@ -0,0 +1,18 @@
import typing
from typing import TypedDict
class _UnusedTypedDict(TypedDict):
    foo: str

class _UnusedTypedDict2(typing.TypedDict):
    bar: int

class _UsedTypedDict(TypedDict):
    foo: bytes

class _CustomClass(_UsedTypedDict):
    bar: list[int]


@@ -0,0 +1,32 @@
import sys
import typing
from typing import TypedDict
class _UnusedTypedDict(TypedDict):
    foo: str

class _UnusedTypedDict2(typing.TypedDict):
    bar: int

# OK
class _UsedTypedDict(TypedDict):
    foo: bytes

class _CustomClass(_UsedTypedDict):
    bar: list[int]

if sys.version_info >= (3, 10):
    class _UsedTypedDict2(TypedDict):
        foo: int
else:
    class _UsedTypedDict2(TypedDict):
        foo: float

class _CustomClass2(_UsedTypedDict2):
    bar: list[int]


@@ -11,6 +11,10 @@ async def test_ok_trivial_with():
    with context_manager_under_test():
        pass

    with pytest.raises(ValueError):
        with context_manager_under_test():
            raise ValueError

    with pytest.raises(AttributeError):
        async with context_manager_under_test():
            pass
@@ -24,6 +28,16 @@ def test_ok_complex_single_call():
    )

def test_ok_func_and_class():
    with pytest.raises(AttributeError):
        class A:
            pass

    with pytest.raises(AttributeError):
        def f():
            pass

def test_error_multiple_statements():
    with pytest.raises(AttributeError):
        len([])
@@ -47,13 +61,10 @@ async def test_error_complex_statement():
        while True:
            [].size

    with pytest.raises(AttributeError):
        with context_manager_under_test():
            [].size

    with pytest.raises(AttributeError):
        async with context_manager_under_test():
            [].size

        if True:
            raise Exception

def test_error_try():
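The rule behind this fixture flags `pytest.raises` blocks that wrap more than a single simple statement; a minimal sketch of the usual fix (hypothetical setup, not copied from the fixture) hoists everything except the raising expression out of the block:

```python
import pytest


def test_error_multiple_statements_refactored():
    items = []  # setup hoisted out of the `raises` block
    with pytest.raises(AttributeError):
        items.size  # only the statement expected to raise stays inside
```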


@@ -156,3 +156,12 @@ if False:
if True:
    if a:
        pass

# SIM102
def f():
    if a:
        pass
    elif b:
        if c:
            d
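The new case nests a lone `if` under an `elif`, which SIM102 treats as collapsible; a minimal sketch of the collapsed form, with dummy values standing in for the fixture's placeholder names:

```python
a, b, c, d = False, True, True, None

def f():
    if a:
        pass
    elif b and c:  # nested `if c:` folded into the `elif` condition
        d
```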


@@ -1,7 +1,15 @@
import contextlib
import pathlib
import pathlib as pl
from pathlib import Path
from pathlib import Path as P
# SIM115
f = open("foo.txt")
f = Path("foo.txt").open()
f = pathlib.Path("foo.txt").open()
f = pl.Path("foo.txt").open()
f = P("foo.txt").open()
data = f.read()
f.close()
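SIM115 flags these because the handle is opened without a context manager; a minimal equivalent of the last three lines, sketched with `with` so the file is closed even if the read raises:

```python
from pathlib import Path

with Path("foo.txt").open() as f:
    data = f.read()
```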


@@ -30,3 +30,13 @@ for key in list(obj.keys()):
(k for k in obj.keys()) # SIM118
key in (obj or {}).keys() # SIM118
from typing import KeysView
class Foo:
    def keys(self) -> KeysView[object]:
        ...

    def __contains__(self, key: object) -> bool:
        return key in self.keys()  # OK
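The `__contains__` case above stays OK because rewriting it to `key in self` would recurse into itself; for a plain mapping, the rewrite SIM118 suggests simply drops `.keys()`:

```python
obj = {"a": 1, "b": 2}

"a" in obj         # suggested
"a" in obj.keys()  # flagged by SIM118
```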


@@ -5,6 +5,7 @@ from pathlib import Path as pth
_ = Path(".")
_ = pth(".")
_ = PurePath(".")
_ = Path("")
# no match
_ = Path()


@@ -0,0 +1,11 @@
import os
import glob
from glob import glob as search
extensions_dir = "./extensions"
# PTH207
glob.glob(os.path.join(extensions_dir, "ops", "autograd", "*.cpp"))
list(glob.iglob(os.path.join(extensions_dir, "ops", "autograd", "*.cpp")))
search("*.png")
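A rough `pathlib` equivalent of the flagged calls (a hedged sketch: `Path.glob` yields `Path` objects rather than strings, so a mechanical swap may need `str()`):

```python
from pathlib import Path

extensions_dir = Path("./extensions")
cpp_files = list((extensions_dir / "ops" / "autograd").glob("*.cpp"))
pngs = list(Path.cwd().glob("*.png"))
```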


@@ -0,0 +1,2 @@
import bar
import foo


@@ -0,0 +1,2 @@
import foo
import bar


@@ -1 +1 @@
broken "§=($/=(")
broken "§=($/=()


@@ -1,28 +1,23 @@
# PERF203
for i in range(10):
    try:  # PERF203
    try:
        print(f"{i}")
    except:
        print("error")

# OK
try:
    for i in range(10):
        print(f"{i}")
except:
    print("error")

# OK
i = 0
while i < 10:  # PERF203
while i < 10:
    try:
        print(f"{i}")
    except:
        print("error")
    i += 1

try:
    i = 0
    while i < 10:
        print(f"{i}")
        i += 1
except:
    print("error")


@@ -0,0 +1,13 @@
#: E241
a = (1,  2)
#: Okay
b = (1, 20)
#: E242
a = (1,	2)  # tab before 2
#: Okay
b = (1, 20)  # space before 20
#: E241 E241 E241
# issue 135
more_spaces = [a,    b,
               ef,  +h,
               c,   -d]
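For contrast, a compliant version of the issue-135 case keeps exactly one space after every comma (a hand-written sketch with dummy values, not part of the fixture):

```python
a, b, ef, h, c, d = 1, 2, 3, 4, 5, 6

more_spaces = [a, b,
               ef, +h,
               c, -d]
```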


@@ -0,0 +1,103 @@
"""Test type parameters and aliases"""
# Type parameters in type alias statements
from some_module import Bar
type Foo[T] = T # OK
type Foo[T] = list[T] # OK
type Foo[T: ForwardA] = T # OK
type Foo[*Ts] = Bar[Ts] # OK
type Foo[**P] = Bar[P] # OK
class ForwardA: ...
# Types used in aliased assignment must exist
type Foo = DoesNotExist # F821: Undefined name `DoesNotExist`
type Foo = list[DoesNotExist] # F821: Undefined name `DoesNotExist`
# Type parameters do not escape alias scopes
type Foo[T] = T
T  # F821: Undefined name `T` - not accessible after the alias scope
# Type parameters in functions
def foo[T](t: T) -> T: return t # OK
async def afoo[T](t: T) -> T: return t # OK
def with_forward_ref[T: ForwardB](t: T) -> T: return t # OK
def can_access_inside[T](t: T) -> T:  # OK
    print(T)  # OK
    return t  # OK
class ForwardB: ...
# Type parameters do not escape function scopes
from some_library import some_decorator
@some_decorator(T) # F821: Undefined name `T` - not accessible in decorators
def foo[T](t: T) -> None: ...
T  # F821: Undefined name `T` - not accessible after the function scope
# Type parameters in classes
class Foo[T](list[T]): ... # OK
class UsesForward[T: ForwardC](list[T]): ... # OK
class ForwardC: ...
class WithinBody[T](list[T]):  # OK
    t = T  # OK
    x: T  # OK

    def foo(self, x: T) -> T:  # OK
        return x

    def foo(self):
        T  # OK
# Type parameters do not escape class scopes
from some_library import some_decorator
@some_decorator(T) # F821: Undefined name `T` - not accessible in decorators
class Foo[T](list[T]): ...
T  # F821: Undefined name `T` - not accessible after the class scope
# Types specified in bounds should exist
type Foo[T: DoesNotExist] = T # F821: Undefined name `DoesNotExist`
def foo[T: DoesNotExist](t: T) -> T: return t # F821: Undefined name `DoesNotExist`
class Foo[T: DoesNotExist](list[T]): ... # F821: Undefined name `DoesNotExist`
type Foo[T: (DoesNotExist1, DoesNotExist2)] = T # F821: Undefined name `DoesNotExist1`, Undefined name `DoesNotExist2`
def foo[T: (DoesNotExist1, DoesNotExist2)](t: T) -> T: return t # F821: Undefined name `DoesNotExist1`, Undefined name `DoesNotExist2`
class Foo[T: (DoesNotExist1, DoesNotExist2)](list[T]): ... # F821: Undefined name `DoesNotExist1`, Undefined name `DoesNotExist2`
# Type parameters in nested classes
class Parent[T]:
    t = T  # OK

    def can_use_class_variable(self, x: t) -> t:  # OK
        return x

    class Child:
        def can_access_parent_type_parameter(self, x: T) -> T:  # OK
            T  # OK
            return x

        def cannot_access_parent_variable(self, x: t) -> t:  # F821: Undefined name `t`
            t  # F821: Undefined name `t`
            return x
# Type parameters in nested functions
def can_access_inside_nested[T](t: T) -> T:  # OK
    def bar(x: T) -> T:  # OK
        T  # OK
        return x

    bar(t)


@@ -63,3 +63,10 @@ def main():
    for sys in range(5):
        pass

import requests_mock as rm

def requests_mock(requests_mock: rm.Mocker):
    print(rm.ANY)


@@ -0,0 +1,16 @@
class Person:
    def __init__(self):
        self.name = "monty"

    def __eq__(self, other):
        return isinstance(other, Person) and other.name == self.name

class Language:
    def __init__(self):
        self.name = "python"

    def __eq__(self, other):
        return isinstance(other, Language) and other.name == self.name

    def __hash__(self):
        return hash(self.name)


@@ -0,0 +1,4 @@
print('Hello world')
#!/usr/bin/python
# -*- coding: utf-8 -*-


@@ -2,21 +2,9 @@ x = range(10)
# RUF015
list(x)[0]
list(x)[:1]
list(x)[:1:1]
list(x)[:1:2]
tuple(x)[0]
tuple(x)[:1]
tuple(x)[:1:1]
tuple(x)[:1:2]
list(i for i in x)[0]
list(i for i in x)[:1]
list(i for i in x)[:1:1]
list(i for i in x)[:1:2]
[i for i in x][0]
[i for i in x][:1]
[i for i in x][:1:1]
[i for i in x][:1:2]
# OK (not indexing (solely) the first element)
list(x)
@@ -29,6 +17,9 @@ list(x)[::]
[i for i in x]
[i for i in x][1]
[i for i in x][-1]
[i for i in x][:1]
[i for i in x][:1:1]
[i for i in x][:1:2]
[i for i in x][1:]
[i for i in x][:3:2]
[i for i in x][::2]
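With the slice forms moved to the OK section, the rule is left flagging only true first-element indexing, where the usual replacement avoids materializing the whole sequence (a sketch of the idiom, not the rule's autofix verbatim):

```python
x = range(10)

first = next(iter(x))  # instead of list(x)[0], tuple(x)[0], or [i for i in x][0]
```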


@@ -0,0 +1,5 @@
import os # ruff: noqa: F401
def f():
    x = 1


@@ -4,9 +4,10 @@ use anyhow::{bail, Result};
use libcst_native::{
Codegen, CodegenState, ImportNames, ParenthesizableWhitespace, SmallStatement, Statement,
};
use rustpython_parser::ast::{Ranged, Stmt};
use ruff_python_ast::{Ranged, Stmt};
use ruff_python_ast::source_code::{Locator, Stylist};
use ruff_python_codegen::Stylist;
use ruff_source_file::Locator;
use crate::cst::helpers::compose_module_path;
use crate::cst::matchers::match_statement;


@@ -1,13 +1,14 @@
//! Interface for generating autofix edits from higher-level actions (e.g., "remove an argument").
use anyhow::{bail, Result};
use ruff_python_ast::{self as ast, ExceptHandler, Expr, Keyword, Ranged, Stmt};
use ruff_python_parser::{lexer, Mode};
use ruff_text_size::{TextLen, TextRange, TextSize};
use rustpython_parser::ast::{self, ExceptHandler, Expr, Keyword, Ranged, Stmt};
use rustpython_parser::{lexer, Mode};
use ruff_diagnostics::Edit;
use ruff_python_ast::helpers;
use ruff_python_ast::source_code::{Indexer, Locator, Stylist};
use ruff_python_trivia::{is_python_whitespace, NewlineWithTrailingNewline, PythonWhitespace};
use ruff_python_codegen::Stylist;
use ruff_python_index::Indexer;
use ruff_python_trivia::{has_leading_content, is_python_whitespace, PythonWhitespace};
use ruff_source_file::{Locator, NewlineWithTrailingNewline};
use crate::autofix::codemods;
@@ -41,11 +42,9 @@ pub(crate) fn delete_stmt(
if let Some(semicolon) = trailing_semicolon(stmt.end(), locator) {
let next = next_stmt_break(semicolon, locator);
Edit::deletion(stmt.start(), next)
} else if helpers::has_leading_content(stmt.start(), locator) {
} else if has_leading_content(stmt.start(), locator) {
Edit::range_deletion(stmt.range())
} else if let Some(start) =
helpers::preceded_by_continuations(stmt.start(), locator, indexer)
{
} else if let Some(start) = indexer.preceded_by_continuations(stmt.start(), locator) {
Edit::range_deletion(TextRange::new(start, stmt.end()))
} else {
let range = locator.full_lines_range(stmt.range());
@@ -295,11 +294,11 @@ fn next_stmt_break(semicolon: TextSize, locator: &Locator) -> TextSize {
#[cfg(test)]
mod tests {
use anyhow::Result;
use ruff_python_ast::{Ranged, Suite};
use ruff_python_parser::Parse;
use ruff_text_size::TextSize;
use rustpython_parser::ast::{Ranged, Suite};
use rustpython_parser::Parse;
use ruff_python_ast::source_code::Locator;
use ruff_source_file::Locator;
use crate::autofix::edits::{next_stmt_break, trailing_semicolon};


@@ -1,11 +1,11 @@
use itertools::Itertools;
use std::collections::BTreeSet;
use itertools::Itertools;
use ruff_text_size::{TextLen, TextRange, TextSize};
use rustc_hash::{FxHashMap, FxHashSet};
use ruff_diagnostics::{Diagnostic, Edit, Fix, IsolationLevel};
use ruff_python_ast::source_code::Locator;
use ruff_source_file::Locator;
use crate::autofix::source_map::SourceMap;
use crate::linter::FixTable;
@@ -59,35 +59,30 @@ fn apply_fixes<'a>(
})
.sorted_by(|(rule1, fix1), (rule2, fix2)| cmp_fix(*rule1, *rule2, fix1, fix2))
{
// If we already applied an identical fix as part of another correction, skip
// any re-application.
if fix.edits().iter().all(|edit| applied.contains(edit)) {
*fixed.entry(rule).or_default() += 1;
continue;
}
let mut edits = fix
.edits()
.iter()
.filter(|edit| !applied.contains(edit))
.peekable();
// Best-effort approach: if this fix overlaps with a fix we've already applied,
// skip it.
if last_pos.map_or(false, |last_pos| {
fix.min_start()
.map_or(false, |fix_location| last_pos >= fix_location)
}) {
continue;
}
// If the fix contains at least one new edit, enforce isolation and positional requirements.
if let Some(first) = edits.peek() {
// If this fix requires isolation, and we've already applied another fix in the
// same isolation group, skip it.
if let IsolationLevel::Group(id) = fix.isolation() {
if !isolated.insert(id) {
continue;
}
}
// If this fix requires isolation, and we've already applied another fix in the
// same isolation group, skip it.
if let IsolationLevel::Group(id) = fix.isolation() {
if !isolated.insert(id) {
// If this fix overlaps with a fix we've already applied, skip it.
if last_pos.map_or(false, |last_pos| last_pos >= first.start()) {
continue;
}
}
for edit in fix
.edits()
.iter()
.sorted_unstable_by_key(|edit| edit.start())
{
let mut applied_edits = Vec::with_capacity(fix.edits().len());
for edit in edits {
// Add all contents from `last_pos` to `fix.location`.
let slice = locator.slice(TextRange::new(last_pos.unwrap_or_default(), edit.start()));
output.push_str(slice);
@@ -103,9 +98,10 @@ fn apply_fixes<'a>(
// Track that the edit was applied.
last_pos = Some(edit.end());
applied.insert(edit);
applied_edits.push(edit);
}
applied.extend(applied_edits.drain(..));
*fixed.entry(rule).or_default() += 1;
}
@@ -146,7 +142,7 @@ mod tests {
use ruff_diagnostics::Diagnostic;
use ruff_diagnostics::Edit;
use ruff_diagnostics::Fix;
use ruff_python_ast::source_code::Locator;
use ruff_source_file::Locator;
use crate::autofix::source_map::SourceMarker;
use crate::autofix::{apply_fixes, FixResult};


@@ -1,4 +1,4 @@
use rustpython_parser::ast::{Arg, Ranged};
use ruff_python_ast::{Arg, Ranged};
use crate::checkers::ast::Checker;
use crate::codes::Rule;


@@ -1,4 +1,4 @@
use rustpython_parser::ast::Arguments;
use ruff_python_ast::Arguments;
use crate::checkers::ast::Checker;
use crate::codes::Rule;


@@ -1,7 +1,8 @@
use ruff_diagnostics::{Diagnostic, Fix};
use crate::checkers::ast::Checker;
use crate::codes::Rule;
use crate::rules::{flake8_import_conventions, flake8_pyi, pyflakes, pylint};
use ruff_diagnostics::{Diagnostic, Fix};
/// Run lint rules over the [`Binding`]s.
pub(crate) fn bindings(checker: &mut Checker) {


@@ -1,4 +1,4 @@
use rustpython_parser::ast::Comprehension;
use ruff_python_ast::Comprehension;
use crate::checkers::ast::Checker;
use crate::codes::Rule;


@@ -1,4 +1,4 @@
use rustpython_parser::ast::{self, Stmt};
use ruff_python_ast::{self as ast, Stmt};
use crate::checkers::ast::Checker;
use crate::codes::Rule;


@@ -5,7 +5,7 @@ use ruff_python_semantic::{Binding, BindingKind, ScopeKind};
use crate::checkers::ast::Checker;
use crate::codes::Rule;
use crate::rules::{flake8_type_checking, flake8_unused_arguments, pyflakes, pylint};
use crate::rules::{flake8_pyi, flake8_type_checking, flake8_unused_arguments, pyflakes, pylint};
/// Run lint rules over all deferred scopes in the [`SemanticModel`].
pub(crate) fn deferred_scopes(checker: &mut Checker) {
@@ -24,6 +24,10 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
Rule::UnusedImport,
Rule::UnusedLambdaArgument,
Rule::UnusedMethodArgument,
Rule::UnusedPrivateProtocol,
Rule::UnusedPrivateTypeAlias,
Rule::UnusedPrivateTypeVar,
Rule::UnusedPrivateTypedDict,
Rule::UnusedStaticMethodArgument,
Rule::UnusedVariable,
]) {
@@ -214,6 +218,21 @@ pub(crate) fn deferred_scopes(checker: &mut Checker) {
}
}
        if checker.is_stub {
            if checker.enabled(Rule::UnusedPrivateTypeVar) {
                flake8_pyi::rules::unused_private_type_var(checker, scope, &mut diagnostics);
            }
            if checker.enabled(Rule::UnusedPrivateProtocol) {
                flake8_pyi::rules::unused_private_protocol(checker, scope, &mut diagnostics);
            }
            if checker.enabled(Rule::UnusedPrivateTypeAlias) {
                flake8_pyi::rules::unused_private_type_alias(checker, scope, &mut diagnostics);
            }
            if checker.enabled(Rule::UnusedPrivateTypedDict) {
                flake8_pyi::rules::unused_private_typed_dict(checker, scope, &mut diagnostics);
            }
        }
if matches!(
scope.kind,
ScopeKind::Function(_) | ScopeKind::AsyncFunction(_) | ScopeKind::Lambda(_)


@@ -1,6 +1,6 @@
use ruff_python_ast::str::raw_contents_range;
use ruff_python_ast::Ranged;
use ruff_text_size::TextRange;
use rustpython_parser::ast::Ranged;
use ruff_python_semantic::{BindingKind, ContextualizedDefinition, Export};


@@ -1,4 +1,4 @@
use rustpython_parser::ast::{self, ExceptHandler, Ranged};
use ruff_python_ast::{self as ast, ExceptHandler, Ranged};
use crate::checkers::ast::Checker;
use crate::registry::Rule;


@@ -1,5 +1,5 @@
use rustpython_format::cformat::{CFormatError, CFormatErrorType};
use rustpython_parser::ast::{self, Constant, Expr, ExprContext, Operator, Ranged};
use ruff_python_ast::{self as ast, Constant, Expr, ExprContext, Operator, Ranged};
use ruff_python_literal::cformat::{CFormatError, CFormatErrorType};
use ruff_diagnostics::Diagnostic;
@@ -849,6 +849,7 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
Rule::OsPathGetatime,
Rule::OsPathGetmtime,
Rule::OsPathGetctime,
Rule::Glob,
]) {
flake8_use_pathlib::rules::replaceable_by_pathlib(checker, func);
}
@@ -1056,7 +1057,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
op: Operator::Add, ..
}) => {
if checker.enabled(Rule::ExplicitStringConcatenation) {
if let Some(diagnostic) = flake8_implicit_str_concat::rules::explicit(expr) {
if let Some(diagnostic) =
flake8_implicit_str_concat::rules::explicit(expr, checker.locator)
{
checker.diagnostics.push(diagnostic);
}
}


@@ -1,4 +1,4 @@
use rustpython_parser::ast::Suite;
use ruff_python_ast::Suite;
use crate::checkers::ast::Checker;
use crate::codes::Rule;


@@ -1,4 +1,4 @@
use rustpython_parser::ast::{self, Expr, Ranged, Stmt};
use ruff_python_ast::{self as ast, Expr, Ranged, Stmt};
use ruff_diagnostics::Diagnostic;
use ruff_python_ast::helpers;
@@ -396,6 +396,9 @@ pub(crate) fn statement(stmt: &Stmt, checker: &mut Checker) {
flake8_django::rules::model_without_dunder_str(checker, class_def);
}
}
if checker.enabled(Rule::EqWithoutHash) {
pylint::rules::object_without_hash_method(checker, class_def);
}
if checker.enabled(Rule::GlobalStatement) {
pylint::rules::global_statement(checker, name);
}


@@ -1,4 +1,4 @@
use rustpython_parser::ast::Stmt;
use ruff_python_ast::Stmt;
use crate::checkers::ast::Checker;
use crate::codes::Rule;


@@ -1,7 +1,6 @@
use ruff_text_size::TextRange;
use rustpython_parser::ast::Expr;
use ruff_python_ast::{Expr, TypeParam};
use ruff_python_semantic::{ScopeId, Snapshot};
use ruff_text_size::TextRange;
/// A collection of AST nodes that are deferred for later analysis.
/// Used to, e.g., store functions, whose bodies shouldn't be analyzed until all
@@ -11,6 +10,7 @@ pub(crate) struct Deferred<'a> {
pub(crate) scopes: Vec<ScopeId>,
pub(crate) string_type_definitions: Vec<(TextRange, &'a str, Snapshot)>,
pub(crate) future_type_definitions: Vec<(&'a Expr, Snapshot)>,
pub(crate) type_param_definitions: Vec<(&'a TypeParam, Snapshot)>,
pub(crate) functions: Vec<Snapshot>,
pub(crate) lambdas: Vec<(&'a Expr, Snapshot)>,
pub(crate) for_loops: Vec<Snapshot>,


@@ -30,21 +30,22 @@ use std::path::Path;
use itertools::Itertools;
use log::error;
use ruff_text_size::{TextRange, TextSize};
use rustpython_parser::ast::{
self, Arg, ArgWithDefault, Arguments, Comprehension, Constant, ElifElseClause, ExceptHandler,
Expr, ExprContext, Keyword, Pattern, Ranged, Stmt, Suite, UnaryOp,
use ruff_python_ast::{
self as ast, Arg, ArgWithDefault, Arguments, Comprehension, Constant, ElifElseClause,
ExceptHandler, Expr, ExprContext, Keyword, Pattern, Ranged, Stmt, Suite, UnaryOp,
};
use ruff_text_size::{TextRange, TextSize};
use ruff_diagnostics::{Diagnostic, IsolationLevel};
use ruff_python_ast::all::{extract_all_names, DunderAllFlags};
use ruff_python_ast::helpers::{extract_handled_exceptions, to_module_path};
use ruff_python_ast::identifier::Identifier;
use ruff_python_ast::source_code::{Generator, Indexer, Locator, Quote, Stylist};
use ruff_python_ast::str::trailing_quote;
use ruff_python_ast::typing::{parse_type_annotation, AnnotationKind};
use ruff_python_ast::visitor::{walk_except_handler, walk_pattern, Visitor};
use ruff_python_ast::{helpers, str, visitor};
use ruff_python_codegen::{Generator, Quote, Stylist};
use ruff_python_index::Indexer;
use ruff_python_parser::typing::{parse_type_annotation, AnnotationKind};
use ruff_python_semantic::analyze::{typing, visibility};
use ruff_python_semantic::{
BindingFlags, BindingId, BindingKind, Exceptions, Export, FromImport, Globals, Import, Module,
@@ -52,6 +53,7 @@ use ruff_python_semantic::{
};
use ruff_python_stdlib::builtins::{BUILTINS, MAGIC_GLOBALS};
use ruff_python_stdlib::path::is_python_stub_file;
use ruff_source_file::Locator;
use crate::checkers::ast::deferred::Deferred;
use crate::docstrings::extraction::ExtractionTarget;
@@ -451,12 +453,14 @@ where
args,
decorator_list,
returns,
type_params,
..
})
| Stmt::AsyncFunctionDef(ast::StmtAsyncFunctionDef {
body,
args,
decorator_list,
type_params,
returns,
..
}) => {
@@ -470,6 +474,12 @@ where
// are enabled.
let runtime_annotation = !self.semantic.future_annotations();
self.semantic.push_scope(ScopeKind::Type);
for type_param in type_params {
self.visit_type_param(type_param);
}
for arg_with_default in args
.posonlyargs
.iter()
@@ -540,18 +550,25 @@ where
bases,
keywords,
decorator_list,
type_params,
..
},
) => {
for decorator in decorator_list {
self.visit_decorator(decorator);
}
self.semantic.push_scope(ScopeKind::Type);
for type_param in type_params {
self.visit_type_param(type_param);
}
for expr in bases {
self.visit_expr(expr);
}
for keyword in keywords {
self.visit_keyword(keyword);
}
for decorator in decorator_list {
self.visit_decorator(decorator);
}
let definition = docstrings::extraction::extract_definition(
ExtractionTarget::Class,
@@ -560,7 +577,6 @@ where
&self.semantic.definitions,
);
self.semantic.push_definition(definition);
self.semantic.push_scope(ScopeKind::Class(class_def));
// Extract any global bindings from the class body.
@@ -570,6 +586,20 @@ where
self.visit_body(body);
}
Stmt::TypeAlias(ast::StmtTypeAlias {
range: _range,
name,
type_params,
value,
}) => {
self.semantic.push_scope(ScopeKind::Type);
for type_param in type_params {
self.visit_type_param(type_param);
}
self.visit_expr(value);
self.semantic.pop_scope();
self.visit_expr(name);
}
Stmt::Try(ast::StmtTry {
body,
handlers,
@@ -713,8 +743,9 @@ where
| Stmt::AsyncFunctionDef(ast::StmtAsyncFunctionDef { name, .. }) => {
let scope_id = self.semantic.scope_id;
self.deferred.scopes.push(scope_id);
self.semantic.pop_scope();
self.semantic.pop_scope(); // Function scope
self.semantic.pop_definition();
self.semantic.pop_scope(); // Type parameter scope
self.add_binding(
name,
stmt.identifier(),
@@ -725,8 +756,9 @@ where
Stmt::ClassDef(ast::StmtClassDef { name, .. }) => {
let scope_id = self.semantic.scope_id;
self.deferred.scopes.push(scope_id);
self.semantic.pop_scope();
self.semantic.pop_scope(); // Class scope
self.semantic.pop_definition();
self.semantic.pop_scope(); // Type parameter scope
self.add_binding(
name,
stmt.identifier(),
@@ -1329,6 +1361,26 @@ where
self.visit_stmt(stmt);
}
}
    fn visit_type_param(&mut self, type_param: &'b ast::TypeParam) {
        // Step 1: Binding
        match type_param {
            ast::TypeParam::TypeVar(ast::TypeParamTypeVar { name, range, .. })
            | ast::TypeParam::TypeVarTuple(ast::TypeParamTypeVarTuple { name, range })
            | ast::TypeParam::ParamSpec(ast::TypeParamParamSpec { name, range }) => {
                self.add_binding(
                    name.as_str(),
                    *range,
                    BindingKind::TypeParam,
                    BindingFlags::empty(),
                );
            }
        }

        // Step 2: Traversal
        self.deferred
            .type_param_definitions
            .push((type_param, self.semantic.snapshot()));
    }
}
impl<'a> Checker<'a> {
@@ -1472,6 +1524,11 @@ impl<'a> Checker<'a> {
// Create the `Binding`.
let binding_id = self.semantic.push_binding(range, kind, flags);
        // If the name is private, mark it as such.
        if name.starts_with('_') {
            self.semantic.bindings[binding_id].flags |= BindingFlags::PRIVATE_DECLARATION;
        }
// If there's an existing binding in this scope, copy its references.
if let Some(shadowed_id) = self.semantic.scopes[scope_id].get(name) {
// If this is an annotation, and we already have an existing value in the same scope,
@@ -1540,10 +1597,10 @@ impl<'a> Checker<'a> {
}
fn handle_node_load(&mut self, expr: &Expr) {
let Expr::Name(ast::ExprName { id, .. }) = expr else {
let Expr::Name(expr) = expr else {
return;
};
self.semantic.resolve_load(id, expr.range());
self.semantic.resolve_load(expr);
}
fn handle_node_store(&mut self, id: &'a str, expr: &Expr) {
@@ -1686,11 +1743,29 @@ impl<'a> Checker<'a> {
}
}
    fn visit_deferred_type_param_definitions(&mut self) {
        while !self.deferred.type_param_definitions.is_empty() {
            let type_params = std::mem::take(&mut self.deferred.type_param_definitions);
            for (type_param, snapshot) in type_params {
                self.semantic.restore(snapshot);
                if let ast::TypeParam::TypeVar(ast::TypeParamTypeVar {
                    bound: Some(bound), ..
                }) = type_param
                {
                    self.visit_expr(bound);
                }
            }
        }
    }
fn visit_deferred_string_type_definitions(&mut self, allocator: &'a typed_arena::Arena<Expr>) {
while !self.deferred.string_type_definitions.is_empty() {
let type_definitions = std::mem::take(&mut self.deferred.string_type_definitions);
for (range, value, snapshot) in type_definitions {
if let Ok((expr, kind)) = parse_type_annotation(value, range, self.locator) {
if let Ok((expr, kind)) =
parse_type_annotation(value, range, self.locator.contents())
{
let expr = allocator.alloc(expr);
self.semantic.restore(snapshot);
@@ -1873,6 +1948,7 @@ pub(crate) fn check_ast(
checker.visit_deferred_functions();
checker.visit_deferred_lambdas();
checker.visit_deferred_future_type_definitions();
checker.visit_deferred_type_param_definitions();
let allocator = typed_arena::Arena::new();
checker.visit_deferred_string_type_definitions(&allocator);
checker.visit_exports();


@@ -2,14 +2,16 @@
use std::borrow::Cow;
use std::path::Path;
use rustpython_parser::ast::{self, Ranged, Stmt, Suite};
use ruff_python_ast::{self as ast, Ranged, Stmt, Suite};
use ruff_diagnostics::Diagnostic;
use ruff_python_ast::helpers::to_module_path;
use ruff_python_ast::imports::{ImportMap, ModuleImport};
use ruff_python_ast::source_code::{Indexer, Locator, Stylist};
use ruff_python_ast::statement_visitor::StatementVisitor;
use ruff_python_codegen::Stylist;
use ruff_python_index::Indexer;
use ruff_python_stdlib::path::is_python_stub_file;
use ruff_source_file::Locator;
use crate::directives::IsortDirectives;
use crate::registry::Rule;


@@ -1,16 +1,17 @@
use ruff_python_parser::lexer::LexResult;
use ruff_text_size::TextRange;
use rustpython_parser::lexer::LexResult;
use ruff_diagnostics::{Diagnostic, DiagnosticKind};
use ruff_python_ast::source_code::{Locator, Stylist};
use ruff_python_ast::token_kind::TokenKind;
use ruff_python_codegen::Stylist;
use ruff_python_parser::TokenKind;
use ruff_source_file::Locator;
use crate::registry::{AsRule, Rule};
use crate::rules::pycodestyle::rules::logical_lines::{
extraneous_whitespace, indentation, missing_whitespace, missing_whitespace_after_keyword,
missing_whitespace_around_operator, space_around_operator, whitespace_around_keywords,
whitespace_around_named_parameter_equals, whitespace_before_comment,
whitespace_before_parameters, LogicalLines, TokenFlags,
missing_whitespace_around_operator, space_after_comma, space_around_operator,
whitespace_around_keywords, whitespace_around_named_parameter_equals,
whitespace_before_comment, whitespace_before_parameters, LogicalLines, TokenFlags,
};
use crate::settings::Settings;
@@ -60,6 +61,9 @@ pub(crate) fn check_logical_lines(
missing_whitespace_around_operator(&line, &mut context);
missing_whitespace(&line, should_fix_missing_whitespace, &mut context);
}
if line.flags().contains(TokenFlags::PUNCTUATION) {
space_after_comma(&line, &mut context);
}
if line
.flags()


@@ -3,11 +3,11 @@
use std::path::Path;
use itertools::Itertools;
use ruff_python_ast::Ranged;
use ruff_text_size::{TextLen, TextRange, TextSize};
use rustpython_parser::ast::Ranged;
use ruff_diagnostics::{Diagnostic, Edit, Fix};
use ruff_python_ast::source_code::Locator;
use ruff_source_file::Locator;
use crate::noqa;
use crate::noqa::{Directive, FileExemption, NoqaDirectives, NoqaMapping};


@@ -2,8 +2,9 @@
use ruff_text_size::TextSize;
use ruff_diagnostics::Diagnostic;
use ruff_python_ast::source_code::{Indexer, Locator, Stylist};
use ruff_python_trivia::UniversalNewlines;
use ruff_python_codegen::Stylist;
use ruff_python_index::Indexer;
use ruff_source_file::{Locator, UniversalNewlines};
use crate::registry::Rule;
use crate::rules::flake8_copyright::rules::missing_copyright_notice;
@@ -12,7 +13,6 @@ use crate::rules::pycodestyle::rules::{
tab_indentation, trailing_whitespace,
};
use crate::rules::pylint;
use crate::rules::pyupgrade::rules::unnecessary_coding_comment;
use crate::settings::Settings;
pub(crate) fn check_physical_lines(
@@ -27,7 +27,6 @@ pub(crate) fn check_physical_lines(
let enforce_doc_line_too_long = settings.rules.enabled(Rule::DocLineTooLong);
let enforce_line_too_long = settings.rules.enabled(Rule::LineTooLong);
let enforce_no_newline_at_end_of_file = settings.rules.enabled(Rule::MissingNewlineAtEndOfFile);
let enforce_unnecessary_coding_comment = settings.rules.enabled(Rule::UTF8EncodingDeclaration);
let enforce_mixed_spaces_and_tabs = settings.rules.enabled(Rule::MixedSpacesAndTabs);
let enforce_bidirectional_unicode = settings.rules.enabled(Rule::BidirectionalUnicode);
let enforce_trailing_whitespace = settings.rules.enabled(Rule::TrailingWhitespace);
@@ -36,27 +35,9 @@ pub(crate) fn check_physical_lines(
let enforce_tab_indentation = settings.rules.enabled(Rule::TabIndentation);
let enforce_copyright_notice = settings.rules.enabled(Rule::MissingCopyrightNotice);
let fix_unnecessary_coding_comment = settings.rules.should_fix(Rule::UTF8EncodingDeclaration);
let mut commented_lines_iter = indexer.comment_ranges().iter().peekable();
let mut doc_lines_iter = doc_lines.iter().peekable();
for (index, line) in locator.contents().universal_newlines().enumerate() {
while commented_lines_iter
.next_if(|comment_range| line.range().contains_range(**comment_range))
.is_some()
{
if enforce_unnecessary_coding_comment {
if index < 2 {
if let Some(diagnostic) =
unnecessary_coding_comment(&line, fix_unnecessary_coding_comment)
{
diagnostics.push(diagnostic);
}
}
}
}
for line in locator.contents().universal_newlines() {
while doc_lines_iter
.next_if(|doc_line_start| line.range().contains_inclusive(**doc_line_start))
.is_some()
@@ -118,10 +99,12 @@ pub(crate) fn check_physical_lines(
#[cfg(test)]
mod tests {
use rustpython_parser::lexer::lex;
use rustpython_parser::Mode;
use ruff_python_parser::lexer::lex;
use ruff_python_parser::Mode;
use ruff_python_ast::source_code::{Indexer, Locator, Stylist};
use ruff_python_codegen::Stylist;
use ruff_python_index::Indexer;
use ruff_source_file::Locator;
use crate::line_width::LineLength;
use crate::registry::Rule;


@@ -2,11 +2,12 @@
use std::path::Path;
use rustpython_parser::lexer::LexResult;
use rustpython_parser::Tok;
use ruff_python_parser::lexer::LexResult;
use ruff_python_parser::Tok;
use ruff_diagnostics::Diagnostic;
use ruff_python_ast::source_code::{Indexer, Locator};
use ruff_python_index::Indexer;
use ruff_source_file::Locator;
use crate::directives::TodoComment;
use crate::lex::docstring_detection::StateMachine;
@@ -68,6 +69,10 @@ pub(crate) fn check_tokens(
eradicate::rules::commented_out_code(&mut diagnostics, locator, indexer, settings);
}
if settings.rules.enabled(Rule::UTF8EncodingDeclaration) {
pyupgrade::rules::unnecessary_coding_comment(&mut diagnostics, locator, indexer, settings);
}
if settings.rules.enabled(Rule::InvalidEscapeSequence) {
for (tok, range) in tokens.iter().flatten() {
if tok.is_string() {


@@ -84,6 +84,8 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pycodestyle, "E227") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MissingWhitespaceAroundBitwiseOrShiftOperator),
(Pycodestyle, "E228") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MissingWhitespaceAroundModuloOperator),
(Pycodestyle, "E231") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MissingWhitespace),
(Pycodestyle, "E241") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MultipleSpacesAfterComma),
(Pycodestyle, "E242") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::TabAfterComma),
(Pycodestyle, "E251") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::UnexpectedSpacesAroundKeywordParameterEquals),
(Pycodestyle, "E252") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::MissingWhitespaceAroundParameterEquals),
(Pycodestyle, "E261") => (RuleGroup::Nursery, rules::pycodestyle::rules::logical_lines::TooFewSpacesBeforeInlineComment),
@@ -223,6 +225,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pylint, "W0711") => (RuleGroup::Unspecified, rules::pylint::rules::BinaryOpException),
(Pylint, "W1508") => (RuleGroup::Unspecified, rules::pylint::rules::InvalidEnvvarDefault),
(Pylint, "W1509") => (RuleGroup::Unspecified, rules::pylint::rules::SubprocessPopenPreexecFn),
(Pylint, "W1641") => (RuleGroup::Nursery, rules::pylint::rules::EqWithoutHash),
(Pylint, "W2901") => (RuleGroup::Unspecified, rules::pylint::rules::RedefinedLoopName),
(Pylint, "W3301") => (RuleGroup::Unspecified, rules::pylint::rules::NestedMinMax),
@@ -632,6 +635,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Pyi, "015") => (RuleGroup::Unspecified, rules::flake8_pyi::rules::AssignmentDefaultInStub),
(Flake8Pyi, "016") => (RuleGroup::Unspecified, rules::flake8_pyi::rules::DuplicateUnionMember),
(Flake8Pyi, "017") => (RuleGroup::Unspecified, rules::flake8_pyi::rules::ComplexAssignmentInStub),
(Flake8Pyi, "018") => (RuleGroup::Unspecified, rules::flake8_pyi::rules::UnusedPrivateTypeVar),
(Flake8Pyi, "020") => (RuleGroup::Unspecified, rules::flake8_pyi::rules::QuotedAnnotationInStub),
(Flake8Pyi, "021") => (RuleGroup::Unspecified, rules::flake8_pyi::rules::DocstringInStub),
(Flake8Pyi, "024") => (RuleGroup::Unspecified, rules::flake8_pyi::rules::CollectionsNamedTuple),
@@ -649,7 +653,10 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Pyi, "043") => (RuleGroup::Unspecified, rules::flake8_pyi::rules::TSuffixedTypeAlias),
(Flake8Pyi, "044") => (RuleGroup::Unspecified, rules::flake8_pyi::rules::FutureAnnotationsInStub),
(Flake8Pyi, "045") => (RuleGroup::Unspecified, rules::flake8_pyi::rules::IterMethodReturnIterable),
(Flake8Pyi, "046") => (RuleGroup::Unspecified, rules::flake8_pyi::rules::UnusedPrivateProtocol),
(Flake8Pyi, "047") => (RuleGroup::Unspecified, rules::flake8_pyi::rules::UnusedPrivateTypeAlias),
(Flake8Pyi, "048") => (RuleGroup::Unspecified, rules::flake8_pyi::rules::StubBodyMultipleStatements),
(Flake8Pyi, "049") => (RuleGroup::Unspecified, rules::flake8_pyi::rules::UnusedPrivateTypedDict),
(Flake8Pyi, "050") => (RuleGroup::Unspecified, rules::flake8_pyi::rules::NoReturnArgumentAnnotationInStub),
(Flake8Pyi, "052") => (RuleGroup::Unspecified, rules::flake8_pyi::rules::UnannotatedAssignmentInStub),
(Flake8Pyi, "054") => (RuleGroup::Unspecified, rules::flake8_pyi::rules::NumericLiteralTooLong),
@@ -759,6 +766,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8UsePathlib, "204") => (RuleGroup::Unspecified, rules::flake8_use_pathlib::rules::OsPathGetmtime),
(Flake8UsePathlib, "205") => (RuleGroup::Unspecified, rules::flake8_use_pathlib::rules::OsPathGetctime),
(Flake8UsePathlib, "206") => (RuleGroup::Unspecified, rules::flake8_use_pathlib::rules::OsSepSplit),
(Flake8UsePathlib, "207") => (RuleGroup::Unspecified, rules::flake8_use_pathlib::rules::Glob),
// flake8-logging-format
(Flake8LoggingFormat, "001") => (RuleGroup::Unspecified, rules::flake8_logging_format::violations::LoggingStringFormat),


@@ -3,11 +3,12 @@
use std::str::FromStr;
use bitflags::bitflags;
use ruff_python_parser::lexer::LexResult;
use ruff_python_parser::Tok;
use ruff_text_size::{TextLen, TextRange, TextSize};
use rustpython_parser::lexer::LexResult;
use rustpython_parser::Tok;
use ruff_python_ast::source_code::{Indexer, Locator};
use ruff_python_index::Indexer;
use ruff_source_file::Locator;
use crate::noqa::NoqaMapping;
use crate::settings::Settings;
@@ -348,11 +349,12 @@ impl TodoDirectiveKind {
#[cfg(test)]
mod tests {
use ruff_python_parser::lexer::LexResult;
use ruff_python_parser::{lexer, Mode};
use ruff_text_size::{TextLen, TextRange, TextSize};
use rustpython_parser::lexer::LexResult;
use rustpython_parser::{lexer, Mode};
use ruff_python_ast::source_code::{Indexer, Locator};
use ruff_python_index::Indexer;
use ruff_source_file::Locator;
use crate::directives::{
extract_isort_directives, extract_noqa_line_for, TodoDirective, TodoDirectiveKind,


@@ -3,14 +3,13 @@
use std::iter::FusedIterator;
use ruff_python_ast::{self as ast, Constant, Expr, Ranged, Stmt, Suite};
use ruff_python_parser::lexer::LexResult;
use ruff_python_parser::Tok;
use ruff_text_size::TextSize;
use rustpython_parser::ast::{self, Constant, Expr, Ranged, Stmt, Suite};
use rustpython_parser::lexer::LexResult;
use rustpython_parser::Tok;
use ruff_python_ast::source_code::Locator;
use ruff_python_ast::statement_visitor::{walk_stmt, StatementVisitor};
use ruff_python_trivia::UniversalNewlineIterator;
use ruff_source_file::{Locator, UniversalNewlineIterator};
/// Extract doc lines (standalone comments) from a token sequence.
pub(crate) fn doc_lines_from_tokens(lxr: &[LexResult]) -> DocLines {

View File

@@ -1,6 +1,6 @@
//! Extract docstrings from an AST.
use rustpython_parser::ast::{self, Constant, Expr, Stmt};
use ruff_python_ast::{self as ast, Constant, Expr, Stmt};
use ruff_python_semantic::{Definition, DefinitionId, Definitions, Member, MemberKind};


@@ -1,8 +1,8 @@
use std::fmt::{Debug, Formatter};
use std::ops::Deref;
use ruff_python_ast::{Expr, Ranged};
use ruff_text_size::{TextRange, TextSize};
use rustpython_parser::ast::{Expr, Ranged};
use ruff_python_semantic::Definition;


@@ -5,7 +5,7 @@ use ruff_python_ast::docstrings::{leading_space, leading_words};
use ruff_text_size::{TextLen, TextRange, TextSize};
use strum_macros::EnumIter;
use ruff_python_trivia::{Line, UniversalNewlineIterator, UniversalNewlines};
use ruff_source_file::{Line, UniversalNewlineIterator, UniversalNewlines};
use crate::docstrings::styles::SectionStyle;
use crate::docstrings::{Docstring, DocstringBody};


@@ -1,15 +1,15 @@
//! Insert statements into Python code.
use std::ops::Add;
use ruff_python_ast::{Ranged, Stmt};
use ruff_python_parser::{lexer, Mode, Tok};
use ruff_text_size::TextSize;
use rustpython_parser::ast::{Ranged, Stmt};
use rustpython_parser::{lexer, Mode, Tok};
use ruff_diagnostics::Edit;
use ruff_python_ast::helpers::is_docstring_stmt;
use ruff_python_ast::source_code::{Locator, Stylist};
use ruff_python_trivia::{PythonWhitespace, UniversalNewlineIterator};
use ruff_textwrap::indent;
use ruff_python_codegen::Stylist;
use ruff_python_trivia::{textwrap::indent, PythonWhitespace};
use ruff_source_file::{Locator, UniversalNewlineIterator};
#[derive(Debug, Clone, PartialEq, Eq)]
pub(super) enum Placement<'a> {
@@ -299,13 +299,13 @@ fn match_leading_semicolon(s: &str) -> Option<TextSize> {
#[cfg(test)]
mod tests {
use anyhow::Result;
use ruff_python_ast::Suite;
use ruff_python_parser::lexer::LexResult;
use ruff_python_parser::Parse;
use ruff_text_size::TextSize;
use rustpython_parser::ast::Suite;
use rustpython_parser::lexer::LexResult;
use rustpython_parser::Parse;
use ruff_python_ast::source_code::{Locator, Stylist};
use ruff_python_trivia::LineEnding;
use ruff_python_codegen::Stylist;
use ruff_source_file::{LineEnding, Locator};
use super::Insertion;
@@ -313,7 +313,7 @@ mod tests {
     fn start_of_file() -> Result<()> {
         fn insert(contents: &str) -> Result<Insertion> {
             let program = Suite::parse(contents, "<filename>")?;
-            let tokens: Vec<LexResult> = ruff_rustpython::tokenize(contents);
+            let tokens: Vec<LexResult> = ruff_python_parser::tokenize(contents);
             let locator = Locator::new(contents);
             let stylist = Stylist::from_tokens(&tokens, &locator);
             Ok(Insertion::start_of_file(&program, &locator, &stylist))
@@ -424,7 +424,7 @@ x = 1
     #[test]
     fn start_of_block() {
         fn insert(contents: &str, offset: TextSize) -> Insertion {
-            let tokens: Vec<LexResult> = ruff_rustpython::tokenize(contents);
+            let tokens: Vec<LexResult> = ruff_python_parser::tokenize(contents);
             let locator = Locator::new(contents);
             let stylist = Stylist::from_tokens(&tokens, &locator);
             Insertion::start_of_block(offset, &locator, &stylist)


@@ -7,14 +7,15 @@ use std::error::Error;
 use anyhow::Result;
 use libcst_native::{ImportAlias, Name, NameOrAttribute};
+use ruff_python_ast::{self as ast, Ranged, Stmt, Suite};
 use ruff_text_size::TextSize;
-use rustpython_parser::ast::{self, Ranged, Stmt, Suite};
 use ruff_diagnostics::Edit;
 use ruff_python_ast::imports::{AnyImport, Import, ImportFrom};
-use ruff_python_ast::source_code::{Locator, Stylist};
+use ruff_python_codegen::Stylist;
 use ruff_python_semantic::SemanticModel;
-use ruff_textwrap::indent;
+use ruff_python_trivia::textwrap::indent;
+use ruff_source_file::Locator;
 use crate::autofix;
 use crate::autofix::codemods::CodegenStylist;


@@ -1,4 +1,5 @@
 use std::cmp::Ordering;
+use std::fmt::Display;
 use std::fs::File;
 use std::io::{BufReader, BufWriter, Read, Seek, SeekFrom, Write};
 use std::iter;
@@ -10,7 +11,9 @@ use serde::Serialize;
 use serde_json::error::Category;
 use ruff_diagnostics::Diagnostic;
-use ruff_python_trivia::{NewlineWithTrailingNewline, UniversalNewlineIterator};
+use ruff_python_parser::lexer::lex;
+use ruff_python_parser::Mode;
+use ruff_source_file::{NewlineWithTrailingNewline, UniversalNewlineIterator};
 use ruff_text_size::{TextRange, TextSize};
 use crate::autofix::source_map::{SourceMap, SourceMarker};
@@ -39,6 +42,20 @@ pub fn round_trip(path: &Path) -> anyhow::Result<String> {
     Ok(String::from_utf8(writer)?)
 }
+impl Display for SourceValue {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        match self {
+            SourceValue::String(string) => f.write_str(string),
+            SourceValue::StringArray(string_array) => {
+                for string in string_array {
+                    f.write_str(string)?;
+                }
+                Ok(())
+            }
+        }
+    }
+}
 impl Cell {
     /// Return the [`SourceValue`] of the cell.
     fn source(&self) -> &SourceValue {
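
Note on the added `Display` impl above: it makes `to_string()` available for both `SourceValue` variants. A hedged usage sketch (the helper name is hypothetical):

```rust
// Hypothetical helper: `Display` gives `to_string()` for free. Concatenating a
// `StringArray` without separators is sound because the Jupyter format stores
// each line of a cell's source with its trailing newline already included.
fn cell_text(cell: &Cell) -> String {
    cell.source().to_string()
}
```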
@@ -158,10 +175,7 @@ impl Notebook {
             )
         })?;
         // Check if tokenizing was successful and the file is non-empty
-        if (ruff_rustpython::tokenize(&contents))
-            .last()
-            .map_or(true, Result::is_err)
-        {
+        if lex(&contents, Mode::Module).any(|result| result.is_err()) {
             Diagnostic::new(
                 SyntaxError {
                     message: format!(
@@ -408,6 +422,11 @@ impl Notebook {
         self.content = transformed.to_string();
     }
+    /// Return a slice of [`Cell`] in the Jupyter notebook.
+    pub fn cells(&self) -> &[Cell] {
+        &self.raw.cells
+    }
     /// Return `true` if the notebook is a Python notebook, `false` otherwise.
     pub fn is_python_notebook(&self) -> bool {
         self.raw

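Note on the `Notebook` hunk above: the old probe inspected only the last token (and treated an empty file as an error), while the replacement reports a syntax error as soon as any token fails to lex, and an empty notebook no longer errors. A minimal sketch, with names taken from the diff:

```rust
use ruff_python_parser::lexer::lex;
use ruff_python_parser::Mode;

// Returns true if any token in `contents` fails to lex, mirroring the new
// `lex(..).any(..)` check; the old `tokenize(..).last()` probe looked only at
// the final token and also flagged empty input.
fn has_lex_error(contents: &str) -> bool {
    lex(contents, Mode::Module).any(|result| result.is_err())
}
```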

@@ -4,7 +4,7 @@
 //!
 //! TODO(charlie): Consolidate with the existing AST-based docstring extraction.
-use rustpython_parser::Tok;
+use ruff_python_parser::Tok;
 #[derive(Default, Copy, Clone)]
 enum State {


@@ -5,7 +5,6 @@
 //!
 //! [Ruff]: https://github.com/astral-sh/ruff
-pub use ruff_python_ast::source_code::round_trip;
 pub use rule_selector::RuleSelector;
 pub use rules::pycodestyle::rules::IOError;


@@ -6,14 +6,16 @@ use anyhow::{anyhow, Result};
 use colored::Colorize;
 use itertools::Itertools;
 use log::error;
+use ruff_python_parser::lexer::LexResult;
+use ruff_python_parser::ParseError;
 use rustc_hash::FxHashMap;
-use rustpython_parser::lexer::LexResult;
-use rustpython_parser::ParseError;
 use ruff_diagnostics::Diagnostic;
 use ruff_python_ast::imports::ImportMap;
-use ruff_python_ast::source_code::{Indexer, Locator, SourceFileBuilder, Stylist};
+use ruff_python_codegen::Stylist;
+use ruff_python_index::Indexer;
 use ruff_python_stdlib::path::is_python_stub_file;
+use ruff_source_file::{Locator, SourceFileBuilder};
 use crate::autofix::{fix_file, FixResult};
 use crate::checkers::ast::check_ast;
@@ -136,7 +138,7 @@ pub fn check_path(
         .iter_enabled()
         .any(|rule_code| rule_code.lint_source().is_imports());
     if use_ast || use_imports || use_doc_lines {
-        match ruff_rustpython::parse_program_tokens(tokens, &path.to_string_lossy()) {
+        match ruff_python_parser::parse_program_tokens(tokens, &path.to_string_lossy()) {
             Ok(python_ast) => {
                 if use_ast {
                     diagnostics.extend(check_ast(
@@ -258,7 +260,7 @@ pub fn add_noqa_to_path(path: &Path, package: Option<&Path>, settings: &Settings
     let contents = std::fs::read_to_string(path)?;
     // Tokenize once.
-    let tokens: Vec<LexResult> = ruff_rustpython::tokenize(&contents);
+    let tokens: Vec<LexResult> = ruff_python_parser::tokenize(&contents);
     // Map row and column locations to byte slices (lazily).
     let locator = Locator::new(&contents);
@@ -326,7 +328,7 @@ pub fn lint_only(
     source_kind: Option<&SourceKind>,
 ) -> LinterResult<(Vec<Message>, Option<ImportMap>)> {
     // Tokenize once.
-    let tokens: Vec<LexResult> = ruff_rustpython::tokenize(contents);
+    let tokens: Vec<LexResult> = ruff_python_parser::tokenize(contents);
     // Map row and column locations to byte slices (lazily).
     let locator = Locator::new(contents);
@@ -418,7 +420,7 @@ pub fn lint_fix<'a>(
     // Continuously autofix until the source code stabilizes.
     loop {
         // Tokenize once.
-        let tokens: Vec<LexResult> = ruff_rustpython::tokenize(&transformed);
+        let tokens: Vec<LexResult> = ruff_python_parser::tokenize(&transformed);
         // Map row and column locations to byte slices (lazily).
         let locator = Locator::new(&transformed);


@@ -7,9 +7,9 @@ use colored::Colorize;
 use fern;
 use log::Level;
 use once_cell::sync::Lazy;
-use rustpython_parser::{ParseError, ParseErrorType};
+use ruff_python_parser::{ParseError, ParseErrorType};
-use ruff_python_ast::source_code::{OneIndexed, SourceCode, SourceLocation};
+use ruff_source_file::{OneIndexed, SourceCode, SourceLocation};
 use crate::fs;
 use crate::jupyter::Notebook;


@@ -1,6 +1,6 @@
 use std::io::Write;
-use ruff_python_ast::source_code::SourceLocation;
+use ruff_source_file::SourceLocation;
 use crate::message::{Emitter, EmitterContext, Message};
 use crate::registry::AsRule;


@@ -2,12 +2,12 @@ use std::fmt::{Display, Formatter};
 use std::num::NonZeroUsize;
 use colored::{Color, ColoredString, Colorize, Styles};
 use itertools::Itertools;
 use ruff_text_size::{TextRange, TextSize};
 use similar::{ChangeTag, TextDiff};
 use ruff_diagnostics::{Applicability, Fix};
-use ruff_python_ast::source_code::{OneIndexed, SourceFile};
+use ruff_source_file::{OneIndexed, SourceFile};
 use crate::message::Message;
@@ -38,12 +38,7 @@ impl Display for Diff<'_> {
         let mut output = String::with_capacity(self.source_code.source_text().len());
         let mut last_end = TextSize::default();
-        for edit in self
-            .fix
-            .edits()
-            .iter()
-            .sorted_unstable_by_key(|edit| edit.start())
-        {
+        for edit in self.fix.edits() {
             output.push_str(
                 self.source_code
                     .slice(TextRange::new(last_end, edit.start())),

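On the `Diff` hunk above: dropping `sorted_unstable_by_key` leans on `Fix::edits()` now yielding edits already ordered by start offset. A hedged sketch of the stitch loop (the free-standing `apply_fix` helper is hypothetical; the real code slices via its `SourceCode`):

```rust
use ruff_diagnostics::Fix;
use ruff_text_size::TextSize;

// Copy the untouched span before each edit, then the edit's replacement,
// advancing `last_end`; assumes `fix.edits()` is ordered by start offset.
fn apply_fix(source_text: &str, fix: &Fix) -> String {
    let mut output = String::with_capacity(source_text.len());
    let mut last_end = TextSize::default();
    for edit in fix.edits() {
        output.push_str(&source_text[usize::from(last_end)..usize::from(edit.start())]);
        output.push_str(edit.content().unwrap_or_default());
        last_end = edit.end();
    }
    output.push_str(&source_text[usize::from(last_end)..]);
    output
}
```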

@@ -1,6 +1,6 @@
 use std::io::Write;
-use ruff_python_ast::source_code::SourceLocation;
+use ruff_source_file::SourceLocation;
 use crate::fs::relativize_path;
 use crate::message::{Emitter, EmitterContext, Message};


@@ -6,7 +6,7 @@ use serde::ser::SerializeSeq;
 use serde::{Serialize, Serializer};
 use serde_json::json;
-use ruff_python_ast::source_code::SourceLocation;
+use ruff_source_file::SourceLocation;
 use crate::fs::{relativize_path, relativize_path_to};
 use crate::message::{Emitter, EmitterContext, Message};


@@ -4,7 +4,7 @@ use std::num::NonZeroUsize;
 use colored::Colorize;
-use ruff_python_ast::source_code::OneIndexed;
+use ruff_source_file::OneIndexed;
 use crate::fs::relativize_path;
 use crate::jupyter::{JupyterIndex, Notebook};


@@ -5,7 +5,7 @@ use serde::{Serialize, Serializer};
 use serde_json::{json, Value};
 use ruff_diagnostics::Edit;
-use ruff_python_ast::source_code::SourceCode;
+use ruff_source_file::SourceCode;
 use crate::message::{Emitter, EmitterContext, Message};
 use crate::registry::AsRule;


@@ -3,7 +3,7 @@ use std::path::Path;
 use quick_junit::{NonSuccessKind, Report, TestCase, TestCaseStatus, TestSuite};
-use ruff_python_ast::source_code::SourceLocation;
+use ruff_source_file::SourceLocation;
 use crate::message::{
     group_messages_by_filename, Emitter, EmitterContext, Message, MessageWithLocation,


@@ -16,7 +16,7 @@ pub use json_lines::JsonLinesEmitter;
 pub use junit::JunitEmitter;
 pub use pylint::PylintEmitter;
 use ruff_diagnostics::{Diagnostic, DiagnosticKind, Fix};
-use ruff_python_ast::source_code::{SourceFile, SourceLocation};
+use ruff_source_file::{SourceFile, SourceLocation};
 pub use text::TextEmitter;
 mod azure;
@@ -155,7 +155,7 @@ mod tests {
     use rustc_hash::FxHashMap;
     use ruff_diagnostics::{Diagnostic, DiagnosticKind, Edit, Fix};
-    use ruff_python_ast::source_code::SourceFileBuilder;
+    use ruff_source_file::SourceFileBuilder;
     use crate::message::{Emitter, EmitterContext, Message};


@@ -1,6 +1,6 @@
 use std::io::Write;
-use ruff_python_ast::source_code::OneIndexed;
+use ruff_source_file::OneIndexed;
 use crate::fs::relativize_path;
 use crate::message::{Emitter, EmitterContext, Message};


@@ -8,7 +8,7 @@ use bitflags::bitflags;
 use colored::Colorize;
 use ruff_text_size::{TextRange, TextSize};
-use ruff_python_ast::source_code::{OneIndexed, SourceLocation};
+use ruff_source_file::{OneIndexed, SourceLocation};
 use crate::fs::relativize_path;
 use crate::jupyter::{JupyterIndex, Notebook};


@@ -8,12 +8,12 @@ use std::path::Path;
 use anyhow::Result;
 use itertools::Itertools;
 use log::warn;
+use ruff_python_ast::Ranged;
 use ruff_text_size::{TextLen, TextRange, TextSize};
-use rustpython_parser::ast::Ranged;
 use ruff_diagnostics::Diagnostic;
-use ruff_python_ast::source_code::Locator;
-use ruff_python_trivia::LineEnding;
+use ruff_python_trivia::indentation_at_offset;
+use ruff_source_file::{LineEnding, Locator};
 use crate::codes::NoqaCode;
 use crate::fs::relativize_path;
@@ -249,22 +249,34 @@ impl FileExemption {
                 let path_display = relativize_path(path);
                 warn!("Invalid `# ruff: noqa` directive at {path_display}:{line}: {err}");
             }
-            Ok(Some(ParsedFileExemption::All)) => {
-                return Some(Self::All);
-            }
-            Ok(Some(ParsedFileExemption::Codes(codes))) => {
-                exempt_codes.extend(codes.into_iter().filter_map(|code| {
-                    if let Ok(rule) = Rule::from_code(get_redirect_target(code).unwrap_or(code))
-                    {
-                        Some(rule.noqa_code())
-                    } else {
-                        #[allow(deprecated)]
-                        let line = locator.compute_line_index(range.start());
-                        let path_display = relativize_path(path);
-                        warn!("Invalid code provided to `# ruff: noqa` at {path_display}:{line}: {code}");
-                        None
-                    }
-                }));
-            }
+            Ok(Some(exemption)) => {
+                if indentation_at_offset(range.start(), locator).is_none() {
+                    #[allow(deprecated)]
+                    let line = locator.compute_line_index(range.start());
+                    let path_display = relativize_path(path);
+                    warn!("Unexpected `# ruff: noqa` directive at {path_display}:{line}. File-level suppression comments must appear on their own line.");
+                    continue;
+                }
+                match exemption {
+                    ParsedFileExemption::All => {
+                        return Some(Self::All);
+                    }
+                    ParsedFileExemption::Codes(codes) => {
+                        exempt_codes.extend(codes.into_iter().filter_map(|code| {
+                            if let Ok(rule) = Rule::from_code(get_redirect_target(code).unwrap_or(code))
+                            {
+                                Some(rule.noqa_code())
+                            } else {
+                                #[allow(deprecated)]
+                                let line = locator.compute_line_index(range.start());
+                                let path_display = relativize_path(path);
+                                warn!("Invalid rule code provided to `# ruff: noqa` at {path_display}:{line}: {code}");
+                                None
+                            }
+                        }));
+                    }
+                }
+            }
             Ok(None) => {}
         }
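
The restructured arm above routes every parsed file-level exemption through one guard first: `indentation_at_offset` returns `Some(..)` only when everything before the comment on its line is whitespace, so a `# ruff: noqa` trailing some code now warns and is skipped. A minimal sketch of the guard, with names from the diff:

```rust
use ruff_python_trivia::indentation_at_offset;
use ruff_source_file::Locator;
use ruff_text_size::TextSize;

// A file-level `# ruff: noqa` directive counts only when it sits on its own
// line, i.e. when the offset is preceded solely by indentation.
fn is_own_line_directive(start: TextSize, locator: &Locator) -> bool {
    indentation_at_offset(start, locator).is_some()
}
```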
@@ -782,8 +794,7 @@ mod tests {
     use ruff_text_size::{TextRange, TextSize};
     use ruff_diagnostics::Diagnostic;
-    use ruff_python_ast::source_code::Locator;
-    use ruff_python_trivia::LineEnding;
+    use ruff_source_file::{LineEnding, Locator};
     use crate::noqa::{add_noqa_inner, Directive, NoqaMapping, ParsedFileExemption};
     use crate::rules::pycodestyle::rules::AmbiguousVariableName;


@@ -5,7 +5,7 @@ use ruff_text_size::{TextRange, TextSize};
 use serde::{Deserialize, Serialize};
 use ruff_diagnostics::Diagnostic;
-use ruff_python_ast::source_code::SourceFile;
+use ruff_source_file::SourceFile;
 use crate::message::Message;
 use crate::registry::Rule;


@@ -246,8 +246,7 @@ impl Rule {
             | Rule::MissingNewlineAtEndOfFile
             | Rule::MixedSpacesAndTabs
             | Rule::TabIndentation
-            | Rule::TrailingWhitespace
-            | Rule::UTF8EncodingDeclaration => LintSource::PhysicalLines,
+            | Rule::TrailingWhitespace => LintSource::PhysicalLines,
             Rule::AmbiguousUnicodeCharacterComment
             | Rule::AmbiguousUnicodeCharacterDocstring
             | Rule::AmbiguousUnicodeCharacterString
@@ -289,7 +288,8 @@ impl Rule {
             | Rule::SingleLineImplicitStringConcatenation
             | Rule::TrailingCommaOnBareTuple
             | Rule::TypeCommentInStub
-            | Rule::UselessSemicolon => LintSource::Tokens,
+            | Rule::UselessSemicolon
+            | Rule::UTF8EncodingDeclaration => LintSource::Tokens,
             Rule::IOError => LintSource::Io,
             Rule::UnsortedImports | Rule::MissingRequiredImport => LintSource::Imports,
             Rule::ImplicitNamespacePackage | Rule::InvalidModuleName => LintSource::Filesystem,
@@ -303,6 +303,7 @@ impl Rule {
             | Rule::MissingWhitespaceAroundOperator
             | Rule::MissingWhitespaceAroundParameterEquals
             | Rule::MultipleLeadingHashesForBlockComment
+            | Rule::MultipleSpacesAfterComma
             | Rule::MultipleSpacesAfterKeyword
             | Rule::MultipleSpacesAfterOperator
             | Rule::MultipleSpacesBeforeKeyword
@@ -312,6 +313,7 @@ impl Rule {
             | Rule::NoSpaceAfterBlockComment
             | Rule::NoSpaceAfterInlineComment
             | Rule::OverIndented
+            | Rule::TabAfterComma
             | Rule::TabAfterKeyword
             | Rule::TabAfterOperator
             | Rule::TabBeforeKeyword


@@ -242,6 +242,7 @@ impl Renamer {
             // By default, replace the binding's name with the target name.
             BindingKind::Annotation
             | BindingKind::Argument
+            | BindingKind::TypeParam
             | BindingKind::NamedExprAssignment
             | BindingKind::UnpackedAssignment
             | BindingKind::Assignment


@@ -1,10 +1,10 @@
-use rustpython_parser::ast;
-use rustpython_parser::ast::{Expr, Ranged};
+use ruff_python_ast as ast;
+use ruff_python_ast::{Expr, Ranged};
 use ruff_diagnostics::{Diagnostic, Violation};
 use ruff_macros::{derive_message_formats, violation};
 use ruff_python_ast::helpers::find_keyword;
-use rustpython_parser::ast::Constant;
+use ruff_python_ast::Constant;
 use crate::checkers::ast::Checker;


@@ -1,8 +1,8 @@
 /// See: [eradicate.py](https://github.com/myint/eradicate/blob/98f199940979c94447a461d50d27862b118b282d/eradicate.py)
 use once_cell::sync::Lazy;
 use regex::Regex;
-use rustpython_parser::ast::Suite;
-use rustpython_parser::Parse;
+use ruff_python_ast::Suite;
+use ruff_python_parser::Parse;
 static ALLOWLIST_REGEX: Lazy<Regex> = Lazy::new(|| {
     Regex::new(

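For context on the eradicate hunk above: the file pairs its regex allowlist with an attempt to parse a comment's body as Python. A hedged sketch of that idea (the helper and the `"<comment>"` label are hypothetical; `Suite::parse` comes from the new `ruff_python_parser::Parse` import):

```rust
use ruff_python_ast::Suite;
use ruff_python_parser::Parse;

// A comment is a candidate for "commented-out code" if, once the leading `#`
// is stripped, the remainder parses as a Python module.
fn looks_like_code(comment: &str) -> bool {
    let candidate = comment.trim_start_matches('#').trim();
    !candidate.is_empty() && Suite::parse(candidate, "<comment>").is_ok()
}
```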

@@ -1,6 +1,7 @@
 use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Edit, Fix};
 use ruff_macros::{derive_message_formats, violation};
-use ruff_python_ast::source_code::{Indexer, Locator};
+use ruff_python_index::Indexer;
+use ruff_source_file::Locator;
 use crate::registry::Rule;
 use crate::settings::Settings;


@@ -1,4 +1,4 @@
-use rustpython_parser::ast::Expr;
+use ruff_python_ast::Expr;
 use ruff_python_semantic::SemanticModel;


@@ -1,5 +1,5 @@
 use num_bigint::BigInt;
-use rustpython_parser::ast::{self, CmpOp, Constant, Expr, Ranged};
+use ruff_python_ast::{self as ast, CmpOp, Constant, Expr, Ranged};
 use ruff_diagnostics::{Diagnostic, Violation};
 use ruff_macros::{derive_message_formats, violation};


@@ -1,4 +1,4 @@
-use rustpython_parser::ast::{Expr, Ranged};
+use ruff_python_ast::{Expr, Ranged};
 use ruff_diagnostics::{Diagnostic, Violation};
 use ruff_macros::{derive_message_formats, violation};


@@ -1,5 +1,5 @@
 use num_bigint::BigInt;
-use rustpython_parser::ast::{self, Constant, Expr, Ranged};
+use ruff_python_ast::{self as ast, Constant, Expr, Ranged};
 use ruff_diagnostics::{Diagnostic, Violation};
 use ruff_macros::{derive_message_formats, violation};


@@ -1,9 +1,9 @@
 use anyhow::{bail, Result};
-use rustpython_parser::ast::{Ranged, Stmt};
-use rustpython_parser::{lexer, Mode, Tok};
+use ruff_python_ast::{Ranged, Stmt};
+use ruff_python_parser::{lexer, Mode, Tok};
 use ruff_diagnostics::Edit;
-use ruff_python_ast::source_code::Locator;
+use ruff_source_file::Locator;
 /// ANN204
 pub(crate) fn add_return_annotation(


@@ -1,4 +1,4 @@
-use rustpython_parser::ast::{self, Arguments, Expr, Stmt};
+use ruff_python_ast::{self as ast, Arguments, Expr, Stmt};
 use ruff_python_ast::cast;
 use ruff_python_semantic::analyze::visibility;


@@ -1,4 +1,4 @@
-use rustpython_parser::ast::{self, ArgWithDefault, Constant, Expr, Ranged, Stmt};
+use ruff_python_ast::{self as ast, ArgWithDefault, Constant, Expr, Ranged, Stmt};
 use ruff_diagnostics::{AlwaysAutofixableViolation, Diagnostic, Fix, Violation};
 use ruff_macros::{derive_message_formats, violation};
@@ -6,7 +6,7 @@ use ruff_python_ast::cast;
 use ruff_python_ast::helpers::ReturnStatementVisitor;
 use ruff_python_ast::identifier::Identifier;
 use ruff_python_ast::statement_visitor::StatementVisitor;
-use ruff_python_ast::typing::parse_type_annotation;
+use ruff_python_parser::typing::parse_type_annotation;
 use ruff_python_semantic::analyze::visibility;
 use ruff_python_semantic::{Definition, Member, MemberKind};
 use ruff_python_stdlib::typing::simple_magic_return_type;
@@ -462,7 +462,8 @@ fn check_dynamically_typed<F>(
     }) = annotation
     {
         // Quoted annotations
-        if let Ok((parsed_annotation, _)) = parse_type_annotation(string, *range, checker.locator())
+        if let Ok((parsed_annotation, _)) =
+            parse_type_annotation(string, *range, checker.locator().contents())
         {
             if type_hint_resolves_to_any(
                 &parsed_annotation,

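On the two hunks above: `parse_type_annotation` moved from `ruff_python_ast::typing` to `ruff_python_parser::typing`, and its last parameter is now the raw source text (via `Locator::contents`) rather than the `Locator` itself. A hedged sketch of the new call shape (the wrapper is hypothetical; the second tuple element is elided as in the diff):

```rust
use ruff_python_ast::Expr;
use ruff_python_parser::typing::parse_type_annotation;
use ruff_text_size::TextRange;

// Parse a quoted annotation such as `"List[int]"`, passing the file's source
// text as a `&str` rather than a `Locator`.
fn parse_quoted_annotation(string: &str, range: TextRange, source: &str) -> Option<Expr> {
    parse_type_annotation(string, range, source)
        .ok()
        .map(|(parsed_annotation, _)| parsed_annotation)
}
```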

@@ -1,5 +1,5 @@
-use rustpython_parser::ast;
-use rustpython_parser::ast::{Expr, Ranged};
+use ruff_python_ast as ast;
+use ruff_python_ast::{Expr, Ranged};
 use ruff_diagnostics::{Diagnostic, Violation};
 use ruff_macros::{derive_message_formats, violation};


@@ -1,5 +1,5 @@
-use rustpython_parser::ast;
-use rustpython_parser::ast::{Expr, Ranged};
+use ruff_python_ast as ast;
+use ruff_python_ast::{Expr, Ranged};
 use ruff_diagnostics::{Diagnostic, Violation};
 use ruff_macros::{derive_message_formats, violation};


@@ -1,5 +1,5 @@
-use rustpython_parser::ast;
-use rustpython_parser::ast::{Expr, Ranged};
+use ruff_python_ast as ast;
+use ruff_python_ast::{Expr, Ranged};
 use ruff_diagnostics::{Diagnostic, Violation};
 use ruff_macros::{derive_message_formats, violation};


@@ -1,6 +1,6 @@
 use once_cell::sync::Lazy;
 use regex::Regex;
-use rustpython_parser::ast::{self, Constant, Expr};
+use ruff_python_ast::{self as ast, Constant, Expr};
 use ruff_python_semantic::SemanticModel;


@@ -1,5 +1,5 @@
+use ruff_python_ast::{Ranged, Stmt};
 use ruff_text_size::{TextLen, TextRange};
-use rustpython_parser::ast::{Ranged, Stmt};
 use ruff_diagnostics::{Diagnostic, Violation};
 use ruff_macros::{derive_message_formats, violation};

Some files were not shown because too many files have changed in this diff.