Compare commits


48 Commits

Author SHA1 Message Date
Zanie Blue
48eb23b488 Empty commit to test performance 2024-11-19 19:07:54 -06:00
Zanie Blue
7f624cd0bb Improve CI caching of fuzz dependencies 2024-11-19 18:09:03 -06:00
Micha Reiser
dbbe7a773c Mark UP043 fix unsafe when the type annotation contains any comments (#14458) 2024-11-19 15:24:02 +01:00
InSync
5f09d4a90a [ruff] re and regex calls with unraw string as first argument (RUF039) (#14446) 2024-11-19 13:44:55 +01:00
David Peter
f8c20258ae [red-knot] Do not panic on f-string format spec expressions (#14436)
## Summary

Previously, we panicked on expressions like `f"{v:{f'0.2f'}}"` because
we did not infer types for expressions nested inside format spec
elements.
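
For illustration (a sketch, not part of the diff): format spec elements are ordinary runtime expressions, so nested f-strings inside them must be type-inferred like any other expression:

```python
# The format spec for `v` can be computed at runtime, including from a
# nested f-string -- exactly the shape that previously caused a panic.
v = 3.14159
spec = "0.2f"
assert f"{v:{spec}}" == "3.14"       # spec taken from a variable
assert f"{v:{f'0.{2}f'}}" == "3.14"  # spec built by a nested f-string
```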

## Test Plan

```
cargo nextest run -p red_knot_workspace -- --ignored linter_af linter_gz
```
2024-11-19 10:04:51 +01:00
David Peter
d8538d8c98 [red-knot] Narrowing for type(x) is C checks (#14432)
## Summary

Add type narrowing for `type(x) is C` conditions (and `else` clauses of
`type(x) is not C` conditionals):

```py
if type(x) is A:
    reveal_type(x)  # revealed: A
else:
    reveal_type(x)  # revealed: A | B
```
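
A quick sketch of why the `else` branch cannot be narrowed to exclude `A`: an instance of a subclass of `A` fails the `type(x) is A` check yet is still an `A` (class names here are illustrative, mirroring the example above):

```python
class A: ...
class B(A): ...  # subclass of A

x = B()
# `type(x) is A` is False for subclass instances, so values of declared
# type A can still reach the else branch.
assert type(x) is not A
assert isinstance(x, A)
```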

closes: #14431, part of: #13694

## Test Plan

New Markdown-based tests.
2024-11-18 16:21:46 +01:00
InSync
3642381489 [ruff] Add rule forbidding map(int, package.__version__.split('.')) (RUF048) (#14373)
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
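
A sketch of why the flagged pattern is fragile (the version string is hypothetical): version strings are not guaranteed to be purely numeric, so the `int()` conversion can raise at runtime:

```python
# Pre-release/dev suffixes break the naive split-and-int approach.
version = "1.2.0rc1"  # hypothetical package __version__
parsed = None
try:
    parsed = tuple(map(int, version.split(".")))
except ValueError:
    pass  # int("0rc1") raises ValueError
assert parsed is None
```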
2024-11-18 13:43:24 +00:00
Micha Reiser
1f07880d5c Add tests for python version compatibility (#14430) 2024-11-18 12:26:55 +00:00
David Peter
d81b6cd334 [red-knot] Types for subexpressions of annotations (#14426)
## Summary

This patches up various missing paths where sub-expressions of type
annotations previously had no type attached. Examples include:
```py
tuple[int, str]
#     ~~~~~~~~

type[MyClass]
#    ~~~~~~~

Literal["foo"]
#       ~~~~~

Literal["foo", Literal[1, 2]]
#              ~~~~~~~~~~~~~

Literal[1, "a", random.illegal(sub[expr + ession])]
#               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```

## Test Plan

```
cargo nextest run -p red_knot_workspace -- --ignored linter_af linter_gz
```
2024-11-18 13:03:27 +01:00
Micha Reiser
d99210c049 [red-knot] Default to python 3.9 (#14429) 2024-11-18 11:27:40 +00:00
Steve C
577653551c [pylint] - use sets when possible for PLR1714 autofix (repeated-equality-comparison) (#14372) 2024-11-18 08:57:43 +01:00
Dhruv Manilawala
38a385fb6f Simplify quote annotator logic for list expression (#14425)
## Summary

Follow-up to #14371, this PR simplifies the visitor logic for list
expressions to remove the state management. We just need to make sure
that we visit the nested expressions using the `QuoteAnnotator` and not
the `Generator`. This is similar to what's being done for binary
expressions.

As per the
[grammar](https://typing.readthedocs.io/en/latest/spec/annotations.html#grammar-token-expression-grammar-annotation_expression),
list expressions can be present which can contain other type expressions
(`Callable`):
```
       | <Callable> '[' <Concatenate> '[' (type_expression ',')+
                    (name | '...') ']' ',' type_expression ']'
             (where name must be a valid in-scope ParamSpec)
       | <Callable> '[' '[' maybe_unpacked (',' maybe_unpacked)*
                    ']' ',' type_expression ']'
```
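
As a minimal sketch of the annotation shape in question (names are illustrative): a quoted annotation whose list expression nests further type expressions, which the `QuoteAnnotator` must visit rather than hand off to the plain `Generator`:

```python
from typing import Callable

# The list expression `[int, str]` sits inside the quoted annotation and
# itself contains type expressions that need visiting.
def apply(f: "Callable[[int, str], bool]", a: int, b: str) -> bool:
    return f(a, b)

assert apply(lambda a, b: a == len(b), 2, "hi") is True
```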

## Test Plan

`cargo insta test`
2024-11-18 12:33:19 +05:30
renovate[bot]
cd2ae5aa2d Update NPM Development dependencies (#14416)
This PR contains the following updates:

| Package | Change |
|---|---|
| [@cloudflare/workers-types](https://redirect.github.com/cloudflare/workerd) | [`4.20241106.0` -> `4.20241112.0`](https://renovatebot.com/diffs/npm/@cloudflare%2fworkers-types/4.20241106.0/4.20241112.0) |
| [@typescript-eslint/eslint-plugin](https://typescript-eslint.io/packages/eslint-plugin) | [`8.13.0` -> `8.14.0`](https://renovatebot.com/diffs/npm/@typescript-eslint%2feslint-plugin/8.13.0/8.14.0) |
| [@typescript-eslint/parser](https://typescript-eslint.io/packages/parser) | [`8.13.0` -> `8.14.0`](https://renovatebot.com/diffs/npm/@typescript-eslint%2fparser/8.13.0/8.14.0) |
| [postcss](https://postcss.org/) | [`8.4.48` -> `8.4.49`](https://renovatebot.com/diffs/npm/postcss/8.4.48/8.4.49) |
| [tailwindcss](https://tailwindcss.com) | [`3.4.14` -> `3.4.15`](https://renovatebot.com/diffs/npm/tailwindcss/3.4.14/3.4.15) |
| [vite](https://vite.dev) | [`5.4.10` -> `5.4.11`](https://renovatebot.com/diffs/npm/vite/5.4.10/5.4.11) |
| [wrangler](https://redirect.github.com/cloudflare/workers-sdk) | [`3.86.0` -> `3.87.0`](https://renovatebot.com/diffs/npm/wrangler/3.86.0/3.87.0) |

---

### Release Notes

<details>
<summary>cloudflare/workerd (@&#8203;cloudflare/workers-types)</summary>

###
[`v4.20241112.0`](8bf3af4699...7b28eb5032)

[Compare
Source](8bf3af4699...7b28eb5032)

</details>

<details>
<summary>typescript-eslint/typescript-eslint
(@&#8203;typescript-eslint/eslint-plugin)</summary>

###
[`v8.14.0`](https://redirect.github.com/typescript-eslint/typescript-eslint/blob/HEAD/packages/eslint-plugin/CHANGELOG.md#8140-2024-11-11)

[Compare
Source](https://redirect.github.com/typescript-eslint/typescript-eslint/compare/v8.13.0...v8.14.0)

##### 🚀 Features

- **eslint-plugin:** \[await-thenable] report unnecessary `await using`
statements
([#&#8203;10209](https://redirect.github.com/typescript-eslint/typescript-eslint/pull/10209))
- **eslint-plugin:** \[no-confusing-void-expression] add an option to
ignore void<->void
([#&#8203;10067](https://redirect.github.com/typescript-eslint/typescript-eslint/pull/10067))

##### 🩹 Fixes

- **scope-manager:** fix asserted increments not being marked as write
references
([#&#8203;10271](https://redirect.github.com/typescript-eslint/typescript-eslint/pull/10271))
- **eslint-plugin:** \[no-misused-promises] improve report loc for
methods
([#&#8203;10216](https://redirect.github.com/typescript-eslint/typescript-eslint/pull/10216))
- **eslint-plugin:** \[no-unnecessary-condition] improve error message
for literal comparisons
([#&#8203;10194](https://redirect.github.com/typescript-eslint/typescript-eslint/pull/10194))

##### ❤️ Thank You

- Gyumong [@&#8203;Gyumong](https://redirect.github.com/Gyumong)
- Jan Ochwat [@&#8203;janek515](https://redirect.github.com/janek515)
- Kirk Waiblinger [@&#8203;kirkwaiblinger](https://redirect.github.com/kirkwaiblinger)
- Ronen Amiel

You can read about our [versioning
strategy](https://main--typescript-eslint.netlify.app/users/versioning)
and
[releases](https://main--typescript-eslint.netlify.app/users/releases)
on our website.

</details>

<details>
<summary>typescript-eslint/typescript-eslint
(@&#8203;typescript-eslint/parser)</summary>

###
[`v8.14.0`](https://redirect.github.com/typescript-eslint/typescript-eslint/blob/HEAD/packages/parser/CHANGELOG.md#8140-2024-11-11)

[Compare
Source](https://redirect.github.com/typescript-eslint/typescript-eslint/compare/v8.13.0...v8.14.0)

This was a version bump only for parser, to align it with other projects;
there were no code changes.

You can read about our [versioning
strategy](https://main--typescript-eslint.netlify.app/users/versioning)
and
[releases](https://main--typescript-eslint.netlify.app/users/releases)
on our website.

</details>

<details>
<summary>postcss/postcss (postcss)</summary>

###
[`v8.4.49`](https://redirect.github.com/postcss/postcss/blob/HEAD/CHANGELOG.md#8449)

[Compare
Source](https://redirect.github.com/postcss/postcss/compare/8.4.48...8.4.49)

- Fixed custom syntax without `source.offset` (by
[@&#8203;romainmenke](https://redirect.github.com/romainmenke)).

</details>

<details>
<summary>tailwindlabs/tailwindcss (tailwindcss)</summary>

###
[`v3.4.15`](https://redirect.github.com/tailwindlabs/tailwindcss/releases/tag/v3.4.15)

[Compare
Source](https://redirect.github.com/tailwindlabs/tailwindcss/compare/v3.4.14...v3.4.15)

- Bump versions for security vulnerabilities
([#&#8203;14697](https://redirect.github.com/tailwindlabs/tailwindcss/pull/14697))
- Ensure the TypeScript types for the `boxShadow` theme configuration
allows arrays
([#&#8203;14856](https://redirect.github.com/tailwindlabs/tailwindcss/pull/14856))
- Set fallback for opacity variables to ensure setting colors with the
`selection:*` variant works in Chrome 131
([#&#8203;15003](https://redirect.github.com/tailwindlabs/tailwindcss/pull/15003))

</details>

<details>
<summary>vitejs/vite (vite)</summary>

###
[`v5.4.11`](https://redirect.github.com/vitejs/vite/releases/tag/v5.4.11)

[Compare
Source](https://redirect.github.com/vitejs/vite/compare/v5.4.10...v5.4.11)

Please refer to
[CHANGELOG.md](https://redirect.github.com/vitejs/vite/blob/v5.4.11/packages/vite/CHANGELOG.md)
for details.

</details>

<details>
<summary>cloudflare/workers-sdk (wrangler)</summary>

###
[`v3.87.0`](https://redirect.github.com/cloudflare/workers-sdk/blob/HEAD/packages/wrangler/CHANGELOG.md#3870)

[Compare
Source](https://redirect.github.com/cloudflare/workers-sdk/compare/wrangler@3.86.1...wrangler@3.87.0)

##### Minor Changes

-
[#&#8203;7201](https://redirect.github.com/cloudflare/workers-sdk/pull/7201)
[`beed72e`](beed72e7f3)
Thanks [@&#8203;GregBrimble](https://redirect.github.com/GregBrimble)! -
feat: Tail Consumers are now supported for Workers with assets.

You can now configure `tail_consumers` in conjunction with `assets` in
your `wrangler.toml` file. Read more about [Static
Assets](https://developers.cloudflare.com/workers/static-assets/) and
[Tail
Consumers](https://developers.cloudflare.com/workers/observability/logs/tail-workers/)
in the documentation.

-
[#&#8203;7212](https://redirect.github.com/cloudflare/workers-sdk/pull/7212)
[`837f2f5`](837f2f569b)
Thanks [@&#8203;jonesphillip](https://redirect.github.com/jonesphillip)!
- Added `r2 bucket info` command to Wrangler. Improved formatting of
`r2 bucket list` output

##### Patch Changes

-
[#&#8203;7210](https://redirect.github.com/cloudflare/workers-sdk/pull/7210)
[`c12c0fe`](c12c0fed88)
Thanks [@&#8203;taylorlee](https://redirect.github.com/taylorlee)! -
Avoid an unnecessary GET request during `wrangler deploy`.

-
[#&#8203;7197](https://redirect.github.com/cloudflare/workers-sdk/pull/7197)
[`4814455`](4814455717)
Thanks
[@&#8203;michelheusschen](https://redirect.github.com/michelheusschen)!
- fix console output for `wrangler d1 migrations create`

-
[#&#8203;6795](https://redirect.github.com/cloudflare/workers-sdk/pull/6795)
[`94f07ee`](94f07eec15)
Thanks [@&#8203;benmccann](https://redirect.github.com/benmccann)! -
chore: upgrade chokidar to v4

-
[#&#8203;7133](https://redirect.github.com/cloudflare/workers-sdk/pull/7133)
[`c46e02d`](c46e02dfd7)
Thanks [@&#8203;gpanders](https://redirect.github.com/gpanders)! - Do
not emit escape sequences when stdout is not a TTY

###
[`v3.86.1`](https://redirect.github.com/cloudflare/workers-sdk/blob/HEAD/packages/wrangler/CHANGELOG.md#3861)

[Compare
Source](https://redirect.github.com/cloudflare/workers-sdk/compare/wrangler@3.86.0...wrangler@3.86.1)

##### Patch Changes

-
[#&#8203;7069](https://redirect.github.com/cloudflare/workers-sdk/pull/7069)
[`b499b74`](b499b743e2)
Thanks [@&#8203;penalosa](https://redirect.github.com/penalosa)! -
Internal refactor to remove the non `--x-dev-env` flow from `wrangler dev`

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

👻 **Immortal**: This PR will be recreated if closed unmerged. Get
[config
help](https://redirect.github.com/renovatebot/renovate/discussions) if
that's undesired.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-11-18 12:25:29 +05:30
renovate[bot]
41694f21c6 Update pre-commit dependencies (#14419)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
|
[abravalheri/validate-pyproject](https://redirect.github.com/abravalheri/validate-pyproject)
| repository | minor | `v0.22` -> `v0.23` |
|
[astral-sh/ruff-pre-commit](https://redirect.github.com/astral-sh/ruff-pre-commit)
| repository | patch | `v0.7.3` -> `v0.7.4` |

Note: The `pre-commit` manager in Renovate is not supported by the
`pre-commit` maintainers or community. Please do not report any problems
there; instead, [create a Discussion in the Renovate
repository](https://redirect.github.com/renovatebot/renovate/discussions/new)
if you have any questions.

---

### Release Notes

<details>
<summary>abravalheri/validate-pyproject
(abravalheri/validate-pyproject)</summary>

###
[`v0.23`](https://redirect.github.com/abravalheri/validate-pyproject/releases/tag/v0.23)

[Compare
Source](https://redirect.github.com/abravalheri/validate-pyproject/compare/v0.22...v0.23)

#### What's Changed

- Validate SPDX license expressions by
[@&#8203;cdce8p](https://redirect.github.com/cdce8p) in
[https://github.com/abravalheri/validate-pyproject/pull/217](https://redirect.github.com/abravalheri/validate-pyproject/pull/217)

#### New Contributors

- [@&#8203;cdce8p](https://redirect.github.com/cdce8p) made their first
contribution in
[https://github.com/abravalheri/validate-pyproject/pull/217](https://redirect.github.com/abravalheri/validate-pyproject/pull/217)

**Full Changelog**:
https://github.com/abravalheri/validate-pyproject/compare/v0.22...v0.23

</details>

<details>
<summary>astral-sh/ruff-pre-commit (astral-sh/ruff-pre-commit)</summary>

###
[`v0.7.4`](https://redirect.github.com/astral-sh/ruff-pre-commit/releases/tag/v0.7.4)

[Compare
Source](https://redirect.github.com/astral-sh/ruff-pre-commit/compare/v0.7.3...v0.7.4)

See: https://github.com/astral-sh/ruff/releases/tag/0.7.4

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on Monday" (UTC),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

👻 **Immortal**: This PR will be recreated if closed unmerged. Get
[config
help](https://redirect.github.com/renovatebot/renovate/discussions) if
that's undesired.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/astral-sh/ruff).


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-11-18 11:27:14 +05:30
Charlie Marsh
fccbe56d23 Reverse order of __contains__ arguments (#14424)
## Summary

Closes https://github.com/astral-sh/ruff/issues/14423.
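
For context (an illustrative sketch): `item in container` desugars to `container.__contains__(item)`, so the receiver and the searched item must not be swapped when modeling the call:

```python
class Haystack:
    def __contains__(self, item):
        # Called as Haystack_instance.__contains__(item): the container
        # is the receiver, the searched value is the argument.
        return item == "needle"

assert "needle" in Haystack()
assert "hay" not in Haystack()
```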
2024-11-18 03:58:12 +00:00
Shantanu
c46555da41 Drive by typo fix (#14420)
Introduced in
https://github.com/astral-sh/ruff/pull/14397/files#diff-42314c006689490bbdfbeeb973de64046b3e069e3d88f67520aeba375f20e655
2024-11-18 03:03:36 +00:00
InSync
0a27c9dabd [flake8-pie] Mark fix as unsafe if the following statement is a string literal (PIE790) (#14393)
## Summary

Resolves #12616.
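
A sketch of why the fix is unsafe: deleting the placeholder can promote a following string literal to a docstring, which changes `__doc__` at runtime:

```python
def with_pass():
    pass
    "not a docstring"  # second statement, so not a docstring

def without_pass():
    "now this is the docstring"  # first statement -> becomes __doc__

assert with_pass.__doc__ is None
assert without_pass.__doc__ == "now this is the docstring"
```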

## Test Plan

`cargo nextest run` and `cargo insta test`.

---------

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2024-11-18 02:30:06 +00:00
InSync
3c9e76eb66 [flake8-datetimez] Also exempt .time() (DTZ901) (#14394)
## Summary

Resolves #14378.
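
A sketch of the newly exempted pattern (my reading of the rule; see #14378 for the full rationale): extracting just the time-of-day from `datetime.max`/`min` discards the date, so the usual timezone-awareness concerns do not apply in the same way:

```python
from datetime import datetime, time

# .time() on the naive extreme values yields plain time-of-day objects.
assert datetime.max.time() == time(23, 59, 59, 999999)
assert datetime.min.time() == time(0, 0)
```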

## Test Plan

`cargo nextest run` and `cargo insta test`.

---------

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2024-11-18 02:24:35 +00:00
renovate[bot]
80f5cdcf66 Update dependency tomli to v2.1.0 (#14418) 2024-11-18 01:56:05 +00:00
renovate[bot]
35fe0e90da Update Rust crate bstr to v1.11.0 (#14417) 2024-11-17 20:41:49 -05:00
renovate[bot]
157b49a8ee Update dependency ruff to v0.7.4 (#14415) 2024-11-17 20:41:40 -05:00
renovate[bot]
8a6e223df5 Update dependency react-resizable-panels to v2.1.7 (#14414) 2024-11-17 20:41:34 -05:00
renovate[bot]
5a48da53da Update Rust crate serde_json to v1.0.133 (#14413) 2024-11-17 20:41:29 -05:00
renovate[bot]
58005b590c Update Rust crate serde to v1.0.215 (#14412) 2024-11-17 20:41:23 -05:00
renovate[bot]
884835e386 Update Rust crate libc to v0.2.164 (#14411) 2024-11-17 20:41:17 -05:00
renovate[bot]
efd4407f7f Update Rust crate indicatif to v0.17.9 (#14410) 2024-11-17 20:41:13 -05:00
renovate[bot]
761588a60e Update Rust crate clap to v4.5.21 (#14409) 2024-11-17 20:41:06 -05:00
Charlie Marsh
e1eb188049 Avoid panic in unfixable redundant-numeric-union (#14402)
## Summary

Closes https://github.com/astral-sh/ruff/issues/14396.
2024-11-17 12:15:44 -05:00
Shaygan Hooshyari
ff19629b11 Understand typing.Optional in annotations (#14397) 2024-11-17 17:04:58 +00:00
Micha Reiser
cd80c9d907 Fix Red Knot benchmarks on Windows (#14400) 2024-11-17 16:21:09 +00:00
Matt Norton
abb34828bd Improve rule & options documentation (#14329)
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-11-17 10:16:47 +01:00
InSync
cab7caf80b [flake8-logging] Suggest .getLogger(__name__) instead of .getLogger(__file__) (LOG015) (#14392) 2024-11-17 09:22:52 +01:00
David Peter
d470f29093 [red-knot] Disable linter-corpus tests (#14391)
## Summary

Disable the no-panic tests for the linter corpus, as there are too many
problems right now, requiring linter-contributors to add their test
files to the allow-list.

We can still run the tests using `cargo test -p red_knot_workspace --
--ignored linter_af linter_gz`. This is also why I left the
`crates/ruff_linter/` entries in the allow list for now, even if they
will get out of sync. But let me know if I should rather remove them.
2024-11-16 23:33:19 +01:00
Simon Brugman
1fbed6c325 [ruff] Implement redundant-bool-literal (RUF038) (#14319)
## Summary

Implements `redundant-bool-literal`
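
An illustrative sketch of the pattern the rule targets: `Literal[True, False]` covers exactly the two `bool` values, so the simpler annotation `bool` is equivalent:

```python
from typing import Literal, get_args

def toggle(value: Literal[True, False]) -> bool:  # simplifiable to `bool`
    return not value

assert get_args(Literal[True, False]) == (True, False)
assert toggle(True) is False
```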

## Test Plan


`cargo test`

The ecosystem results are all correct; for `Airflow` the rule is not
relevant due to its use of overloading (and the fix there is correctly
marked as unsafe).

---------

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2024-11-16 21:52:51 +00:00
David Peter
4dcb7ddafe [red-knot] Remove duplicates from KNOWN_FAILURES (#14386)
## Summary

- Sort the list of `KNOWN_FAILURES`
- Remove accidental duplicates
2024-11-16 20:54:21 +01:00
Micha Reiser
5be90c3a67 Split the corpus tests into smaller tests (#14367)
## Summary

This PR splits the corpus tests into smaller chunks because running all
of them takes 8s on my windows machine and it's by far the longest test
in `red_knot_workspace`.

Splitting the tests has the advantage that they run in parallel. This PR
brings down the wall time from 8s to 4s.

This PR also limits the glob for the linter tests because it's common to
clone cpython into the `ruff_linter/resources/test` folder for
benchmarks (because that's what's written in the contributing guides)

## Test Plan

`cargo test`
2024-11-16 20:29:21 +01:00
Tom Kuson
d0dca7bfcf [pydoclint] Update diagnostics to target the docstring (#14381)
## Summary

Updates the `pydoclint` diagnostics to target the docstring instead of a
related statement.

Closes #13184

## Test Plan

`cargo nextest run`
2024-11-16 13:32:20 -05:00
Simon Brugman
78210b198b [flake8-pyi] Implement redundant-none-literal (PYI061) (#14316)
## Summary

`Literal[None]` can be simplified into `None` in type annotations.

Surprisingly, this is not that rare:
-
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chat_models/base.py#L54
-
https://github.com/sqlalchemy/sqlalchemy/blob/main/lib/sqlalchemy/sql/annotation.py#L69
- https://github.com/jax-ml/jax/blob/main/jax/numpy/__init__.pyi#L961
-
https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_common.py#L179
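
A minimal sketch of the simplification: `Literal[None]`'s only member is `None`, so the two annotations are equivalent and the plain spelling is preferred:

```python
from typing import Literal, get_args

def f(x: Literal[None] = None) -> None: ...  # flagged
def g(x: None = None) -> None: ...           # preferred spelling

assert get_args(Literal[None]) == (None,)
```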

## Test Plan

`cargo test`

Reviewed all ecosystem results, and they are true positives.

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2024-11-16 18:22:51 +00:00
Simon Brugman
4a2310b595 [flake8-pyi] Implement autofix for redundant-numeric-union (PYI041) (#14273)
## Summary

This PR adds autofix for `redundant-numeric-union` (`PYI041`)

There are some comments below to explain the reasoning behind some
choices that might help review.


Resolves part of https://github.com/astral-sh/ruff/issues/14185.

## Test Plan


---------

Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2024-11-16 18:13:23 +00:00
Dylan
fc392c663a [flake8-type-checking] Fix helper function which surrounds annotations in quotes (#14371)
This PR adds corrected handling of list expressions to the `Visitor`
implementation of `QuotedAnnotator` in `flake8_type_checking::helpers`.

Closes #14368
2024-11-16 12:58:02 -05:00
Alex Waygood
81d3c419e9 [red-knot] Simplify some traits in ast_ids.rs (#14379) 2024-11-16 17:22:23 +00:00
Micha Reiser
a6a3d3f656 Fix file watcher panic when event has no paths (#14364) 2024-11-16 08:36:57 +01:00
Micha Reiser
c847cad389 Update insta snapshots (#14366) 2024-11-15 19:31:15 +01:00
Micha Reiser
81e5830585 Workspace discovery (#14308) 2024-11-15 19:20:15 +01:00
Micha Reiser
2b58705cc1 Remove the optional salsa dependency from the AST crate (#14363) 2024-11-15 16:46:04 +00:00
David Peter
9f3235a37f [red-knot] Expand test corpus (#14360)
## Summary

- Add 383 files from `crates/ruff_python_parser/resources` to the test
corpus
- Add 1296 files from `crates/ruff_linter/resources` to the test corpus
- Use in-memory file system for tests
- Improve test isolation by cleaning the test environment between checks
- Add a mechanism for "known failures". Mark ~80 files as known
failures.
- The corpus test is now a lot slower (6 seconds).

Note:
While `red_knot` as a command line tool can run over all of these
files without panicking, we still have a lot of test failures caused by
explicitly "pulling" all types.

## Test Plan

Run `cargo test -p red_knot_workspace` while making sure that
- Introducing code that is known to lead to a panic fails the test
- Removing code that is known to lead to a panic from
`KNOWN_FAILURES`-files also fails the test
2024-11-15 17:09:15 +01:00
Alex Waygood
62d650226b [red-knot] Derive more Default methods (#14361) 2024-11-15 13:15:41 +00:00
David Peter
5d8a391a3e [red-knot] Mark LoggingGuard as must_use (#14356) 2024-11-15 12:47:25 +01:00
2721 changed files with 10611 additions and 3842 deletions


@@ -17,4 +17,7 @@ indent_size = 4
 trim_trailing_whitespace = false

 [*.md]
 max_line_length = 100
+
+[*.toml]
+indent_size = 4


@@ -268,6 +268,7 @@ jobs:
 - uses: Swatinem/rust-cache@v2
   with:
     workspaces: "fuzz -> target"
+    cache-all-crates: "true"
 - name: "Install cargo-binstall"
   uses: cargo-bins/cargo-binstall@main
   with:


@@ -17,7 +17,7 @@ exclude: |
 repos:
   - repo: https://github.com/abravalheri/validate-pyproject
-    rev: v0.22
+    rev: v0.23
     hooks:
       - id: validate-pyproject
@@ -73,7 +73,7 @@ repos:
     pass_filenames: false # This makes it a lot faster
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.7.3
+    rev: v0.7.4
     hooks:
       - id: ruff-format
       - id: ruff

Cargo.lock (generated)

@@ -170,6 +170,12 @@ version = "1.2.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "f1fdabc7756949593fe60f30ec81974b613357de856987752631dea1e3394c80"
+
+[[package]]
+name = "base64"
+version = "0.13.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "9e1b586273c5702936fe7b7d6896644d8be71e6314cfe09d3167c95f712589e8"

 [[package]]
 name = "base64"
 version = "0.22.0"
@@ -208,9 +214,9 @@ dependencies = [
 [[package]]
 name = "bstr"
-version = "1.10.0"
+version = "1.11.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "40723b8fb387abc38f4f4a37c09073622e41dd12327033091ef8950659e6dc0c"
+checksum = "1a68f1f47cdf0ec8ee4b941b2eee2a80cb796db73118c0dd09ac63fbe405be22"
 dependencies = [
  "memchr",
  "regex-automata 0.4.8",
@@ -341,9 +347,9 @@ dependencies = [
 [[package]]
 name = "clap"
-version = "4.5.20"
+version = "4.5.21"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b97f376d85a664d5837dbae44bf546e6477a679ff6610010f17276f686d867e8"
+checksum = "fb3b4b9e5a7c7514dfa52869339ee98b3156b0bfb4e8a77c4ff4babb64b1604f"
 dependencies = [
  "clap_builder",
  "clap_derive",
@@ -351,9 +357,9 @@ dependencies = [
 [[package]]
 name = "clap_builder"
-version = "4.5.20"
+version = "4.5.21"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "19bc80abd44e4bed93ca373a0704ccbd1b710dc5749406201bb018272808dc54"
+checksum = "b17a95aa67cc7b5ebd32aa5370189aa0d79069ef1c64ce893bd30fb24bff20ec"
 dependencies = [
  "anstream",
  "anstyle",
@@ -829,6 +835,12 @@ version = "0.1.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "9bda8e21c04aca2ae33ffc2fd8c23134f3cac46db123ba97bd9d3f3b8a4a85e1"
+
+[[package]]
+name = "dunce"
+version = "1.0.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "92773504d58c093f6de2459af4af33faa518c13451eb8f2b5698ed3d36e7c813"

 [[package]]
 name = "dyn-clone"
 version = "1.0.17"
@@ -1307,16 +1319,16 @@ dependencies = [
 [[package]]
 name = "indicatif"
-version = "0.17.8"
+version = "0.17.9"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "763a5a8f45087d6bcea4222e7b72c291a054edf80e4ef6efd2a4979878c7bea3"
+checksum = "cbf675b85ed934d3c67b5c5469701eec7db22689d0a2139d856e0925fa28b281"
 dependencies = [
  "console",
- "instant",
  "number_prefix",
  "portable-atomic",
- "unicode-width 0.1.13",
+ "unicode-width 0.2.0",
  "vt100",
+ "web-time",
 ]

 [[package]]
@@ -1358,6 +1370,7 @@ dependencies = [
  "pest",
  "pest_derive",
  "regex",
+ "ron",
  "serde",
  "similar",
  "walkdir",
@@ -1501,9 +1514,9 @@ checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"
 [[package]]
 name = "libc"
-version = "0.2.162"
+version = "0.2.164"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "18d287de67fe55fd7e1581fe933d965a5a9477b38e949cfa9f8574ef01506398"
+checksum = "433bfe06b8c75da9b2e3fbea6e5329ff87748f0b144ef75306e674c3f6f7c13f"
[[package]]
name = "libcst"
@@ -2269,6 +2282,7 @@ dependencies = [
  "ruff_text_size",
  "rustc-hash 2.0.0",
  "salsa",
+ "serde",
  "smallvec",
  "static_assertions",
  "tempfile",
@@ -2353,7 +2367,10 @@ version = "0.0.0"
 dependencies = [
  "anyhow",
  "crossbeam",
+ "glob",
+ "insta",
  "notify",
+ "pep440_rs 0.7.2",
  "rayon",
  "red_knot_python_semantic",
  "red_knot_vendored",
@@ -2363,7 +2380,9 @@ dependencies = [
"ruff_text_size",
"rustc-hash 2.0.0",
"salsa",
"tempfile",
"serde",
"thiserror 2.0.3",
"toml",
"tracing",
]
@@ -2455,6 +2474,17 @@ dependencies = [
"windows-sys 0.52.0",
]
[[package]]
name = "ron"
version = "0.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "88073939a61e5b7680558e6be56b419e208420c2adb92be54921fa6b72283f1a"
dependencies = [
"base64 0.13.1",
"bitflags 1.3.2",
"serde",
]
[[package]]
name = "ruff"
version = "0.7.4"
@@ -2556,7 +2586,9 @@ dependencies = [
"camino",
"countme",
"dashmap 6.1.0",
"dunce",
"filetime",
"glob",
"ignore",
"insta",
"matchit",
@@ -2778,7 +2810,6 @@ dependencies = [
"ruff_source_file",
"ruff_text_size",
"rustc-hash 2.0.0",
"salsa",
"schemars",
"serde",
]
@@ -3218,9 +3249,9 @@ checksum = "1c107b6f4780854c8b126e228ea8869f4d7b71260f962fefb57b996b8959ba6b"
[[package]]
name = "serde"
version = "1.0.214"
version = "1.0.215"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f55c3193aca71c12ad7890f1785d2b73e1b9f63a0bbc353c08ef26fe03fc56b5"
checksum = "6513c1ad0b11a9376da888e3e0baa0077f1aed55c17f50e7b2397136129fb88f"
dependencies = [
"serde_derive",
]
@@ -3238,9 +3269,9 @@ dependencies = [
[[package]]
name = "serde_derive"
version = "1.0.214"
version = "1.0.215"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "de523f781f095e28fa605cdce0f8307e451cc0fd14e2eb4cd2e98a355b147766"
checksum = "ad1e866f866923f252f05c889987993144fb74e722403468a4ebd70c3cd756c0"
dependencies = [
"proc-macro2",
"quote",
@@ -3260,9 +3291,9 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.132"
version = "1.0.133"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d726bfaff4b320266d395898905d0eba0345aae23b54aee3a737e260fd46db03"
checksum = "c7fceb2473b9166b2294ef05efcb65a3db80803f0b03ef86a5fc88a2b85ee377"
dependencies = [
"itoa",
"memchr",
@@ -3907,7 +3938,7 @@ version = "2.10.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b74fc6b57825be3373f7054754755f03ac3a8f5d70015ccad699ba2029956f4a"
dependencies = [
"base64",
"base64 0.22.0",
"flate2",
"log",
"once_cell",


@@ -66,6 +66,7 @@ criterion = { version = "0.5.1", default-features = false }
crossbeam = { version = "0.8.4" }
dashmap = { version = "6.0.1" }
dir-test = { version = "0.3.0" }
dunce = { version = "1.0.5" }
drop_bomb = { version = "0.1.5" }
env_logger = { version = "0.11.0" }
etcetera = { version = "0.8.0" }
@@ -81,7 +82,7 @@ hashbrown = { version = "0.15.0", default-features = false, features = [
ignore = { version = "0.4.22" }
imara-diff = { version = "0.1.5" }
imperative = { version = "1.0.4" }
indexmap = {version = "2.6.0" }
indexmap = { version = "2.6.0" }
indicatif = { version = "0.17.8" }
indoc = { version = "2.0.4" }
insta = { version = "1.35.1" }


@@ -1,6 +1,11 @@
[files]
# https://github.com/crate-ci/typos/issues/868
extend-exclude = ["crates/red_knot_vendored/vendor/**/*", "**/resources/**/*", "**/snapshots/**/*"]
extend-exclude = [
"crates/red_knot_vendored/vendor/**/*",
"**/resources/**/*",
"**/snapshots/**/*",
"crates/red_knot_workspace/src/workspace/pyproject/package_name.rs"
]
[default.extend-words]
"arange" = "arange" # e.g. `numpy.arange`


@@ -34,6 +34,7 @@ tracing-tree = { workspace = true }
[dev-dependencies]
filetime = { workspace = true }
tempfile = { workspace = true }
ruff_db = { workspace = true, features = ["testing"] }
[lints]
workspace = true


@@ -183,10 +183,10 @@ fn run() -> anyhow::Result<ExitStatus> {
let system = OsSystem::new(cwd.clone());
let cli_configuration = args.to_configuration(&cwd);
let workspace_metadata = WorkspaceMetadata::from_path(
let workspace_metadata = WorkspaceMetadata::discover(
system.current_directory(),
&system,
Some(cli_configuration.clone()),
Some(&cli_configuration),
)?;
// TODO: Use the `program_settings` to compute the key for the database's persistent


@@ -4,8 +4,8 @@
#[derive(Copy, Clone, Hash, Debug, PartialEq, Eq, PartialOrd, Ord, Default, clap::ValueEnum)]
pub enum TargetVersion {
Py37,
#[default]
Py38,
#[default]
Py39,
Py310,
Py311,
@@ -46,3 +46,17 @@ impl From<TargetVersion> for red_knot_python_semantic::PythonVersion {
}
}
}
#[cfg(test)]
mod tests {
use crate::target_version::TargetVersion;
use red_knot_python_semantic::PythonVersion;
#[test]
fn same_default_as_python_version() {
assert_eq!(
PythonVersion::from(TargetVersion::default()),
PythonVersion::default()
);
}
}


@@ -6,7 +6,7 @@ use std::time::Duration;
use anyhow::{anyhow, Context};
use red_knot_python_semantic::{resolve_module, ModuleName, Program, PythonVersion, SitePackages};
use red_knot_workspace::db::RootDatabase;
use red_knot_workspace::db::{Db, RootDatabase};
use red_knot_workspace::watch;
use red_knot_workspace::watch::{directory_watcher, WorkspaceWatcher};
use red_knot_workspace::workspace::settings::{Configuration, SearchPathConfiguration};
@@ -14,6 +14,7 @@ use red_knot_workspace::workspace::WorkspaceMetadata;
use ruff_db::files::{system_path_to_file, File, FileError};
use ruff_db::source::source_text;
use ruff_db::system::{OsSystem, SystemPath, SystemPathBuf};
use ruff_db::testing::setup_logging;
use ruff_db::Upcast;
struct TestCase {
@@ -69,7 +70,6 @@ impl TestCase {
Some(all_events)
}
#[cfg(unix)]
fn take_watch_changes(&self) -> Vec<watch::ChangeEvent> {
self.try_take_watch_changes(Duration::from_secs(10))
.expect("Expected watch changes but observed none")
@@ -110,8 +110,8 @@ impl TestCase {
) -> anyhow::Result<()> {
let program = Program::get(self.db());
self.configuration.search_paths = configuration.clone();
let new_settings = configuration.into_settings(self.db.workspace().root(&self.db));
let new_settings = configuration.to_settings(self.db.workspace().root(&self.db));
self.configuration.search_paths = configuration;
program.update_search_paths(&mut self.db, &new_settings)?;
@@ -204,7 +204,9 @@ where
.as_utf8_path()
.canonicalize_utf8()
.with_context(|| "Failed to canonicalize root path.")?,
);
)
.simplified()
.to_path_buf();
let workspace_path = root_path.join("workspace");
@@ -241,8 +243,7 @@ where
search_paths,
};
let workspace =
WorkspaceMetadata::from_path(&workspace_path, &system, Some(configuration.clone()))?;
let workspace = WorkspaceMetadata::discover(&workspace_path, &system, Some(&configuration))?;
let db = RootDatabase::new(workspace, system)?;
@@ -1311,3 +1312,138 @@ mod unix {
Ok(())
}
}
#[test]
fn nested_packages_delete_root() -> anyhow::Result<()> {
let mut case = setup(|root: &SystemPath, workspace_root: &SystemPath| {
std::fs::write(
workspace_root.join("pyproject.toml").as_std_path(),
r#"
[project]
name = "inner"
"#,
)?;
std::fs::write(
root.join("pyproject.toml").as_std_path(),
r#"
[project]
name = "outer"
"#,
)?;
Ok(())
})?;
assert_eq!(
case.db().workspace().root(case.db()),
&*case.workspace_path("")
);
std::fs::remove_file(case.workspace_path("pyproject.toml").as_std_path())?;
let changes = case.stop_watch();
case.apply_changes(changes);
// It should now pick up the outer workspace.
assert_eq!(case.db().workspace().root(case.db()), case.root_path());
Ok(())
}
#[test]
fn added_package() -> anyhow::Result<()> {
let _ = setup_logging();
let mut case = setup([
(
"pyproject.toml",
r#"
[project]
name = "inner"
[tool.knot.workspace]
members = ["packages/*"]
"#,
),
(
"packages/a/pyproject.toml",
r#"
[project]
name = "a"
"#,
),
])?;
assert_eq!(case.db().workspace().packages(case.db()).len(), 2);
std::fs::create_dir(case.workspace_path("packages/b").as_std_path())
.context("failed to create folder for package 'b'")?;
// The file watcher may miss changes made shortly after a folder is created.
// Most platform watchers don't support recursive watching natively, so
// file-watching libraries emulate it by registering a watcher for each
// directory, and that registration inevitably lags behind newly created folders.
case.take_watch_changes();
std::fs::write(
case.workspace_path("packages/b/pyproject.toml")
.as_std_path(),
r#"
[project]
name = "b"
"#,
)
.context("failed to write pyproject.toml for package b")?;
let changes = case.stop_watch();
case.apply_changes(changes);
assert_eq!(case.db().workspace().packages(case.db()).len(), 3);
Ok(())
}
#[test]
fn removed_package() -> anyhow::Result<()> {
let mut case = setup([
(
"pyproject.toml",
r#"
[project]
name = "inner"
[tool.knot.workspace]
members = ["packages/*"]
"#,
),
(
"packages/a/pyproject.toml",
r#"
[project]
name = "a"
"#,
),
(
"packages/b/pyproject.toml",
r#"
[project]
name = "b"
"#,
),
])?;
assert_eq!(case.db().workspace().packages(case.db()).len(), 3);
std::fs::remove_dir_all(case.workspace_path("packages/b").as_std_path())
.context("failed to remove package 'b'")?;
let changes = case.stop_watch();
case.apply_changes(changes);
assert_eq!(case.db().workspace().packages(case.db()).len(), 2);
Ok(())
}


@@ -13,7 +13,7 @@ license = { workspace = true }
[dependencies]
ruff_db = { workspace = true }
ruff_index = { workspace = true }
ruff_python_ast = { workspace = true, features = ["salsa"] }
ruff_python_ast = { workspace = true }
ruff_python_parser = { workspace = true }
ruff_python_stdlib = { workspace = true }
ruff_source_file = { workspace = true }
@@ -33,6 +33,7 @@ thiserror = { workspace = true }
tracing = { workspace = true }
rustc-hash = { workspace = true }
hashbrown = { workspace = true }
serde = { workspace = true, optional = true }
smallvec = { workspace = true }
static_assertions = { workspace = true }
test-case = { workspace = true }


@@ -0,0 +1,47 @@
# Optional
## Annotation
`typing.Optional[T]` is equivalent to `Union[T, None]`, i.e. a union of the type with `None`.
```py
from typing import Optional
a: Optional[int]
a1: Optional[bool]
a2: Optional[Optional[bool]]
a3: Optional[None]
def f():
# revealed: int | None
reveal_type(a)
# revealed: bool | None
reveal_type(a1)
# revealed: bool | None
reveal_type(a2)
# revealed: None
reveal_type(a3)
```
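The annotations above rely on `Optional[T]` literally being `Union[T, None]`. This can be sanity-checked at runtime with nothing but the stdlib `typing` module (a minimal sketch, independent of the test framework):

```py
from typing import Optional, Union

# Optional[T] is shorthand for Union[T, None]
assert Optional[int] == Union[int, None]

# Nesting collapses: Union[Union[bool, None], None] deduplicates to bool | None
assert Optional[Optional[bool]] == Optional[bool]

# Optional[None] collapses to NoneType itself
assert Optional[None] == type(None)
```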
## Assignment
```py
from typing import Optional
a: Optional[int] = 1
a = None
# error: [invalid-assignment] "Object of type `Literal[""]` is not assignable to `int | None`"
a = ""
```
## Typing Extensions
```py
from typing_extensions import Optional
a: Optional[int]
def f():
# revealed: int | None
reveal_type(a)
```


@@ -51,6 +51,8 @@ invalid1: Literal[3 + 4]
invalid2: Literal[4 + 3j]
# error: [invalid-literal-parameter]
invalid3: Literal[(3, 4)]
hello = "hello"
invalid4: Literal[
1 + 2, # error: [invalid-literal-parameter]
"foo",


@@ -0,0 +1,152 @@
# Narrowing for checks involving `type(x)`
## `type(x) is C`
```py
class A: ...
class B: ...
def get_a_or_b() -> A | B:
return A()
x = get_a_or_b()
if type(x) is A:
reveal_type(x) # revealed: A
else:
# It would be wrong to infer `B` here. The type
# of `x` could be a subclass of `A`, so we need
# to infer the full union type:
reveal_type(x) # revealed: A | B
```
## `type(x) is not C`
```py
class A: ...
class B: ...
def get_a_or_b() -> A | B:
return A()
x = get_a_or_b()
if type(x) is not A:
# Same reasoning as above: no narrowing should occur here.
reveal_type(x) # revealed: A | B
else:
reveal_type(x) # revealed: A
```
## `type(x) == C`, `type(x) != C`
No narrowing can occur for equality comparisons, since there might be a custom `__eq__`
implementation on the metaclass.
TODO: Narrowing might be possible in some cases where the classes themselves are `@final` or their
metaclass is `@final`.
```py
class IsEqualToEverything(type):
def __eq__(cls, other):
return True
class A(metaclass=IsEqualToEverything): ...
class B(metaclass=IsEqualToEverything): ...
def get_a_or_b() -> A | B:
return B()
x = get_a_or_b()
if type(x) == A:
reveal_type(x) # revealed: A | B
if type(x) != A:
reveal_type(x) # revealed: A | B
```
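Why equality narrowing would be unsound can be demonstrated at runtime (a minimal sketch using the same classes as above):

```py
class IsEqualToEverything(type):
    def __eq__(cls, other):
        return True

    # Overriding __eq__ resets __hash__; restore it so the classes stay hashable.
    __hash__ = type.__hash__

class A(metaclass=IsEqualToEverything): ...
class B(metaclass=IsEqualToEverything): ...

x = B()
assert (type(x) == A) is True  # "equal" to A even though x is a B
assert type(x) is B            # identity, by contrast, still distinguishes them
```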
## No narrowing for custom `type` callable
```py
class A: ...
class B: ...
def type(x):
return int
def get_a_or_b() -> A | B:
return A()
x = get_a_or_b()
if type(x) is A:
reveal_type(x) # revealed: A | B
else:
reveal_type(x) # revealed: A | B
```
## No narrowing for multiple arguments
No narrowing should occur if `type` is used to dynamically create a class:
```py
def get_str_or_int() -> str | int:
return "test"
x = get_str_or_int()
if type(x, (), {}) is str:
reveal_type(x) # revealed: str | int
else:
reveal_type(x) # revealed: str | int
```
## No narrowing for keyword arguments
`type` can't be used with a keyword argument:
```py
def get_str_or_int() -> str | int:
return "test"
x = get_str_or_int()
# TODO: we could issue a diagnostic here
if type(object=x) is str:
reveal_type(x) # revealed: str | int
```
## Narrowing if `type` is aliased
```py
class A: ...
class B: ...
alias_for_type = type
def get_a_or_b() -> A | B:
return A()
x = get_a_or_b()
if alias_for_type(x) is A:
reveal_type(x) # revealed: A
```
## Limitations
```py
class Base: ...
class Derived(Base): ...
def get_base() -> Base:
return Base()
x = get_base()
if type(x) is Base:
# Ideally, this could be narrower, but there is no way to
# express a constraint like `Base & ~ProperSubtypeOf[Base]`.
reveal_type(x) # revealed: Base
```
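The runtime semantics behind this limitation can be checked directly: `type(x) is Base` holds only when the runtime class is exactly `Base`, excluding proper subclasses, unlike `isinstance` (a minimal sketch):

```py
class Base: ...
class Derived(Base): ...

# The identity check excludes subclasses entirely...
assert type(Base()) is Base
assert type(Derived()) is not Base

# ...whereas isinstance accepts them.
assert isinstance(Derived(), Base)
```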


@@ -22,23 +22,23 @@ type:
```py
import sys
reveal_type(sys.version_info >= (3, 8)) # revealed: Literal[True]
reveal_type((3, 8) <= sys.version_info) # revealed: Literal[True]
reveal_type(sys.version_info >= (3, 9)) # revealed: Literal[True]
reveal_type((3, 9) <= sys.version_info) # revealed: Literal[True]
reveal_type(sys.version_info > (3, 8)) # revealed: Literal[True]
reveal_type((3, 8) < sys.version_info) # revealed: Literal[True]
reveal_type(sys.version_info > (3, 9)) # revealed: Literal[True]
reveal_type((3, 9) < sys.version_info) # revealed: Literal[True]
reveal_type(sys.version_info < (3, 8)) # revealed: Literal[False]
reveal_type((3, 8) > sys.version_info) # revealed: Literal[False]
reveal_type(sys.version_info < (3, 9)) # revealed: Literal[False]
reveal_type((3, 9) > sys.version_info) # revealed: Literal[False]
reveal_type(sys.version_info <= (3, 8)) # revealed: Literal[False]
reveal_type((3, 8) >= sys.version_info) # revealed: Literal[False]
reveal_type(sys.version_info <= (3, 9)) # revealed: Literal[False]
reveal_type((3, 9) >= sys.version_info) # revealed: Literal[False]
reveal_type(sys.version_info == (3, 8)) # revealed: Literal[False]
reveal_type((3, 8) == sys.version_info) # revealed: Literal[False]
reveal_type(sys.version_info == (3, 9)) # revealed: Literal[False]
reveal_type((3, 9) == sys.version_info) # revealed: Literal[False]
reveal_type(sys.version_info != (3, 8)) # revealed: Literal[True]
reveal_type((3, 8) != sys.version_info) # revealed: Literal[True]
reveal_type(sys.version_info != (3, 9)) # revealed: Literal[True]
reveal_type((3, 9) != sys.version_info) # revealed: Literal[True]
```
## Non-literal types from comparisons
@@ -49,17 +49,17 @@ sometimes not:
```py
import sys
reveal_type(sys.version_info >= (3, 8, 1)) # revealed: bool
reveal_type(sys.version_info >= (3, 8, 1, "final", 0)) # revealed: bool
reveal_type(sys.version_info >= (3, 9, 1)) # revealed: bool
reveal_type(sys.version_info >= (3, 9, 1, "final", 0)) # revealed: bool
# TODO: While this won't fail at runtime, the user has probably made a mistake
# if they're comparing a tuple of length >5 with `sys.version_info`
# (`sys.version_info` is a tuple of length 5). It might be worth
# emitting a lint diagnostic of some kind warning them about the probable error?
reveal_type(sys.version_info >= (3, 8, 1, "final", 0, 5)) # revealed: bool
reveal_type(sys.version_info >= (3, 9, 1, "final", 0, 5)) # revealed: bool
# TODO: this should be `Literal[False]`; see #14279
reveal_type(sys.version_info == (3, 8, 1, "finallllll", 0)) # revealed: bool
reveal_type(sys.version_info == (3, 9, 1, "finallllll", 0)) # revealed: bool
```
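These comparisons bottom out in ordinary lexicographic tuple ordering, which can be sanity-checked independently of the interpreter version (a minimal sketch):

```py
import sys

# Tuple comparison is lexicographic; a shorter tuple that is a prefix of a
# longer one compares as smaller.
assert (3, 9) < (3, 9, 1)
assert (3, 10) > (3, 9, 1)
assert (3, 9, 0) >= (3, 9)

# So comparing against a longer tuple yields a plain bool, not a constant.
assert isinstance(sys.version_info >= (3, 9, 1), bool)
```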
## Imports and aliases
@@ -71,11 +71,11 @@ another name:
from sys import version_info
from sys import version_info as foo
reveal_type(version_info >= (3, 8)) # revealed: Literal[True]
reveal_type(foo >= (3, 8)) # revealed: Literal[True]
reveal_type(version_info >= (3, 9)) # revealed: Literal[True]
reveal_type(foo >= (3, 9)) # revealed: Literal[True]
bar = version_info
reveal_type(bar >= (3, 8)) # revealed: Literal[True]
reveal_type(bar >= (3, 9)) # revealed: Literal[True]
```
## Non-stdlib modules named `sys`
@@ -92,7 +92,7 @@ version_info: tuple[int, int] = (4, 2)
```py path=package/script.py
from .sys import version_info
reveal_type(version_info >= (3, 8)) # revealed: bool
reveal_type(version_info >= (3, 9)) # revealed: bool
```
## Accessing fields by name
@@ -103,8 +103,8 @@ The fields of `sys.version_info` can be accessed by name:
import sys
reveal_type(sys.version_info.major >= 3) # revealed: Literal[True]
reveal_type(sys.version_info.minor >= 8) # revealed: Literal[True]
reveal_type(sys.version_info.minor >= 9) # revealed: Literal[False]
reveal_type(sys.version_info.minor >= 9) # revealed: Literal[True]
reveal_type(sys.version_info.minor >= 10) # revealed: Literal[False]
```
But the `micro`, `releaselevel` and `serial` fields are inferred as `@Todo` until we support
@@ -126,14 +126,14 @@ The fields of `sys.version_info` can be accessed by index or by slice:
import sys
reveal_type(sys.version_info[0] < 3) # revealed: Literal[False]
reveal_type(sys.version_info[1] > 8) # revealed: Literal[False]
reveal_type(sys.version_info[1] > 9) # revealed: Literal[False]
# revealed: tuple[Literal[3], Literal[8], int, Literal["alpha", "beta", "candidate", "final"], int]
# revealed: tuple[Literal[3], Literal[9], int, Literal["alpha", "beta", "candidate", "final"], int]
reveal_type(sys.version_info[:5])
reveal_type(sys.version_info[:2] >= (3, 8)) # revealed: Literal[True]
reveal_type(sys.version_info[0:2] >= (3, 9)) # revealed: Literal[False]
reveal_type(sys.version_info[:3] >= (3, 9, 1)) # revealed: Literal[False]
reveal_type(sys.version_info[:2] >= (3, 9)) # revealed: Literal[True]
reveal_type(sys.version_info[0:2] >= (3, 10)) # revealed: Literal[False]
reveal_type(sys.version_info[:3] >= (3, 10, 1)) # revealed: Literal[False]
reveal_type(sys.version_info[3] == "final") # revealed: bool
reveal_type(sys.version_info[3] == "finalllllll") # revealed: Literal[False]
```


@@ -459,11 +459,11 @@ foo: 3.8- # trailing comment
";
let parsed_versions = TypeshedVersions::from_str(VERSIONS).unwrap();
assert_eq!(parsed_versions.len(), 3);
assert_snapshot!(parsed_versions.to_string(), @r###"
assert_snapshot!(parsed_versions.to_string(), @r"
bar: 2.7-3.10
bar.baz: 3.1-3.9
foo: 3.8-
"###
"
);
}


@@ -54,6 +54,7 @@ impl Program {
}
#[derive(Clone, Debug, Eq, PartialEq)]
#[cfg_attr(feature = "serde", derive(serde::Serialize))]
pub struct ProgramSettings {
pub target_version: PythonVersion,
pub search_paths: SearchPathSettings,
@@ -61,6 +62,7 @@ pub struct ProgramSettings {
/// Configures the search paths for module resolution.
#[derive(Eq, PartialEq, Debug, Clone)]
#[cfg_attr(feature = "serde", derive(serde::Serialize))]
pub struct SearchPathSettings {
/// List of user-provided paths that should take first priority in the module resolution.
/// Examples in other type checkers are mypy's MYPYPATH environment variable,
@@ -91,6 +93,7 @@ impl SearchPathSettings {
}
#[derive(Debug, Clone, Eq, PartialEq)]
#[cfg_attr(feature = "serde", derive(serde::Serialize))]
pub enum SitePackages {
Derived {
venv_path: SystemPathBuf,


@@ -5,6 +5,7 @@ use std::fmt;
/// Unlike the `TargetVersion` enums in the CLI crates,
/// this does not necessarily represent a Python version that we actually support.
#[derive(Debug, Clone, Copy, Eq, PartialEq, Ord, PartialOrd, Hash)]
#[cfg_attr(feature = "serde", derive(serde::Serialize))]
pub struct PythonVersion {
pub major: u8,
pub minor: u8,
@@ -38,7 +39,7 @@ impl PythonVersion {
impl Default for PythonVersion {
fn default() -> Self {
Self::PY38
Self::PY39
}
}


@@ -49,64 +49,50 @@ fn ast_ids<'db>(db: &'db dyn Db, scope: ScopeId) -> &'db AstIds {
semantic_index(db, scope.file(db)).ast_ids(scope.file_scope_id(db))
}
pub trait HasScopedUseId {
/// The type of the ID uniquely identifying the use.
type Id: Copy;
/// Returns the ID that uniquely identifies the use in `scope`.
fn scoped_use_id(&self, db: &dyn Db, scope: ScopeId) -> Self::Id;
}
/// Uniquely identifies a use of a name in a [`crate::semantic_index::symbol::FileScopeId`].
#[newtype_index]
pub struct ScopedUseId;
impl HasScopedUseId for ast::ExprName {
type Id = ScopedUseId;
pub trait HasScopedUseId {
/// Returns the ID that uniquely identifies the use in `scope`.
fn scoped_use_id(&self, db: &dyn Db, scope: ScopeId) -> ScopedUseId;
}
fn scoped_use_id(&self, db: &dyn Db, scope: ScopeId) -> Self::Id {
impl HasScopedUseId for ast::ExprName {
fn scoped_use_id(&self, db: &dyn Db, scope: ScopeId) -> ScopedUseId {
let expression_ref = ExpressionRef::from(self);
expression_ref.scoped_use_id(db, scope)
}
}
impl HasScopedUseId for ast::ExpressionRef<'_> {
type Id = ScopedUseId;
fn scoped_use_id(&self, db: &dyn Db, scope: ScopeId) -> Self::Id {
fn scoped_use_id(&self, db: &dyn Db, scope: ScopeId) -> ScopedUseId {
let ast_ids = ast_ids(db, scope);
ast_ids.use_id(*self)
}
}
pub trait HasScopedAstId {
/// The type of the ID uniquely identifying the node.
type Id: Copy;
/// Returns the ID that uniquely identifies the node in `scope`.
fn scoped_ast_id(&self, db: &dyn Db, scope: ScopeId) -> Self::Id;
}
impl<T: HasScopedAstId> HasScopedAstId for Box<T> {
type Id = <T as HasScopedAstId>::Id;
fn scoped_ast_id(&self, db: &dyn Db, scope: ScopeId) -> Self::Id {
self.as_ref().scoped_ast_id(db, scope)
}
}
/// Uniquely identifies an [`ast::Expr`] in a [`crate::semantic_index::symbol::FileScopeId`].
#[newtype_index]
pub struct ScopedExpressionId;
pub trait HasScopedExpressionId {
/// Returns the ID that uniquely identifies the node in `scope`.
fn scoped_expression_id(&self, db: &dyn Db, scope: ScopeId) -> ScopedExpressionId;
}
impl<T: HasScopedExpressionId> HasScopedExpressionId for Box<T> {
fn scoped_expression_id(&self, db: &dyn Db, scope: ScopeId) -> ScopedExpressionId {
self.as_ref().scoped_expression_id(db, scope)
}
}
macro_rules! impl_has_scoped_expression_id {
($ty: ty) => {
impl HasScopedAstId for $ty {
type Id = ScopedExpressionId;
fn scoped_ast_id(&self, db: &dyn Db, scope: ScopeId) -> Self::Id {
impl HasScopedExpressionId for $ty {
fn scoped_expression_id(&self, db: &dyn Db, scope: ScopeId) -> ScopedExpressionId {
let expression_ref = ExpressionRef::from(self);
expression_ref.scoped_ast_id(db, scope)
expression_ref.scoped_expression_id(db, scope)
}
}
};
@@ -146,29 +132,20 @@ impl_has_scoped_expression_id!(ast::ExprSlice);
impl_has_scoped_expression_id!(ast::ExprIpyEscapeCommand);
impl_has_scoped_expression_id!(ast::Expr);
impl HasScopedAstId for ast::ExpressionRef<'_> {
type Id = ScopedExpressionId;
fn scoped_ast_id(&self, db: &dyn Db, scope: ScopeId) -> Self::Id {
impl HasScopedExpressionId for ast::ExpressionRef<'_> {
fn scoped_expression_id(&self, db: &dyn Db, scope: ScopeId) -> ScopedExpressionId {
let ast_ids = ast_ids(db, scope);
ast_ids.expression_id(*self)
}
}
#[derive(Debug)]
#[derive(Debug, Default)]
pub(super) struct AstIdsBuilder {
expressions_map: FxHashMap<ExpressionNodeKey, ScopedExpressionId>,
uses_map: FxHashMap<ExpressionNodeKey, ScopedUseId>,
}
impl AstIdsBuilder {
pub(super) fn new() -> Self {
Self {
expressions_map: FxHashMap::default(),
uses_map: FxHashMap::default(),
}
}
/// Adds `expr` to the expression ids map and returns its id.
pub(super) fn record_expression(&mut self, expr: &ast::Expr) -> ScopedExpressionId {
let expression_id = self.expressions_map.len().into();


@@ -124,9 +124,9 @@ impl<'db> SemanticIndexBuilder<'db> {
self.try_node_context_stack_manager.enter_nested_scope();
let file_scope_id = self.scopes.push(scope);
self.symbol_tables.push(SymbolTableBuilder::new());
self.use_def_maps.push(UseDefMapBuilder::new());
let ast_id_scope = self.ast_ids.push(AstIdsBuilder::new());
self.symbol_tables.push(SymbolTableBuilder::default());
self.use_def_maps.push(UseDefMapBuilder::default());
let ast_id_scope = self.ast_ids.push(AstIdsBuilder::default());
let scope_id = ScopeId::new(self.db, self.file, file_scope_id, countme::Count::default());


@@ -210,7 +210,7 @@ impl ScopeKind {
}
/// Symbol table for a specific [`Scope`].
#[derive(Debug)]
#[derive(Debug, Default)]
pub struct SymbolTable {
/// The symbols in this scope.
symbols: IndexVec<ScopedSymbolId, Symbol>,
@@ -220,13 +220,6 @@ pub struct SymbolTable {
}
impl SymbolTable {
fn new() -> Self {
Self {
symbols: IndexVec::new(),
symbols_by_name: SymbolMap::default(),
}
}
fn shrink_to_fit(&mut self) {
self.symbols.shrink_to_fit();
}
@@ -278,18 +271,12 @@ impl PartialEq for SymbolTable {
impl Eq for SymbolTable {}
#[derive(Debug)]
#[derive(Debug, Default)]
pub(super) struct SymbolTableBuilder {
table: SymbolTable,
}
impl SymbolTableBuilder {
pub(super) fn new() -> Self {
Self {
table: SymbolTable::new(),
}
}
pub(super) fn add_symbol(&mut self, name: Name) -> (ScopedSymbolId, bool) {
let hash = SymbolTable::hash_name(&name);
let entry = self


@@ -459,10 +459,6 @@ pub(super) struct UseDefMapBuilder<'db> {
}
impl<'db> UseDefMapBuilder<'db> {
pub(super) fn new() -> Self {
Self::default()
}
pub(super) fn add_symbol(&mut self, symbol: ScopedSymbolId) {
let new_symbol = self.symbol_states.push(SymbolState::undefined());
debug_assert_eq!(symbol, new_symbol);


@@ -6,7 +6,7 @@ use ruff_source_file::LineIndex;
use crate::module_name::ModuleName;
use crate::module_resolver::{resolve_module, Module};
use crate::semantic_index::ast_ids::HasScopedAstId;
use crate::semantic_index::ast_ids::HasScopedExpressionId;
use crate::semantic_index::semantic_index;
use crate::types::{binding_ty, infer_scope_types, Type};
use crate::Db;
@@ -54,7 +54,7 @@ impl HasTy for ast::ExpressionRef<'_> {
let file_scope = index.expression_scope_id(*self);
let scope = file_scope.to_scope_id(model.db, model.file);
let expression_id = self.scoped_ast_id(model.db, scope);
let expression_id = self.scoped_expression_id(model.db, scope);
infer_scope_types(model.db, scope).expression_ty(expression_id)
}
}


@@ -732,7 +732,20 @@ mod tests {
let system = TestSystem::default();
assert!(matches!(
VirtualEnvironment::new("/.venv", &system),
Err(SitePackagesDiscoveryError::VenvDirIsNotADirectory(_))
Err(SitePackagesDiscoveryError::VenvDirCanonicalizationError(..))
));
}
#[test]
fn reject_venv_that_is_not_a_directory() {
let system = TestSystem::default();
system
.memory_file_system()
.write_file("/.venv", "")
.unwrap();
assert!(matches!(
VirtualEnvironment::new("/.venv", &system),
Err(SitePackagesDiscoveryError::VenvDirIsNotADirectory(..))
));
}


@@ -14,7 +14,7 @@ pub(crate) use self::infer::{
};
pub(crate) use self::signatures::Signature;
use crate::module_resolver::file_to_module;
use crate::semantic_index::ast_ids::HasScopedAstId;
use crate::semantic_index::ast_ids::HasScopedExpressionId;
use crate::semantic_index::definition::Definition;
use crate::semantic_index::symbol::{self as symbol, ScopeId, ScopedSymbolId};
use crate::semantic_index::{
@@ -47,7 +47,7 @@ pub fn check_types(db: &dyn Db, file: File) -> TypeCheckDiagnostics {
tracing::debug!("Checking file '{path}'", path = file.path(db));
let index = semantic_index(db, file);
let mut diagnostics = TypeCheckDiagnostics::new();
let mut diagnostics = TypeCheckDiagnostics::default();
for scope_id in index.scope_ids() {
let result = infer_scope_types(db, scope_id);
@@ -207,7 +207,7 @@ fn definition_expression_ty<'db>(
let index = semantic_index(db, file);
let file_scope = index.expression_scope_id(expression);
let scope = file_scope.to_scope_id(db, file);
let expr_id = expression.scoped_ast_id(db, scope);
let expr_id = expression.scoped_expression_id(db, scope);
if scope == definition.scope(db) {
// expression is in the definition scope
let inference = infer_definition_types(db, definition);
@@ -1807,6 +1807,8 @@ impl<'db> KnownClass {
pub enum KnownInstanceType<'db> {
/// The symbol `typing.Literal` (which can also be found as `typing_extensions.Literal`)
Literal,
/// The symbol `typing.Optional` (which can also be found as `typing_extensions.Optional`)
Optional,
/// A single instance of `typing.TypeVar`
TypeVar(TypeVarInstance<'db>),
// TODO: fill this enum out with more special forms, etc.
@@ -1816,6 +1818,7 @@ impl<'db> KnownInstanceType<'db> {
pub const fn as_str(self) -> &'static str {
match self {
KnownInstanceType::Literal => "Literal",
KnownInstanceType::Optional => "Optional",
KnownInstanceType::TypeVar(_) => "TypeVar",
}
}
@@ -1823,8 +1826,7 @@ impl<'db> KnownInstanceType<'db> {
/// Evaluate the known instance in boolean context
pub const fn bool(self) -> Truthiness {
match self {
Self::Literal => Truthiness::AlwaysTrue,
Self::TypeVar(_) => Truthiness::AlwaysTrue,
Self::Literal | Self::Optional | Self::TypeVar(_) => Truthiness::AlwaysTrue,
}
}
@@ -1832,6 +1834,7 @@ impl<'db> KnownInstanceType<'db> {
pub fn repr(self, db: &'db dyn Db) -> &'db str {
match self {
Self::Literal => "typing.Literal",
Self::Optional => "typing.Optional",
Self::TypeVar(typevar) => typevar.name(db),
}
}
@@ -1840,6 +1843,7 @@ impl<'db> KnownInstanceType<'db> {
pub const fn class(self) -> KnownClass {
match self {
Self::Literal => KnownClass::SpecialForm,
Self::Optional => KnownClass::SpecialForm,
Self::TypeVar(_) => KnownClass::TypeVar,
}
}
@@ -1859,6 +1863,7 @@ impl<'db> KnownInstanceType<'db> {
}
match (module.name().as_str(), instance_name) {
("typing" | "typing_extensions", "Literal") => Some(Self::Literal),
("typing" | "typing_extensions", "Optional") => Some(Self::Optional),
_ => None,
}
}


@@ -128,7 +128,7 @@ impl<'db> IntersectionBuilder<'db> {
pub(crate) fn new(db: &'db dyn Db) -> Self {
Self {
db,
intersections: vec![InnerIntersectionBuilder::new()],
intersections: vec![InnerIntersectionBuilder::default()],
}
}
@@ -231,10 +231,6 @@ struct InnerIntersectionBuilder<'db> {
}
impl<'db> InnerIntersectionBuilder<'db> {
fn new() -> Self {
Self::default()
}
/// Adds a positive type to this intersection.
fn add_positive(&mut self, db: &'db dyn Db, new_positive: Type<'db>) {
if let Type::Intersection(other) = new_positive {
@@ -253,7 +249,7 @@ impl<'db> InnerIntersectionBuilder<'db> {
.iter()
.find(|element| element.is_boolean_literal())
{
*self = Self::new();
*self = Self::default();
self.positive.insert(Type::BooleanLiteral(!value));
return;
}
@@ -272,7 +268,7 @@ impl<'db> InnerIntersectionBuilder<'db> {
}
// A & B = Never if A and B are disjoint
if new_positive.is_disjoint_from(db, *existing_positive) {
*self = Self::new();
*self = Self::default();
self.positive.insert(Type::Never);
return;
}
@@ -285,7 +281,7 @@ impl<'db> InnerIntersectionBuilder<'db> {
for (index, existing_negative) in self.negative.iter().enumerate() {
// S & ~T = Never if S <: T
if new_positive.is_subtype_of(db, *existing_negative) {
*self = Self::new();
*self = Self::default();
self.positive.insert(Type::Never);
return;
}
@@ -326,7 +322,7 @@ impl<'db> InnerIntersectionBuilder<'db> {
.iter()
.any(|pos| *pos == KnownClass::Bool.to_instance(db)) =>
{
*self = Self::new();
*self = Self::default();
self.positive.insert(Type::BooleanLiteral(!bool));
}
_ => {
@@ -348,7 +344,7 @@ impl<'db> InnerIntersectionBuilder<'db> {
for existing_positive in &self.positive {
// S & ~T = Never if S <: T
if existing_positive.is_subtype_of(db, new_negative) {
*self = Self::new();
*self = Self::default();
self.positive.insert(Type::Never);
return;
}


@@ -73,10 +73,6 @@ pub struct TypeCheckDiagnostics {
}
impl TypeCheckDiagnostics {
pub fn new() -> Self {
Self { inner: Vec::new() }
}
pub(super) fn push(&mut self, diagnostic: TypeCheckDiagnostic) {
self.inner.push(Arc::new(diagnostic));
}
@@ -148,7 +144,7 @@ impl<'db> TypeCheckDiagnosticsBuilder<'db> {
Self {
db,
file,
diagnostics: TypeCheckDiagnostics::new(),
diagnostics: TypeCheckDiagnostics::default(),
}
}

View File

@@ -38,7 +38,7 @@ use salsa::plumbing::AsId;
use crate::module_name::ModuleName;
use crate::module_resolver::{file_to_module, resolve_module};
use crate::semantic_index::ast_ids::{HasScopedAstId, HasScopedUseId, ScopedExpressionId};
use crate::semantic_index::ast_ids::{HasScopedExpressionId, HasScopedUseId, ScopedExpressionId};
use crate::semantic_index::definition::{
AssignmentDefinitionKind, Definition, DefinitionKind, DefinitionNodeKey,
ExceptHandlerDefinitionKind, TargetKind,
@@ -181,7 +181,7 @@ fn infer_unpack_types<'db>(db: &'db dyn Db, unpack: Unpack<'db>) -> UnpackResult
let scope = unpack.scope(db);
let result = infer_expression_types(db, value);
let value_ty = result.expression_ty(value.node_ref(db).scoped_ast_id(db, scope));
let value_ty = result.expression_ty(value.node_ref(db).scoped_expression_id(db, scope));
let mut unpacker = Unpacker::new(db, file);
unpacker.unpack(unpack.target(db), value_ty, scope);
@@ -409,7 +409,7 @@ impl<'db> TypeInferenceBuilder<'db> {
#[track_caller]
fn expression_ty(&self, expr: &ast::Expr) -> Type<'db> {
self.types
.expression_ty(expr.scoped_ast_id(self.db, self.scope()))
.expression_ty(expr.scoped_expression_id(self.db, self.scope()))
}
/// Infers types in the given [`InferenceRegion`].
@@ -954,7 +954,7 @@ impl<'db> TypeInferenceBuilder<'db> {
let function_ty = Type::FunctionLiteral(FunctionType::new(
self.db,
&*name.id,
&name.id,
function_kind,
body_scope,
decorator_tys,
@@ -1069,7 +1069,7 @@ impl<'db> TypeInferenceBuilder<'db> {
let maybe_known_class = KnownClass::try_from_file(self.db, self.file, name);
let class = Class::new(self.db, &*name.id, body_scope, maybe_known_class);
let class = Class::new(self.db, &name.id, body_scope, maybe_known_class);
let class_ty = Type::class_literal(class);
self.add_declaration_with_binding(class_node.into(), definition, class_ty, class_ty);
@@ -1215,9 +1215,10 @@ impl<'db> TypeInferenceBuilder<'db> {
is_async,
);
self.types
.expressions
.insert(target.scoped_ast_id(self.db, self.scope()), target_ty);
self.types.expressions.insert(
target.scoped_expression_id(self.db, self.scope()),
target_ty,
);
self.add_binding(target.into(), definition, target_ty);
}
@@ -1607,7 +1608,7 @@ impl<'db> TypeInferenceBuilder<'db> {
self.infer_standalone_expression(value);
let value_ty = self.expression_ty(value);
let name_ast_id = name.scoped_ast_id(self.db, self.scope());
let name_ast_id = name.scoped_expression_id(self.db, self.scope());
let target_ty = match assignment.target() {
TargetKind::Sequence(unpack) => {
@@ -2211,18 +2212,14 @@ impl<'db> TypeInferenceBuilder<'db> {
ty
}
fn store_expression_type(
&mut self,
expression: &impl HasScopedAstId<Id = ScopedExpressionId>,
ty: Type<'db>,
) {
fn store_expression_type(&mut self, expression: &impl HasScopedExpressionId, ty: Type<'db>) {
if self.deferred_state.in_string_annotation() {
// Avoid storing the type of expressions that are part of a string annotation because
// the expression ids don't exist in the semantic index. Instead, we'll store the type
// on the string expression itself that represents the annotation.
return;
}
let expr_id = expression.scoped_ast_id(self.db, self.scope());
let expr_id = expression.scoped_expression_id(self.db, self.scope());
let previous = self.types.expressions.insert(expr_id, ty);
assert_eq!(previous, None);
}
@@ -2288,6 +2285,12 @@ impl<'db> TypeInferenceBuilder<'db> {
} = expression;
let ty = self.infer_expression(expression);
if let Some(ref format_spec) = format_spec {
for element in format_spec.elements.expressions() {
self.infer_expression(&element.expression);
}
}
// TODO: handle format specifiers by calling a method
// (`Type::format`?) that handles the `__format__` method.
// Conversion flags should be handled before calling `__format__`.
@@ -2541,10 +2544,10 @@ impl<'db> TypeInferenceBuilder<'db> {
.parent_scope_id(self.scope().file_scope_id(self.db))
.expect("A comprehension should never be the top-level scope")
.to_scope_id(self.db, self.file);
result.expression_ty(iterable.scoped_ast_id(self.db, lookup_scope))
result.expression_ty(iterable.scoped_expression_id(self.db, lookup_scope))
} else {
self.extend(result);
result.expression_ty(iterable.scoped_ast_id(self.db, self.scope()))
result.expression_ty(iterable.scoped_expression_id(self.db, self.scope()))
};
let target_ty = if is_async {
@@ -2556,9 +2559,10 @@ impl<'db> TypeInferenceBuilder<'db> {
.unwrap_with_diagnostic(iterable.into(), &mut self.diagnostics)
};
self.types
.expressions
.insert(target.scoped_ast_id(self.db, self.scope()), target_ty);
self.types.expressions.insert(
target.scoped_expression_id(self.db, self.scope()),
target_ty,
);
self.add_binding(target.into(), definition, target_ty);
}
@@ -4445,11 +4449,18 @@ impl<'db> TypeInferenceBuilder<'db> {
element_types.push(element_ty);
}
if return_todo {
let ty = if return_todo {
Type::Todo
} else {
Type::tuple(self.db, &element_types)
}
};
// Here, we store the type for the inner `int, str` tuple-expression,
// while the type for the outer `tuple[int, str]` slice-expression is
// stored in the surrounding `infer_type_expression` call:
self.store_expression_type(tuple_slice, ty);
ty
}
single_element => {
let single_element_ty = self.infer_type_expression(single_element);
@@ -4465,8 +4476,8 @@ impl<'db> TypeInferenceBuilder<'db> {
/// Given the slice of a `type[]` annotation, return the type that the annotation represents
fn infer_subclass_of_type_expression(&mut self, slice: &ast::Expr) -> Type<'db> {
match slice {
ast::Expr::Name(name) => {
let name_ty = self.infer_name_expression(name);
ast::Expr::Name(_) => {
let name_ty = self.infer_expression(slice);
if let Some(ClassLiteralType { class }) = name_ty.into_class_literal() {
Type::subclass_of(class)
} else {
@@ -4526,6 +4537,10 @@ impl<'db> TypeInferenceBuilder<'db> {
Type::Unknown
}
},
KnownInstanceType::Optional => {
let param_type = self.infer_type_expression(parameters);
UnionType::from_elements(self.db, [param_type, Type::none(self.db)])
}
KnownInstanceType::TypeVar(_) => Type::Todo,
}
}
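The new `KnownInstanceType::Optional` arm above unions the parameter type with `None`. The runtime `typing` module exposes the same equivalence, which can be checked directly:

```py
from typing import Optional, Union, get_args

# Optional[X] is sugar for Union[X, None]:
assert Optional[int] == Union[int, None]
assert get_args(Optional[int]) == (int, type(None))
```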
@@ -4539,8 +4554,15 @@ impl<'db> TypeInferenceBuilder<'db> {
ast::Expr::Subscript(ast::ExprSubscript { value, slice, .. }) => {
let value_ty = self.infer_expression(value);
if matches!(value_ty, Type::KnownInstance(KnownInstanceType::Literal)) {
self.infer_literal_parameter_type(slice)?
let ty = self.infer_literal_parameter_type(slice)?;
// This branch deals with annotations such as `Literal[Literal[1]]`.
// Here, we store the type for the inner `Literal[1]` expression:
self.store_expression_type(parameters, ty);
ty
} else {
self.store_expression_type(parameters, Type::Unknown);
return Err(vec![parameters]);
}
}
@@ -4558,15 +4580,27 @@ impl<'db> TypeInferenceBuilder<'db> {
}
}
if errors.is_empty() {
builder.build()
let union_type = builder.build();
// This branch deals with annotations such as `Literal[1, 2]`. Here, we
// store the type for the inner `1, 2` tuple-expression:
self.store_expression_type(parameters, union_type);
union_type
} else {
self.store_expression_type(parameters, Type::Unknown);
return Err(errors);
}
}
ast::Expr::StringLiteral(literal) => self.infer_string_literal_expression(literal),
ast::Expr::BytesLiteral(literal) => self.infer_bytes_literal_expression(literal),
ast::Expr::BooleanLiteral(literal) => self.infer_boolean_literal_expression(literal),
literal @ (ast::Expr::StringLiteral(_)
| ast::Expr::BytesLiteral(_)
| ast::Expr::BooleanLiteral(_)
| ast::Expr::NoneLiteral(_)) => self.infer_expression(literal),
literal @ ast::Expr::NumberLiteral(ref number) if number.value.is_int() => {
self.infer_expression(literal)
}
// For enum values
ast::Expr::Attribute(ast::ExprAttribute { value, attr, .. }) => {
let value_ty = self.infer_expression(value);
@@ -4576,7 +4610,6 @@ impl<'db> TypeInferenceBuilder<'db> {
.ignore_possibly_unbound()
.unwrap_or(Type::Unknown)
}
ast::Expr::NoneLiteral(_) => Type::none(self.db),
// for negative and positive numbers
ast::Expr::UnaryOp(ref u)
if matches!(u.op, UnaryOp::USub | UnaryOp::UAdd)
@@ -4584,10 +4617,8 @@ impl<'db> TypeInferenceBuilder<'db> {
{
self.infer_unary_expression(u)
}
ast::Expr::NumberLiteral(ref number) if number.value.is_int() => {
self.infer_number_literal_expression(number)
}
_ => {
self.infer_expression(parameters);
return Err(vec![parameters]);
}
})
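The nested-`Literal` handling above (`Literal[Literal[1]]`, `Literal[1, 2]`) matches the flattening behavior PEP 586 specifies. Recent CPython versions flatten nested literals at runtime too (an assumption for older 3.x releases, where `typing` did not flatten):

```py
from typing import Literal, get_args

# Nested Literal parameters are flattened into one literal union:
assert get_args(Literal[1, Literal[2, 3]]) == (1, 2, 3)

# Literal[Literal[1]] collapses to Literal[1]:
assert get_args(Literal[Literal[1]]) == (1,)
```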
@@ -4755,6 +4786,7 @@ enum ModuleNameResolutionError {
///
/// If the formatted string contains an expression (with a representation unknown at compile time),
/// infers an instance of `builtins.str`.
#[derive(Debug)]
struct StringPartsCollector {
concatenated: Option<String>,
expression: bool,

View File

@@ -371,8 +371,9 @@ impl<'db> ClassBase<'db> {
| Type::ModuleLiteral(_)
| Type::SubclassOf(_) => None,
Type::KnownInstance(known_instance) => match known_instance {
KnownInstanceType::Literal => None,
KnownInstanceType::TypeVar(_) => None,
KnownInstanceType::TypeVar(_)
| KnownInstanceType::Literal
| KnownInstanceType::Optional => None,
},
}
}

View File

@@ -1,4 +1,4 @@
use crate::semantic_index::ast_ids::HasScopedAstId;
use crate::semantic_index::ast_ids::HasScopedExpressionId;
use crate::semantic_index::constraint::{Constraint, ConstraintNode, PatternConstraint};
use crate::semantic_index::definition::Definition;
use crate::semantic_index::expression::Expression;
@@ -257,17 +257,26 @@ impl<'db> NarrowingConstraintsBuilder<'db> {
expression: Expression<'db>,
is_positive: bool,
) -> Option<NarrowingConstraints<'db>> {
fn is_narrowing_target_candidate(expr: &ast::Expr) -> bool {
matches!(expr, ast::Expr::Name(_) | ast::Expr::Call(_))
}
let ast::ExprCompare {
range: _,
left,
ops,
comparators,
} = expr_compare;
if !left.is_name_expr() && comparators.iter().all(|c| !c.is_name_expr()) {
// If none of the comparators are name expressions,
// we have no symbol to narrow down the type of.
// Performance optimization: early return if there are no potential narrowing targets.
if !is_narrowing_target_candidate(left)
&& comparators
.iter()
.all(|c| !is_narrowing_target_candidate(c))
{
return None;
}
if !is_positive && comparators.len() > 1 {
// We can't negate a constraint made by a multi-comparator expression, since we can't
// know which comparison part is the one being negated.
@@ -283,42 +292,85 @@ impl<'db> NarrowingConstraintsBuilder<'db> {
.tuple_windows::<(&ruff_python_ast::Expr, &ruff_python_ast::Expr)>();
let mut constraints = NarrowingConstraints::default();
for (op, (left, right)) in std::iter::zip(&**ops, comparator_tuples) {
if let ast::Expr::Name(ast::ExprName {
range: _,
id,
ctx: _,
}) = left
{
// SAFETY: we should always have a symbol for every Name node.
let symbol = self.symbols().symbol_id_by_name(id).unwrap();
let rhs_ty = inference.expression_ty(right.scoped_ast_id(self.db, scope));
let rhs_ty = inference.expression_ty(right.scoped_expression_id(self.db, scope));
match if is_positive { *op } else { op.negate() } {
ast::CmpOp::IsNot => {
if rhs_ty.is_singleton(self.db) {
let ty = IntersectionBuilder::new(self.db)
.add_negative(rhs_ty)
.build();
constraints.insert(symbol, ty);
} else {
// Non-singletons cannot be safely narrowed using `is not`
match left {
ast::Expr::Name(ast::ExprName {
range: _,
id,
ctx: _,
}) => {
let symbol = self
.symbols()
.symbol_id_by_name(id)
.expect("Should always have a symbol for every Name node");
match if is_positive { *op } else { op.negate() } {
ast::CmpOp::IsNot => {
if rhs_ty.is_singleton(self.db) {
let ty = IntersectionBuilder::new(self.db)
.add_negative(rhs_ty)
.build();
constraints.insert(symbol, ty);
} else {
// Non-singletons cannot be safely narrowed using `is not`
}
}
}
ast::CmpOp::Is => {
constraints.insert(symbol, rhs_ty);
}
ast::CmpOp::NotEq => {
if rhs_ty.is_single_valued(self.db) {
let ty = IntersectionBuilder::new(self.db)
.add_negative(rhs_ty)
.build();
constraints.insert(symbol, ty);
ast::CmpOp::Is => {
constraints.insert(symbol, rhs_ty);
}
ast::CmpOp::NotEq => {
if rhs_ty.is_single_valued(self.db) {
let ty = IntersectionBuilder::new(self.db)
.add_negative(rhs_ty)
.build();
constraints.insert(symbol, ty);
}
}
_ => {
// TODO other comparison types
}
}
_ => {
// TODO other comparison types
}
}
ast::Expr::Call(ast::ExprCall {
range: _,
func: callable,
arguments:
ast::Arguments {
args,
keywords,
range: _,
},
}) if rhs_ty.is_class_literal() && keywords.is_empty() => {
let [ast::Expr::Name(ast::ExprName { id, .. })] = &**args else {
continue;
};
let is_valid_constraint = if is_positive {
op == &ast::CmpOp::Is
} else {
op == &ast::CmpOp::IsNot
};
if !is_valid_constraint {
continue;
}
let callable_ty =
inference.expression_ty(callable.scoped_expression_id(self.db, scope));
if callable_ty
.into_class_literal()
.is_some_and(|c| c.class.is_known(self.db, KnownClass::Type))
{
let symbol = self
.symbols()
.symbol_id_by_name(id)
.expect("Should always have a symbol for every Name node");
constraints.insert(symbol, rhs_ty.to_instance(self.db));
}
}
_ => {}
}
}
Some(constraints)
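The new `ast::Expr::Call` arm above narrows on `type(x) is C` comparisons. Unlike `isinstance`, `type(x) is C` matches only the exact class, never a subclass, which is why the positive branch can narrow `x` to precisely `C`. The runtime distinction, with illustrative class names of my own:

```py
class A: ...
class B(A): ...

x: A = B()

# isinstance matches subclasses; `type(x) is ...` does not:
assert isinstance(x, A)
assert type(x) is not A
assert type(x) is B

# This exactness is what lets a checker narrow `x` to exactly `A`
# inside an `if type(x) is A:` branch.
```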
@@ -336,7 +388,7 @@ impl<'db> NarrowingConstraintsBuilder<'db> {
// TODO: add support for PEP 604 union types on the right hand side of `isinstance`
// and `issubclass`, for example `isinstance(x, str | (int | float))`.
match inference
.expression_ty(expr_call.func.scoped_ast_id(self.db, scope))
.expression_ty(expr_call.func.scoped_expression_id(self.db, scope))
.into_function_literal()
.and_then(|f| f.known(self.db))
.and_then(KnownFunction::constraint_function)
@@ -348,7 +400,7 @@ impl<'db> NarrowingConstraintsBuilder<'db> {
let symbol = self.symbols().symbol_id_by_name(id).unwrap();
let class_info_ty =
inference.expression_ty(class_info.scoped_ast_id(self.db, scope));
inference.expression_ty(class_info.scoped_expression_id(self.db, scope));
let to_constraint = match function {
KnownConstraintFunction::IsInstance => {
@@ -414,7 +466,7 @@ impl<'db> NarrowingConstraintsBuilder<'db> {
// filter our arms with statically known truthiness
.filter(|expr| {
inference
.expression_ty(expr.scoped_ast_id(self.db, scope))
.expression_ty(expr.scoped_expression_id(self.db, scope))
.bool(self.db)
!= match expr_bool_op.op {
BoolOp::And => Truthiness::AlwaysTrue,

View File

@@ -4,7 +4,7 @@ use ruff_db::files::File;
use ruff_python_ast::{self as ast, AnyNodeRef};
use rustc_hash::FxHashMap;
use crate::semantic_index::ast_ids::{HasScopedAstId, ScopedExpressionId};
use crate::semantic_index::ast_ids::{HasScopedExpressionId, ScopedExpressionId};
use crate::semantic_index::symbol::ScopeId;
use crate::types::{Type, TypeCheckDiagnostics, TypeCheckDiagnosticsBuilder};
use crate::Db;
@@ -29,7 +29,7 @@ impl<'db> Unpacker<'db> {
match target {
ast::Expr::Name(target_name) => {
self.targets
.insert(target_name.scoped_ast_id(self.db, scope), value_ty);
.insert(target_name.scoped_expression_id(self.db, scope), value_ty);
}
ast::Expr::Starred(ast::ExprStarred { value, .. }) => {
self.unpack(value, value_ty, scope);

View File

@@ -68,7 +68,7 @@ impl Session {
let system = LSPSystem::new(index.clone());
// TODO(dhruvmanila): Get the values from the client settings
let metadata = WorkspaceMetadata::from_path(system_path, &system, None)?;
let metadata = WorkspaceMetadata::discover(system_path, &system, None)?;
// TODO(micha): Handle the case where the program settings are incorrect more gracefully.
workspaces.insert(path, RootDatabase::new(metadata, system)?);
}

View File

@@ -7,8 +7,8 @@ use lsp_types::Url;
use ruff_db::file_revision::FileRevision;
use ruff_db::system::walk_directory::WalkDirectoryBuilder;
use ruff_db::system::{
DirectoryEntry, FileType, Metadata, OsSystem, Result, System, SystemPath, SystemPathBuf,
SystemVirtualPath, SystemVirtualPathBuf,
DirectoryEntry, FileType, GlobError, Metadata, OsSystem, PatternError, Result, System,
SystemPath, SystemPathBuf, SystemVirtualPath, SystemVirtualPathBuf,
};
use ruff_notebook::{Notebook, NotebookError};
@@ -198,6 +198,16 @@ impl System for LSPSystem {
self.os_system.walk_directory(path)
}
fn glob(
&self,
pattern: &str,
) -> std::result::Result<
Box<dyn Iterator<Item = std::result::Result<SystemPathBuf, GlobError>>>,
PatternError,
> {
self.os_system.glob(pattern)
}
fn as_any(&self) -> &dyn Any {
self
}

View File

@@ -3,15 +3,15 @@ use std::any::Any;
use js_sys::Error;
use wasm_bindgen::prelude::*;
use red_knot_workspace::db::RootDatabase;
use red_knot_workspace::db::{Db, RootDatabase};
use red_knot_workspace::workspace::settings::Configuration;
use red_knot_workspace::workspace::WorkspaceMetadata;
use ruff_db::diagnostic::Diagnostic;
use ruff_db::files::{system_path_to_file, File};
use ruff_db::system::walk_directory::WalkDirectoryBuilder;
use ruff_db::system::{
DirectoryEntry, MemoryFileSystem, Metadata, System, SystemPath, SystemPathBuf,
SystemVirtualPath,
DirectoryEntry, GlobError, MemoryFileSystem, Metadata, PatternError, System, SystemPath,
SystemPathBuf, SystemVirtualPath,
};
use ruff_notebook::Notebook;
@@ -42,10 +42,10 @@ impl Workspace {
#[wasm_bindgen(constructor)]
pub fn new(root: &str, settings: &Settings) -> Result<Workspace, Error> {
let system = WasmSystem::new(SystemPath::new(root));
let workspace = WorkspaceMetadata::from_path(
let workspace = WorkspaceMetadata::discover(
SystemPath::new(root),
&system,
Some(Configuration {
Some(&Configuration {
target_version: Some(settings.target_version.into()),
..Configuration::default()
}),
@@ -184,8 +184,8 @@ impl Settings {
#[derive(Copy, Clone, Hash, PartialEq, Eq, PartialOrd, Ord, Default)]
pub enum TargetVersion {
Py37,
#[default]
Py38,
#[default]
Py39,
Py310,
Py311,
@@ -226,7 +226,7 @@ impl System for WasmSystem {
}
fn canonicalize_path(&self, path: &SystemPath) -> ruff_db::system::Result<SystemPathBuf> {
Ok(self.fs.canonicalize(path))
self.fs.canonicalize(path)
}
fn read_to_string(&self, path: &SystemPath) -> ruff_db::system::Result<String> {
@@ -272,6 +272,13 @@ impl System for WasmSystem {
self.fs.walk_directory(path)
}
fn glob(
&self,
pattern: &str,
) -> Result<Box<dyn Iterator<Item = Result<SystemPathBuf, GlobError>>>, PatternError> {
Ok(Box::new(self.fs.glob(pattern)?))
}
fn as_any(&self) -> &dyn Any {
self
}
@@ -284,3 +291,17 @@ impl System for WasmSystem {
fn not_found() -> std::io::Error {
std::io::Error::new(std::io::ErrorKind::NotFound, "No such file or directory")
}
#[cfg(test)]
mod tests {
use crate::TargetVersion;
use red_knot_python_semantic::PythonVersion;
#[test]
fn same_default_as_python_version() {
assert_eq!(
PythonVersion::from(TargetVersion::default()),
PythonVersion::default()
);
}
}

View File

@@ -15,22 +15,29 @@ license.workspace = true
red_knot_python_semantic = { workspace = true }
ruff_cache = { workspace = true }
ruff_db = { workspace = true, features = ["os", "cache"] }
ruff_python_ast = { workspace = true }
ruff_db = { workspace = true, features = ["os", "cache", "serde"] }
ruff_python_ast = { workspace = true, features = ["serde"] }
ruff_text_size = { workspace = true }
red_knot_vendored = { workspace = true }
anyhow = { workspace = true }
crossbeam = { workspace = true }
glob = { workspace = true }
notify = { workspace = true }
pep440_rs = { workspace = true }
rayon = { workspace = true }
rustc-hash = { workspace = true }
salsa = { workspace = true }
serde = { workspace = true }
thiserror = { workspace = true }
toml = { workspace = true }
tracing = { workspace = true }
[dev-dependencies]
red_knot_python_semantic = { workspace = true, features = ["serde"] }
ruff_db = { workspace = true, features = ["testing"] }
tempfile = { workspace = true }
glob = { workspace = true }
insta = { workspace = true, features = ["redactions", "ron"] }
[features]
default = ["zstd"]

View File

@@ -0,0 +1,3 @@
msg = "hello"
f"{msg!r:>{10+10}}"
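The new corpus file above exercises the case that previously made inference panic: an expression nested inside an f-string format spec. The expression itself is ordinary Python, with the spec computed at runtime:

```py
msg = "hello"

# The format spec `>{10+10}` right-aligns the repr of `msg` in a
# width-20 field; the nested `10+10` is evaluated first.
formatted = f"{msg!r:>{10+10}}"

assert formatted == "'hello'".rjust(20)
assert len(formatted) == 20
```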

View File

@@ -15,7 +15,9 @@ use ruff_db::{Db as SourceDb, Upcast};
mod changes;
#[salsa::db]
pub trait Db: SemanticDb + Upcast<dyn SemanticDb> {}
pub trait Db: SemanticDb + Upcast<dyn SemanticDb> {
fn workspace(&self) -> Workspace;
}
#[salsa::db]
pub struct RootDatabase {
@@ -45,11 +47,6 @@ impl RootDatabase {
Ok(db)
}
pub fn workspace(&self) -> Workspace {
// SAFETY: The workspace is always initialized in `new`.
self.workspace.unwrap()
}
/// Checks all open files in the workspace and its dependencies.
pub fn check(&self) -> Result<Vec<Box<dyn Diagnostic>>, Cancelled> {
self.with_db(|db| db.workspace().check(db))
@@ -153,7 +150,11 @@ impl salsa::Database for RootDatabase {
}
#[salsa::db]
impl Db for RootDatabase {}
impl Db for RootDatabase {
fn workspace(&self) -> Workspace {
self.workspace.unwrap()
}
}
#[cfg(test)]
pub(crate) mod tests {
@@ -168,6 +169,7 @@ pub(crate) mod tests {
use ruff_db::{Db as SourceDb, Upcast};
use crate::db::Db;
use crate::workspace::{Workspace, WorkspaceMetadata};
#[salsa::db]
pub(crate) struct TestDb {
@@ -176,17 +178,23 @@ pub(crate) mod tests {
files: Files,
system: TestSystem,
vendored: VendoredFileSystem,
workspace: Option<Workspace>,
}
impl TestDb {
pub(crate) fn new() -> Self {
Self {
pub(crate) fn new(workspace: WorkspaceMetadata) -> Self {
let mut db = Self {
storage: salsa::Storage::default(),
system: TestSystem::default(),
vendored: red_knot_vendored::file_system().clone(),
files: Files::default(),
events: Arc::default(),
}
workspace: None,
};
let workspace = Workspace::from_metadata(&db, workspace);
db.workspace = Some(workspace);
db
}
}
@@ -254,7 +262,11 @@ pub(crate) mod tests {
}
#[salsa::db]
impl Db for TestDb {}
impl Db for TestDb {
fn workspace(&self) -> Workspace {
self.workspace.unwrap()
}
}
#[salsa::db]
impl salsa::Database for TestDb {

View File

@@ -1,16 +1,15 @@
use crate::db::{Db, RootDatabase};
use crate::watch;
use crate::watch::{ChangeEvent, CreatedKind, DeletedKind};
use crate::workspace::settings::Configuration;
use crate::workspace::{Workspace, WorkspaceMetadata};
use red_knot_python_semantic::Program;
use ruff_db::files::{system_path_to_file, File, Files};
use ruff_db::system::walk_directory::WalkState;
use ruff_db::system::SystemPath;
use ruff_db::Db;
use ruff_db::Db as _;
use rustc_hash::FxHashSet;
use crate::db::RootDatabase;
use crate::watch;
use crate::watch::{CreatedKind, DeletedKind};
use crate::workspace::settings::Configuration;
use crate::workspace::WorkspaceMetadata;
impl RootDatabase {
#[tracing::instrument(level = "debug", skip(self, changes, base_configuration))]
pub fn apply_changes(
@@ -18,7 +17,7 @@ impl RootDatabase {
changes: Vec<watch::ChangeEvent>,
base_configuration: Option<&Configuration>,
) {
let workspace = self.workspace();
let mut workspace = self.workspace();
let workspace_path = workspace.root(self).to_path_buf();
let program = Program::get(self);
let custom_stdlib_versions_path = program
@@ -58,6 +57,12 @@ impl RootDatabase {
// Changes to ignore files or settings can change the workspace structure or add/remove files
// from packages.
if let Some(package) = workspace.package(self, path) {
if package.root(self) == workspace.root(self)
|| matches!(change, ChangeEvent::Deleted { .. })
{
workspace_change = true;
}
changed_packages.insert(package);
} else {
workspace_change = true;
@@ -151,18 +156,22 @@ impl RootDatabase {
}
if workspace_change {
match WorkspaceMetadata::from_path(
&workspace_path,
self.system(),
base_configuration.cloned(),
) {
match WorkspaceMetadata::discover(&workspace_path, self.system(), base_configuration) {
Ok(metadata) => {
tracing::debug!("Reloading workspace after structural change");
// TODO: Handle changes in the program settings.
workspace.reload(self, metadata);
if metadata.root() == workspace.root(self) {
tracing::debug!("Reloading workspace after structural change");
// TODO: Handle changes in the program settings.
workspace.reload(self, metadata);
} else {
tracing::debug!("Replacing workspace after structural change");
workspace = Workspace::from_metadata(self, metadata);
self.workspace = Some(workspace);
}
}
Err(error) => {
tracing::error!("Failed to load workspace, keep old workspace: {error}");
tracing::error!(
"Failed to load workspace, keeping old workspace configuration: {error}"
);
}
}
@@ -227,6 +236,3 @@ impl RootDatabase {
}
}
}
#[cfg(test)]
mod tests {}

View File

@@ -210,7 +210,15 @@ impl Debouncer {
}
let kind = event.kind;
let path = match SystemPathBuf::from_path_buf(event.paths.into_iter().next().unwrap()) {
// There are cases where paths can be empty.
// https://github.com/astral-sh/ruff/issues/14222
let Some(path) = event.paths.into_iter().next() else {
tracing::debug!("Ignoring change event with kind '{kind:?}' without a path",);
return;
};
let path = match SystemPathBuf::from_path_buf(path) {
Ok(path) => path,
Err(path) => {
tracing::debug!(

View File

@@ -6,9 +6,9 @@ use tracing::info;
use red_knot_python_semantic::system_module_search_paths;
use ruff_cache::{CacheKey, CacheKeyHasher};
use ruff_db::system::{SystemPath, SystemPathBuf};
use ruff_db::Upcast;
use ruff_db::{Db as _, Upcast};
use crate::db::RootDatabase;
use crate::db::{Db, RootDatabase};
use crate::watch::Watcher;
/// Wrapper around a [`Watcher`] that watches the relevant paths of a workspace.
@@ -68,10 +68,9 @@ impl WorkspaceWatcher {
self.has_errored_paths = false;
let workspace_path = workspace_path
.as_utf8_path()
.canonicalize_utf8()
.map(SystemPathBuf::from_utf8_path_buf)
let workspace_path = db
.system()
.canonicalize_path(&workspace_path)
.unwrap_or(workspace_path);
// Find the non-overlapping module search paths and filter out paths that are already covered by the workspace.

View File

@@ -1,26 +1,28 @@
use rustc_hash::{FxBuildHasher, FxHashSet};
use salsa::{Durability, Setter as _};
use std::borrow::Cow;
use std::{collections::BTreeMap, sync::Arc};
use crate::db::Db;
use crate::db::RootDatabase;
use crate::workspace::files::{Index, Indexed, IndexedIter, PackageFiles};
pub use metadata::{PackageMetadata, WorkspaceMetadata};
pub use metadata::{PackageMetadata, WorkspaceDiscoveryError, WorkspaceMetadata};
use red_knot_python_semantic::types::check_types;
use red_knot_python_semantic::SearchPathSettings;
use ruff_db::diagnostic::{Diagnostic, ParseDiagnostic, Severity};
use ruff_db::parsed::parsed_module;
use ruff_db::source::{source_text, SourceTextError};
use ruff_db::system::FileType;
use ruff_db::{
files::{system_path_to_file, File},
system::{walk_directory::WalkState, SystemPath, SystemPathBuf},
};
use ruff_python_ast::{name::Name, PySourceType};
use ruff_text_size::TextRange;
use rustc_hash::{FxBuildHasher, FxHashSet};
use salsa::{Durability, Setter as _};
use std::borrow::Cow;
use std::iter::FusedIterator;
use std::{collections::BTreeMap, sync::Arc};
mod files;
mod metadata;
mod pyproject;
pub mod settings;
/// The project workspace as a Salsa ingredient.
@@ -81,7 +83,7 @@ pub struct Workspace {
/// The (first-party) packages in this workspace.
#[return_ref]
package_tree: BTreeMap<SystemPathBuf, Package>,
package_tree: PackageTree,
/// The unresolved search path configuration.
#[return_ref]
@@ -106,7 +108,6 @@ pub struct Package {
}
impl Workspace {
/// Discovers the closest workspace at `path` and returns its metadata.
pub fn from_metadata(db: &dyn Db, metadata: WorkspaceMetadata) -> Self {
let mut packages = BTreeMap::new();
@@ -114,10 +115,12 @@ impl Workspace {
packages.insert(package.root.clone(), Package::from_metadata(db, package));
}
let program_settings = metadata.settings.program;
Workspace::builder(
metadata.root,
packages,
metadata.settings.program.search_paths,
PackageTree(packages),
program_settings.search_paths,
)
.durability(Durability::MEDIUM)
.open_fileset_durability(Durability::LOW)
@@ -128,15 +131,11 @@ impl Workspace {
self.root_buf(db)
}
pub fn packages(self, db: &dyn Db) -> impl Iterator<Item = Package> + '_ {
self.package_tree(db).values().copied()
}
pub fn reload(self, db: &mut dyn Db, metadata: WorkspaceMetadata) {
tracing::debug!("Reloading workspace");
assert_eq!(self.root(db), metadata.root());
let mut old_packages = self.package_tree(db).clone();
let mut old_packages = self.package_tree(db).0.clone();
let mut new_packages = BTreeMap::new();
for package_metadata in metadata.packages {
@@ -157,13 +156,13 @@ impl Workspace {
.to(metadata.settings.program.search_paths);
}
self.set_package_tree(db).to(new_packages);
self.set_package_tree(db).to(PackageTree(new_packages));
}
pub fn update_package(self, db: &mut dyn Db, metadata: PackageMetadata) -> anyhow::Result<()> {
let path = metadata.root().to_path_buf();
if let Some(package) = self.package_tree(db).get(&path).copied() {
if let Some(package) = self.package_tree(db).get(&path) {
package.update(db, metadata);
Ok(())
} else {
@@ -171,20 +170,17 @@ impl Workspace {
}
}
pub fn packages(self, db: &dyn Db) -> &PackageTree {
self.package_tree(db)
}
/// Returns the closest package to which the first-party `path` belongs.
///
/// Returns `None` if the `path` is outside of any package or if `file` isn't a first-party file
/// (e.g. third-party dependencies or `excluded`).
pub fn package(self, db: &dyn Db, path: &SystemPath) -> Option<Package> {
pub fn package(self, db: &dyn Db, path: impl AsRef<SystemPath>) -> Option<Package> {
let packages = self.package_tree(db);
let (package_path, package) = packages.range(..=path.to_path_buf()).next_back()?;
if path.starts_with(package_path) {
Some(*package)
} else {
None
}
packages.get(path.as_ref())
}
/// Checks all open files in the workspace and its dependencies.
@@ -342,7 +338,7 @@ impl Package {
let _entered =
tracing::debug_span!("index_package_files", package = %self.name(db)).entered();
let files = discover_package_files(db, self.root(db));
let files = discover_package_files(db, self);
tracing::info!("Found {} files in package `{}`", files.len(), self.name(db));
vacant.set(files)
}
@@ -407,23 +403,33 @@ pub(super) fn check_file(db: &dyn Db, file: File) -> Vec<Box<dyn Diagnostic>> {
diagnostics
}
fn discover_package_files(db: &dyn Db, path: &SystemPath) -> FxHashSet<File> {
fn discover_package_files(db: &dyn Db, package: Package) -> FxHashSet<File> {
let paths = std::sync::Mutex::new(Vec::new());
let packages = db.workspace().packages(db);
db.system().walk_directory(path).run(|| {
db.system().walk_directory(package.root(db)).run(|| {
Box::new(|entry| {
match entry {
Ok(entry) => {
// Skip over any non python files to avoid creating too many entries in `Files`.
if entry.file_type().is_file()
&& entry
.path()
.extension()
.and_then(PySourceType::try_from_extension)
.is_some()
{
let mut paths = paths.lock().unwrap();
paths.push(entry.into_path());
match entry.file_type() {
FileType::File => {
if entry
.path()
.extension()
.and_then(PySourceType::try_from_extension)
.is_some()
{
let mut paths = paths.lock().unwrap();
paths.push(entry.into_path());
}
}
FileType::Directory | FileType::Symlink => {
// Don't traverse into nested packages (the workspace-package is an ancestor of all other packages)
if packages.get(entry.path()) != Some(package) {
return WalkState::Skip;
}
}
}
}
Err(error) => {
@@ -464,6 +470,7 @@ impl<'a> WorkspaceFiles<'a> {
WorkspaceFiles::PackageFiles(
workspace
.packages(db)
.iter()
.map(|package| package.files(db))
.collect(),
)
@@ -545,20 +552,78 @@ impl Diagnostic for IOErrorDiagnostic {
}
}
#[derive(Debug, Eq, PartialEq, Clone)]
pub struct PackageTree(BTreeMap<SystemPathBuf, Package>);
impl PackageTree {
pub fn get(&self, path: &SystemPath) -> Option<Package> {
let (package_path, package) = self.0.range(..=path.to_path_buf()).next_back()?;
if path.starts_with(package_path) {
Some(*package)
} else {
None
}
}
// The package tree should never be empty, which is why an `is_empty` method makes little sense
#[allow(clippy::len_without_is_empty)]
pub fn len(&self) -> usize {
self.0.len()
}
pub fn iter(&self) -> PackageTreeIter {
PackageTreeIter(self.0.values())
}
}
impl<'a> IntoIterator for &'a PackageTree {
type Item = Package;
type IntoIter = PackageTreeIter<'a>;
fn into_iter(self) -> Self::IntoIter {
self.iter()
}
}
pub struct PackageTreeIter<'a>(std::collections::btree_map::Values<'a, SystemPathBuf, Package>);
impl Iterator for PackageTreeIter<'_> {
type Item = Package;
fn next(&mut self) -> Option<Self::Item> {
self.0.next().copied()
}
fn size_hint(&self) -> (usize, Option<usize>) {
self.0.size_hint()
}
fn last(mut self) -> Option<Self::Item> {
self.0.next_back().copied()
}
}
impl ExactSizeIterator for PackageTreeIter<'_> {}
impl FusedIterator for PackageTreeIter<'_> {}
#[cfg(test)]
mod tests {
use crate::db::tests::TestDb;
use crate::workspace::check_file;
use crate::workspace::{check_file, WorkspaceMetadata};
use red_knot_python_semantic::types::check_types;
use ruff_db::diagnostic::Diagnostic;
use ruff_db::files::system_path_to_file;
use ruff_db::source::source_text;
use ruff_db::system::{DbWithTestSystem, SystemPath};
use ruff_db::system::{DbWithTestSystem, SystemPath, SystemPathBuf};
use ruff_db::testing::assert_function_query_was_not_run;
use ruff_python_ast::name::Name;
#[test]
fn check_file_skips_type_checking_when_file_cant_be_read() -> ruff_db::system::Result<()> {
let mut db = TestDb::new();
let workspace =
WorkspaceMetadata::single_package(Name::new_static("test"), SystemPathBuf::from("/"));
let mut db = TestDb::new(workspace);
let path = SystemPath::new("test.py");
db.write_file(path, "x = 10")?;


@@ -232,21 +232,28 @@ impl Drop for IndexedMut<'_> {
mod tests {
use rustc_hash::FxHashSet;
use crate::db::tests::TestDb;
use crate::db::Db;
use crate::workspace::files::Index;
use crate::workspace::WorkspaceMetadata;
use ruff_db::files::system_path_to_file;
use ruff_db::system::{DbWithTestSystem, SystemPathBuf};
use ruff_python_ast::name::Name;
use crate::db::tests::TestDb;
use crate::workspace::files::Index;
use crate::workspace::Package;
#[test]
fn re_entrance() -> anyhow::Result<()> {
let mut db = TestDb::new();
let metadata = WorkspaceMetadata::single_package(
Name::new_static("test"),
SystemPathBuf::from("/test"),
);
let mut db = TestDb::new(metadata);
db.write_file("test.py", "")?;
let package = Package::new(&db, Name::new("test"), SystemPathBuf::from("/test"));
let package = db
.workspace()
.package(&db, "/test")
.expect("test package to exist");
let file = system_path_to_file(&db, "test.py").unwrap();


@@ -1,67 +1,191 @@
use crate::workspace::settings::{Configuration, WorkspaceSettings};
use ruff_db::system::{System, SystemPath, SystemPathBuf};
use ruff_db::system::{GlobError, System, SystemPath, SystemPathBuf};
use ruff_python_ast::name::Name;
use rustc_hash::{FxBuildHasher, FxHashMap, FxHashSet};
use thiserror::Error;
#[derive(Debug)]
use crate::workspace::pyproject::{PyProject, PyProjectError, Workspace};
use crate::workspace::settings::{Configuration, WorkspaceSettings};
#[derive(Debug, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct WorkspaceMetadata {
pub(super) root: SystemPathBuf,
/// The (first-party) packages in this workspace.
pub(super) packages: Vec<PackageMetadata>,
/// The resolved settings for this workspace.
pub(super) settings: WorkspaceSettings,
}
/// A first-party package in a workspace.
#[derive(Debug)]
#[derive(Debug, Clone, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct PackageMetadata {
pub(super) name: Name,
/// The path to the root directory of the package.
pub(super) root: SystemPathBuf,
// TODO: Add the loaded package configuration (not the nested ruff settings)
pub(super) configuration: Configuration,
}
impl WorkspaceMetadata {
/// Creates a workspace that consists of a single package located at `root`.
pub fn single_package(name: Name, root: SystemPathBuf) -> Self {
let package = PackageMetadata {
name,
root: root.clone(),
configuration: Configuration::default(),
};
let packages = vec![package];
let settings = packages[0]
.configuration
.to_workspace_settings(&root, &packages);
Self {
root,
packages,
settings,
}
}
/// Discovers the closest workspace at `path` and returns its metadata.
pub fn from_path(
///
/// 1. Traverse upwards in the `path`'s ancestor chain and find the first `pyproject.toml`.
/// 1. If the `pyproject.toml` contains no `knot.workspace` table, then keep traversing the `path`'s ancestor
/// chain until we find one or reach the root.
/// 1. If we've found a workspace, then resolve the workspace's members and check that the closest
/// package (the first found package without a `knot.workspace` table) is a member. If not, create
/// a single-package workspace for the closest package.
/// 1. If there's no `pyproject.toml` with a `knot.workspace` table, then create a single-package workspace.
/// 1. If no ancestor directory contains any `pyproject.toml`, create an ad-hoc workspace for `path`
/// that consists of a single package and uses the default settings.
pub fn discover(
path: &SystemPath,
system: &dyn System,
base_configuration: Option<Configuration>,
) -> anyhow::Result<WorkspaceMetadata> {
assert!(
system.is_directory(path),
"Workspace root path must be a directory"
);
tracing::debug!("Searching for workspace in '{path}'");
base_configuration: Option<&Configuration>,
) -> Result<WorkspaceMetadata, WorkspaceDiscoveryError> {
tracing::debug!("Searching for a workspace in '{path}'");
let root = path.to_path_buf();
// TODO: Discover package name from `pyproject.toml`.
let package_name: Name = path.file_name().unwrap_or("<root>").into();
let package = PackageMetadata {
name: package_name,
root: root.clone(),
};
// TODO: Load the configuration from disk.
let mut configuration = Configuration::default();
if let Some(base_configuration) = base_configuration {
configuration.extend(base_configuration);
if !system.is_directory(path) {
return Err(WorkspaceDiscoveryError::NotADirectory(path.to_path_buf()));
}
// TODO: Respect the package configurations when resolving settings (e.g. for the target version).
let settings = configuration.into_workspace_settings(&root);
let mut closest_package: Option<PackageMetadata> = None;
let workspace = WorkspaceMetadata {
root,
packages: vec![package],
settings,
for ancestor in path.ancestors() {
let pyproject_path = ancestor.join("pyproject.toml");
if let Ok(pyproject_str) = system.read_to_string(&pyproject_path) {
let pyproject = PyProject::from_str(&pyproject_str).map_err(|error| {
WorkspaceDiscoveryError::InvalidPyProject {
path: pyproject_path,
source: Box::new(error),
}
})?;
let workspace_table = pyproject.workspace().cloned();
let package = PackageMetadata::from_pyproject(
pyproject,
ancestor.to_path_buf(),
base_configuration,
);
if let Some(workspace_table) = workspace_table {
let workspace_root = ancestor;
tracing::debug!("Found workspace at '{}'", workspace_root);
match collect_packages(
package,
&workspace_table,
closest_package,
base_configuration,
system,
)? {
CollectedPackagesOrStandalone::Packages(mut packages) => {
let mut by_name =
FxHashMap::with_capacity_and_hasher(packages.len(), FxBuildHasher);
let mut workspace_package = None;
for package in &packages {
if let Some(conflicting) = by_name.insert(package.name(), package) {
return Err(WorkspaceDiscoveryError::DuplicatePackageNames {
name: package.name().clone(),
first: conflicting.root().to_path_buf(),
second: package.root().to_path_buf(),
});
}
if package.root() == workspace_root {
workspace_package = Some(package);
} else if !package.root().starts_with(workspace_root) {
return Err(WorkspaceDiscoveryError::PackageOutsideWorkspace {
package_name: package.name().clone(),
package_root: package.root().to_path_buf(),
workspace_root: workspace_root.to_path_buf(),
});
}
}
let workspace_package = workspace_package
.expect("workspace package to be part of the workspace's packages");
let settings = workspace_package
.configuration
.to_workspace_settings(workspace_root, &packages);
packages.sort_unstable_by(|a, b| a.root().cmp(b.root()));
return Ok(Self {
root: workspace_root.to_path_buf(),
packages,
settings,
});
}
CollectedPackagesOrStandalone::Standalone(package) => {
closest_package = Some(package);
break;
}
}
}
// Not a workspace itself, keep looking for an enclosing workspace.
if closest_package.is_none() {
closest_package = Some(package);
}
}
}
// No workspace found, but maybe a pyproject.toml was found.
let package = if let Some(enclosing_package) = closest_package {
tracing::debug!("Single package workspace at '{}'", enclosing_package.root());
enclosing_package
} else {
tracing::debug!("The ancestor directories contain no `pyproject.toml`. Falling back to a virtual project.");
// Create a package with a default configuration
PackageMetadata {
name: path.file_name().unwrap_or("root").into(),
root: path.to_path_buf(),
// TODO: create the configuration from the `pyproject.toml`
configuration: base_configuration.cloned().unwrap_or_default(),
}
};
Ok(workspace)
let root = package.root().to_path_buf();
let packages = vec![package];
let settings = packages[0]
.configuration
.to_workspace_settings(&root, &packages);
Ok(Self {
root,
packages,
settings,
})
}
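The upward traversal that `discover` performs can be sketched in isolation. This is a minimal sketch with a hypothetical `find_closest_pyproject` helper and an injected existence check; the real implementation also parses each `pyproject.toml` and keeps walking until it finds a `knot.workspace` table or reaches the root:

```rust
use std::path::{Path, PathBuf};

/// Hypothetical helper: walk `path`'s ancestor chain (including `path` itself)
/// and return the first directory that contains a `pyproject.toml`, if any.
fn find_closest_pyproject(path: &Path, exists: &dyn Fn(&Path) -> bool) -> Option<PathBuf> {
    path.ancestors()
        .find(|ancestor| exists(&ancestor.join("pyproject.toml")))
        .map(Path::to_path_buf)
}

fn main() {
    // Pretend only `/app/pyproject.toml` exists on disk.
    let exists = |p: &Path| p == Path::new("/app/pyproject.toml");

    // Starting from a nested directory, discovery walks up to `/app`.
    assert_eq!(
        find_closest_pyproject(Path::new("/app/packages/a"), &exists),
        Some(PathBuf::from("/app"))
    );
    // With no `pyproject.toml` in any ancestor, nothing is found
    // (this is the "ad-hoc single-package workspace" fallback case).
    assert_eq!(find_closest_pyproject(Path::new("/tmp"), &exists), None);
}
```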
pub fn root(&self) -> &SystemPath {
@@ -78,6 +202,30 @@ impl WorkspaceMetadata {
}
impl PackageMetadata {
pub(crate) fn from_pyproject(
pyproject: PyProject,
root: SystemPathBuf,
base_configuration: Option<&Configuration>,
) -> Self {
let name = pyproject.project.and_then(|project| project.name);
let name = name
.map(|name| Name::new(&*name))
.unwrap_or_else(|| Name::new(root.file_name().unwrap_or("root")));
// TODO: load configuration from `pyproject.toml`
let mut configuration = Configuration::default();
if let Some(base_configuration) = base_configuration {
configuration.extend(base_configuration.clone());
}
PackageMetadata {
name,
root,
configuration,
}
}
pub fn name(&self) -> &Name {
&self.name
}
@@ -86,3 +234,577 @@ impl PackageMetadata {
&self.root
}
}
fn collect_packages(
workspace_package: PackageMetadata,
workspace_table: &Workspace,
closest_package: Option<PackageMetadata>,
base_configuration: Option<&Configuration>,
system: &dyn System,
) -> Result<CollectedPackagesOrStandalone, WorkspaceDiscoveryError> {
let workspace_root = workspace_package.root().to_path_buf();
let mut member_paths = FxHashSet::default();
for glob in workspace_table.members() {
let full_glob = workspace_package.root().join(glob);
let matches = system.glob(full_glob.as_str()).map_err(|error| {
WorkspaceDiscoveryError::InvalidMembersPattern {
raw_glob: glob.clone(),
source: error,
}
})?;
for result in matches {
let path = result?;
let normalized = SystemPath::absolute(path, &workspace_root);
// Skip over non-directory entries. E.g., Finder might end up creating a `.DS_STORE` file
// that ends up matching `/projects/*`.
if system.is_directory(&normalized) {
member_paths.insert(normalized);
} else {
tracing::debug!("Ignoring non-directory workspace member '{normalized}'");
}
}
}
// The workspace root is always a member. Don't re-add it.
let mut packages = vec![workspace_package];
member_paths.remove(&workspace_root);
// Add the package that is closest to the current working directory. If that
// package isn't a workspace member, fall back to creating a single-package
// workspace.
if let Some(closest_package) = closest_package {
// The closest `pyproject.toml` isn't a member of this workspace because it is
// explicitly excluded or simply not listed.
// Create a standalone workspace.
if !member_paths.remove(closest_package.root())
|| workspace_table.is_excluded(closest_package.root(), &workspace_root)?
{
tracing::debug!(
"Ignoring workspace '{workspace_root}' because package '{package}' is not a member",
package = closest_package.name()
);
return Ok(CollectedPackagesOrStandalone::Standalone(closest_package));
}
tracing::debug!("adding package '{}'", closest_package.name());
packages.push(closest_package);
}
// Add all remaining member paths
for member_path in member_paths {
if workspace_table.is_excluded(&member_path, workspace_root.as_path())? {
tracing::debug!("Ignoring excluded member '{member_path}'");
continue;
}
let pyproject_path = member_path.join("pyproject.toml");
let pyproject_str = match system.read_to_string(&pyproject_path) {
Ok(pyproject_str) => pyproject_str,
Err(error) => {
if error.kind() == std::io::ErrorKind::NotFound
&& member_path
.file_name()
.is_some_and(|name| name.starts_with('.'))
{
tracing::debug!(
"Ignore member '{member_path}' because it has no pyproject.toml and is hidden",
);
continue;
}
return Err(WorkspaceDiscoveryError::MemberFailedToReadPyProject {
package_root: member_path,
source: error,
});
}
};
let pyproject = PyProject::from_str(&pyproject_str).map_err(|error| {
WorkspaceDiscoveryError::InvalidPyProject {
source: Box::new(error),
path: pyproject_path,
}
})?;
if pyproject.workspace().is_some() {
return Err(WorkspaceDiscoveryError::NestedWorkspaces {
package_root: member_path,
});
}
let package = PackageMetadata::from_pyproject(pyproject, member_path, base_configuration);
tracing::debug!(
"Adding package '{}' at '{}'",
package.name(),
package.root()
);
packages.push(package);
}
Ok(CollectedPackagesOrStandalone::Packages(packages))
}
enum CollectedPackagesOrStandalone {
Packages(Vec<PackageMetadata>),
Standalone(PackageMetadata),
}
#[derive(Debug, Error)]
pub enum WorkspaceDiscoveryError {
#[error("workspace path '{0}' is not a directory")]
NotADirectory(SystemPathBuf),
#[error("nested workspaces aren't supported but the package located at '{package_root}' defines a `knot.workspace` table")]
NestedWorkspaces { package_root: SystemPathBuf },
#[error("the workspace contains two packages named '{name}': '{first}' and '{second}'")]
DuplicatePackageNames {
name: Name,
first: SystemPathBuf,
second: SystemPathBuf,
},
#[error("the package '{package_name}' located at '{package_root}' is outside the workspace's root directory '{workspace_root}'")]
PackageOutsideWorkspace {
workspace_root: SystemPathBuf,
package_name: Name,
package_root: SystemPathBuf,
},
#[error(
"failed to read the `pyproject.toml` for the package located at '{package_root}': {source}"
)]
MemberFailedToReadPyProject {
package_root: SystemPathBuf,
source: std::io::Error,
},
#[error("{path} is not a valid `pyproject.toml`: {source}")]
InvalidPyProject {
source: Box<PyProjectError>,
path: SystemPathBuf,
},
#[error("invalid glob '{raw_glob}' in `tool.knot.workspace.members`: {source}")]
InvalidMembersPattern {
source: glob::PatternError,
raw_glob: String,
},
#[error("failed to match member glob: {error}")]
FailedToMatchGlob {
#[from]
error: GlobError,
},
}
#[cfg(test)]
mod tests {
//! Integration tests for workspace discovery
use crate::snapshot_workspace;
use anyhow::Context;
use insta::assert_ron_snapshot;
use ruff_db::system::{SystemPathBuf, TestSystem};
use crate::workspace::{WorkspaceDiscoveryError, WorkspaceMetadata};
#[test]
fn package_without_pyproject() -> anyhow::Result<()> {
let system = TestSystem::default();
let root = SystemPathBuf::from("/app");
system
.memory_file_system()
.write_files([(root.join("foo.py"), ""), (root.join("bar.py"), "")])
.context("Failed to write files")?;
let workspace = WorkspaceMetadata::discover(&root, &system, None)
.context("Failed to discover workspace")?;
assert_eq!(workspace.root(), &*root);
snapshot_workspace!(workspace);
Ok(())
}
#[test]
fn single_package() -> anyhow::Result<()> {
let system = TestSystem::default();
let root = SystemPathBuf::from("/app");
system
.memory_file_system()
.write_files([
(
root.join("pyproject.toml"),
r#"
[project]
name = "backend"
"#,
),
(root.join("db/__init__.py"), ""),
])
.context("Failed to write files")?;
let workspace = WorkspaceMetadata::discover(&root, &system, None)
.context("Failed to discover workspace")?;
assert_eq!(workspace.root(), &*root);
snapshot_workspace!(workspace);
// Discovering the same package from a subdirectory should give the same result
let from_src = WorkspaceMetadata::discover(&root.join("db"), &system, None)
.context("Failed to discover workspace from src sub-directory")?;
assert_eq!(from_src, workspace);
Ok(())
}
#[test]
fn workspace_members() -> anyhow::Result<()> {
let system = TestSystem::default();
let root = SystemPathBuf::from("/app");
system
.memory_file_system()
.write_files([
(
root.join("pyproject.toml"),
r#"
[project]
name = "workspace-root"
[tool.knot.workspace]
members = ["packages/*"]
exclude = ["packages/excluded"]
"#,
),
(
root.join("packages/a/pyproject.toml"),
r#"
[project]
name = "member-a"
"#,
),
(
root.join("packages/x/pyproject.toml"),
r#"
[project]
name = "member-x"
"#,
),
])
.context("Failed to write files")?;
let workspace = WorkspaceMetadata::discover(&root, &system, None)
.context("Failed to discover workspace")?;
assert_eq!(workspace.root(), &*root);
snapshot_workspace!(workspace);
// Discovering the same package from a member should give the same result
let from_src = WorkspaceMetadata::discover(&root.join("packages/a"), &system, None)
.context("Failed to discover workspace from src sub-directory")?;
assert_eq!(from_src, workspace);
Ok(())
}
#[test]
fn workspace_excluded() -> anyhow::Result<()> {
let system = TestSystem::default();
let root = SystemPathBuf::from("/app");
system
.memory_file_system()
.write_files([
(
root.join("pyproject.toml"),
r#"
[project]
name = "workspace-root"
[tool.knot.workspace]
members = ["packages/*"]
exclude = ["packages/excluded"]
"#,
),
(
root.join("packages/a/pyproject.toml"),
r#"
[project]
name = "member-a"
"#,
),
(
root.join("packages/excluded/pyproject.toml"),
r#"
[project]
name = "member-x"
"#,
),
])
.context("Failed to write files")?;
let workspace = WorkspaceMetadata::discover(&root, &system, None)
.context("Failed to discover workspace")?;
assert_eq!(workspace.root(), &*root);
snapshot_workspace!(workspace);
// Discovering the `workspace` for `excluded` should discover a single-package workspace
let excluded_workspace =
WorkspaceMetadata::discover(&root.join("packages/excluded"), &system, None)
.context("Failed to discover workspace from src sub-directory")?;
assert_ne!(excluded_workspace, workspace);
Ok(())
}
#[test]
fn workspace_non_unique_member_names() -> anyhow::Result<()> {
let system = TestSystem::default();
let root = SystemPathBuf::from("/app");
system
.memory_file_system()
.write_files([
(
root.join("pyproject.toml"),
r#"
[project]
name = "workspace-root"
[tool.knot.workspace]
members = ["packages/*"]
"#,
),
(
root.join("packages/a/pyproject.toml"),
r#"
[project]
name = "a"
"#,
),
(
root.join("packages/b/pyproject.toml"),
r#"
[project]
name = "a"
"#,
),
])
.context("Failed to write files")?;
let error = WorkspaceMetadata::discover(&root, &system, None).expect_err(
"Discovery should error because the workspace contains two packages with the same names.",
);
assert_error_eq(&error, "the workspace contains two packages named 'a': '/app/packages/a' and '/app/packages/b'");
Ok(())
}
#[test]
fn nested_workspaces() -> anyhow::Result<()> {
let system = TestSystem::default();
let root = SystemPathBuf::from("/app");
system
.memory_file_system()
.write_files([
(
root.join("pyproject.toml"),
r#"
[project]
name = "workspace-root"
[tool.knot.workspace]
members = ["packages/*"]
"#,
),
(
root.join("packages/a/pyproject.toml"),
r#"
[project]
name = "nested-workspace"
[tool.knot.workspace]
members = ["packages/*"]
"#,
),
])
.context("Failed to write files")?;
let error = WorkspaceMetadata::discover(&root, &system, None).expect_err(
"Discovery should error because the workspace has a package that itself is a workspace",
);
assert_error_eq(&error, "nested workspaces aren't supported but the package located at '/app/packages/a' defines a `knot.workspace` table");
Ok(())
}
#[test]
fn member_missing_pyproject_toml() -> anyhow::Result<()> {
let system = TestSystem::default();
let root = SystemPathBuf::from("/app");
system
.memory_file_system()
.write_files([
(
root.join("pyproject.toml"),
r#"
[project]
name = "workspace-root"
[tool.knot.workspace]
members = ["packages/*"]
"#,
),
(root.join("packages/a/test.py"), ""),
])
.context("Failed to write files")?;
let error = WorkspaceMetadata::discover(&root, &system, None)
.expect_err("Discovery should error because member `a` has no `pyproject.toml`");
assert_error_eq(&error, "failed to read the `pyproject.toml` for the package located at '/app/packages/a': No such file or directory");
Ok(())
}
/// Folders that match the members pattern but don't have a `pyproject.toml`
/// aren't valid members and discovery fails. However, don't fail
/// if the folder name indicates that it is a hidden folder that might
/// have been created by another tool.
#[test]
fn member_pattern_matching_hidden_folder() -> anyhow::Result<()> {
let system = TestSystem::default();
let root = SystemPathBuf::from("/app");
system
.memory_file_system()
.write_files([
(
root.join("pyproject.toml"),
r#"
[project]
name = "workspace-root"
[tool.knot.workspace]
members = ["packages/*"]
"#,
),
(root.join("packages/.hidden/a.py"), ""),
])
.context("Failed to write files")?;
let workspace = WorkspaceMetadata::discover(&root, &system, None)?;
snapshot_workspace!(workspace);
Ok(())
}
#[test]
fn member_pattern_matching_file() -> anyhow::Result<()> {
let system = TestSystem::default();
let root = SystemPathBuf::from("/app");
system
.memory_file_system()
.write_files([
(
root.join("pyproject.toml"),
r#"
[project]
name = "workspace-root"
[tool.knot.workspace]
members = ["packages/*"]
"#,
),
(root.join("packages/.DS_STORE"), ""),
])
.context("Failed to write files")?;
let workspace = WorkspaceMetadata::discover(&root, &system, None)?;
snapshot_workspace!(&workspace);
Ok(())
}
#[test]
fn workspace_root_not_an_ancestor_of_member() -> anyhow::Result<()> {
let system = TestSystem::default();
let root = SystemPathBuf::from("/app");
system
.memory_file_system()
.write_files([
(
root.join("pyproject.toml"),
r#"
[project]
name = "workspace-root"
[tool.knot.workspace]
members = ["../packages/*"]
"#,
),
(
root.join("../packages/a/pyproject.toml"),
r#"
[project]
name = "a"
"#,
),
])
.context("Failed to write files")?;
let error = WorkspaceMetadata::discover(&root, &system, None).expect_err(
"Discovery should error because member `a` is outside the workspace's directory`",
);
assert_error_eq(&error, "the package 'a' located at '/packages/a' is outside the workspace's root directory '/app'");
Ok(())
}
#[track_caller]
fn assert_error_eq(error: &WorkspaceDiscoveryError, message: &str) {
assert_eq!(error.to_string().replace('\\', "/"), message);
}
/// Snapshots a workspace but with all paths using unix separators.
#[macro_export]
macro_rules! snapshot_workspace {
($workspace:expr) => {{
assert_ron_snapshot!($workspace,{
".root" => insta::dynamic_redaction(|content, _content_path| {
content.as_str().unwrap().replace("\\", "/")
}),
".packages[].root" => insta::dynamic_redaction(|content, _content_path| {
content.as_str().unwrap().replace("\\", "/")
}),
});
}};
}
}


@@ -0,0 +1,108 @@
mod package_name;
use pep440_rs::{Version, VersionSpecifiers};
use serde::Deserialize;
use thiserror::Error;
use crate::workspace::metadata::WorkspaceDiscoveryError;
pub(crate) use package_name::PackageName;
use ruff_db::system::SystemPath;
/// A `pyproject.toml` as specified in PEP 517.
#[derive(Deserialize, Debug, Default, Clone)]
#[serde(rename_all = "kebab-case")]
pub(crate) struct PyProject {
/// PEP 621-compliant project metadata.
pub project: Option<Project>,
/// Tool-specific metadata.
pub tool: Option<Tool>,
}
impl PyProject {
pub(crate) fn workspace(&self) -> Option<&Workspace> {
self.tool
.as_ref()
.and_then(|tool| tool.knot.as_ref())
.and_then(|knot| knot.workspace.as_ref())
}
}
#[derive(Error, Debug)]
pub enum PyProjectError {
#[error(transparent)]
TomlSyntax(#[from] toml::de::Error),
}
impl PyProject {
pub(crate) fn from_str(content: &str) -> Result<Self, PyProjectError> {
toml::from_str(content).map_err(PyProjectError::TomlSyntax)
}
}
/// PEP 621 project metadata (`project`).
///
/// See <https://packaging.python.org/en/latest/specifications/pyproject-toml>.
#[derive(Deserialize, Debug, Clone, PartialEq)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(rename_all = "kebab-case")]
pub(crate) struct Project {
/// The name of the project
///
/// Note: Intentionally optional to be more permissive during deserialization.
/// `PackageMetadata::from_pyproject` reports missing names.
pub name: Option<PackageName>,
/// The version of the project
pub version: Option<Version>,
/// The Python versions this project is compatible with.
pub requires_python: Option<VersionSpecifiers>,
}
#[derive(Deserialize, Debug, Clone, PartialEq, Eq)]
pub(crate) struct Tool {
pub knot: Option<Knot>,
}
#[derive(Deserialize, Debug, Clone, PartialEq, Eq)]
#[serde(rename_all = "kebab-case", deny_unknown_fields)]
pub(crate) struct Knot {
pub(crate) workspace: Option<Workspace>,
}
#[derive(Deserialize, Debug, Clone, PartialEq, Eq)]
#[serde(rename_all = "kebab-case", deny_unknown_fields)]
pub(crate) struct Workspace {
pub(crate) members: Option<Vec<String>>,
pub(crate) exclude: Option<Vec<String>>,
}
impl Workspace {
pub(crate) fn members(&self) -> &[String] {
self.members.as_deref().unwrap_or_default()
}
pub(crate) fn exclude(&self) -> &[String] {
self.exclude.as_deref().unwrap_or_default()
}
pub(crate) fn is_excluded(
&self,
path: &SystemPath,
workspace_root: &SystemPath,
) -> Result<bool, WorkspaceDiscoveryError> {
for exclude in self.exclude() {
let full_glob =
glob::Pattern::new(workspace_root.join(exclude).as_str()).map_err(|error| {
WorkspaceDiscoveryError::InvalidMembersPattern {
raw_glob: exclude.clone(),
source: error,
}
})?;
if full_glob.matches_path(path.as_std_path()) {
return Ok(true);
}
}
Ok(false)
}
}
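Taken together, the `Tool`, `Knot`, and `Workspace` structs above deserialize a `pyproject.toml` of the shape exercised by the discovery tests, e.g.:

```toml
[project]
name = "workspace-root"

[tool.knot.workspace]
members = ["packages/*"]
exclude = ["packages/excluded"]
```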


@@ -0,0 +1,140 @@
use serde::{Deserialize, Deserializer, Serialize};
use std::ops::Deref;
use thiserror::Error;
/// The normalized name of a package.
///
/// Converts the name to lowercase and collapses runs of `-`, `_`, and `.` down to a single `-`.
/// For example, `---`, `.`, and `__` are all converted to a single `-`.
///
/// See: <https://packaging.python.org/en/latest/specifications/name-normalization/>
#[derive(Debug, Default, Clone, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize)]
pub(crate) struct PackageName(String);
impl PackageName {
/// Create a validated, normalized package name.
pub(crate) fn new(name: String) -> Result<Self, InvalidPackageNameError> {
if name.is_empty() {
return Err(InvalidPackageNameError::Empty);
}
if name.starts_with(['-', '_', '.']) {
return Err(InvalidPackageNameError::NonAlphanumericStart(
name.chars().next().unwrap(),
));
}
if name.ends_with(['-', '_', '.']) {
return Err(InvalidPackageNameError::NonAlphanumericEnd(
name.chars().last().unwrap(),
));
}
let Some(start) = name.find(|c: char| {
!c.is_ascii() || c.is_ascii_uppercase() || matches!(c, '-' | '_' | '.')
}) else {
return Ok(Self(name));
};
let (already_normalized, maybe_normalized) = name.split_at(start);
let mut normalized = String::with_capacity(name.len());
normalized.push_str(already_normalized);
let mut last = None;
for c in maybe_normalized.chars() {
if !c.is_ascii() {
return Err(InvalidPackageNameError::InvalidCharacter(c));
}
if c.is_ascii_uppercase() {
normalized.push(c.to_ascii_lowercase());
} else if matches!(c, '-' | '_' | '.') {
if matches!(last, Some('-' | '_' | '.')) {
// Only keep a single instance of `-`, `_` and `.`
} else {
normalized.push('-');
}
} else {
normalized.push(c);
}
last = Some(c);
}
Ok(Self(normalized))
}
/// Returns the underlying package name.
pub(crate) fn as_str(&self) -> &str {
&self.0
}
}
impl From<PackageName> for String {
fn from(value: PackageName) -> Self {
value.0
}
}
impl<'de> Deserialize<'de> for PackageName {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'de>,
{
let s = String::deserialize(deserializer)?;
Self::new(s).map_err(serde::de::Error::custom)
}
}
impl std::fmt::Display for PackageName {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
self.0.fmt(f)
}
}
impl Deref for PackageName {
type Target = str;
fn deref(&self) -> &Self::Target {
self.as_str()
}
}
#[derive(Error, Debug)]
pub(crate) enum InvalidPackageNameError {
#[error("name must start with letter or number but it starts with '{0}'")]
NonAlphanumericStart(char),
#[error("name must end with letter or number but it ends with '{0}'")]
NonAlphanumericEnd(char),
#[error("valid name consists only of ASCII letters and numbers, period, underscore and hyphen but name contains '{0}'"
)]
InvalidCharacter(char),
#[error("name must not be empty")]
Empty,
}
#[cfg(test)]
mod tests {
use super::PackageName;
#[test]
fn normalize() {
let inputs = [
"friendly-bard",
"Friendly-Bard",
"FRIENDLY-BARD",
"friendly.bard",
"friendly_bard",
"friendly--bard",
"friendly-.bard",
"FrIeNdLy-._.-bArD",
];
for input in inputs {
assert_eq!(
PackageName::new(input.to_string()).unwrap(),
PackageName::new("friendly-bard".to_string()).unwrap(),
);
}
}
}
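The normalization rule the tests above exercise (lowercase everything and collapse each run of `-`, `_`, and `.` into a single `-`) can be sketched on its own. This standalone `normalize` function is a simplified assumption: it skips the first/last-character and ASCII validation that `PackageName::new` performs:

```rust
/// Simplified sketch of package-name normalization: lowercase the name and
/// collapse every run of `-`, `_`, and `.` into a single `-`.
/// (Unlike `PackageName::new`, this does not validate the input.)
fn normalize(name: &str) -> String {
    let mut out = String::with_capacity(name.len());
    let mut in_separator_run = false;
    for c in name.chars() {
        if matches!(c, '-' | '_' | '.') {
            // Emit a single `-` for the whole run of separators.
            if !in_separator_run {
                out.push('-');
            }
            in_separator_run = true;
        } else {
            out.push(c.to_ascii_lowercase());
            in_separator_run = false;
        }
    }
    out
}

fn main() {
    // All spellings from the test above normalize to the same name.
    for input in ["Friendly-Bard", "friendly.bard", "friendly_bard", "FrIeNdLy-._.-bArD"] {
        assert_eq!(normalize(input), "friendly-bard");
    }
}
```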


@@ -1,10 +1,12 @@
use crate::workspace::PackageMetadata;
use red_knot_python_semantic::{ProgramSettings, PythonVersion, SearchPathSettings, SitePackages};
use ruff_db::system::{SystemPath, SystemPathBuf};
/// The resolved configurations.
///
/// The main difference to [`Configuration`] is that default values are filled in.
#[derive(Debug, Clone)]
#[derive(Debug, Clone, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct WorkspaceSettings {
pub(super) program: ProgramSettings,
}
@@ -16,7 +18,8 @@ impl WorkspaceSettings {
}
/// The configuration for the workspace or a package.
#[derive(Debug, Default, Clone)]
#[derive(Debug, Default, Clone, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct Configuration {
pub target_version: Option<PythonVersion>,
pub search_paths: SearchPathConfiguration,
@@ -29,17 +32,22 @@ impl Configuration {
self.search_paths.extend(with.search_paths);
}
pub fn into_workspace_settings(self, workspace_root: &SystemPath) -> WorkspaceSettings {
pub fn to_workspace_settings(
&self,
workspace_root: &SystemPath,
_packages: &[PackageMetadata],
) -> WorkspaceSettings {
WorkspaceSettings {
program: ProgramSettings {
target_version: self.target_version.unwrap_or_default(),
search_paths: self.search_paths.into_settings(workspace_root),
search_paths: self.search_paths.to_settings(workspace_root),
},
}
}
}
#[derive(Debug, Default, Clone, Eq, PartialEq)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct SearchPathConfiguration {
/// List of user-provided paths that should take first priority in the module resolution.
/// Examples in other type checkers are mypy's MYPYPATH environment variable,
@@ -59,15 +67,19 @@ pub struct SearchPathConfiguration {
}
impl SearchPathConfiguration {
pub fn into_settings(self, workspace_root: &SystemPath) -> SearchPathSettings {
let site_packages = self.site_packages.unwrap_or(SitePackages::Known(vec![]));
pub fn to_settings(&self, workspace_root: &SystemPath) -> SearchPathSettings {
let site_packages = self
.site_packages
.clone()
.unwrap_or(SitePackages::Known(vec![]));
SearchPathSettings {
extra_paths: self.extra_paths.unwrap_or_default(),
extra_paths: self.extra_paths.clone().unwrap_or_default(),
src_root: self
.clone()
.src_root
.unwrap_or_else(|| workspace_root.to_path_buf()),
custom_typeshed: self.custom_typeshed,
custom_typeshed: self.custom_typeshed.clone(),
site_packages,
}
}


@@ -0,0 +1,37 @@
---
source: crates/red_knot_workspace/src/workspace/metadata.rs
expression: "&workspace"
snapshot_kind: text
---
WorkspaceMetadata(
root: "/app",
packages: [
PackageMetadata(
name: Name("workspace-root"),
root: "/app",
configuration: Configuration(
target_version: None,
search_paths: SearchPathConfiguration(
extra_paths: None,
src_root: None,
custom_typeshed: None,
site_packages: None,
),
),
),
],
settings: WorkspaceSettings(
program: ProgramSettings(
target_version: PythonVersion(
major: 3,
minor: 9,
),
search_paths: SearchPathSettings(
extra_paths: [],
src_root: "/app",
custom_typeshed: None,
site_packages: Known([]),
),
),
),
)


@@ -0,0 +1,37 @@
---
source: crates/red_knot_workspace/src/workspace/metadata.rs
expression: workspace
snapshot_kind: text
---
WorkspaceMetadata(
root: "/app",
packages: [
PackageMetadata(
name: Name("workspace-root"),
root: "/app",
configuration: Configuration(
target_version: None,
search_paths: SearchPathConfiguration(
extra_paths: None,
src_root: None,
custom_typeshed: None,
site_packages: None,
),
),
),
],
settings: WorkspaceSettings(
program: ProgramSettings(
target_version: PythonVersion(
major: 3,
minor: 9,
),
search_paths: SearchPathSettings(
extra_paths: [],
src_root: "/app",
custom_typeshed: None,
site_packages: Known([]),
),
),
),
)


@@ -0,0 +1,37 @@
---
source: crates/red_knot_workspace/src/workspace/metadata.rs
expression: workspace
snapshot_kind: text
---
WorkspaceMetadata(
root: "/app",
packages: [
PackageMetadata(
name: Name("app"),
root: "/app",
configuration: Configuration(
target_version: None,
search_paths: SearchPathConfiguration(
extra_paths: None,
src_root: None,
custom_typeshed: None,
site_packages: None,
),
),
),
],
settings: WorkspaceSettings(
program: ProgramSettings(
target_version: PythonVersion(
major: 3,
minor: 9,
),
search_paths: SearchPathSettings(
extra_paths: [],
src_root: "/app",
custom_typeshed: None,
site_packages: Known([]),
),
),
),
)


@@ -0,0 +1,37 @@
---
source: crates/red_knot_workspace/src/workspace/metadata.rs
expression: workspace
snapshot_kind: text
---
WorkspaceMetadata(
root: "/app",
packages: [
PackageMetadata(
name: Name("backend"),
root: "/app",
configuration: Configuration(
target_version: None,
search_paths: SearchPathConfiguration(
extra_paths: None,
src_root: None,
custom_typeshed: None,
site_packages: None,
),
),
),
],
settings: WorkspaceSettings(
program: ProgramSettings(
target_version: PythonVersion(
major: 3,
minor: 9,
),
search_paths: SearchPathSettings(
extra_paths: [],
src_root: "/app",
custom_typeshed: None,
site_packages: Known([]),
),
),
),
)


@@ -0,0 +1,50 @@
---
source: crates/red_knot_workspace/src/workspace/metadata.rs
expression: workspace
snapshot_kind: text
---
WorkspaceMetadata(
root: "/app",
packages: [
PackageMetadata(
name: Name("workspace-root"),
root: "/app",
configuration: Configuration(
target_version: None,
search_paths: SearchPathConfiguration(
extra_paths: None,
src_root: None,
custom_typeshed: None,
site_packages: None,
),
),
),
PackageMetadata(
name: Name("member-a"),
root: "/app/packages/a",
configuration: Configuration(
target_version: None,
search_paths: SearchPathConfiguration(
extra_paths: None,
src_root: None,
custom_typeshed: None,
site_packages: None,
),
),
),
],
settings: WorkspaceSettings(
program: ProgramSettings(
target_version: PythonVersion(
major: 3,
minor: 9,
),
search_paths: SearchPathSettings(
extra_paths: [],
src_root: "/app",
custom_typeshed: None,
site_packages: Known([]),
),
),
),
)


@@ -0,0 +1,63 @@
---
source: crates/red_knot_workspace/src/workspace/metadata.rs
expression: workspace
snapshot_kind: text
---
WorkspaceMetadata(
root: "/app",
packages: [
PackageMetadata(
name: Name("workspace-root"),
root: "/app",
configuration: Configuration(
target_version: None,
search_paths: SearchPathConfiguration(
extra_paths: None,
src_root: None,
custom_typeshed: None,
site_packages: None,
),
),
),
PackageMetadata(
name: Name("member-a"),
root: "/app/packages/a",
configuration: Configuration(
target_version: None,
search_paths: SearchPathConfiguration(
extra_paths: None,
src_root: None,
custom_typeshed: None,
site_packages: None,
),
),
),
PackageMetadata(
name: Name("member-x"),
root: "/app/packages/x",
configuration: Configuration(
target_version: None,
search_paths: SearchPathConfiguration(
extra_paths: None,
src_root: None,
custom_typeshed: None,
site_packages: None,
),
),
),
],
settings: WorkspaceSettings(
program: ProgramSettings(
target_version: PythonVersion(
major: 3,
minor: 9,
),
search_paths: SearchPathSettings(
extra_paths: [],
src_root: "/app",
custom_typeshed: None,
site_packages: Known([]),
),
),
),
)


@@ -1,49 +1,167 @@
use std::fs;
use std::path::PathBuf;
use anyhow::{anyhow, Context};
use red_knot_python_semantic::{HasTy, SemanticModel};
use red_knot_workspace::db::RootDatabase;
use red_knot_workspace::workspace::WorkspaceMetadata;
use ruff_db::files::{system_path_to_file, File};
use ruff_db::parsed::parsed_module;
use ruff_db::system::{OsSystem, SystemPath, SystemPathBuf};
use ruff_db::system::{SystemPath, SystemPathBuf, TestSystem};
use ruff_python_ast::visitor::source_order;
use ruff_python_ast::visitor::source_order::SourceOrderVisitor;
use ruff_python_ast::{self as ast, Alias, Expr, Parameter, ParameterWithDefault, Stmt};
fn setup_db(workspace_root: &SystemPath) -> anyhow::Result<RootDatabase> {
let system = OsSystem::new(workspace_root);
let workspace = WorkspaceMetadata::from_path(workspace_root, &system, None)?;
fn setup_db(workspace_root: &SystemPath, system: TestSystem) -> anyhow::Result<RootDatabase> {
let workspace = WorkspaceMetadata::discover(workspace_root, &system, None)?;
RootDatabase::new(workspace, system)
}
/// Test that all snippets in testcorpus can be checked without panic
fn get_workspace_root() -> anyhow::Result<SystemPathBuf> {
Ok(SystemPathBuf::from(String::from_utf8(
std::process::Command::new("cargo")
.args(["locate-project", "--workspace", "--message-format", "plain"])
.output()?
.stdout,
)?)
.parent()
.unwrap()
.to_owned())
}
/// Test that all snippets in testcorpus can be checked without panic (except for [`KNOWN_FAILURES`])
#[test]
#[allow(clippy::print_stdout)]
fn corpus_no_panic() -> anyhow::Result<()> {
let root = SystemPathBuf::from_path_buf(tempfile::TempDir::new()?.into_path()).unwrap();
let db = setup_db(&root)?;
let crate_root = String::from(env!("CARGO_MANIFEST_DIR"));
run_corpus_tests(&format!("{crate_root}/resources/test/corpus/**/*.py"))
}
let corpus = PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("resources/test/corpus");
#[test]
fn parser_no_panic() -> anyhow::Result<()> {
let workspace_root = get_workspace_root()?;
run_corpus_tests(&format!(
"{workspace_root}/crates/ruff_python_parser/resources/**/*.py"
))
}
for path in fs::read_dir(&corpus)? {
let source = path?.path();
println!("checking {source:?}");
let source_fn = source.file_name().unwrap().to_str().unwrap();
let py_dest = root.join(source_fn);
fs::copy(&source, py_dest.as_std_path())?;
// this test is only asserting that we can pull every expression type without a panic
// (and some non-expressions that clearly define a single type)
let file = system_path_to_file(&db, py_dest).unwrap();
pull_types(&db, file);
#[test]
#[ignore = "Enable running once there are fewer failures"]
fn linter_af_no_panic() -> anyhow::Result<()> {
let workspace_root = get_workspace_root()?;
run_corpus_tests(&format!(
"{workspace_root}/crates/ruff_linter/resources/test/fixtures/[a-f]*/**/*.py"
))
}
// try the file as a stub also
println!("re-checking as .pyi");
let pyi_dest = root.join(format!("{source_fn}i"));
std::fs::copy(source, pyi_dest.as_std_path())?;
let file = system_path_to_file(&db, pyi_dest).unwrap();
pull_types(&db, file);
#[test]
#[ignore = "Enable running once there are fewer failures"]
fn linter_gz_no_panic() -> anyhow::Result<()> {
let workspace_root = get_workspace_root()?;
run_corpus_tests(&format!(
"{workspace_root}/crates/ruff_linter/resources/test/fixtures/[g-z]*/**/*.py"
))
}
#[test]
#[ignore = "Enable running once there are fewer failures"]
fn linter_stubs_no_panic() -> anyhow::Result<()> {
let workspace_root = get_workspace_root()?;
run_corpus_tests(&format!(
"{workspace_root}/crates/ruff_linter/resources/test/fixtures/**/*.pyi"
))
}
#[test]
#[ignore = "Enable running over typeshed stubs once there are fewer failures"]
fn typeshed_no_panic() -> anyhow::Result<()> {
let workspace_root = get_workspace_root()?;
run_corpus_tests(&format!(
"{workspace_root}/crates/red_knot_vendored/vendor/typeshed/**/*.pyi"
))
}
#[allow(clippy::print_stdout)]
fn run_corpus_tests(pattern: &str) -> anyhow::Result<()> {
let root = SystemPathBuf::from("/src");
let system = TestSystem::default();
let memory_fs = system.memory_file_system();
memory_fs.create_directory_all(root.as_ref())?;
let mut db = setup_db(&root, system.clone())?;
let workspace_root = get_workspace_root()?;
let workspace_root = workspace_root.to_string();
let corpus = glob::glob(pattern).context("Failed to compile pattern")?;
for path in corpus {
let path = path.context("Failed to glob path")?;
let path = SystemPathBuf::from_path_buf(path).map_err(|path| {
anyhow!(
"Failed to convert path '{path}' to system path",
path = path.display()
)
})?;
let relative_path = path.strip_prefix(&workspace_root)?;
let (py_expected_to_fail, pyi_expected_to_fail) = KNOWN_FAILURES
.iter()
.find_map(|(path, py_fail, pyi_fail)| {
if *path == relative_path.as_str().replace('\\', "/") {
Some((*py_fail, *pyi_fail))
} else {
None
}
})
.unwrap_or((false, false));
let source = path.as_path();
let source_filename = source.file_name().unwrap();
let code = std::fs::read_to_string(source)?;
let mut check_with_file_name = |path: &SystemPath| {
memory_fs.write_file(path, &code).unwrap();
File::sync_path(&mut db, path);
// this test is only asserting that we can pull every expression type without a panic
// (and some non-expressions that clearly define a single type)
let file = system_path_to_file(&db, path).unwrap();
let result = std::panic::catch_unwind(|| pull_types(&db, file));
let expected_to_fail = if path.extension().map(|e| e == "pyi").unwrap_or(false) {
pyi_expected_to_fail
} else {
py_expected_to_fail
};
if let Err(err) = result {
if !expected_to_fail {
println!("Check failed for {relative_path:?}. Consider fixing it or adding it to KNOWN_FAILURES");
std::panic::resume_unwind(err);
}
} else {
assert!(!expected_to_fail, "Expected to panic, but did not. Consider removing this path from KNOWN_FAILURES");
}
memory_fs.remove_file(path).unwrap();
file.sync(&mut db);
};
if source.extension() == Some("pyi") {
println!("checking {relative_path}");
let pyi_dest = root.join(source_filename);
check_with_file_name(&pyi_dest);
} else {
println!("checking {relative_path}");
let py_dest = root.join(source_filename);
check_with_file_name(&py_dest);
let pyi_dest = root.join(format!("{source_filename}i"));
println!("re-checking as stub file: {pyi_dest}");
check_with_file_name(&pyi_dest);
}
}
Ok(())
}
@@ -144,3 +262,28 @@ impl SourceOrderVisitor<'_> for PullTypesVisitor<'_> {
source_order::walk_alias(self, alias);
}
}
/// Whether or not the .py/.pyi version of this file is expected to fail
const KNOWN_FAILURES: &[(&str, bool, bool)] = &[
// Probably related to missing support for type aliases / type params:
("crates/ruff_python_parser/resources/inline/err/type_param_invalid_bound_expr.py", true, true),
("crates/ruff_python_parser/resources/inline/err/type_param_param_spec_invalid_default_expr.py", true, true),
("crates/ruff_python_parser/resources/inline/err/type_param_type_var_invalid_default_expr.py", true, true),
("crates/ruff_python_parser/resources/inline/err/type_param_type_var_missing_default.py", true, true),
("crates/ruff_python_parser/resources/inline/err/type_param_type_var_tuple_invalid_default_expr.py", true, true),
("crates/ruff_python_parser/resources/inline/ok/type_param_param_spec.py", true, true),
("crates/ruff_python_parser/resources/inline/ok/type_param_type_var_tuple.py", true, true),
("crates/ruff_python_parser/resources/inline/ok/type_param_type_var.py", true, true),
("crates/ruff_python_parser/resources/valid/statement/type.py", true, true),
("crates/ruff_linter/resources/test/fixtures/flake8_type_checking/TCH004_15.py", true, true),
("crates/ruff_linter/resources/test/fixtures/pyflakes/F401_19.py", true, true),
("crates/ruff_linter/resources/test/fixtures/pyflakes/F821_14.py", false, true),
("crates/ruff_linter/resources/test/fixtures/pyflakes/F821_15.py", true, true),
("crates/ruff_linter/resources/test/fixtures/pyflakes/F821_17.py", true, true),
("crates/ruff_linter/resources/test/fixtures/pyflakes/F821_20.py", true, true),
("crates/ruff_linter/resources/test/fixtures/pyflakes/F821_26.py", true, false),
// Fails for unknown reasons:
("crates/ruff_linter/resources/test/fixtures/pyflakes/F632.py", true, true),
("crates/ruff_linter/resources/test/fixtures/pyflakes/F811_19.py", true, false),
("crates/ruff_linter/resources/test/fixtures/pyupgrade/UP039.py", true, false),
];
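The corpus runner in the hunk above wraps each type-pull in `std::panic::catch_unwind` so that files listed in `KNOWN_FAILURES` can panic without aborting the whole test, and asserts that known failures actually do panic. A minimal standalone sketch of that pattern (the helper name `panics` is illustrative, not part of the diff):

```rust
use std::panic;

/// Returns true if `f` panicked, mirroring how the corpus test
/// distinguishes expected failures from unexpected ones.
fn panics(f: impl FnOnce() + panic::UnwindSafe) -> bool {
    panic::catch_unwind(f).is_err()
}

fn main() {
    // Silence the default panic output for the expected-failure case.
    panic::set_hook(Box::new(|_| {}));

    let expected_to_fail = true;
    let result = panics(|| panic!("known failure"));

    // Same shape as the assertion in `run_corpus_tests`: a path marked
    // as a known failure must actually panic, and vice versa.
    assert_eq!(result, expected_to_fail);
    assert!(!panics(|| ())); // a clean check must not panic
}
```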


@@ -1,5 +1,6 @@
---
source: crates/ruff/src/version.rs
expression: version
snapshot_kind: text
---
0.0.0


@@ -1,5 +1,6 @@
---
source: crates/ruff/src/version.rs
expression: version
snapshot_kind: text
---
0.0.0 (53b0f5d92 2023-10-19)


@@ -1,5 +1,6 @@
---
source: crates/ruff/src/version.rs
expression: version
snapshot_kind: text
---
0.0.0+24 (53b0f5d92 2023-10-19)


@@ -1,6 +1,7 @@
---
source: crates/ruff/src/version.rs
expression: version
snapshot_kind: text
---
{
"version": "0.0.0",


@@ -8,7 +8,7 @@ const BIN_NAME: &str = "ruff";
#[test]
fn lint_select() {
assert_cmd_snapshot!(
Command::new(get_cargo_bin(BIN_NAME)).arg("config").arg("lint.select"), @r###"
Command::new(get_cargo_bin(BIN_NAME)).arg("config").arg("lint.select"), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -29,14 +29,14 @@ fn lint_select() {
```
----- stderr -----
"###
"#
);
}
#[test]
fn lint_select_json() {
assert_cmd_snapshot!(
Command::new(get_cargo_bin(BIN_NAME)).arg("config").arg("lint.select").arg("--output-format").arg("json"), @r###"
Command::new(get_cargo_bin(BIN_NAME)).arg("config").arg("lint.select").arg("--output-format").arg("json"), @r##"
success: true
exit_code: 0
----- stdout -----
@@ -50,6 +50,6 @@ fn lint_select_json() {
}
----- stderr -----
"###
"##
);
}


@@ -30,7 +30,7 @@ if condition:
print('Hy "Micha"') # Should not change quotes
"#), @r###"
"#), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -45,7 +45,7 @@ if condition:
print('Hy "Micha"') # Should not change quotes
----- stderr -----
"###);
"#);
}
#[test]
@@ -65,7 +65,7 @@ bar = "needs formatting"
)?;
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--no-cache", "--check"]).current_dir(tempdir.path()), @r###"
.args(["format", "--isolated", "--no-cache", "--check"]).current_dir(tempdir.path()), @r"
success: false
exit_code: 1
----- stdout -----
@@ -74,7 +74,7 @@ bar = "needs formatting"
2 files would be reformatted
----- stderr -----
"###);
");
Ok(())
}
@@ -84,7 +84,7 @@ fn format_warn_stdin_filename_with_files() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "foo.py"])
.arg("foo.py")
.pass_stdin("foo = 1"), @r###"
.pass_stdin("foo = 1"), @r"
success: true
exit_code: 0
----- stdout -----
@@ -92,13 +92,13 @@ fn format_warn_stdin_filename_with_files() {
----- stderr -----
warning: Ignoring file foo.py in favor of standard input.
"###);
");
}
#[test]
fn nonexistent_config_file() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--config", "foo.toml", "."]), @r###"
.args(["format", "--config", "foo.toml", "."]), @r"
success: false
exit_code: 2
----- stdout -----
@@ -114,13 +114,13 @@ fn nonexistent_config_file() {
The path `foo.toml` does not point to a configuration file
For more information, try '--help'.
"###);
");
}
#[test]
fn config_override_rejected_if_invalid_toml() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--config", "foo = bar", "."]), @r###"
.args(["format", "--config", "foo = bar", "."]), @r#"
success: false
exit_code: 2
----- stdout -----
@@ -142,7 +142,7 @@ fn config_override_rejected_if_invalid_toml() {
expected `"`, `'`
For more information, try '--help'.
"###);
"#);
}
#[test]
@@ -161,19 +161,18 @@ fn too_many_config_files() -> Result<()> {
.arg(&ruff_dot_toml)
.arg("--config")
.arg(&ruff2_dot_toml)
.arg("."), @r###"
success: false
exit_code: 2
----- stdout -----
.arg("."), @r"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: You cannot specify more than one configuration file on the command line.
----- stderr -----
ruff failed
Cause: You cannot specify more than one configuration file on the command line.
tip: remove either `--config=[TMP]/ruff.toml` or `--config=[TMP]/ruff2.toml`.
For more information, try `--help`.
"###);
tip: remove either `--config=[TMP]/ruff.toml` or `--config=[TMP]/ruff2.toml`.
For more information, try `--help`.
");
});
Ok(())
}
@@ -191,7 +190,7 @@ fn config_file_and_isolated() -> Result<()> {
.arg("--config")
.arg(&ruff_dot_toml)
.arg("--isolated")
.arg("."), @r###"
.arg("."), @r"
success: false
exit_code: 2
----- stdout -----
@@ -203,8 +202,7 @@ fn config_file_and_isolated() -> Result<()> {
tip: You cannot specify a configuration file and also specify `--isolated`,
as `--isolated` causes ruff to ignore all configuration files.
For more information, try `--help`.
"###);
");
});
Ok(())
}
@@ -226,7 +224,7 @@ def foo():
// This overrides the long line length set in the config file
.args(["--config", "line-length=80"])
.arg("-")
.pass_stdin(fixture), @r###"
.pass_stdin(fixture), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -236,7 +234,7 @@ def foo():
)
----- stderr -----
"###);
"#);
Ok(())
}
@@ -259,7 +257,7 @@ def foo():
// ...but this overrides them both:
.args(["--line-length", "100"])
.arg("-")
.pass_stdin(fixture), @r###"
.pass_stdin(fixture), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -267,7 +265,7 @@ def foo():
print("looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong string")
----- stderr -----
"###);
"#);
Ok(())
}
@@ -302,7 +300,7 @@ if condition:
print("Should change quotes")
"#), @r###"
"#), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -316,7 +314,7 @@ if condition:
print('Should change quotes')
----- stderr -----
"###);
"#);
Ok(())
}
@@ -357,7 +355,7 @@ def f(x):
>>> foo, bar, quux = this_is_a_long_line(lion, hippo, lemur, bear)
'''
pass
"), @r###"
"), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -403,7 +401,7 @@ def f(x):
pass
----- stderr -----
"###);
"#);
Ok(())
}
@@ -424,14 +422,14 @@ fn mixed_line_endings() -> Result<()> {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.current_dir(tempdir.path())
.args(["format", "--no-cache", "--diff", "--isolated"])
.arg("."), @r###"
.arg("."), @r"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
2 files already formatted
"###);
");
Ok(())
}
@@ -490,7 +488,7 @@ OTHER = "OTHER"
// Explicitly pass test.py, should be formatted regardless of it being excluded by format.exclude
.arg(test_path.file_name().unwrap())
// Format all other files in the directory, should respect the `exclude` and `format.exclude` options
.arg("."), @r###"
.arg("."), @r"
success: false
exit_code: 1
----- stdout -----
@@ -499,7 +497,7 @@ OTHER = "OTHER"
2 files would be reformatted
----- stderr -----
"###);
");
Ok(())
}
@@ -517,14 +515,14 @@ from module import =
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.current_dir(tempdir.path())
.args(["format", "--no-cache", "--isolated", "--check"])
.arg("main.py"), @r###"
.arg("main.py"), @r"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: Failed to parse main.py:2:20: Expected an import name
"###);
");
Ok(())
}
@@ -546,7 +544,7 @@ if __name__ == "__main__":
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.current_dir(tempdir.path())
.args(["format", "--no-cache", "--isolated", "--check"])
.arg("main.py"), @r###"
.arg("main.py"), @r"
success: false
exit_code: 1
----- stdout -----
@@ -554,31 +552,31 @@ if __name__ == "__main__":
1 file would be reformatted
----- stderr -----
"###);
");
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.current_dir(tempdir.path())
.args(["format", "--no-cache", "--isolated"])
.arg("main.py"), @r###"
.arg("main.py"), @r"
success: true
exit_code: 0
----- stdout -----
1 file reformatted
----- stderr -----
"###);
");
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.current_dir(tempdir.path())
.args(["format", "--no-cache", "--isolated"])
.arg("main.py"), @r###"
.arg("main.py"), @r"
success: true
exit_code: 0
----- stdout -----
1 file left unchanged
----- stderr -----
"###);
");
Ok(())
}
@@ -638,7 +636,7 @@ OTHER = "OTHER"
// Explicitly pass test.py, should be respect the `format.exclude` when `--force-exclude` is present
.arg(test_path.file_name().unwrap())
// Format all other files in the directory, should respect the `exclude` and `format.exclude` options
.arg("."), @r###"
.arg("."), @r"
success: false
exit_code: 1
----- stdout -----
@@ -646,7 +644,7 @@ OTHER = "OTHER"
1 file would be reformatted
----- stderr -----
"###);
");
Ok(())
}
@@ -673,7 +671,7 @@ from test import say_hy
if __name__ == '__main__':
say_hy("dear Ruff contributor")
"#), @r###"
"#), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -686,7 +684,7 @@ if __name__ == '__main__':
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `ruff.toml`:
- 'extend-select' -> 'lint.extend-select'
- 'ignore' -> 'lint.ignore'
"###);
"#);
Ok(())
}
@@ -713,7 +711,7 @@ from test import say_hy
if __name__ == '__main__':
say_hy("dear Ruff contributor")
"#), @r###"
"#), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -727,7 +725,7 @@ if __name__ == '__main__':
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `ruff.toml`:
- 'extend-select' -> 'lint.extend-select'
- 'ignore' -> 'lint.ignore'
"###);
"#);
Ok(())
}
@@ -770,7 +768,7 @@ if condition:
print("Should change quotes")
"#), @r###"
"#), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -786,7 +784,7 @@ if condition:
----- stderr -----
warning: The following rule may cause conflicts when used with the formatter: `COM812`. To avoid unexpected behavior, we recommend disabling this rule, either by removing it from the `select` or `extend-select` configuration, or adding it to the `ignore` configuration.
"###);
"#);
Ok(())
}
@@ -811,7 +809,7 @@ tab-size = 2
.pass_stdin(r"
if True:
pass
"), @r###"
"), @r"
success: false
exit_code: 2
----- stdout -----
@@ -824,8 +822,7 @@ if True:
1 |
| ^
unknown field `tab-size`
"###);
");
});
Ok(())
}
@@ -851,7 +848,7 @@ format = "json"
.arg("-")
.pass_stdin(r"
import os
"), @r###"
"), @r#"
success: false
exit_code: 2
----- stdout -----
@@ -864,8 +861,7 @@ format = "json"
2 | format = "json"
| ^^^^^^
invalid type: string "json", expected struct FormatOptions
"###);
"#);
});
Ok(())
}
@@ -912,7 +908,7 @@ def say_hy(name: str):
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--no-cache", "--config"])
.arg(&ruff_toml)
.arg(test_path), @r###"
.arg(test_path), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -929,7 +925,7 @@ def say_hy(name: str):
warning: The isort option `isort.lines-between-types` with a value greater than 1 is incompatible with the formatter. To avoid unexpected behavior, we recommend setting the option to one of: `1` or `0` (default).
warning: The isort option `isort.force-wrap-aliases` is incompatible with the formatter `format.skip-magic-trailing-comma=true` option. To avoid unexpected behavior, we recommend either setting `isort.force-wrap-aliases=false` or `format.skip-magic-trailing-comma=false`.
warning: The isort option `isort.split-on-trailing-comma` is incompatible with the formatter `format.skip-magic-trailing-comma=true` option. To avoid unexpected behavior, we recommend either setting `isort.split-on-trailing-comma=false` or `format.skip-magic-trailing-comma=false`.
"###);
"#);
Ok(())
}
@@ -970,7 +966,7 @@ indent-style = "tab"
.arg("-")
.pass_stdin(r#"
def say_hy(name: str):
print(f"Hy {name}")"#), @r###"
print(f"Hy {name}")"#), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -988,7 +984,7 @@ def say_hy(name: str):
warning: The isort option `isort.lines-between-types` with a value greater than 1 is incompatible with the formatter. To avoid unexpected behavior, we recommend setting the option to one of: `1` or `0` (default).
warning: The isort option `isort.force-wrap-aliases` is incompatible with the formatter `format.skip-magic-trailing-comma=true` option. To avoid unexpected behavior, we recommend either setting `isort.force-wrap-aliases=false` or `format.skip-magic-trailing-comma=false`.
warning: The isort option `isort.split-on-trailing-comma` is incompatible with the formatter `format.skip-magic-trailing-comma=true` option. To avoid unexpected behavior, we recommend either setting `isort.split-on-trailing-comma=false` or `format.skip-magic-trailing-comma=false`.
"###);
"#);
Ok(())
}
@@ -1032,14 +1028,14 @@ def say_hy(name: str):
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--no-cache", "--config"])
.arg(&ruff_toml)
.arg(test_path), @r###"
.arg(test_path), @r"
success: true
exit_code: 0
----- stdout -----
1 file reformatted
----- stderr -----
"###);
");
Ok(())
}
@@ -1074,14 +1070,14 @@ def say_hy(name: str):
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--no-cache", "--config"])
.arg(&ruff_toml)
.arg(test_path), @r###"
.arg(test_path), @r"
success: true
exit_code: 0
----- stdout -----
1 file reformatted
----- stderr -----
"###);
");
Ok(())
}
@@ -1109,7 +1105,7 @@ def say_hy(name: str):
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--no-cache", "--config"])
.arg(&ruff_toml)
.arg(test_path), @r###"
.arg(test_path), @r"
success: true
exit_code: 0
----- stdout -----
@@ -1119,7 +1115,7 @@ def say_hy(name: str):
warning: `one-blank-line-before-class` (D203) and `no-blank-line-before-class` (D211) are incompatible. Ignoring `one-blank-line-before-class`.
warning: `multi-line-summary-first-line` (D212) and `multi-line-summary-second-line` (D213) are incompatible. Ignoring `multi-line-summary-second-line`.
warning: The following rules may cause conflicts when used with the formatter: `COM812`, `ISC001`. To avoid unexpected behavior, we recommend disabling these rules, either by removing them from the `select` or `extend-select` configuration, or adding them to the `ignore` configuration.
"###);
");
Ok(())
}
@@ -1138,7 +1134,7 @@ fn test_diff() {
]}, {
assert_cmd_snapshot!(
Command::new(get_cargo_bin(BIN_NAME)).args(args).args(paths),
@r###"
@r"
success: false
exit_code: 1
----- stdout -----
@@ -1186,7 +1182,7 @@ fn test_diff() {
----- stderr -----
2 files would be reformatted, 1 file already formatted
"###);
");
});
}
@@ -1201,7 +1197,7 @@ fn test_diff_no_change() {
]}, {
assert_cmd_snapshot!(
Command::new(get_cargo_bin(BIN_NAME)).args(args).args(paths),
@r###"
@r"
success: false
exit_code: 1
----- stdout -----
@@ -1216,7 +1212,7 @@ fn test_diff_no_change() {
----- stderr -----
1 file would be reformatted
"###
"
);
});
}
@@ -1235,7 +1231,7 @@ fn test_diff_stdin_unformatted() {
let unformatted = fs::read(fixtures.join("unformatted.py")).unwrap();
assert_cmd_snapshot!(
Command::new(get_cargo_bin(BIN_NAME)).args(args).pass_stdin(unformatted),
@r###"
@r"
success: false
exit_code: 1
----- stdout -----
@@ -1249,7 +1245,7 @@ fn test_diff_stdin_unformatted() {
----- stderr -----
"###);
");
}
#[test]
@@ -1259,13 +1255,13 @@ fn test_diff_stdin_formatted() {
let unformatted = fs::read(fixtures.join("formatted.py")).unwrap();
assert_cmd_snapshot!(
Command::new(get_cargo_bin(BIN_NAME)).args(args).pass_stdin(unformatted),
@r###"
@r"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
"###);
");
}
#[test]
@@ -1275,7 +1271,7 @@ fn test_notebook_trailing_semicolon() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--stdin-filename", "test.ipynb"])
.arg("-")
.pass_stdin(unformatted), @r###"
.pass_stdin(unformatted), @r##"
success: true
exit_code: 0
----- stdout -----
@@ -1694,7 +1690,7 @@ fn test_notebook_trailing_semicolon() {
}
----- stderr -----
"###);
"##);
}
#[test]
@@ -1756,14 +1752,14 @@ include = ["*.ipy"]
.arg("--no-cache")
.args(["--config", &ruff_toml.file_name().unwrap().to_string_lossy()])
.args(["--extension", "ipy:ipynb"])
.arg("."), @r###"
.arg("."), @r"
success: true
exit_code: 0
----- stdout -----
1 file reformatted
----- stderr -----
"###);
");
Ok(())
}
@@ -1776,7 +1772,7 @@ fn range_formatting() {
def foo(arg1, arg2,):
print("Shouldn't format this" )
"#), @r###"
"#), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -1789,7 +1785,7 @@ def foo(arg1, arg2,):
----- stderr -----
"###);
"#);
}
#[test]
@@ -1799,7 +1795,7 @@ fn range_formatting_unicode() {
.arg("-")
.pass_stdin(r#"
def foo(arg1="👋🏽" ): print("Format this" )
"#), @r###"
"#), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -1808,7 +1804,7 @@ def foo(arg1="👋🏽" ): print("Format this" )
print("Format this")
----- stderr -----
"###);
"#);
}
#[test]
@@ -1839,7 +1835,7 @@ def file2(arg1, arg2,):
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["format", "--isolated", "--range=1:8-1:15"])
.arg(file1)
.arg(file2), @r###"
.arg(file2), @r"
success: false
exit_code: 2
----- stdout -----
@@ -1847,7 +1843,7 @@ def file2(arg1, arg2,):
----- stderr -----
ruff failed
Cause: The `--range` option is only supported when formatting a single file but the specified paths resolve to 2 files.
"###);
");
Ok(())
}
@@ -1861,7 +1857,7 @@ fn range_formatting_out_of_bounds() {
def foo(arg1, arg2,):
print("Shouldn't format this" )
"#), @r###"
"#), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -1871,7 +1867,7 @@ def foo(arg1, arg2,):
----- stderr -----
"###);
"#);
}
#[test]
@@ -1883,7 +1879,7 @@ fn range_start_larger_than_end() {
def foo(arg1, arg2,):
print("Shouldn't format this" )
"#), @r###"
"#), @r"
success: false
exit_code: 2
----- stdout -----
@@ -1893,7 +1889,7 @@ def foo(arg1, arg2,):
tip: Try switching start and end: '50:1-90:1'
For more information, try '--help'.
"###);
");
}
#[test]
@@ -1905,7 +1901,7 @@ fn range_line_numbers_only() {
def foo(arg1, arg2,):
print("Shouldn't format this" )
"#), @r###"
"#), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -1918,7 +1914,7 @@ def foo(arg1, arg2,):
----- stderr -----
"###);
"#);
}
#[test]
@@ -1930,7 +1926,7 @@ fn range_start_only() {
def foo(arg1, arg2,):
print("Should format this" )
"#), @r###"
"#), @r#"
success: true
exit_code: 0
----- stdout -----
@@ -1939,7 +1935,7 @@ def foo(arg1, arg2,):
print("Should format this")
----- stderr -----
"###);
"#);
}
#[test]
@@ -1975,7 +1971,7 @@ fn range_missing_line() {
def foo(arg1, arg2,):
print("Should format this" )
"#), @r###"
"#), @r"
success: false
exit_code: 2
----- stdout -----
@@ -1985,7 +1981,7 @@ def foo(arg1, arg2,):
tip: The format is 'line:column'.
For more information, try '--help'.
"###);
");
}
#[test]
@@ -1997,7 +1993,7 @@ fn zero_line_number() {
def foo(arg1, arg2,):
print("Should format this" )
"#), @r###"
"#), @r"
success: false
exit_code: 2
----- stdout -----
@@ -2008,7 +2004,7 @@ def foo(arg1, arg2,):
tip: Try 1:2 instead.
For more information, try '--help'.
"###);
");
}
#[test]
@@ -2020,7 +2016,7 @@ fn column_and_line_zero() {
def foo(arg1, arg2,):
print("Should format this" )
"#), @r###"
"#), @r"
success: false
exit_code: 2
----- stdout -----
@@ -2031,7 +2027,7 @@ def foo(arg1, arg2,):
tip: Try 1:1 instead.
For more information, try '--help'.
"###);
");
}
#[test]
@@ -2075,12 +2071,12 @@ fn range_formatting_notebook() {
"nbformat": 4,
"nbformat_minor": 5
}
"#), @r###"
"#), @r"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
error: Failed to format main.ipynb: Range formatting isn't supported for notebooks.
"###);
");
}

File diff suppressed because it is too large


@@ -42,7 +42,7 @@ inline-quotes = "single"
.arg(&ruff_toml)
.args(["--stdin-filename", "test.py"])
.arg("-")
.pass_stdin(r#"a = "abcba".strip("aba")"#), @r###"
.pass_stdin(r#"a = "abcba".strip("aba")"#), @r"
success: false
exit_code: 1
----- stdout -----
@@ -56,7 +56,7 @@ inline-quotes = "single"
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `[TMP]/ruff.toml`:
- 'extend-select' -> 'lint.extend-select'
- 'flake8-quotes' -> 'lint.flake8-quotes'
"###);
");
});
Ok(())
@@ -85,7 +85,7 @@ inline-quotes = "single"
.arg("--config")
.arg(&ruff_toml)
.arg("-")
.pass_stdin(r#"a = "abcba".strip("aba")"#), @r###"
.pass_stdin(r#"a = "abcba".strip("aba")"#), @r"
success: false
exit_code: 1
----- stdout -----
@@ -96,7 +96,7 @@ inline-quotes = "single"
[*] 2 fixable with the `--fix` option.
----- stderr -----
"###);
");
});
Ok(())
@@ -125,7 +125,7 @@ inline-quotes = "single"
.arg("--config")
.arg(&ruff_toml)
.arg("-")
.pass_stdin(r#"a = "abcba".strip("aba")"#), @r###"
.pass_stdin(r#"a = "abcba".strip("aba")"#), @r"
success: false
exit_code: 1
----- stdout -----
@@ -138,7 +138,7 @@ inline-quotes = "single"
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `[TMP]/ruff.toml`:
- 'extend-select' -> 'lint.extend-select'
"###);
");
});
Ok(())
@@ -171,7 +171,7 @@ inline-quotes = "single"
.arg("--config")
.arg(&ruff_toml)
.arg("-")
.pass_stdin(r#"a = "abcba".strip("aba")"#), @r###"
.pass_stdin(r#"a = "abcba".strip("aba")"#), @r"
success: false
exit_code: 1
----- stdout -----
@@ -184,7 +184,7 @@ inline-quotes = "single"
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `[TMP]/ruff.toml`:
- 'flake8-quotes' -> 'lint.flake8-quotes'
"###);
");
});
Ok(())
@@ -252,7 +252,7 @@ OTHER = "OTHER"
// Explicitly pass test.py, should be linted regardless of it being excluded by lint.exclude
.arg(test_path.file_name().unwrap())
// Lint all other files in the directory, should respect the `exclude` and `lint.exclude` options
.arg("."), @r###"
.arg("."), @r"
success: false
exit_code: 1
----- stdout -----
@@ -265,7 +265,7 @@ OTHER = "OTHER"
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `ruff.toml`:
- 'extend-select' -> 'lint.extend-select'
"###);
");
});
Ok(())
@@ -302,7 +302,7 @@ from test import say_hy
if __name__ == "__main__":
say_hy("dear Ruff contributor")
"#), @r###"
"#), @r"
success: false
exit_code: 1
----- stdout -----
@@ -314,7 +314,7 @@ if __name__ == "__main__":
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `ruff.toml`:
- 'extend-select' -> 'lint.extend-select'
"###);
");
});
Ok(())
@@ -349,7 +349,7 @@ max-line-length = 100
_ = "---------------------------------------------------------------------------亜亜亜亜亜亜"
# longer than 100
_ = "---------------------------------------------------------------------------亜亜亜亜亜亜亜亜亜亜亜亜亜亜"
"#), @r###"
"#), @r"
success: false
exit_code: 1
----- stdout -----
@@ -360,7 +360,7 @@ _ = "---------------------------------------------------------------------------
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `[TMP]/ruff.toml`:
- 'select' -> 'lint.select'
- 'pycodestyle' -> 'lint.pycodestyle'
"###);
");
});
Ok(())
@@ -397,7 +397,7 @@ from test import say_hy
if __name__ == "__main__":
say_hy("dear Ruff contributor")
"#), @r###"
"#), @r"
success: false
exit_code: 1
----- stdout -----
@@ -408,7 +408,7 @@ if __name__ == "__main__":
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `ruff.toml`:
- 'extend-select' -> 'lint.extend-select'
"###);
");
});
Ok(())
@@ -445,7 +445,7 @@ from test import say_hy
if __name__ == "__main__":
say_hy("dear Ruff contributor")
"#), @r###"
"#), @r"
success: false
exit_code: 1
----- stdout -----
@@ -456,7 +456,7 @@ if __name__ == "__main__":
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `ruff.toml`:
- 'extend-select' -> 'lint.extend-select'
"###);
");
});
Ok(())
@@ -493,7 +493,7 @@ ignore = ["D203", "D212"]
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.current_dir(sub_dir)
.args(STDIN_BASE_OPTIONS)
, @r###"
, @r"
success: true
exit_code: 0
----- stdout -----
@@ -501,7 +501,7 @@ ignore = ["D203", "D212"]
----- stderr -----
warning: No Python files found under the given path(s)
"###);
");
});
Ok(())
@@ -511,7 +511,7 @@ ignore = ["D203", "D212"]
fn nonexistent_config_file() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args(["--config", "foo.toml", "."]), @r###"
.args(["--config", "foo.toml", "."]), @r"
success: false
exit_code: 2
----- stdout -----
@@ -527,14 +527,14 @@ fn nonexistent_config_file() {
The path `foo.toml` does not point to a configuration file
For more information, try '--help'.
"###);
");
}
#[test]
fn config_override_rejected_if_invalid_toml() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args(["--config", "foo = bar", "."]), @r###"
.args(["--config", "foo = bar", "."]), @r#"
success: false
exit_code: 2
----- stdout -----
@@ -556,7 +556,7 @@ fn config_override_rejected_if_invalid_toml() {
expected `"`, `'`
For more information, try '--help'.
"###);
"#);
}
#[test]
@@ -575,19 +575,18 @@ fn too_many_config_files() -> Result<()> {
.arg(&ruff_dot_toml)
.arg("--config")
.arg(&ruff2_dot_toml)
.arg("."), @r###"
success: false
exit_code: 2
----- stdout -----
.arg("."), @r"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: You cannot specify more than one configuration file on the command line.
----- stderr -----
ruff failed
Cause: You cannot specify more than one configuration file on the command line.
tip: remove either `--config=[TMP]/ruff.toml` or `--config=[TMP]/ruff2.toml`.
For more information, try `--help`.
"###);
tip: remove either `--config=[TMP]/ruff.toml` or `--config=[TMP]/ruff2.toml`.
For more information, try `--help`.
");
});
Ok(())
}
@@ -596,7 +595,7 @@ fn too_many_config_files() -> Result<()> {
fn extend_passed_via_config_argument() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args(["--config", "extend = 'foo.toml'", "."]), @r###"
.args(["--config", "extend = 'foo.toml'", "."]), @r"
success: false
exit_code: 2
----- stdout -----
@@ -607,7 +606,7 @@ fn extend_passed_via_config_argument() {
tip: Cannot include `extend` in a --config flag value
For more information, try '--help'.
"###);
");
}
#[test]
@@ -623,20 +622,19 @@ fn config_file_and_isolated() -> Result<()> {
.arg("--config")
.arg(&ruff_dot_toml)
.arg("--isolated")
.arg("."), @r###"
success: false
exit_code: 2
----- stdout -----
.arg("."), @r"
success: false
exit_code: 2
----- stdout -----
----- stderr -----
ruff failed
Cause: The argument `--config=[TMP]/ruff.toml` cannot be used with `--isolated`
----- stderr -----
ruff failed
Cause: The argument `--config=[TMP]/ruff.toml` cannot be used with `--isolated`
tip: You cannot specify a configuration file and also specify `--isolated`,
as `--isolated` causes ruff to ignore all configuration files.
For more information, try `--help`.
"###);
tip: You cannot specify a configuration file and also specify `--isolated`,
as `--isolated` causes ruff to ignore all configuration files.
For more information, try `--help`.
");
});
Ok(())
}
@@ -681,7 +679,7 @@ x = "longer_than_90_charactersssssssssssssssssssssssssssssssssssssssssssssssssss
.args(["--config", "lint.extend-select=['E501', 'F841']"])
.args(["--config", "lint.isort.combine-as-imports = false"])
.arg("-")
.pass_stdin(fixture), @r###"
.pass_stdin(fixture), @r"
success: false
exit_code: 1
----- stdout -----
@@ -691,7 +689,7 @@ x = "longer_than_90_charactersssssssssssssssssssssssssssssssssssssssssssssssssss
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
");
Ok(())
}
@@ -700,7 +698,7 @@ fn valid_toml_but_nonexistent_option_provided_via_config_argument() {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(STDIN_BASE_OPTIONS)
.args([".", "--config", "extend-select=['F481']"]), // No such code as F481!
@r###"
@r"
success: false
exit_code: 2
----- stdout -----
@@ -717,7 +715,7 @@ fn valid_toml_but_nonexistent_option_provided_via_config_argument() {
Unknown rule selector: `F481`
For more information, try '--help'.
"###);
");
}
#[test]
@@ -727,7 +725,7 @@ fn each_toml_option_requires_a_new_flag_1() {
// commas can't be used to delimit different config overrides;
// you need a new --config flag for each override
.args([".", "--config", "extend-select=['F841'], line-length=90"]),
@r###"
@r"
success: false
exit_code: 2
----- stdout -----
@@ -748,7 +746,7 @@ fn each_toml_option_requires_a_new_flag_1() {
expected newline, `#`
For more information, try '--help'.
"###);
");
}
#[test]
@@ -758,7 +756,7 @@ fn each_toml_option_requires_a_new_flag_2() {
// spaces *also* can't be used to delimit different config overrides;
// you need a new --config flag for each override
.args([".", "--config", "extend-select=['F841'] line-length=90"]),
@r###"
@r"
success: false
exit_code: 2
----- stdout -----
@@ -779,7 +777,7 @@ fn each_toml_option_requires_a_new_flag_2() {
expected newline, `#`
For more information, try '--help'.
"###);
");
}
#[test]
@@ -806,7 +804,7 @@ select=["E501"]
.arg(&ruff_toml)
.args(["--config", "line-length=110"])
.arg("-")
.pass_stdin(fixture), @r###"
.pass_stdin(fixture), @r"
success: false
exit_code: 1
----- stdout -----
@@ -814,7 +812,7 @@ select=["E501"]
Found 1 error.
----- stderr -----
"###);
");
Ok(())
}
@@ -831,14 +829,14 @@ fn complex_config_setting_overridden_via_cli() -> Result<()> {
.args(["--config", "lint.per-file-ignores = {'generated.py' = ['N801']}"])
.args(["--stdin-filename", "generated.py"])
.arg("-")
.pass_stdin(fixture), @r###"
.pass_stdin(fixture), @r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
"###);
");
Ok(())
}
@@ -848,7 +846,7 @@ fn deprecated_config_option_overridden_via_cli() {
.args(STDIN_BASE_OPTIONS)
.args(["--config", "select=['N801']", "-"])
.pass_stdin("class lowercase: ..."),
@r###"
@r"
success: false
exit_code: 1
----- stdout -----
@@ -858,7 +856,7 @@ fn deprecated_config_option_overridden_via_cli() {
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in your `--config` CLI arguments:
- 'select' -> 'lint.select'
"###);
");
}
#[test]
@@ -922,7 +920,7 @@ include = ["*.ipy"]
.args(STDIN_BASE_OPTIONS)
.args(["--config", &ruff_toml.file_name().unwrap().to_string_lossy()])
.args(["--extension", "ipy:ipynb"])
.arg("."), @r###"
.arg("."), @r"
success: false
exit_code: 1
----- stdout -----
@@ -931,7 +929,7 @@ include = ["*.ipy"]
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
");
});
Ok(())
@@ -960,7 +958,7 @@ external = ["AAA"]
.pass_stdin(r#"
# flake8: noqa: AAA101, BBB102
import os
"#), @r###"
"#), @r"
success: false
exit_code: 1
----- stdout -----
@@ -970,7 +968,7 @@ import os
----- stderr -----
warning: Invalid rule code provided to `# ruff: noqa` at -:2: BBB102
"###);
");
});
Ok(())
@@ -999,7 +997,7 @@ required-version = "0.1.0"
.arg("-")
.pass_stdin(r#"
import os
"#), @r###"
"#), @r"
success: false
exit_code: 2
----- stdout -----
@@ -1007,7 +1005,7 @@ import os
----- stderr -----
ruff failed
Cause: Required version `==0.1.0` does not match the running version `[VERSION]`
"###);
");
});
Ok(())
@@ -1038,7 +1036,7 @@ required-version = "{version}"
.arg("-")
.pass_stdin(r#"
import os
"#), @r###"
"#), @r"
success: false
exit_code: 1
----- stdout -----
@@ -1047,7 +1045,7 @@ import os
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
");
});
Ok(())
@@ -1078,7 +1076,7 @@ required-version = ">{version}"
.arg("-")
.pass_stdin(r#"
import os
"#), @r###"
"#), @r"
success: false
exit_code: 2
----- stdout -----
@@ -1086,7 +1084,7 @@ import os
----- stderr -----
ruff failed
Cause: Required version `>[VERSION]` does not match the running version `[VERSION]`
"###);
");
});
Ok(())
@@ -1115,7 +1113,7 @@ required-version = ">=0.1.0"
.arg("-")
.pass_stdin(r#"
import os
"#), @r###"
"#), @r"
success: false
exit_code: 1
----- stdout -----
@@ -1124,7 +1122,7 @@ import os
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
");
});
Ok(())
@@ -1156,7 +1154,7 @@ import os
def func():
x = 1
"#), @r###"
"#), @r"
success: false
exit_code: 1
----- stdout -----
@@ -1165,7 +1163,7 @@ def func():
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
");
Ok(())
}
@@ -1194,7 +1192,7 @@ fn negated_per_file_ignores() -> Result<()> {
.arg("--select")
.arg("RUF901")
.current_dir(&tempdir)
, @r###"
, @r"
success: false
exit_code: 1
----- stdout -----
@@ -1203,7 +1201,7 @@ fn negated_per_file_ignores() -> Result<()> {
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
");
Ok(())
}
@@ -1233,7 +1231,7 @@ fn negated_per_file_ignores_absolute() -> Result<()> {
.arg("--select")
.arg("RUF901")
.current_dir(&tempdir)
, @r###"
, @r"
success: false
exit_code: 1
----- stdout -----
@@ -1242,7 +1240,7 @@ fn negated_per_file_ignores_absolute() -> Result<()> {
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
");
});
Ok(())
}
@@ -1272,14 +1270,14 @@ fn negated_per_file_ignores_overlap() -> Result<()> {
.arg("--select")
.arg("RUF901")
.current_dir(&tempdir)
, @r###"
, @r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
"###);
");
Ok(())
}
@@ -1311,7 +1309,7 @@ import os # F401
def function():
import os # F811
print(os.name)
"#), @r###"
"#), @r"
success: true
exit_code: 0
----- stdout -----
@@ -1323,7 +1321,7 @@ def function():
----- stderr -----
Found 1 error (1 fixed, 0 remaining).
"###);
");
});
Ok(())
@@ -1363,22 +1361,22 @@ def first_square():
.arg("-")
.pass_stdin(r#"
"#), @r###"
"#), @r"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
Added 1 noqa directive.
"###);
");
});
let test_code = std::fs::read_to_string(&test_path).expect("should read test file");
insta::assert_snapshot!(test_code, @r###"
insta::assert_snapshot!(test_code, @r"
def first_square():
return [x * x for x in range(20)][0] # noqa: RUF015
"###);
");
Ok(())
}
@@ -1418,23 +1416,22 @@ def unused(x):
.arg("-")
.pass_stdin(r#"
"#), @r###"
"#), @r"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
Added 1 noqa directive.
"###);
");
});
let test_code = std::fs::read_to_string(&test_path).expect("should read test file");
insta::assert_snapshot!(test_code, @r###"
insta::assert_snapshot!(test_code, @r"
def unused(x): # noqa: ANN001, ANN201, D103
pass
"###);
");
Ok(())
}
@@ -1474,24 +1471,23 @@ import a
.arg("-")
.pass_stdin(r#"
"#), @r###"
"#), @r"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
Added 1 noqa directive.
"###);
");
});
let test_code = std::fs::read_to_string(&test_path).expect("should read test file");
insta::assert_snapshot!(test_code, @r###"
insta::assert_snapshot!(test_code, @r"
import z # noqa: I001
import c
import a
"###);
");
Ok(())
}
@@ -1531,23 +1527,22 @@ def unused(x): # noqa: ANN001, ARG001, D103
.arg("-")
.pass_stdin(r#"
"#), @r###"
"#), @r"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
Added 1 noqa directive.
"###);
");
});
let test_code = std::fs::read_to_string(&test_path).expect("should read test file");
insta::assert_snapshot!(test_code, @r###"
insta::assert_snapshot!(test_code, @r"
def unused(x): # noqa: ANN001, ANN201, ARG001, D103
pass
"###);
");
Ok(())
}
@@ -1592,19 +1587,19 @@ print(
.arg("-")
.pass_stdin(r#"
"#), @r###"
"#), @r"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
Added 1 noqa directive.
"###);
");
});
let test_code = std::fs::read_to_string(&test_path).expect("should read test file");
insta::assert_snapshot!(test_code, @r###"
insta::assert_snapshot!(test_code, @r#"
print(
"""First line
second line
@@ -1612,7 +1607,7 @@ print(
%s""" # noqa: UP031
% name
)
"###);
"#);
Ok(())
}
@@ -1656,14 +1651,14 @@ def first_square():
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.current_dir(tempdir.path())
.args(STDIN_BASE_OPTIONS)
.args(["--add-noqa"]), @r###"
.args(["--add-noqa"]), @r"
success: true
exit_code: 0
----- stdout -----
----- stderr -----
Added 1 noqa directive.
"###);
");
});
Ok(())
@@ -1693,7 +1688,7 @@ select = ["UP006"]
.arg(&ruff_toml)
.args(["--stdin-filename", "test.py"])
.arg("-")
.pass_stdin(r#"from typing import List; foo: List[int]"#), @r###"
.pass_stdin(r#"from typing import List; foo: List[int]"#), @r"
success: false
exit_code: 1
----- stdout -----
@@ -1702,7 +1697,7 @@ select = ["UP006"]
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
");
});
let pyproject_toml = tempdir.path().join("pyproject.toml");
@@ -1725,14 +1720,14 @@ select = ["UP006"]
.arg(&pyproject_toml)
.args(["--stdin-filename", "test.py"])
.arg("-")
.pass_stdin(r#"from typing import List; foo: List[int]"#), @r###"
.pass_stdin(r#"from typing import List; foo: List[int]"#), @r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
"###);
");
});
Ok(())
@@ -1762,7 +1757,7 @@ select = ["UP006"]
.arg(&pyproject_toml)
.args(["--stdin-filename", "test.py"])
.arg("-")
.pass_stdin(r#"from typing import List; foo: List[int]"#), @r###"
.pass_stdin(r#"from typing import List; foo: List[int]"#), @r"
success: false
exit_code: 1
----- stdout -----
@@ -1771,7 +1766,7 @@ select = ["UP006"]
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
");
});
Ok(())
@@ -1801,7 +1796,7 @@ select = ["UP006"]
.arg(&pyproject_toml)
.args(["--stdin-filename", "test.py"])
.arg("-")
.pass_stdin(r#"from typing import List; foo: List[int]"#), @r###"
.pass_stdin(r#"from typing import List; foo: List[int]"#), @r"
success: false
exit_code: 1
----- stdout -----
@@ -1810,7 +1805,7 @@ select = ["UP006"]
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
");
});
Ok(())
@@ -1840,7 +1835,7 @@ select = ["UP006"]
.arg(&pyproject_toml)
.args(["--stdin-filename", "test.py"])
.arg("-")
.pass_stdin(r#"from typing import List; foo: List[int]"#), @r###"
.pass_stdin(r#"from typing import List; foo: List[int]"#), @r"
success: false
exit_code: 1
----- stdout -----
@@ -1849,7 +1844,7 @@ select = ["UP006"]
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
");
});
Ok(())
@@ -1904,7 +1899,7 @@ fn checks_notebooks_in_stable() -> anyhow::Result<()> {
.arg("--select")
.arg("F401")
.current_dir(&tempdir)
, @r###"
, @r"
success: false
exit_code: 1
----- stdout -----
@@ -1913,7 +1908,7 @@ fn checks_notebooks_in_stable() -> anyhow::Result<()> {
[*] 1 fixable with the `--fix` option.
----- stderr -----
"###);
");
Ok(())
}
@@ -1942,14 +1937,14 @@ fn nested_implicit_namespace_package() -> Result<()> {
.arg("--select")
.arg("INP")
.current_dir(&tempdir)
, @r###"
, @r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
"###);
");
insta::with_settings!({filters => vec![(r"\\", "/")]}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
@@ -1958,7 +1953,7 @@ fn nested_implicit_namespace_package() -> Result<()> {
.arg("INP")
.arg("--preview")
.current_dir(&tempdir)
, @r###"
, @r"
success: false
exit_code: 1
----- stdout -----
@@ -1966,7 +1961,7 @@ fn nested_implicit_namespace_package() -> Result<()> {
Found 1 error.
----- stderr -----
"###);
");
});
Ok(())


@@ -29,7 +29,7 @@ fn check_project_include_defaults() {
filters => TEST_FILTERS.to_vec()
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["check", "--show-files"]).current_dir(Path::new("./resources/test/fixtures/include-test")), @r###"
.args(["check", "--show-files"]).current_dir(Path::new("./resources/test/fixtures/include-test")), @r"
success: true
exit_code: 0
----- stdout -----
@@ -41,7 +41,7 @@ fn check_project_include_defaults() {
----- stderr -----
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `nested-project/pyproject.toml`:
- 'select' -> 'lint.select'
"###);
");
});
}
@@ -53,14 +53,14 @@ fn check_project_respects_direct_paths() {
filters => TEST_FILTERS.to_vec()
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["check", "--show-files", "b.py"]).current_dir(Path::new("./resources/test/fixtures/include-test")), @r###"
.args(["check", "--show-files", "b.py"]).current_dir(Path::new("./resources/test/fixtures/include-test")), @r"
success: true
exit_code: 0
----- stdout -----
[BASEPATH]/include-test/b.py
----- stderr -----
"###);
");
});
}
@@ -72,14 +72,14 @@ fn check_project_respects_subdirectory_includes() {
filters => TEST_FILTERS.to_vec()
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["check", "--show-files", "subdirectory"]).current_dir(Path::new("./resources/test/fixtures/include-test")), @r###"
.args(["check", "--show-files", "subdirectory"]).current_dir(Path::new("./resources/test/fixtures/include-test")), @r"
success: true
exit_code: 0
----- stdout -----
[BASEPATH]/include-test/subdirectory/c.py
----- stderr -----
"###);
");
});
}
@@ -91,13 +91,13 @@ fn check_project_from_project_subdirectory_respects_includes() {
filters => TEST_FILTERS.to_vec()
}, {
assert_cmd_snapshot!(Command::new(get_cargo_bin(BIN_NAME))
.args(["check", "--show-files"]).current_dir(Path::new("./resources/test/fixtures/include-test/subdirectory")), @r###"
.args(["check", "--show-files"]).current_dir(Path::new("./resources/test/fixtures/include-test/subdirectory")), @r"
success: true
exit_code: 0
----- stdout -----
[BASEPATH]/include-test/subdirectory/c.py
----- stderr -----
"###);
");
});
}


@@ -5,6 +5,7 @@ info:
args:
- rule
- F401
snapshot_kind: text
---
success: true
exit_code: 0


@@ -6,6 +6,7 @@ info:
- check
- "--show-settings"
- unformatted.py
snapshot_kind: text
---
success: true
exit_code: 0


@@ -16,14 +16,14 @@ const VERSION_FILTER: [(&str, &str); 1] = [(
fn version_basics() {
insta::with_settings!({filters => VERSION_FILTER.to_vec()}, {
assert_cmd_snapshot!(
Command::new(get_cargo_bin(BIN_NAME)).arg("version"), @r###"
Command::new(get_cargo_bin(BIN_NAME)).arg("version"), @r"
success: true
exit_code: 0
----- stdout -----
ruff [VERSION]
----- stderr -----
"###
"
);
});
}
@@ -42,14 +42,14 @@ fn config_option_allowed_but_ignored() -> Result<()> {
.arg("version")
.arg("--config")
.arg(&ruff_dot_toml)
.args(["--config", "lint.isort.extra-standard-library = ['foo', 'bar']"]), @r###"
.args(["--config", "lint.isort.extra-standard-library = ['foo', 'bar']"]), @r"
success: true
exit_code: 0
----- stdout -----
ruff [VERSION]
----- stderr -----
"###
"
);
});
Ok(())
@@ -60,7 +60,7 @@ fn config_option_ignored_but_validated() {
assert_cmd_snapshot!(
Command::new(get_cargo_bin(BIN_NAME))
.arg("version")
.args(["--config", "foo = bar"]), @r###"
.args(["--config", "foo = bar"]), @r#"
success: false
exit_code: 2
----- stdout -----
@@ -82,7 +82,7 @@ fn config_option_ignored_but_validated() {
expected `"`, `'`
For more information, try '--help'.
"###
"#
);
});
}
@@ -92,14 +92,14 @@ fn config_option_ignored_but_validated() {
fn isolated_option_allowed() {
insta::with_settings!({filters => VERSION_FILTER.to_vec()}, {
assert_cmd_snapshot!(
Command::new(get_cargo_bin(BIN_NAME)).arg("version").arg("--isolated"), @r###"
Command::new(get_cargo_bin(BIN_NAME)).arg("version").arg("--isolated"), @r"
success: true
exit_code: 0
----- stdout -----
ruff [VERSION]
----- stderr -----
"###
"
);
});
}


@@ -2,7 +2,7 @@
use rayon::ThreadPoolBuilder;
use red_knot_python_semantic::PythonVersion;
use red_knot_workspace::db::RootDatabase;
use red_knot_workspace::db::{Db, RootDatabase};
use red_knot_workspace::watch::{ChangeEvent, ChangedKind};
use red_knot_workspace::workspace::settings::Configuration;
use red_knot_workspace::workspace::WorkspaceMetadata;
@@ -75,10 +75,10 @@ fn setup_case() -> Case {
.unwrap();
let src_root = SystemPath::new("/src");
let metadata = WorkspaceMetadata::from_path(
let metadata = WorkspaceMetadata::discover(
src_root,
&system,
Some(Configuration {
Some(&Configuration {
target_version: Some(PythonVersion::PY312),
..Configuration::default()
}),
@@ -123,15 +123,9 @@ fn benchmark_incremental(criterion: &mut Criterion) {
fn setup() -> Case {
let case = setup_case();
let result: Vec<_> = case
.db
.check()
.unwrap()
.into_iter()
.map(|diagnostic| diagnostic.display(&case.db).to_string())
.collect();
let result: Vec<_> = case.db.check().unwrap();
assert_eq!(result, EXPECTED_DIAGNOSTICS);
assert_diagnostics(&case.db, result);
case.fs
.write_file(
@@ -174,19 +168,29 @@ fn benchmark_cold(criterion: &mut Criterion) {
setup_case,
|case| {
let Case { db, .. } = case;
let result: Vec<_> = db
.check()
.unwrap()
.into_iter()
.map(|diagnostic| diagnostic.display(db).to_string())
.collect();
let result: Vec<_> = db.check().unwrap();
assert_eq!(result, EXPECTED_DIAGNOSTICS);
assert_diagnostics(db, result);
},
BatchSize::SmallInput,
);
});
}
#[track_caller]
fn assert_diagnostics(db: &dyn Db, diagnostics: Vec<Box<dyn Diagnostic>>) {
let normalized: Vec<_> = diagnostics
.into_iter()
.map(|diagnostic| {
diagnostic
.display(db.upcast())
.to_string()
.replace('\\', "/")
})
.collect();
assert_eq!(&normalized, EXPECTED_DIAGNOSTICS);
}
criterion_group!(check_file, benchmark_cold, benchmark_incremental);
criterion_main!(check_file);
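The new `assert_diagnostics` helper above normalizes path separators before comparing against `EXPECTED_DIAGNOSTICS`, so one expected list works on Windows and Unix alike. The core transformation is just the `.replace('\\', "/")` call; a standalone sketch (the helper name here is illustrative, not from the diff):

```rust
// Mirrors the separator normalization in `assert_diagnostics`:
// backslash-separated Windows paths are rewritten with forward slashes
// so a single expected-diagnostics list matches on every platform.
fn normalize_separators(message: &str) -> String {
    message.replace('\\', "/")
}

fn main() {
    assert_eq!(
        normalize_separators(r"src\tomllib\_parser.py:7:9"),
        "src/tomllib/_parser.py:7:9"
    );
    // Already-normalized messages pass through unchanged.
    assert_eq!(normalize_separators("src/lib.rs:1:1"), "src/lib.rs:1:1");
}
```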


@@ -22,7 +22,9 @@ ruff_text_size = { workspace = true }
camino = { workspace = true }
countme = { workspace = true }
dashmap = { workspace = true }
dunce = { workspace = true }
filetime = { workspace = true }
glob = { workspace = true }
ignore = { workspace = true, optional = true }
matchit = { workspace = true }
salsa = { workspace = true }


@@ -1,9 +1,12 @@
use std::fmt::Debug;
pub use glob::PatternError;
pub use memory_fs::MemoryFileSystem;
#[cfg(feature = "os")]
pub use os::OsSystem;
use ruff_notebook::{Notebook, NotebookError};
use std::error::Error;
use std::fmt::Debug;
use std::path::{Path, PathBuf};
use std::{fmt, io};
pub use test::{DbWithTestSystem, TestSystem};
use walk_directory::WalkDirectoryBuilder;
@@ -51,6 +54,10 @@ pub trait System: Debug {
/// * `path` does not exist.
/// * A non-final component in `path` is not a directory.
/// * the symlink target path is not valid Unicode.
///
/// ## Windows long-paths
/// Unlike `std::fs::canonicalize`, this function does remove UNC prefixes if possible.
/// See [dunce::canonicalize] for more information.
fn canonicalize_path(&self, path: &SystemPath) -> Result<SystemPathBuf>;
/// Reads the content of the file at `path` into a [`String`].
@@ -126,6 +133,19 @@ pub trait System: Debug {
/// yields a single entry for that file.
fn walk_directory(&self, path: &SystemPath) -> WalkDirectoryBuilder;
/// Return an iterator that produces all the `Path`s that match the given
/// pattern using default match options, which may be absolute or relative to
/// the current working directory.
///
/// This may return an error if the pattern is invalid.
fn glob(
&self,
pattern: &str,
) -> std::result::Result<
Box<dyn Iterator<Item = std::result::Result<SystemPathBuf, GlobError>>>,
PatternError,
>;
fn as_any(&self) -> &dyn std::any::Any;
fn as_any_mut(&mut self) -> &mut dyn std::any::Any;
@@ -204,3 +224,59 @@ impl DirectoryEntry {
self.file_type
}
}
/// A glob iteration error.
///
/// This is typically returned when a particular path cannot be read
/// to determine if its contents match the glob pattern. This is possible
/// if the program lacks the appropriate permissions, for example.
#[derive(Debug)]
pub struct GlobError {
path: PathBuf,
error: GlobErrorKind,
}
impl GlobError {
/// The Path that the error corresponds to.
pub fn path(&self) -> &Path {
&self.path
}
pub fn kind(&self) -> &GlobErrorKind {
&self.error
}
}
impl Error for GlobError {}
impl fmt::Display for GlobError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match &self.error {
GlobErrorKind::IOError(error) => {
write!(
f,
"attempting to read `{}` resulted in an error: {error}",
self.path.display(),
)
}
GlobErrorKind::NonUtf8Path => {
write!(f, "`{}` is not a valid UTF-8 path", self.path.display(),)
}
}
}
}
impl From<glob::GlobError> for GlobError {
fn from(value: glob::GlobError) -> Self {
Self {
path: value.path().to_path_buf(),
error: GlobErrorKind::IOError(value.into_error()),
}
}
}
#[derive(Debug)]
pub enum GlobErrorKind {
IOError(io::Error),
NonUtf8Path,
}
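For reference, the `Display` impl added here renders per-path failures with the path inline. A self-contained sketch that mirrors the diff's `GlobError` formatting, trimmed to the non-UTF-8 variant (the I/O variant is elided):

```rust
use std::fmt;
use std::path::PathBuf;

// Minimal mirror of the `GlobError` added in this diff: the failing
// path is carried alongside the error kind so messages can name it.
#[derive(Debug)]
struct GlobError {
    path: PathBuf,
}

impl fmt::Display for GlobError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        // Same format string as the diff's `NonUtf8Path` arm.
        write!(f, "`{}` is not a valid UTF-8 path", self.path.display())
    }
}

fn main() {
    let err = GlobError {
        path: PathBuf::from("/src/a/bar.py"),
    };
    assert_eq!(err.to_string(), "`/src/a/bar.py` is not a valid UTF-8 path");
}
```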


@@ -9,13 +9,13 @@ use rustc_hash::FxHashMap;
use ruff_notebook::{Notebook, NotebookError};
use crate::system::{
walk_directory, DirectoryEntry, FileType, Metadata, Result, SystemPath, SystemPathBuf,
SystemVirtualPath, SystemVirtualPathBuf,
walk_directory, DirectoryEntry, FileType, GlobError, GlobErrorKind, Metadata, Result,
SystemPath, SystemPathBuf, SystemVirtualPath, SystemVirtualPathBuf,
};
use super::walk_directory::{
DirectoryWalker, WalkDirectoryBuilder, WalkDirectoryConfiguration, WalkDirectoryVisitor,
WalkDirectoryVisitorBuilder, WalkState,
DirectoryWalker, ErrorKind, WalkDirectoryBuilder, WalkDirectoryConfiguration,
WalkDirectoryVisitor, WalkDirectoryVisitorBuilder, WalkState,
};
/// File system that stores all content in memory.
@@ -94,8 +94,11 @@ impl MemoryFileSystem {
metadata(self, path.as_ref())
}
pub fn canonicalize(&self, path: impl AsRef<SystemPath>) -> SystemPathBuf {
SystemPathBuf::from_utf8_path_buf(self.normalize_path(path))
pub fn canonicalize(&self, path: impl AsRef<SystemPath>) -> Result<SystemPathBuf> {
let path = path.as_ref();
// Mimic the behavior of a real FS where canonicalize errors if the `path` doesn't exist
self.metadata(path)?;
Ok(SystemPathBuf::from_utf8_path_buf(self.normalize_path(path)))
}
pub fn is_file(&self, path: impl AsRef<SystemPath>) -> bool {
@@ -230,6 +233,46 @@ impl MemoryFileSystem {
WalkDirectoryBuilder::new(path, MemoryWalker { fs: self.clone() })
}
pub fn glob(
&self,
pattern: &str,
) -> std::result::Result<
impl Iterator<Item = std::result::Result<SystemPathBuf, GlobError>>,
glob::PatternError,
> {
// Very naive implementation that iterates over all files and collects all that match the given pattern.
let normalized = self.normalize_path(pattern);
let pattern = glob::Pattern::new(normalized.as_str())?;
let matches = std::sync::Mutex::new(Vec::new());
self.walk_directory("/").standard_filters(false).run(|| {
Box::new(|entry| {
match entry {
Ok(entry) => {
if pattern.matches_path(entry.path().as_std_path()) {
matches.lock().unwrap().push(Ok(entry.into_path()));
}
}
Err(error) => match error.kind {
ErrorKind::Loop { .. } => {
unreachable!("Loops aren't possible in the memory file system because it doesn't support symlinks.")
}
ErrorKind::Io { err, path } => {
matches.lock().unwrap().push(Err(GlobError { path: path.expect("walk_directory to always set a path").into_std_path_buf(), error: GlobErrorKind::IOError(err)}));
}
ErrorKind::NonUtf8Path { path } => {
matches.lock().unwrap().push(Err(GlobError { path, error: GlobErrorKind::NonUtf8Path}));
}
},
}
WalkState::Continue
})
});
Ok(matches.into_inner().unwrap().into_iter())
}
pub fn remove_file(&self, path: impl AsRef<SystemPath>) -> Result<()> {
fn remove_file(fs: &MemoryFileSystem, path: &SystemPath) -> Result<()> {
let mut by_path = fs.inner.by_path.write().unwrap();
@@ -629,7 +672,7 @@ impl DirectoryWalker for MemoryWalker {
visitor.visit(Err(walk_directory::Error {
depth: Some(depth),
kind: walk_directory::ErrorKind::Io {
path: None,
path: Some(path.clone()),
err: error,
},
}));
@@ -676,6 +719,7 @@ fn now() -> FileTime {
#[cfg(test)]
mod tests {
use std::io::ErrorKind;
use std::time::Duration;
use crate::system::walk_directory::tests::DirectoryEntryToString;
@@ -1149,4 +1193,26 @@ mod tests {
Ok(())
}
#[test]
fn glob() -> std::io::Result<()> {
let root = SystemPath::new("/src");
let fs = MemoryFileSystem::with_current_directory(root);
fs.write_files([
(root.join("foo.py"), "print('foo')"),
(root.join("a/bar.py"), "print('bar')"),
(root.join("a/.baz.py"), "print('baz')"),
])?;
let mut matches = fs.glob("/src/a/**").unwrap().flatten().collect::<Vec<_>>();
matches.sort_unstable();
assert_eq!(matches, vec![root.join("a/.baz.py"), root.join("a/bar.py")]);
let matches = fs.glob("**/bar.py").unwrap().flatten().collect::<Vec<_>>();
assert_eq!(matches, vec![root.join("a/bar.py")]);
Ok(())
}
}

View File

@@ -6,8 +6,8 @@ use filetime::FileTime;
use ruff_notebook::{Notebook, NotebookError};
use crate::system::{
DirectoryEntry, FileType, Metadata, Result, System, SystemPath, SystemPathBuf,
SystemVirtualPath,
DirectoryEntry, FileType, GlobError, GlobErrorKind, Metadata, Result, System, SystemPath,
SystemPathBuf, SystemVirtualPath,
};
use super::walk_directory::{
@@ -64,9 +64,11 @@ impl System for OsSystem {
}
fn canonicalize_path(&self, path: &SystemPath) -> Result<SystemPathBuf> {
path.as_utf8_path()
.canonicalize_utf8()
.map(SystemPathBuf::from_utf8_path_buf)
path.as_utf8_path().canonicalize_utf8().map(|path| {
SystemPathBuf::from_utf8_path_buf(path)
.simplified()
.to_path_buf()
})
}
fn read_to_string(&self, path: &SystemPath) -> Result<String> {
@@ -104,6 +106,30 @@ impl System for OsSystem {
WalkDirectoryBuilder::new(path, OsDirectoryWalker {})
}
fn glob(
&self,
pattern: &str,
) -> std::result::Result<
Box<dyn Iterator<Item = std::result::Result<SystemPathBuf, GlobError>>>,
glob::PatternError,
> {
glob::glob(pattern).map(|inner| {
let iterator = inner.map(|result| {
let path = result?;
let system_path = SystemPathBuf::from_path_buf(path).map_err(|path| GlobError {
path,
error: GlobErrorKind::NonUtf8Path,
})?;
Ok(system_path)
});
let boxed: Box<dyn Iterator<Item = _>> = Box::new(iterator);
boxed
})
}
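The `from_path_buf` conversion used here turns a non-UTF-8 path into a recoverable error rather than a panic. A minimal std-only sketch of that shape, with a hypothetical `PathError` playing the role of `GlobErrorKind::NonUtf8Path`:

```rust
use std::path::PathBuf;

// Hypothetical error type mirroring the NonUtf8Path case: callers get the
// offending path back instead of a panic.
#[derive(Debug, PartialEq)]
enum PathError {
    NonUtf8(PathBuf),
}

fn to_utf8(path: PathBuf) -> Result<String, PathError> {
    // `into_string` fails (returning the original `OsString`) when the
    // path contains non-UTF-8 bytes.
    match path.into_os_string().into_string() {
        Ok(utf8) => Ok(utf8),
        Err(os) => Err(PathError::NonUtf8(PathBuf::from(os))),
    }
}

fn main() {
    assert_eq!(to_utf8(PathBuf::from("src/main.rs")).unwrap(), "src/main.rs");
    println!("ok");
}
```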
fn as_any(&self) -> &dyn Any {
self
}

View File

@@ -1,12 +1,8 @@
// TODO support untitled files for the LSP use case. Wrap a `str` and `String`
// The main question is how `as_std_path` would work for untitled files, which can only exist in the LSP case,
// but there's no compile-time guarantee that an [`OsSystem`] never gets an untitled-file path.
use camino::{Utf8Path, Utf8PathBuf};
use std::borrow::Borrow;
use std::fmt::Formatter;
use std::ops::Deref;
use std::path::{Path, StripPrefixError};
use std::path::{Path, PathBuf, StripPrefixError};
/// A slice of a path on [`System`](super::System) (akin to [`str`]).
///
@@ -23,6 +19,32 @@ impl SystemPath {
unsafe { &*(path as *const Utf8Path as *const SystemPath) }
}
/// Takes any path, and when possible, converts Windows UNC paths to regular paths.
/// If the path can't be converted, it's returned unmodified.
///
/// On non-Windows platforms this is a no-op.
///
/// `\\?\C:\Windows` will be converted to `C:\Windows`,
/// but `\\?\C:\COM` will be left as-is (due to a reserved filename).
///
/// Use this to pass arbitrary paths to programs that may not be UNC-aware.
///
/// It's generally safe to pass UNC paths to legacy programs, because
/// these paths contain a reserved prefix, so they will gracefully fail
/// if used with legacy APIs that don't support UNC.
///
/// This function does not perform any I/O.
///
/// Currently paths with unpaired surrogates aren't converted even if they
/// could be, due to limitations of Rust's `OsStr` API.
///
/// To check if a path remained as UNC, use `path.as_os_str().as_encoded_bytes().starts_with(b"\\\\")`.
#[inline]
pub fn simplified(&self) -> &SystemPath {
// SAFETY: `simplified` only trims the path, so the returned path must still be a valid UTF-8 path.
SystemPath::from_std_path(dunce::simplified(self.as_std_path())).unwrap()
}
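The happy path of what `dunce::simplified` does can be illustrated with a string-level sketch: strip the `\\?\` verbatim prefix from a plain drive path, otherwise leave the input untouched. This is an assumption-laden illustration only; the real crate also refuses to simplify reserved filenames (`COM1` etc.), over-long paths, and `\\?\UNC\` server paths.

```rust
// Toy version of UNC simplification: only handles the `\\?\C:\...` shape.
fn simplified(path: &str) -> &str {
    match path.strip_prefix(r"\\?\") {
        // Only plain drive paths (second byte is `:`) are safe to shorten;
        // `\\?\UNC\server\share` would need different handling.
        Some(rest) if rest.as_bytes().get(1) == Some(&b':') => rest,
        _ => path,
    }
}

fn main() {
    assert_eq!(simplified(r"\\?\C:\Windows"), r"C:\Windows");
    assert_eq!(simplified(r"C:\Windows"), r"C:\Windows"); // already simple
    assert_eq!(simplified(r"/usr/bin"), r"/usr/bin");     // no-op off Windows
    println!("ok");
}
```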
/// Extracts the file extension, if possible.
///
/// The extension is:
@@ -123,6 +145,39 @@ impl SystemPath {
self.0.parent().map(SystemPath::new)
}
/// Produces an iterator over `SystemPath` and its ancestors.
///
/// The iterator will yield the `SystemPath` that is returned if the [`parent`] method is used zero
/// or more times. That means the iterator will yield `&self`, `&self.parent().unwrap()`,
/// `&self.parent().unwrap().parent().unwrap()` and so on. If the [`parent`] method returns
/// [`None`], the iterator will do likewise. The iterator will always yield at least one value,
/// namely `&self`.
///
/// # Examples
///
/// ```
/// use ruff_db::system::SystemPath;
///
/// let mut ancestors = SystemPath::new("/foo/bar").ancestors();
/// assert_eq!(ancestors.next(), Some(SystemPath::new("/foo/bar")));
/// assert_eq!(ancestors.next(), Some(SystemPath::new("/foo")));
/// assert_eq!(ancestors.next(), Some(SystemPath::new("/")));
/// assert_eq!(ancestors.next(), None);
///
/// let mut ancestors = SystemPath::new("../foo/bar").ancestors();
/// assert_eq!(ancestors.next(), Some(SystemPath::new("../foo/bar")));
/// assert_eq!(ancestors.next(), Some(SystemPath::new("../foo")));
/// assert_eq!(ancestors.next(), Some(SystemPath::new("..")));
/// assert_eq!(ancestors.next(), Some(SystemPath::new("")));
/// assert_eq!(ancestors.next(), None);
/// ```
///
/// [`parent`]: SystemPath::parent
#[inline]
pub fn ancestors(&self) -> impl Iterator<Item = &SystemPath> {
self.0.ancestors().map(SystemPath::new)
}
/// Produces an iterator over the [`camino::Utf8Component`]s of the path.
///
/// When parsing the path, there is a small amount of normalization:
@@ -473,6 +528,10 @@ impl SystemPathBuf {
self.0
}
pub fn into_std_path_buf(self) -> PathBuf {
self.0.into_std_path_buf()
}
#[inline]
pub fn as_path(&self) -> &SystemPath {
SystemPath::new(&self.0)
@@ -491,6 +550,12 @@ impl From<&str> for SystemPathBuf {
}
}
impl From<String> for SystemPathBuf {
fn from(value: String) -> Self {
SystemPathBuf::from_utf8_path_buf(Utf8PathBuf::from(value))
}
}
impl Default for SystemPathBuf {
fn default() -> Self {
Self::new()

View File

@@ -1,14 +1,14 @@
use glob::PatternError;
use ruff_notebook::{Notebook, NotebookError};
use ruff_python_trivia::textwrap;
use std::any::Any;
use std::panic::RefUnwindSafe;
use std::sync::Arc;
use ruff_notebook::{Notebook, NotebookError};
use ruff_python_trivia::textwrap;
use crate::files::File;
use crate::system::{
DirectoryEntry, MemoryFileSystem, Metadata, Result, System, SystemPath, SystemPathBuf,
SystemVirtualPath,
DirectoryEntry, GlobError, MemoryFileSystem, Metadata, Result, System, SystemPath,
SystemPathBuf, SystemVirtualPath,
};
use crate::Db;
@@ -21,7 +21,7 @@ use super::walk_directory::WalkDirectoryBuilder;
///
/// ## Warning
/// Don't use this system for production code. It's intended for testing only.
#[derive(Default, Debug)]
#[derive(Default, Debug, Clone)]
pub struct TestSystem {
inner: TestSystemInner,
}
@@ -121,6 +121,22 @@ impl System for TestSystem {
}
}
fn glob(
&self,
pattern: &str,
) -> std::result::Result<
Box<dyn Iterator<Item = std::result::Result<SystemPathBuf, GlobError>>>,
PatternError,
> {
match &self.inner {
TestSystemInner::Stub(fs) => {
let iterator = fs.glob(pattern)?;
Ok(Box::new(iterator))
}
TestSystemInner::System(system) => system.glob(pattern),
}
}
fn as_any(&self) -> &dyn Any {
self
}
@@ -142,7 +158,7 @@ impl System for TestSystem {
fn canonicalize_path(&self, path: &SystemPath) -> Result<SystemPathBuf> {
match &self.inner {
TestSystemInner::System(fs) => fs.canonicalize_path(path),
TestSystemInner::Stub(fs) => Ok(fs.canonicalize(path)),
TestSystemInner::Stub(fs) => fs.canonicalize(path),
}
}
}
@@ -229,7 +245,7 @@ pub trait DbWithTestSystem: Db + Sized {
}
}
#[derive(Debug)]
#[derive(Debug, Clone)]
enum TestSystemInner {
Stub(MemoryFileSystem),
System(Arc<dyn System + RefUnwindSafe + Send + Sync>),

View File

@@ -213,6 +213,7 @@ impl Default for LoggingBuilder {
}
}
#[must_use = "Dropping the guard unregisters the tracing subscriber."]
pub struct LoggingGuard {
_guard: tracing::subscriber::DefaultGuard,
}

View File

@@ -418,7 +418,7 @@ pub(crate) mod tests {
#[test]
fn filesystem_debug_implementation_alternate() {
assert_snapshot!(format!("{:#?}", mock_typeshed()), @r###"
assert_snapshot!(format!("{:#?}", mock_typeshed()), @r#"
VendoredFileSystem {
inner_mutex_poisoned: false,
paths: [
@@ -454,7 +454,7 @@ pub(crate) mod tests {
},
},
}
"###);
"#);
}
fn test_directory(dirname: &str) {

View File

@@ -13,6 +13,12 @@ datetime.datetime.min.replace(hour=...)
datetime.datetime.max.replace(tzinfo=...)
datetime.datetime.min.replace(tzinfo=...)
datetime.datetime.max.time()
datetime.datetime.min.time()
datetime.datetime.max.time(foo=...)
datetime.datetime.min.time(foo=...)
from datetime import datetime
@@ -28,3 +34,9 @@ datetime.min.replace(hour=...)
# No error
datetime.max.replace(tzinfo=...)
datetime.min.replace(tzinfo=...)
datetime.max.time()
datetime.min.time()
datetime.max.time(foo=...)
datetime.min.time(foo=...)

View File

@@ -235,3 +235,15 @@ if typing.TYPE_CHECKING:
def contains_meaningful_ellipsis() -> list[int]:
"""Allow this in a TYPE_CHECKING block."""
...
# https://github.com/astral-sh/ruff/issues/12616
class PotentialDocstring1:
pass
"""
Lorem ipsum dolor sit amet.
"""
class PotentialDocstring2:
...
'Lorem ipsum dolor sit amet.'

View File

@@ -39,9 +39,38 @@ async def f4(**kwargs: int | int | float) -> None:
...
def f5(
arg: Union[ # comment
float, # another
complex, int]
) -> None:
...
def f6(
arg: (
int | # comment
float | # another
complex
)
) -> None:
...
class Foo:
def good(self, arg: int) -> None:
...
def bad(self, arg: int | float | complex) -> None:
...
def bad2(self, arg: int | Union[float, complex]) -> None:
...
def bad3(self, arg: Union[Union[float, complex], int]) -> None:
...
def bad4(self, arg: Union[float | complex, int]) -> None:
...
def bad5(self, arg: int | (float | complex)) -> None:
...

View File

@@ -32,8 +32,29 @@ def f3(arg1: int, *args: Union[int | int | float]) -> None: ... # PYI041
async def f4(**kwargs: int | int | float) -> None: ... # PYI041
def f5(
arg: Union[ # comment
float, # another
complex, int]
) -> None: ... # PYI041
def f6(
arg: (
int | # comment
float | # another
complex
)
) -> None: ... # PYI041
class Foo:
def good(self, arg: int) -> None: ...
def bad(self, arg: int | float | complex) -> None: ... # PYI041
def bad2(self, arg: int | Union[float, complex]) -> None: ... # PYI041
def bad3(self, arg: Union[Union[float, complex], int]) -> None: ... # PYI041
def bad4(self, arg: Union[float | complex, int]) -> None: ... # PYI041
def bad5(self, arg: int | (float | complex)) -> None: ... # PYI041

View File

@@ -0,0 +1,60 @@
from typing import Literal
def func1(arg1: Literal[None]):
...
def func2(arg1: Literal[None] | int):
...
def func3() -> Literal[None]:
...
def func4(arg1: Literal[int, None, float]):
...
def func5(arg1: Literal[None, None]):
...
def func6(arg1: Literal[
"hello",
None # Comment 1
, "world"
]):
...
def func7(arg1: Literal[
None # Comment 1
]):
...
# OK
def good_func(arg1: Literal[int] | None):
...
# From flake8-pyi
Literal[None] # Y061 None inside "Literal[]" expression. Replace with "None"
Literal[True, None] # Y061 None inside "Literal[]" expression. Replace with "Literal[True] | None"
###
# The following rules here are slightly subtle,
# but make sense when it comes to giving the best suggestions to users of flake8-pyi.
###
# If Y061 and Y062 both apply, but all the duplicate members are None,
# only emit Y061...
Literal[None, None] # Y061 None inside "Literal[]" expression. Replace with "None"
Literal[1, None, "foo", None] # Y061 None inside "Literal[]" expression. Replace with "Literal[1, 'foo'] | None"
# ... but if Y061 and Y062 both apply
# and there are no None members in the Literal[] slice,
# only emit Y062:
Literal[None, True, None, True] # Y062 Duplicate "Literal[]" member "True"

View File

@@ -0,0 +1,37 @@
from typing import Literal
def func1(arg1: Literal[None]): ...
def func2(arg1: Literal[None] | int): ...
def func3() -> Literal[None]: ...
def func4(arg1: Literal[int, None, float]): ...
def func5(arg1: Literal[None, None]): ...
def func6(arg1: Literal[
"hello",
None # Comment 1
, "world"
]): ...
def func7(arg1: Literal[
None # Comment 1
]): ...
# OK
def good_func(arg1: Literal[int] | None): ...
# From flake8-pyi
Literal[None] # PYI061 None inside "Literal[]" expression. Replace with "None"
Literal[True, None] # PYI061 None inside "Literal[]" expression. Replace with "Literal[True] | None"

View File

@@ -65,3 +65,11 @@ foo == "a" or foo == "b" or "c" != bar and "d" != bar # Multiple targets
foo == "a" or ("c" != bar and "d" != bar) or foo == "b" # Multiple targets
foo == "a" and "c" != bar or foo == "b" and "d" != bar # Multiple targets
foo == 1 or foo == True # Different types, same hashed value
foo == 1 or foo == 1.0 # Different types, same hashed value
foo == False or foo == 0 # Different types, same hashed value
foo == 0.0 or foo == 0j # Different types, same hashed value

View File

@@ -100,3 +100,5 @@ class Thing:
blah = lambda: {"a": 1}.__delitem__("a") # OK
blah = dict[{"a": 1}.__delitem__("a")] # OK
"abc".__contains__("a")

View File

@@ -50,3 +50,10 @@ def func() -> Generator[str, None, None]:
async def func() -> AsyncGenerator[str, None]:
yield "hello"
async def func() -> AsyncGenerator[ # type: ignore
str,
None
]:
yield "hello"

View File

@@ -0,0 +1,33 @@
from typing import Literal
def func1(arg1: Literal[True, False]):
...
def func2(arg1: Literal[True, False, True]):
...
def func3() -> Literal[True, False]:
...
def func4(arg1: Literal[True, False] | bool):
...
def func5(arg1: Literal[False, True]):
...
def func6(arg1: Literal[True, False, "hello", "world"]):
...
# ok
def good_func1(arg1: bool):
...
def good_func2(arg1: Literal[True]):
...

View File

@@ -0,0 +1,20 @@
from typing import Literal
def func1(arg1: Literal[True, False]): ...
def func2(arg1: Literal[True, False, True]): ...
def func3() -> Literal[True, False]: ...
def func4(arg1: Literal[True, False] | bool): ...
def func5(arg1: Literal[False, True]): ...
def func6(arg1: Literal[True, False, "hello", "world"]): ...
# ok
def good_func1(arg1: bool): ...
def good_func2(arg1: Literal[True]): ...

View File

@@ -0,0 +1,55 @@
import re
import regex
# Errors
re.compile('single free-spacing', flags=re.X)
re.findall('si\ngle')
re.finditer("dou\ble")
re.fullmatch('''t\riple single''')
re.match("""\triple double""")
re.search('two', 'args')
re.split("raw", r'second')
re.sub(u'''nicode''', u"f(?i)rst")
re.subn(b"""ytes are""", f"\u006e")
regex.compile('single free-spacing', flags=regex.X)
regex.findall('si\ngle')
regex.finditer("dou\ble")
regex.fullmatch('''t\riple single''')
regex.match("""\triple double""")
regex.search('two', 'args')
regex.split("raw", r'second')
regex.sub(u'''nicode''', u"f(?i)rst")
regex.subn(b"""ytes are""", f"\u006e")
regex.template("""(?m)
(?:ulti)?
(?=(?<!(?<=(?!l)))
l(?i:ne)
""", flags = regex.X)
# No errors
re.compile(R'uppercase')
re.findall(not_literal)
re.finditer(0, literal_but_not_string)
re.fullmatch() # no first argument
re.match('string' f'''concatenation''')
re.search(R"raw" r'concatenation')
re.split(rf"multiple", f"""lags""")
re.sub(FR'ee', '''as in free speech''')
re.subn(br"""eak your machine with rm -""", rf"""/""")
regex.compile(R'uppercase')
regex.findall(not_literal)
regex.finditer(0, literal_but_not_string)
regex.fullmatch() # no first argument
regex.match('string' f'''concatenation''')
regex.search(R"raw" r'concatenation')
regex.split(rf"multiple", f"""lags""")
regex.sub(FR'ee', '''as in free speech''')
regex.subn(br"""eak your machine with rm -""", rf"""/""")
regex.splititer(both, non_literal)
regex.subf(f, lambda _: r'means', '"format"')
regex.subfn(fn, f'''a$1n't''', lambda: "'function'")

View File

@@ -0,0 +1,93 @@
import re
re.compile(
'implicit'
'concatenation'
)
re.findall(
r'''
multiline
'''
"""
concatenation
"""
)
re.finditer(
f'(?P<{group}>Dynamic'
r'\s+group'
'name)'
)
re.fullmatch(
u'n'r'''eadable'''
f'much?'
)
re.match(
b'reak'
br'eak'
)
re.search(
r''u''
'''okay?'''
)
re.split(''U"""w"""U'')
re.sub(
"I''m o"
'utta ideas'
)
re.subn("()"r' am I'"??")
import regex
regex.compile(
'implicit'
'concatenation'
)
regex.findall(
r'''
multiline
'''
"""
concatenation
"""
)
regex.finditer(
f'(?P<{group}>Dynamic'
r'\s+group'
'name)'
)
regex.fullmatch(
u'n'r'''eadable'''
f'much?'
)
regex.match(
b'reak'
br'eak'
)
regex.search(
r''u''
'''okay?'''
)
regex.split(''U"""w"""U'')
regex.sub(
"I''m o"
'utta ideas'
)
regex.subn("()"r' am I'"??")
regex.template(
r'''kitty says'''
r""r''r""r'aw'r""
)
regex.splititer(
r'r+r*r?'
)
regex.subf(
rb"haha"
br"ust go"
br''br""br''
)
regex.subfn(br'I\s\nee*d\s[O0o]me\x20\Qoffe\E, ' br'b')

View File

@@ -0,0 +1,17 @@
__version__ = (0, 1, 0)
tuple(map(int, __version__.split(".")))
list(map(int, __version__.split(".")))
# `sep` passed as keyword argument
for part in map(int, __version__.split(sep=".")):
print(part)
# Comma
tuple(map(int, __version__.split(",")))
list(map(int, __version__.split(",")))
# Multiple arguments
tuple(map(int, __version__.split(".", 1)))
list(map(int, __version__.split(".", maxsplit=2)))

View File

@@ -0,0 +1,27 @@
from foo import __version__
import bar
tuple(map(int, __version__.split(".")))
list(map(int, __version__.split(".")))
tuple(map(int, bar.__version__.split(".")))
list(map(int, bar.__version__.split(".")))
# `sep` passed as keyword argument
for part in map(int, bar.__version__.split(sep=".")):
print(part)
for part in map(int, __version__.split(sep=".")):
print(part)
# Comma
tuple(map(int, __version__.split(",")))
list(map(int, __version__.split(",")))
tuple(map(int, bar.__version__.split(",")))
list(map(int, bar.__version__.split(",")))
# Multiple arguments
tuple(map(int, __version__.split(",", 1)))
list(map(int, __version__.split(",", maxsplit = 2)))
tuple(map(int, bar.__version__.split(",", 1)))
list(map(int, bar.__version__.split(",", maxsplit = 2)))

View File

@@ -104,9 +104,21 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
}
// Ex) Literal[...]
if checker.enabled(Rule::DuplicateLiteralMember) {
if checker.any_enabled(&[
Rule::DuplicateLiteralMember,
Rule::RedundantBoolLiteral,
Rule::RedundantNoneLiteral,
]) {
if !checker.semantic.in_nested_literal() {
flake8_pyi::rules::duplicate_literal_member(checker, expr);
if checker.enabled(Rule::DuplicateLiteralMember) {
flake8_pyi::rules::duplicate_literal_member(checker, expr);
}
if checker.enabled(Rule::RedundantBoolLiteral) {
ruff::rules::redundant_bool_literal(checker, expr);
}
if checker.enabled(Rule::RedundantNoneLiteral) {
flake8_pyi::rules::redundant_none_literal(checker, expr);
}
}
}
@@ -1043,6 +1055,12 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.enabled(Rule::UnsafeMarkupUse) {
ruff::rules::unsafe_markup_call(checker, call);
}
if checker.enabled(Rule::MapIntVersionParsing) {
ruff::rules::map_int_version_parsing(checker, call);
}
if checker.enabled(Rule::UnrawRePattern) {
ruff::rules::unraw_re_pattern(checker, call);
}
}
Expr::Dict(dict) => {
if checker.any_enabled(&[

View File

@@ -786,6 +786,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Flake8Pyi, "058") => (RuleGroup::Stable, rules::flake8_pyi::rules::GeneratorReturnFromIterMethod),
(Flake8Pyi, "057") => (RuleGroup::Stable, rules::flake8_pyi::rules::ByteStringUsage),
(Flake8Pyi, "059") => (RuleGroup::Preview, rules::flake8_pyi::rules::GenericNotLastBaseClass),
(Flake8Pyi, "061") => (RuleGroup::Preview, rules::flake8_pyi::rules::RedundantNoneLiteral),
(Flake8Pyi, "062") => (RuleGroup::Stable, rules::flake8_pyi::rules::DuplicateLiteralMember),
(Flake8Pyi, "063") => (RuleGroup::Preview, rules::flake8_pyi::rules::PrePep570PositionalArgument),
(Flake8Pyi, "064") => (RuleGroup::Preview, rules::flake8_pyi::rules::RedundantFinalLiteral),
@@ -969,6 +970,9 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Ruff, "034") => (RuleGroup::Preview, rules::ruff::rules::UselessIfElse),
(Ruff, "035") => (RuleGroup::Preview, rules::ruff::rules::UnsafeMarkupUse),
(Ruff, "036") => (RuleGroup::Preview, rules::ruff::rules::NoneNotAtEndOfUnion),
(Ruff, "038") => (RuleGroup::Preview, rules::ruff::rules::RedundantBoolLiteral),
(Ruff, "048") => (RuleGroup::Preview, rules::ruff::rules::MapIntVersionParsing),
(Ruff, "039") => (RuleGroup::Preview, rules::ruff::rules::UnrawRePattern),
(Ruff, "100") => (RuleGroup::Stable, rules::ruff::rules::UnusedNOQA),
(Ruff, "101") => (RuleGroup::Stable, rules::ruff::rules::RedirectedNOQA),

View File

@@ -1,5 +1,6 @@
---
source: crates/ruff_linter/src/comments/shebang.rs
expression: "ShebangDirective::try_extract(source)"
snapshot_kind: text
---
None

View File

@@ -1,5 +1,6 @@
---
source: crates/ruff_linter/src/comments/shebang.rs
expression: "ShebangDirective::try_extract(source)"
snapshot_kind: text
---
None

View File

@@ -1,6 +1,7 @@
---
source: crates/ruff_linter/src/comments/shebang.rs
expression: "ShebangDirective::try_extract(source)"
snapshot_kind: text
---
Some(
ShebangDirective(

View File

@@ -1,6 +1,7 @@
---
source: crates/ruff_linter/src/comments/shebang.rs
expression: "ShebangDirective::try_extract(source)"
snapshot_kind: text
---
Some(
ShebangDirective(

View File

@@ -1,5 +1,6 @@
---
source: crates/ruff_linter/src/comments/shebang.rs
expression: "ShebangDirective::try_extract(source)"
snapshot_kind: text
---
None

View File

@@ -22,18 +22,18 @@ use crate::fix::codemods::CodegenStylist;
use crate::line_width::{IndentWidth, LineLength, LineWidthBuilder};
use crate::Locator;
/// Return the `Fix` to use when deleting a `Stmt`.
/// Return the [`Edit`] to use when deleting a [`Stmt`].
///
/// In some cases, this is as simple as deleting the `Range` of the `Stmt`
/// In some cases, this is as simple as deleting the [`TextRange`] of the [`Stmt`]
/// itself. However, there are a few exceptions:
/// - If the `Stmt` is _not_ the terminal statement in a multi-statement line,
/// - If the [`Stmt`] is _not_ the terminal statement in a multi-statement line,
/// we need to delete up to the start of the next statement (and avoid
/// deleting any content that precedes the statement).
/// - If the `Stmt` is the terminal statement in a multi-statement line, we need
/// - If the [`Stmt`] is the terminal statement in a multi-statement line, we need
/// to avoid deleting any content that precedes the statement.
/// - If the `Stmt` has no trailing and leading content, then it's convenient to
/// - If the [`Stmt`] has no trailing and leading content, then it's convenient to
/// remove the entire start and end lines.
/// - If the `Stmt` is the last statement in its parent body, replace it with a
/// - If the [`Stmt`] is the last statement in its parent body, replace it with a
/// `pass` instead.
pub(crate) fn delete_stmt(
stmt: &Stmt,

View File

@@ -1,8 +1,8 @@
---
source: crates/ruff_linter/src/message/azure.rs
expression: content
snapshot_kind: text
---
##vso[task.logissue type=error;sourcepath=fib.py;linenumber=1;columnnumber=8;code=F401;]`os` imported but unused
##vso[task.logissue type=error;sourcepath=fib.py;linenumber=6;columnnumber=5;code=F841;]Local variable `x` is assigned to but never used
##vso[task.logissue type=error;sourcepath=undef.py;linenumber=1;columnnumber=4;code=F821;]Undefined name `a`

View File

@@ -1,6 +1,7 @@
---
source: crates/ruff_linter/src/message/azure.rs
expression: content
snapshot_kind: text
---
##vso[task.logissue type=error;sourcepath=syntax_errors.py;linenumber=1;columnnumber=15;]SyntaxError: Expected one or more symbol names after import
##vso[task.logissue type=error;sourcepath=syntax_errors.py;linenumber=3;columnnumber=12;]SyntaxError: Expected ')', found newline

View File

@@ -1,8 +1,8 @@
---
source: crates/ruff_linter/src/message/github.rs
expression: content
snapshot_kind: text
---
::error title=Ruff (F401),file=fib.py,line=1,col=8,endLine=1,endColumn=10::fib.py:1:8: F401 `os` imported but unused
::error title=Ruff (F841),file=fib.py,line=6,col=5,endLine=6,endColumn=6::fib.py:6:5: F841 Local variable `x` is assigned to but never used
::error title=Ruff (F821),file=undef.py,line=1,col=4,endLine=1,endColumn=5::undef.py:1:4: F821 Undefined name `a`

Some files were not shown because too many files have changed in this diff.