Compare commits


2 Commits

Author       SHA1        Message                                    Date
Zanie Blue   48eb23b488  Empty commit to test performance           2024-11-19 19:07:54 -06:00
Zanie Blue   7f624cd0bb  Improve CI caching of fuzz dependencies    2024-11-19 18:09:03 -06:00
5457 changed files with 101549 additions and 316227 deletions


@@ -8,7 +8,3 @@ benchmark = "bench -p ruff_benchmark --bench linter --bench formatter --"
# See: https://github.com/astral-sh/ruff/issues/11503
[target.'cfg(all(target_env="msvc", target_os = "windows"))']
rustflags = ["-C", "target-feature=+crt-static"]
[target.'wasm32-unknown-unknown']
# See https://docs.rs/getrandom/latest/getrandom/#webassembly-support
rustflags = ["--cfg", 'getrandom_backend="wasm_js"']


@@ -6,10 +6,3 @@ failure-output = "immediate-final"
fail-fast = false
status-level = "skip"
# Mark tests that take longer than 1s as slow.
# Terminate after 60s as a stop-gap measure to terminate on deadlock.
slow-timeout = { period = "1s", terminate-after = 60 }
# Show slow jobs in the final summary
final-status-level = "slow"

.gitattributes

@@ -12,16 +12,7 @@ crates/ruff_python_parser/resources/invalid/re_lexing/line_continuation_windows_
crates/ruff_python_parser/resources/invalid/re_lex_logical_token_windows_eol.py text eol=crlf
crates/ruff_python_parser/resources/invalid/re_lex_logical_token_mac_eol.py text eol=cr
crates/ruff_linter/resources/test/fixtures/ruff/RUF046_CR.py text eol=cr
crates/ruff_linter/resources/test/fixtures/ruff/RUF046_LF.py text eol=lf
crates/ruff_linter/resources/test/fixtures/pyupgrade/UP018_CR.py text eol=cr
crates/ruff_linter/resources/test/fixtures/pyupgrade/UP018_LF.py text eol=lf
crates/ruff_python_parser/resources/inline linguist-generated=true
ruff.schema.json -diff linguist-generated=true text=auto eol=lf
ty.schema.json -diff linguist-generated=true text=auto eol=lf
crates/ruff_python_ast/src/generated.rs -diff linguist-generated=true text=auto eol=lf
crates/ruff_python_formatter/src/generated.rs -diff linguist-generated=true text=auto eol=lf
ruff.schema.json linguist-generated=true text=auto eol=lf
*.md.snap linguist-language=Markdown

.github/CODEOWNERS

@@ -9,16 +9,13 @@
/crates/ruff_formatter/ @MichaReiser
/crates/ruff_python_formatter/ @MichaReiser
/crates/ruff_python_parser/ @MichaReiser @dhruvmanila
/crates/ruff_annotate_snippets/ @BurntSushi
# flake8-pyi
/crates/ruff_linter/src/rules/flake8_pyi/ @AlexWaygood
# Script for fuzzing the parser/ty etc.
/python/py-fuzzer/ @AlexWaygood
# Script for fuzzing the parser
/scripts/fuzz-parser/ @AlexWaygood
# ty
/crates/ty* @carljm @MichaReiser @AlexWaygood @sharkdp @dcreager
/crates/ruff_db/ @carljm @MichaReiser @AlexWaygood @sharkdp @dcreager
/scripts/ty_benchmark/ @carljm @MichaReiser @AlexWaygood @sharkdp @dcreager
/crates/ty_python_semantic @carljm @AlexWaygood @sharkdp @dcreager
# red-knot
/crates/red_knot* @carljm @MichaReiser @AlexWaygood @sharkdp
/crates/ruff_db/ @carljm @MichaReiser @AlexWaygood @sharkdp

.github/ISSUE_TEMPLATE.md

@@ -0,0 +1,12 @@
<!--
Thank you for taking the time to report an issue! We're glad to have you involved with Ruff.
If you're filing a bug report, please consider including the following information:
* List of keywords you searched for before creating this issue. Write them down here so that others can find this issue more easily and help provide feedback.
e.g. "RUF001", "unused variable", "Jupyter notebook"
* A minimal code snippet that reproduces the bug.
* The command you invoked (e.g., `ruff /path/to/file.py --fix`), ideally including the `--isolated` flag.
* The current Ruff settings (any relevant sections from your `pyproject.toml`).
* The current Ruff version (`ruff --version`).
-->


@@ -1,31 +0,0 @@
name: Bug report
description: Report an error or unexpected behavior
body:
- type: markdown
attributes:
value: |
Thank you for taking the time to report an issue! We're glad to have you involved with Ruff.
**Before reporting, please make sure to search through [existing issues](https://github.com/astral-sh/ruff/issues?q=is:issue+is:open+label:bug) (including [closed](https://github.com/astral-sh/ruff/issues?q=is:issue%20state:closed%20label:bug)).**
- type: textarea
attributes:
label: Summary
description: |
A clear and concise description of the bug, including a minimal reproducible example.
Be sure to include the command you invoked (e.g., `ruff check /path/to/file.py --fix`), ideally including the `--isolated` flag and
the current Ruff settings (e.g., relevant sections from your `pyproject.toml`).
If possible, try to include the [playground](https://play.ruff.rs) link that reproduces this issue.
validations:
required: true
- type: input
attributes:
label: Version
description: What version of ruff are you using? (see `ruff version`)
placeholder: e.g., ruff 0.9.3 (90589372d 2025-01-23)
validations:
required: false


@@ -1,10 +0,0 @@
name: Rule request
description: Anything related to lint rules (proposing new rules, changes to existing rules, auto-fixes, etc.)
body:
- type: textarea
attributes:
label: Summary
description: |
A clear and concise description of the relevant request. If applicable, please describe the current behavior as well.
validations:
required: true


@@ -1,18 +0,0 @@
name: Question
description: Ask a question about Ruff
labels: ["question"]
body:
- type: textarea
attributes:
label: Question
description: Describe your question in detail.
validations:
required: true
- type: input
attributes:
label: Version
description: What version of ruff are you using? (see `ruff version`)
placeholder: e.g., ruff 0.9.3 (90589372d 2025-01-23)
validations:
required: false


@@ -1,11 +0,0 @@
blank_issues_enabled: true
contact_links:
- name: Report an issue with ty
url: https://github.com/astral-sh/ty/issues/new/choose
about: Please report issues for our type checker ty in the ty repository.
- name: Documentation
url: https://docs.astral.sh/ruff
about: Please consult the documentation before creating an issue.
- name: Community
url: https://discord.com/invite/astral-sh
about: Join our Discord community to ask questions and collaborate.


@@ -1,9 +1,8 @@
<!--
Thank you for contributing to Ruff/ty! To help us out with reviewing, please consider the following:
Thank you for contributing to Ruff! To help us out with reviewing, please consider the following:
- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title? (Please prefix with `[ty]` for ty pull
requests.)
- Does this pull request include a descriptive title?
- Does this pull request include references to any relevant issues?
-->


@@ -1,11 +0,0 @@
# Configuration for the actionlint tool, which we run via pre-commit
# to verify the correctness of the syntax in our GitHub Actions workflows.
self-hosted-runner:
# Various runners we use that aren't recognized out-of-the-box by actionlint:
labels:
- depot-ubuntu-latest-8
- depot-ubuntu-22.04-16
- depot-ubuntu-22.04-32
- github-windows-2025-x86_64-8
- github-windows-2025-x86_64-16


@@ -1,7 +0,0 @@
#:schema ../ty.schema.json
# Configuration overrides for the mypy primer run
# Enable off-by-default rules.
[rules]
possibly-unresolved-reference = "warn"
unused-ignore-comment = "warn"


@@ -40,23 +40,12 @@
enabled: true,
},
packageRules: [
// Pin GitHub Actions to immutable SHAs.
{
matchDepTypes: ["action"],
pinDigests: true,
},
// Annotate GitHub Actions SHAs with a SemVer version.
{
extends: ["helpers:pinGitHubActionDigests"],
extractVersion: "^(?<version>v?\\d+\\.\\d+\\.\\d+)$",
versioning: "regex:^v?(?<major>\\d+)(\\.(?<minor>\\d+)\\.(?<patch>\\d+))?$",
},
{
// Group upload/download artifact updates, the versions are dependent
groupName: "Artifact GitHub Actions dependencies",
matchManagers: ["github-actions"],
matchDatasources: ["gitea-tags", "github-tags"],
matchPackageNames: ["actions/.*-artifact"],
matchPackagePatterns: ["actions/.*-artifact"],
description: "Weekly update of artifact-related GitHub Actions dependencies",
},
{
@@ -72,7 +61,7 @@
{
// Disable updates of `zip-rs`; intentionally pinned for now due to ownership change
// See: https://github.com/astral-sh/uv/issues/3642
matchPackageNames: ["zip"],
matchPackagePatterns: ["zip"],
matchManagers: ["cargo"],
enabled: false,
},
@@ -81,7 +70,7 @@
// with `mkdocs-material-insider`.
// See: https://squidfunk.github.io/mkdocs-material/insiders/upgrade/
matchManagers: ["pip_requirements"],
matchPackageNames: ["mkdocs-material"],
matchPackagePatterns: ["mkdocs-material"],
enabled: false,
},
{
@@ -98,15 +87,22 @@
{
groupName: "Monaco",
matchManagers: ["npm"],
matchPackageNames: ["monaco"],
matchPackagePatterns: ["monaco"],
description: "Weekly update of the Monaco editor",
},
{
groupName: "strum",
matchManagers: ["cargo"],
matchPackageNames: ["strum"],
matchPackagePatterns: ["strum"],
description: "Weekly update of strum dependencies",
}
},
{
groupName: "ESLint",
matchManagers: ["npm"],
matchPackageNames: ["eslint"],
allowedVersions: "<9",
description: "Constraint ESLint to version 8 until TypeScript-eslint supports ESLint 9", // https://github.com/typescript-eslint/typescript-eslint/issues/8211
},
],
vulnerabilityAlerts: {
commitMessageSuffix: "",


@@ -23,12 +23,10 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions: {}
env:
PACKAGE_NAME: ruff
MODULE_NAME: ruff
PYTHON_VERSION: "3.13"
PYTHON_VERSION: "3.11"
CARGO_INCREMENTAL: 0
CARGO_NET_RETRY: 10
CARGO_TERM_COLOR: always
@@ -39,27 +37,26 @@ jobs:
if: ${{ !contains(github.event.pull_request.labels.*.name, 'no-build') }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v4
with:
submodules: recursive
persist-credentials: false
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
- uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build sdist"
uses: PyO3/maturin-action@aef21716ff3dcae8a1c301d23ec3e4446972a6e3 # v1.49.1
uses: PyO3/maturin-action@v1
with:
command: sdist
args: --out dist
- name: "Test sdist"
run: |
pip install dist/"${PACKAGE_NAME}"-*.tar.gz --force-reinstall
"${MODULE_NAME}" --help
python -m "${MODULE_NAME}" --help
pip install dist/${{ env.PACKAGE_NAME }}-*.tar.gz --force-reinstall
${{ env.MODULE_NAME }} --help
python -m ${{ env.MODULE_NAME }} --help
- name: "Upload sdist"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4
with:
name: wheels-sdist
path: dist
@@ -68,23 +65,22 @@ jobs:
if: ${{ !contains(github.event.pull_request.labels.*.name, 'no-build') }}
runs-on: macos-14
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v4
with:
submodules: recursive
persist-credentials: false
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
- uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VERSION }}
architecture: x64
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels - x86_64"
uses: PyO3/maturin-action@aef21716ff3dcae8a1c301d23ec3e4446972a6e3 # v1.49.1
uses: PyO3/maturin-action@v1
with:
target: x86_64
args: --release --locked --out dist
- name: "Upload wheels"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4
with:
name: wheels-macos-x86_64
path: dist
@@ -99,7 +95,7 @@ jobs:
tar czvf $ARCHIVE_FILE $ARCHIVE_NAME
shasum -a 256 $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
- name: "Upload binary"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4
with:
name: artifacts-macos-x86_64
path: |
@@ -110,28 +106,27 @@ jobs:
if: ${{ !contains(github.event.pull_request.labels.*.name, 'no-build') }}
runs-on: macos-14
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v4
with:
submodules: recursive
persist-credentials: false
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
- uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VERSION }}
architecture: arm64
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels - aarch64"
uses: PyO3/maturin-action@aef21716ff3dcae8a1c301d23ec3e4446972a6e3 # v1.49.1
uses: PyO3/maturin-action@v1
with:
target: aarch64
args: --release --locked --out dist
- name: "Test wheel - aarch64"
run: |
pip install dist/"${PACKAGE_NAME}"-*.whl --force-reinstall
pip install dist/${{ env.PACKAGE_NAME }}-*.whl --force-reinstall
ruff --help
python -m ruff --help
- name: "Upload wheels"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4
with:
name: wheels-aarch64-apple-darwin
path: dist
@@ -146,7 +141,7 @@ jobs:
tar czvf $ARCHIVE_FILE $ARCHIVE_NAME
shasum -a 256 $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
- name: "Upload binary"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4
with:
name: artifacts-aarch64-apple-darwin
path: |
@@ -166,18 +161,17 @@ jobs:
- target: aarch64-pc-windows-msvc
arch: x64
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v4
with:
submodules: recursive
persist-credentials: false
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
- uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VERSION }}
architecture: ${{ matrix.platform.arch }}
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels"
uses: PyO3/maturin-action@aef21716ff3dcae8a1c301d23ec3e4446972a6e3 # v1.49.1
uses: PyO3/maturin-action@v1
with:
target: ${{ matrix.platform.target }}
args: --release --locked --out dist
@@ -188,11 +182,11 @@ jobs:
if: ${{ !startsWith(matrix.platform.target, 'aarch64') }}
shell: bash
run: |
python -m pip install dist/"${PACKAGE_NAME}"-*.whl --force-reinstall
"${MODULE_NAME}" --help
python -m "${MODULE_NAME}" --help
python -m pip install dist/${{ env.PACKAGE_NAME }}-*.whl --force-reinstall
${{ env.MODULE_NAME }} --help
python -m ${{ env.MODULE_NAME }} --help
- name: "Upload wheels"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4
with:
name: wheels-${{ matrix.platform.target }}
path: dist
@@ -203,7 +197,7 @@ jobs:
7z a $ARCHIVE_FILE ./target/${{ matrix.platform.target }}/release/ruff.exe
sha256sum $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
- name: "Upload binary"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4
with:
name: artifacts-${{ matrix.platform.target }}
path: |
@@ -219,18 +213,17 @@ jobs:
- x86_64-unknown-linux-gnu
- i686-unknown-linux-gnu
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v4
with:
submodules: recursive
persist-credentials: false
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
- uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VERSION }}
architecture: x64
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels"
uses: PyO3/maturin-action@aef21716ff3dcae8a1c301d23ec3e4446972a6e3 # v1.49.1
uses: PyO3/maturin-action@v1
with:
target: ${{ matrix.target }}
manylinux: auto
@@ -238,11 +231,11 @@ jobs:
- name: "Test wheel"
if: ${{ startsWith(matrix.target, 'x86_64') }}
run: |
pip install dist/"${PACKAGE_NAME}"-*.whl --force-reinstall
"${MODULE_NAME}" --help
python -m "${MODULE_NAME}" --help
pip install dist/${{ env.PACKAGE_NAME }}-*.whl --force-reinstall
${{ env.MODULE_NAME }} --help
python -m ${{ env.MODULE_NAME }} --help
- name: "Upload wheels"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4
with:
name: wheels-${{ matrix.target }}
path: dist
@@ -260,7 +253,7 @@ jobs:
tar czvf $ARCHIVE_FILE $ARCHIVE_NAME
shasum -a 256 $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
- name: "Upload binary"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4
with:
name: artifacts-${{ matrix.target }}
path: |
@@ -294,24 +287,23 @@ jobs:
arch: arm
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v4
with:
submodules: recursive
persist-credentials: false
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
- uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels"
uses: PyO3/maturin-action@aef21716ff3dcae8a1c301d23ec3e4446972a6e3 # v1.49.1
uses: PyO3/maturin-action@v1
with:
target: ${{ matrix.platform.target }}
manylinux: auto
docker-options: ${{ matrix.platform.maturin_docker_options }}
args: --release --locked --out dist
- uses: uraimo/run-on-arch-action@d94c13912ea685de38fccc1109385b83fd79427d # v3.0.1
if: ${{ matrix.platform.arch != 'ppc64' && matrix.platform.arch != 'ppc64le'}}
- uses: uraimo/run-on-arch-action@v2
if: matrix.platform.arch != 'ppc64'
name: Test wheel
with:
arch: ${{ matrix.platform.arch == 'arm' && 'armv6' || matrix.platform.arch }}
@@ -325,7 +317,7 @@ jobs:
pip3 install ${{ env.PACKAGE_NAME }} --no-index --find-links dist/ --force-reinstall
ruff --help
- name: "Upload wheels"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4
with:
name: wheels-${{ matrix.platform.target }}
path: dist
@@ -343,7 +335,7 @@ jobs:
tar czvf $ARCHIVE_FILE $ARCHIVE_NAME
shasum -a 256 $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
- name: "Upload binary"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4
with:
name: artifacts-${{ matrix.platform.target }}
path: |
@@ -359,25 +351,24 @@ jobs:
- x86_64-unknown-linux-musl
- i686-unknown-linux-musl
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v4
with:
submodules: recursive
persist-credentials: false
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
- uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VERSION }}
architecture: x64
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels"
uses: PyO3/maturin-action@aef21716ff3dcae8a1c301d23ec3e4446972a6e3 # v1.49.1
uses: PyO3/maturin-action@v1
with:
target: ${{ matrix.target }}
manylinux: musllinux_1_2
args: --release --locked --out dist
- name: "Test wheel"
if: matrix.target == 'x86_64-unknown-linux-musl'
uses: addnab/docker-run-action@4f65fabd2431ebc8d299f8e5a018d79a769ae185 # v3
uses: addnab/docker-run-action@v3
with:
image: alpine:latest
options: -v ${{ github.workspace }}:/io -w /io
@@ -387,7 +378,7 @@ jobs:
.venv/bin/pip3 install ${{ env.PACKAGE_NAME }} --no-index --find-links dist/ --force-reinstall
.venv/bin/${{ env.MODULE_NAME }} --help
- name: "Upload wheels"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4
with:
name: wheels-${{ matrix.target }}
path: dist
@@ -405,7 +396,7 @@ jobs:
tar czvf $ARCHIVE_FILE $ARCHIVE_NAME
shasum -a 256 $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
- name: "Upload binary"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4
with:
name: artifacts-${{ matrix.target }}
path: |
@@ -425,23 +416,22 @@ jobs:
arch: armv7
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v4
with:
submodules: recursive
persist-credentials: false
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
- uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels"
uses: PyO3/maturin-action@aef21716ff3dcae8a1c301d23ec3e4446972a6e3 # v1.49.1
uses: PyO3/maturin-action@v1
with:
target: ${{ matrix.platform.target }}
manylinux: musllinux_1_2
args: --release --locked --out dist
docker-options: ${{ matrix.platform.maturin_docker_options }}
- uses: uraimo/run-on-arch-action@d94c13912ea685de38fccc1109385b83fd79427d # v3.0.1
- uses: uraimo/run-on-arch-action@v2
name: Test wheel
with:
arch: ${{ matrix.platform.arch }}
@@ -454,7 +444,7 @@ jobs:
.venv/bin/pip3 install ${{ env.PACKAGE_NAME }} --no-index --find-links dist/ --force-reinstall
.venv/bin/${{ env.MODULE_NAME }} --help
- name: "Upload wheels"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4
with:
name: wheels-${{ matrix.platform.target }}
path: dist
@@ -472,7 +462,7 @@ jobs:
tar czvf $ARCHIVE_FILE $ARCHIVE_NAME
shasum -a 256 $ARCHIVE_FILE > $ARCHIVE_FILE.sha256
- name: "Upload binary"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4
with:
name: artifacts-${{ matrix.platform.target }}
path: |


@@ -33,14 +33,13 @@ jobs:
- linux/amd64
- linux/arm64
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v4
with:
submodules: recursive
persist-credentials: false
- uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- uses: docker/setup-buildx-action@v3
- uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
- uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -48,13 +47,11 @@ jobs:
- name: Check tag consistency
if: ${{ inputs.plan != '' && !fromJson(inputs.plan).announcement_tag_is_implicit }}
env:
TAG: ${{ inputs.plan != '' && fromJson(inputs.plan).announcement_tag || 'dry-run' }}
run: |
version=$(grep -m 1 "^version = " pyproject.toml | sed -e 's/version = "\(.*\)"/\1/g')
if [ "${TAG}" != "${version}" ]; then
version=$(grep "version = " pyproject.toml | sed -e 's/version = "\(.*\)"/\1/g')
if [ "${{ fromJson(inputs.plan).announcement_tag }}" != "${version}" ]; then
echo "The input tag does not match the version from pyproject.toml:" >&2
echo "${TAG}" >&2
echo "${{ fromJson(inputs.plan).announcement_tag }}" >&2
echo "${version}" >&2
exit 1
else
@@ -63,7 +60,7 @@ jobs:
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
uses: docker/metadata-action@v5
with:
images: ${{ env.RUFF_BASE_IMG }}
# Defining this makes sure the org.opencontainers.image.version OCI label becomes the actual release version and not the branch name
@@ -74,12 +71,12 @@ jobs:
- name: Normalize Platform Pair (replace / with -)
run: |
platform=${{ matrix.platform }}
echo "PLATFORM_TUPLE=${platform//\//-}" >> "$GITHUB_ENV"
echo "PLATFORM_TUPLE=${platform//\//-}" >> $GITHUB_ENV
# Adapted from https://docs.docker.com/build/ci/github-actions/multi-platform/
- name: Build and push by digest
id: build
uses: docker/build-push-action@1dc73863535b631f98b2378be8619f83b136f4a0 # v6.17.0
uses: docker/build-push-action@v6
with:
context: .
platforms: ${{ matrix.platform }}
@@ -89,14 +86,13 @@ jobs:
outputs: type=image,name=${{ env.RUFF_BASE_IMG }},push-by-digest=true,name-canonical=true,push=${{ inputs.plan != '' && !fromJson(inputs.plan).announcement_tag_is_implicit }}
- name: Export digests
env:
digest: ${{ steps.build.outputs.digest }}
run: |
mkdir -p /tmp/digests
digest="${{ steps.build.outputs.digest }}"
touch "/tmp/digests/${digest#sha256:}"
- name: Upload digests
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@v4
with:
name: digests-${{ env.PLATFORM_TUPLE }}
path: /tmp/digests/*
@@ -113,17 +109,17 @@ jobs:
if: ${{ inputs.plan != '' && !fromJson(inputs.plan).announcement_tag_is_implicit }}
steps:
- name: Download digests
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
uses: actions/download-artifact@v4
with:
path: /tmp/digests
pattern: digests-*
merge-multiple: true
- uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- uses: docker/setup-buildx-action@v3
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
uses: docker/metadata-action@v5
with:
images: ${{ env.RUFF_BASE_IMG }}
# Order is on purpose such that the label org.opencontainers.image.version has the first pattern with the full version
@@ -131,7 +127,7 @@ jobs:
type=pep440,pattern={{ version }},value=${{ fromJson(inputs.plan).announcement_tag }}
type=pep440,pattern={{ major }}.{{ minor }},value=${{ fromJson(inputs.plan).announcement_tag }}
- uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
- uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -144,10 +140,9 @@ jobs:
# The printf will expand the base image with the `<RUFF_BASE_IMG>@sha256:<sha256> ...` for each sha256 in the directory
# The final command becomes `docker buildx imagetools create -t tag1 -t tag2 ... <RUFF_BASE_IMG>@sha256:<sha256_1> <RUFF_BASE_IMG>@sha256:<sha256_2> ...`
run: |
# shellcheck disable=SC2046
docker buildx imagetools create \
$(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
$(printf "${RUFF_BASE_IMG}@sha256:%s " *)
$(printf '${{ env.RUFF_BASE_IMG }}@sha256:%s ' *)
docker-publish-extra:
name: Publish additional Docker image based on ${{ matrix.image-mapping }}
@@ -163,13 +158,13 @@ jobs:
# Mapping of base image followed by a comma followed by one or more base tags (comma separated)
# Note, org.opencontainers.image.version label will use the first base tag (use the most specific tag first)
image-mapping:
- alpine:3.21,alpine3.21,alpine
- alpine:3.20,alpine3.20,alpine
- debian:bookworm-slim,bookworm-slim,debian-slim
- buildpack-deps:bookworm,bookworm,debian
steps:
- uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- uses: docker/setup-buildx-action@v3
- uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
- uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -177,8 +172,6 @@ jobs:
- name: Generate Dynamic Dockerfile Tags
shell: bash
env:
TAG_VALUE: ${{ fromJson(inputs.plan).announcement_tag }}
run: |
set -euo pipefail
@@ -188,7 +181,7 @@ jobs:
# Generate Dockerfile content
cat <<EOF > Dockerfile
FROM ${BASE_IMAGE}
COPY --from=${RUFF_BASE_IMG}:latest /ruff /usr/local/bin/ruff
COPY --from=${{ env.RUFF_BASE_IMG }}:latest /ruff /usr/local/bin/ruff
ENTRYPOINT []
CMD ["/usr/local/bin/ruff"]
EOF
@@ -199,8 +192,8 @@ jobs:
# Loop through all base tags and append its docker metadata pattern to the list
# Order is on purpose such that the label org.opencontainers.image.version has the first pattern with the full version
IFS=','; for TAG in ${BASE_TAGS}; do
TAG_PATTERNS="${TAG_PATTERNS}type=pep440,pattern={{ version }},suffix=-${TAG},value=${TAG_VALUE}\n"
TAG_PATTERNS="${TAG_PATTERNS}type=pep440,pattern={{ major }}.{{ minor }},suffix=-${TAG},value=${TAG_VALUE}\n"
TAG_PATTERNS="${TAG_PATTERNS}type=pep440,pattern={{ version }},suffix=-${TAG},value=${{ fromJson(inputs.plan).announcement_tag }}\n"
TAG_PATTERNS="${TAG_PATTERNS}type=pep440,pattern={{ major }}.{{ minor }},suffix=-${TAG},value=${{ fromJson(inputs.plan).announcement_tag }}\n"
TAG_PATTERNS="${TAG_PATTERNS}type=raw,value=${TAG}\n"
done
@@ -208,18 +201,18 @@ jobs:
TAG_PATTERNS="${TAG_PATTERNS%\\n}"
# Export image cache name
echo "IMAGE_REF=${BASE_IMAGE//:/-}" >> "$GITHUB_ENV"
echo "IMAGE_REF=${BASE_IMAGE//:/-}" >> $GITHUB_ENV
# Export tag patterns using the multiline env var syntax
{
echo "TAG_PATTERNS<<EOF"
echo -e "${TAG_PATTERNS}"
echo EOF
} >> "$GITHUB_ENV"
} >> $GITHUB_ENV
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
uses: docker/metadata-action@v5
# ghcr.io prefers index level annotations
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: index
@@ -231,7 +224,7 @@ jobs:
${{ env.TAG_PATTERNS }}
- name: Build and push
uses: docker/build-push-action@1dc73863535b631f98b2378be8619f83b136f4a0 # v6.17.0
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64
@@ -256,17 +249,17 @@ jobs:
if: ${{ inputs.plan != '' && !fromJson(inputs.plan).announcement_tag_is_implicit }}
steps:
- name: Download digests
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
uses: actions/download-artifact@v4
with:
path: /tmp/digests
pattern: digests-*
merge-multiple: true
- uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- uses: docker/setup-buildx-action@v3
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
uses: docker/metadata-action@v5
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: index
with:
@@ -276,7 +269,7 @@ jobs:
type=pep440,pattern={{ version }},value=${{ fromJson(inputs.plan).announcement_tag }}
type=pep440,pattern={{ major }}.{{ minor }},value=${{ fromJson(inputs.plan).announcement_tag }}
- uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
- uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -291,9 +284,7 @@ jobs:
# The final command becomes `docker buildx imagetools create -t tag1 -t tag2 ... <RUFF_BASE_IMG>@sha256:<sha256_1> <RUFF_BASE_IMG>@sha256:<sha256_2> ...`
run: |
readarray -t lines <<< "$DOCKER_METADATA_OUTPUT_ANNOTATIONS"; annotations=(); for line in "${lines[@]}"; do annotations+=(--annotation "$line"); done
# shellcheck disable=SC2046
docker buildx imagetools create \
"${annotations[@]}" \
$(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
$(printf "${RUFF_BASE_IMG}@sha256:%s " *)
$(printf '${{ env.RUFF_BASE_IMG }}@sha256:%s ' *)
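
A quick aside on the `docker buildx imagetools create` steps above: the jq filter turns the Docker metadata JSON into repeated -t flags, and the printf appends one image@digest argument per file in /tmp/digests. A minimal sketch of the jq half, with metadata invented for illustration:

    # Invented stand-in for $DOCKER_METADATA_OUTPUT_JSON (not from a real run).
    DOCKER_METADATA_OUTPUT_JSON='{"tags":["ghcr.io/astral-sh/ruff:latest","ghcr.io/astral-sh/ruff:0.8.0"]}'
    # Same filter as the workflow: map each tag to "-t <tag>" and join with spaces.
    jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON"
    # Output: -t ghcr.io/astral-sh/ruff:latest -t ghcr.io/astral-sh/ruff:0.8.0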

File diff suppressed because it is too large


@@ -31,31 +31,25 @@ jobs:
# Don't run the cron job on forks:
if: ${{ github.repository == 'astral-sh/ruff' || github.event_name != 'schedule' }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
persist-credentials: false
- uses: astral-sh/setup-uv@6b9c6063abd6010835644d4c2e1bef4cf5cd0fca # v6.0.1
python-version: "3.12"
- name: Install uv
run: curl -LsSf https://astral.sh/uv/install.sh | sh
- name: Install Python requirements
run: uv pip install -r scripts/fuzz-parser/requirements.txt --system
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
uses: rui314/setup-mold@e16410e7f8d9e167b74ad5697a9089a35126eb50 # v1
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
uses: rui314/setup-mold@v1
- uses: Swatinem/rust-cache@v2
- name: Build ruff
# A debug build means the script runs slower once it gets started,
# but this is outweighed by the fact that a release build takes *much* longer to compile in CI
run: cargo build --locked
- name: Fuzz
run: |
# shellcheck disable=SC2046
(
uvx \
--python=3.12 \
--from=./python/py-fuzzer \
fuzz \
--test-executable=target/debug/ruff \
--bin=ruff \
$(shuf -i 0-9999999999999999999 -n 1000)
)
run: python scripts/fuzz-parser/fuzz.py $(shuf -i 0-9999999999999999999 -n 1000) --test-executable target/debug/ruff
create-issue-on-failure:
name: Create an issue if the daily fuzz surfaced any bugs
@@ -65,7 +59,7 @@ jobs:
permissions:
issues: write
steps:
- uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
- uses: actions/github-script@v7
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
@@ -73,6 +67,6 @@ jobs:
owner: "astral-sh",
repo: "ruff",
title: `Daily parser fuzz failed on ${new Date().toDateString()}`,
body: "Run listed here: https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}",
body: "Runs listed here: https://github.com/astral-sh/ruff/actions/workflows/daily_fuzz.yml",
labels: ["bug", "parser", "fuzzer"],
})
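
For context on the fuzz step above: shuf -i LO-HI -n N prints N random integers from the inclusive range, which the workflow expands into 1000 seed arguments for the fuzzer. A tiny sketch with a reduced range and count (output is random; the values shown are illustrative):

    # The workflow uses `shuf -i 0-9999999999999999999 -n 1000`; this is a scaled-down run.
    shuf -i 0-999999 -n 3
    # e.g.:
    # 482113
    # 90457
    # 731882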


@@ -1,100 +0,0 @@
name: Run mypy_primer
permissions: {}
on:
pull_request:
paths:
- "crates/ty*/**"
- "crates/ruff_db"
- "crates/ruff_python_ast"
- "crates/ruff_python_parser"
- ".github/workflows/mypy_primer.yaml"
- ".github/workflows/mypy_primer_comment.yaml"
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.event.pull_request.number || github.sha }}
cancel-in-progress: true
env:
CARGO_INCREMENTAL: 0
CARGO_NET_RETRY: 10
CARGO_TERM_COLOR: always
RUSTUP_MAX_RETRIES: 10
RUST_BACKTRACE: 1
jobs:
mypy_primer:
name: Run mypy_primer
runs-on: depot-ubuntu-22.04-32
timeout-minutes: 20
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: ruff
fetch-depth: 0
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@6b9c6063abd6010835644d4c2e1bef4cf5cd0fca # v6.0.1
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
with:
workspaces: "ruff"
- name: Install Rust toolchain
run: rustup show
- name: Run mypy_primer
shell: bash
run: |
cd ruff
echo "Enabling mypy primer specific configuration overloads (see .github/mypy-primer-ty.toml)"
mkdir -p ~/.config/ty
cp .github/mypy-primer-ty.toml ~/.config/ty/ty.toml
PRIMER_SELECTOR="$(paste -s -d'|' crates/ty_python_semantic/resources/primer/good.txt)"
echo "new commit"
git rev-list --format=%s --max-count=1 "$GITHUB_SHA"
MERGE_BASE="$(git merge-base "$GITHUB_SHA" "origin/$GITHUB_BASE_REF")"
git checkout -b base_commit "$MERGE_BASE"
echo "base commit"
git rev-list --format=%s --max-count=1 base_commit
cd ..
echo "Project selector: $PRIMER_SELECTOR"
# Allow the exit code to be 0 or 1, only fail for actual mypy_primer crashes/bugs
uvx \
--from="git+https://github.com/hauntsaninja/mypy_primer@01a7ca325f674433c58e02416a867178d1571128" \
mypy_primer \
--repo ruff \
--type-checker ty \
--old base_commit \
--new "$GITHUB_SHA" \
--project-selector "/($PRIMER_SELECTOR)\$" \
--output concise \
--debug > mypy_primer.diff || [ $? -eq 1 ]
# Output diff with ANSI color codes
cat mypy_primer.diff
# Remove ANSI color codes before uploading
sed -ie 's/\x1b\[[0-9;]*m//g' mypy_primer.diff
echo ${{ github.event.number }} > pr-number
- name: Upload diff
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: mypy_primer_diff
path: mypy_primer.diff
- name: Upload pr-number
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: pr-number
path: pr-number
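
One detail of the mypy_primer step above that is easy to miss: the project selector is built by joining the lines of good.txt with '|' using paste -s, and the result is then wrapped in /(...)$ for --project-selector. A minimal sketch with invented project names:

    # Invented stand-in for crates/ty_python_semantic/resources/primer/good.txt
    printf 'black\nmypy\npandas\n' | paste -s -d'|' -
    # Output: black|mypy|pandas
    # The workflow would then pass /(black|mypy|pandas)$ as the --project-selector regex.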


@@ -1,97 +0,0 @@
name: PR comment (mypy_primer)
on: # zizmor: ignore[dangerous-triggers]
workflow_run:
workflows: [Run mypy_primer]
types: [completed]
workflow_dispatch:
inputs:
workflow_run_id:
description: The mypy_primer workflow that triggers the workflow run
required: true
jobs:
comment:
runs-on: ubuntu-24.04
permissions:
pull-requests: write
steps:
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
name: Download PR number
with:
name: pr-number
run_id: ${{ github.event.workflow_run.id || github.event.inputs.workflow_run_id }}
if_no_artifact_found: ignore
allow_forks: true
- name: Parse pull request number
id: pr-number
run: |
if [[ -f pr-number ]]
then
echo "pr-number=$(<pr-number)" >> "$GITHUB_OUTPUT"
fi
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
name: "Download mypy_primer results"
id: download-mypy_primer_diff
if: steps.pr-number.outputs.pr-number
with:
name: mypy_primer_diff
workflow: mypy_primer.yaml
pr: ${{ steps.pr-number.outputs.pr-number }}
path: pr/mypy_primer_diff
workflow_conclusion: completed
if_no_artifact_found: ignore
allow_forks: true
- name: Generate comment content
id: generate-comment
if: steps.download-mypy_primer_diff.outputs.found_artifact == 'true'
run: |
# Guard against malicious mypy_primer results that symlink to a secret
# file on this runner
if [[ -L pr/mypy_primer_diff/mypy_primer.diff ]]
then
echo "Error: mypy_primer.diff cannot be a symlink"
exit 1
fi
# Note this identifier is used to find the comment to update on
# subsequent runs
echo '<!-- generated-comment mypy_primer -->' >> comment.txt
echo '## `mypy_primer` results' >> comment.txt
if [ -s "pr/mypy_primer_diff/mypy_primer.diff" ]; then
echo '<details>' >> comment.txt
echo '<summary>Changes were detected when running on open source projects</summary>' >> comment.txt
echo '' >> comment.txt
echo '```diff' >> comment.txt
cat pr/mypy_primer_diff/mypy_primer.diff >> comment.txt
echo '```' >> comment.txt
echo '</details>' >> comment.txt
else
echo 'No ecosystem changes detected ✅' >> comment.txt
fi
echo 'comment<<EOF' >> "$GITHUB_OUTPUT"
cat comment.txt >> "$GITHUB_OUTPUT"
echo 'EOF' >> "$GITHUB_OUTPUT"
- name: Find existing comment
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
if: steps.generate-comment.outcome == 'success'
id: find-comment
with:
issue-number: ${{ steps.pr-number.outputs.pr-number }}
comment-author: "github-actions[bot]"
body-includes: "<!-- generated-comment mypy_primer -->"
- name: Create or update comment
if: steps.find-comment.outcome == 'success'
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ steps.pr-number.outputs.pr-number }}
body-path: comment.txt
edit-mode: replace


@@ -17,7 +17,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: "Update pre-commit mirror"
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
uses: actions/github-script@v7
with:
github-token: ${{ secrets.RUFF_PRE_COMMIT_PAT }}
script: |


@@ -10,13 +10,14 @@ on:
description: The ecosystem workflow that triggers the workflow run
required: true
permissions:
pull-requests: write
jobs:
comment:
runs-on: ubuntu-latest
permissions:
pull-requests: write
steps:
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
- uses: dawidd6/action-download-artifact@v6
name: Download pull request number
with:
name: pr-number
@@ -29,10 +30,10 @@ jobs:
run: |
if [[ -f pr-number ]]
then
echo "pr-number=$(<pr-number)" >> "$GITHUB_OUTPUT"
echo "pr-number=$(<pr-number)" >> $GITHUB_OUTPUT
fi
- uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
- uses: dawidd6/action-download-artifact@v6
name: "Download ecosystem results"
id: download-ecosystem-result
if: steps.pr-number.outputs.pr-number
@@ -65,12 +66,12 @@ jobs:
cat pr/ecosystem/ecosystem-result >> comment.txt
echo "" >> comment.txt
echo 'comment<<EOF' >> "$GITHUB_OUTPUT"
cat comment.txt >> "$GITHUB_OUTPUT"
echo 'EOF' >> "$GITHUB_OUTPUT"
echo 'comment<<EOF' >> $GITHUB_OUTPUT
cat comment.txt >> $GITHUB_OUTPUT
echo 'EOF' >> $GITHUB_OUTPUT
- name: Find existing comment
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
uses: peter-evans/find-comment@v3
if: steps.generate-comment.outcome == 'success'
id: find-comment
with:
@@ -80,7 +81,7 @@ jobs:
- name: Create or update comment
if: steps.find-comment.outcome == 'success'
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4
uses: peter-evans/create-or-update-comment@v4
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ steps.pr-number.outputs.pr-number }}


@@ -23,19 +23,17 @@ jobs:
env:
MKDOCS_INSIDERS_SSH_KEY_EXISTS: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY != '' }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v4
with:
ref: ${{ inputs.ref }}
persist-credentials: true
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
- uses: actions/setup-python@v5
with:
python-version: 3.12
- name: "Set docs version"
env:
version: ${{ (inputs.plan != '' && fromJson(inputs.plan).announcement_tag) || inputs.ref }}
run: |
version="${{ (inputs.plan != '' && fromJson(inputs.plan).announcement_tag) || inputs.ref }}"
# if version is missing, use 'latest'
if [ -z "$version" ]; then
echo "Using 'latest' as version"
@@ -45,30 +43,32 @@ jobs:
# Use version as display name for now
display_name="$version"
echo "version=$version" >> "$GITHUB_ENV"
echo "display_name=$display_name" >> "$GITHUB_ENV"
echo "version=$version" >> $GITHUB_ENV
echo "display_name=$display_name" >> $GITHUB_ENV
- name: "Set branch name"
run: |
version="${{ env.version }}"
display_name="${{ env.display_name }}"
timestamp="$(date +%s)"
# create branch_display_name from display_name by replacing all
# characters disallowed in git branch names with hyphens
branch_display_name="$(echo "${display_name}" | tr -c '[:alnum:]._' '-' | tr -s '-')"
branch_display_name="$(echo "$display_name" | tr -c '[:alnum:]._' '-' | tr -s '-')"
echo "branch_name=update-docs-$branch_display_name-$timestamp" >> "$GITHUB_ENV"
echo "timestamp=$timestamp" >> "$GITHUB_ENV"
echo "branch_name=update-docs-$branch_display_name-$timestamp" >> $GITHUB_ENV
echo "timestamp=$timestamp" >> $GITHUB_ENV
- name: "Add SSH key"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
uses: webfactory/ssh-agent@a6f90b1f127823b31d4d4a8d96047790581349bd # v0.9.1
uses: webfactory/ssh-agent@v0.9.0
with:
ssh-private-key: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY }}
- name: "Install Rust toolchain"
run: rustup show
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- uses: Swatinem/rust-cache@v2
- name: "Install Insiders dependencies"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
@@ -92,7 +92,9 @@ jobs:
run: mkdocs build --strict -f mkdocs.public.yml
- name: "Clone docs repo"
run: git clone https://${{ secrets.ASTRAL_DOCS_PAT }}@github.com/astral-sh/docs.git astral-docs
run: |
version="${{ env.version }}"
git clone https://${{ secrets.ASTRAL_DOCS_PAT }}@github.com/astral-sh/docs.git astral-docs
- name: "Copy docs"
run: rm -rf astral-docs/site/ruff && mkdir -p astral-docs/site && cp -r site/ruff astral-docs/site/
@@ -100,10 +102,12 @@ jobs:
- name: "Commit docs"
working-directory: astral-docs
run: |
branch_name="${{ env.branch_name }}"
git config user.name "astral-docs-bot"
git config user.email "176161322+astral-docs-bot@users.noreply.github.com"
git checkout -b "${branch_name}"
git checkout -b $branch_name
git add site/ruff
git commit -m "Update ruff documentation for $version"
@@ -112,8 +116,12 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.ASTRAL_DOCS_PAT }}
run: |
version="${{ env.version }}"
display_name="${{ env.display_name }}"
branch_name="${{ env.branch_name }}"
# set the PR title
pull_request_title="Update ruff documentation for ${display_name}"
pull_request_title="Update ruff documentation for $display_name"
# Delete any existing pull requests that are open for this version
# by checking against pull_request_title because the new PR will
@@ -122,15 +130,13 @@ jobs:
xargs -I {} gh pr close {}
# push the branch to GitHub
git push origin "${branch_name}"
git push origin $branch_name
# create the PR
gh pr create \
--base=main \
--head="${branch_name}" \
--title="${pull_request_title}" \
--body="Automated documentation update for ${display_name}" \
--label="documentation"
gh pr create --base main --head $branch_name \
--title "$pull_request_title" \
--body "Automated documentation update for $display_name" \
--label "documentation"
- name: "Merge Pull Request"
if: ${{ inputs.plan != '' && !fromJson(inputs.plan).announcement_tag_is_implicit }}
@@ -138,7 +144,9 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.ASTRAL_DOCS_PAT }}
run: |
branch_name="${{ env.branch_name }}"
# auto-merge the PR if the build was triggered by a release. Manual builds should be reviewed by a human.
# give the PR a few seconds to be created before trying to auto-merge it
sleep 10
gh pr merge --squash "${branch_name}"
gh pr merge --squash $branch_name
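
To illustrate the "Set branch name" step earlier in this workflow: everything outside [:alnum:], '.', and '_' in the display name is turned into a hyphen and runs of hyphens are squeezed, before the timestamp is appended. A sketch with an invented display name:

    # Invented display name; the workflow derives it from the release tag or ref.
    display_name='ruff 0.9.3 (90589372d)'
    timestamp="$(date +%s)"
    # Same pipeline as the workflow: replace disallowed characters with '-' and squeeze repeats.
    branch_display_name="$(echo "$display_name" | tr -c '[:alnum:]._' '-' | tr -s '-')"
    echo "update-docs-$branch_display_name-$timestamp"
    # e.g. update-docs-ruff-0.9.3-90589372d--1732000000 (the trailing newline also becomes a '-')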


@@ -24,31 +24,32 @@ jobs:
env:
CF_API_TOKEN_EXISTS: ${{ secrets.CF_API_TOKEN != '' }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- uses: actions/checkout@v4
- name: "Install Rust toolchain"
run: rustup target add wasm32-unknown-unknown
- uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
- uses: actions/setup-node@v4
with:
node-version: 22
node-version: 20
cache: "npm"
cache-dependency-path: playground/package-lock.json
- uses: jetli/wasm-bindgen-action@20b33e20595891ab1a0ed73145d8a21fc96e7c29 # v0.2.0
- uses: jetli/wasm-pack-action@v0.4.0
- uses: jetli/wasm-bindgen-action@v0.2.0
- name: "Run wasm-pack"
run: wasm-pack build --target web --out-dir ../../playground/src/pkg crates/ruff_wasm
- name: "Install Node dependencies"
run: npm ci
working-directory: playground
- name: "Run TypeScript checks"
run: npm run check
working-directory: playground
- name: "Build Ruff playground"
run: npm run build --workspace ruff-playground
- name: "Build JavaScript bundle"
run: npm run build
working-directory: playground
- name: "Deploy to Cloudflare Pages"
if: ${{ env.CF_API_TOKEN_EXISTS == 'true' }}
uses: cloudflare/wrangler-action@da0e0dfe58b7a431659754fdf3f186c529afbe65 # v3.14.1
uses: cloudflare/wrangler-action@v3.12.1
with:
apiToken: ${{ secrets.CF_API_TOKEN }}
accountId: ${{ secrets.CF_ACCOUNT_ID }}
# `github.head_ref` is only set during pull requests and for manual runs or tags we use `main` to deploy to production
command: pages deploy playground/ruff/dist --project-name=ruff-playground --branch ${{ github.head_ref || 'main' }} --commit-hash ${GITHUB_SHA}
command: pages deploy playground/dist --project-name=ruff-playground --branch ${{ github.head_ref || 'main' }} --commit-hash ${GITHUB_SHA}


@@ -22,8 +22,8 @@ jobs:
id-token: write
steps:
- name: "Install uv"
uses: astral-sh/setup-uv@6b9c6063abd6010835644d4c2e1bef4cf5cd0fca # v6.0.1
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
uses: astral-sh/setup-uv@v3
- uses: actions/download-artifact@v4
with:
pattern: wheels-*
path: wheels


@@ -1,58 +0,0 @@
# Publish the ty playground.
name: "[ty Playground] Release"
permissions: {}
on:
push:
branches: [main]
paths:
- "crates/ty*/**"
- "crates/ruff_db/**"
- "crates/ruff_python_ast/**"
- "crates/ruff_python_parser/**"
- "playground/**"
- ".github/workflows/publish-ty-playground.yml"
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}
cancel-in-progress: true
env:
CARGO_INCREMENTAL: 0
CARGO_NET_RETRY: 10
CARGO_TERM_COLOR: always
RUSTUP_MAX_RETRIES: 10
jobs:
publish:
runs-on: ubuntu-latest
env:
CF_API_TOKEN_EXISTS: ${{ secrets.CF_API_TOKEN != '' }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: "Install Rust toolchain"
run: rustup target add wasm32-unknown-unknown
- uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
with:
node-version: 22
- uses: jetli/wasm-bindgen-action@20b33e20595891ab1a0ed73145d8a21fc96e7c29 # v0.2.0
- name: "Install Node dependencies"
run: npm ci
working-directory: playground
- name: "Run TypeScript checks"
run: npm run check
working-directory: playground
- name: "Build ty playground"
run: npm run build --workspace ty-playground
working-directory: playground
- name: "Deploy to Cloudflare Pages"
if: ${{ env.CF_API_TOKEN_EXISTS == 'true' }}
uses: cloudflare/wrangler-action@da0e0dfe58b7a431659754fdf3f186c529afbe65 # v3.14.1
with:
apiToken: ${{ secrets.CF_API_TOKEN }}
accountId: ${{ secrets.CF_ACCOUNT_ID }}
# `github.head_ref` is only set during pull requests and for manual runs or tags we use `main` to deploy to production
command: pages deploy playground/ty/dist --project-name=ty-playground --branch ${{ github.head_ref || 'main' }} --commit-hash ${GITHUB_SHA}


@@ -29,15 +29,11 @@ jobs:
target: [web, bundler, nodejs]
fail-fast: false
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- uses: actions/checkout@v4
- name: "Install Rust toolchain"
run: rustup target add wasm32-unknown-unknown
- uses: jetli/wasm-pack-action@0d096b08b4e5a7de8c28de67e11e945404e9eefa # v0.4.0
with:
version: v0.13.1
- uses: jetli/wasm-bindgen-action@20b33e20595891ab1a0ed73145d8a21fc96e7c29 # v0.2.0
- uses: jetli/wasm-pack-action@v0.4.0
- uses: jetli/wasm-bindgen-action@v0.2.0
- name: "Run wasm-pack build"
run: wasm-pack build --target ${{ matrix.target }} crates/ruff_wasm
- name: "Rename generated package"
@@ -45,7 +41,7 @@ jobs:
jq '.name="@astral-sh/ruff-wasm-${{ matrix.target }}"' crates/ruff_wasm/pkg/package.json > /tmp/package.json
mv /tmp/package.json crates/ruff_wasm/pkg
- run: cp LICENSE crates/ruff_wasm/pkg # wasm-pack does not put the LICENSE file in the pkg
- uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
- uses: actions/setup-node@v4
with:
node-version: 20
registry-url: "https://registry.npmjs.org"


@@ -1,13 +1,12 @@
# This file was autogenerated by dist: https://github.com/astral-sh/cargo-dist
# This file was autogenerated by cargo-dist: https://opensource.axo.dev/cargo-dist/
#
# Copyright 2022-2024, axodotdev
# Copyright 2025 Astral Software Inc.
# SPDX-License-Identifier: MIT or Apache-2.0
#
# CI that:
#
# * checks for a Git Tag that looks like a release
# * builds artifacts with dist (archives, installers, hashes)
# * builds artifacts with cargo-dist (archives, installers, hashes)
# * uploads those artifacts to temporary workflow zip
# * on success, uploads the artifacts to a GitHub Release
#
@@ -25,10 +24,10 @@ permissions:
# must be a Cargo-style SemVer Version (must have at least major.minor.patch).
#
# If PACKAGE_NAME is specified, then the announcement will be for that
# package (erroring out if it doesn't have the given version or isn't dist-able).
# package (erroring out if it doesn't have the given version or isn't cargo-dist-able).
#
# If PACKAGE_NAME isn't specified, then the announcement will be for all
# (dist-able) packages in the workspace with that version (this mode is
# (cargo-dist-able) packages in the workspace with that version (this mode is
# intended for workspaces with only one dist-able package, or with all dist-able
# packages versioned/released in lockstep).
#
@@ -40,7 +39,6 @@ permissions:
# If there's a prerelease-style suffix to the version, then the release(s)
# will be marked as a prerelease.
on:
pull_request:
workflow_dispatch:
inputs:
tag:
@@ -50,9 +48,9 @@ on:
type: string
jobs:
# Run 'dist plan' (or host) to determine what tasks we need to do
# Run 'cargo dist plan' (or host) to determine what tasks we need to do
plan:
runs-on: "depot-ubuntu-latest-4"
runs-on: "ubuntu-20.04"
outputs:
val: ${{ steps.plan.outputs.manifest }}
tag: ${{ (inputs.tag != 'dry-run' && inputs.tag) || '' }}
@@ -61,20 +59,19 @@ jobs:
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@85e6279cec87321a52edac9c87bce653a07cf6c2
- uses: actions/checkout@v4
with:
persist-credentials: false
submodules: recursive
- name: Install dist
- name: Install cargo-dist
# we specify bash to get pipefail; it guards against the `curl` command
# failing. otherwise `sh` won't catch that `curl` returned non-0
shell: bash
run: "curl --proto '=https' --tlsv1.2 -LsSf https://github.com/astral-sh/cargo-dist/releases/download/v0.28.5-prerelease.1/cargo-dist-installer.sh | sh"
- name: Cache dist
uses: actions/upload-artifact@6027e3dd177782cd8ab9af838c04fd81a07f1d47
run: "curl --proto '=https' --tlsv1.2 -LsSf https://github.com/axodotdev/cargo-dist/releases/download/v0.22.1/cargo-dist-installer.sh | sh"
- name: Cache cargo-dist
uses: actions/upload-artifact@v4
with:
name: cargo-dist-cache
path: ~/.cargo/bin/dist
path: ~/.cargo/bin/cargo-dist
# sure would be cool if github gave us proper conditionals...
# so here's a doubly-nested ternary-via-truthiness to try to provide the best possible
# functionality based on whether this is a pull_request, and whether it's from a fork.
@@ -82,12 +79,12 @@ jobs:
# but also really annoying to build CI around when it needs secrets to work right.)
- id: plan
run: |
dist ${{ (inputs.tag && inputs.tag != 'dry-run' && format('host --steps=create --tag={0}', inputs.tag)) || 'plan' }} --output-format=json > plan-dist-manifest.json
echo "dist ran successfully"
cargo dist ${{ (inputs.tag && inputs.tag != 'dry-run' && format('host --steps=create --tag={0}', inputs.tag)) || 'plan' }} --output-format=json > plan-dist-manifest.json
echo "cargo dist ran successfully"
cat plan-dist-manifest.json
echo "manifest=$(jq -c "." plan-dist-manifest.json)" >> "$GITHUB_OUTPUT"
- name: "Upload dist-manifest.json"
uses: actions/upload-artifact@6027e3dd177782cd8ab9af838c04fd81a07f1d47
uses: actions/upload-artifact@v4
with:
name: artifacts-plan-dist-manifest
path: plan-dist-manifest.json
@@ -119,24 +116,23 @@ jobs:
- plan
- custom-build-binaries
- custom-build-docker
runs-on: "depot-ubuntu-latest-4"
runs-on: "ubuntu-20.04"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
BUILD_MANIFEST_NAME: target/distrib/global-dist-manifest.json
steps:
- uses: actions/checkout@85e6279cec87321a52edac9c87bce653a07cf6c2
- uses: actions/checkout@v4
with:
persist-credentials: false
submodules: recursive
- name: Install cached dist
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
- name: Install cached cargo-dist
uses: actions/download-artifact@v4
with:
name: cargo-dist-cache
path: ~/.cargo/bin/
- run: chmod +x ~/.cargo/bin/dist
- run: chmod +x ~/.cargo/bin/cargo-dist
# Get all the local artifacts for the global tasks to use (for e.g. checksums)
- name: Fetch local artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
uses: actions/download-artifact@v4
with:
pattern: artifacts-*
path: target/distrib/
@@ -144,8 +140,8 @@ jobs:
- id: cargo-dist
shell: bash
run: |
dist build ${{ needs.plan.outputs.tag-flag }} --output-format=json "--artifacts=global" > dist-manifest.json
echo "dist ran successfully"
cargo dist build ${{ needs.plan.outputs.tag-flag }} --output-format=json "--artifacts=global" > dist-manifest.json
echo "cargo dist ran successfully"
# Parse out what we just built and upload it to scratch storage
echo "paths<<EOF" >> "$GITHUB_OUTPUT"
@@ -154,7 +150,7 @@ jobs:
cp dist-manifest.json "$BUILD_MANIFEST_NAME"
- name: "Upload artifacts"
uses: actions/upload-artifact@6027e3dd177782cd8ab9af838c04fd81a07f1d47
uses: actions/upload-artifact@v4
with:
name: artifacts-build-global
path: |
@@ -171,23 +167,22 @@ jobs:
if: ${{ always() && needs.plan.outputs.publishing == 'true' && (needs.build-global-artifacts.result == 'skipped' || needs.build-global-artifacts.result == 'success') && (needs.custom-build-binaries.result == 'skipped' || needs.custom-build-binaries.result == 'success') && (needs.custom-build-docker.result == 'skipped' || needs.custom-build-docker.result == 'success') }}
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
runs-on: "depot-ubuntu-latest-4"
runs-on: "ubuntu-20.04"
outputs:
val: ${{ steps.host.outputs.manifest }}
steps:
- uses: actions/checkout@85e6279cec87321a52edac9c87bce653a07cf6c2
- uses: actions/checkout@v4
with:
persist-credentials: false
submodules: recursive
- name: Install cached dist
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
- name: Install cached cargo-dist
uses: actions/download-artifact@v4
with:
name: cargo-dist-cache
path: ~/.cargo/bin/
- run: chmod +x ~/.cargo/bin/dist
- run: chmod +x ~/.cargo/bin/cargo-dist
# Fetch artifacts from scratch-storage
- name: Fetch artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
uses: actions/download-artifact@v4
with:
pattern: artifacts-*
path: target/distrib/
@@ -196,12 +191,12 @@ jobs:
- id: host
shell: bash
run: |
dist host ${{ needs.plan.outputs.tag-flag }} --steps=upload --steps=release --output-format=json > dist-manifest.json
cargo dist host ${{ needs.plan.outputs.tag-flag }} --steps=upload --steps=release --output-format=json > dist-manifest.json
echo "artifacts uploaded and released successfully"
cat dist-manifest.json
echo "manifest=$(jq -c "." dist-manifest.json)" >> "$GITHUB_OUTPUT"
- name: "Upload dist-manifest.json"
uses: actions/upload-artifact@6027e3dd177782cd8ab9af838c04fd81a07f1d47
uses: actions/upload-artifact@v4
with:
# Overwrite the previous copy
name: artifacts-dist-manifest
@@ -247,17 +242,16 @@ jobs:
# still allowing individual publish jobs to skip themselves (for prereleases).
# "host" however must run to completion, no skipping allowed!
if: ${{ always() && needs.host.result == 'success' && (needs.custom-publish-pypi.result == 'skipped' || needs.custom-publish-pypi.result == 'success') && (needs.custom-publish-wasm.result == 'skipped' || needs.custom-publish-wasm.result == 'success') }}
runs-on: "depot-ubuntu-latest-4"
runs-on: "ubuntu-20.04"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@85e6279cec87321a52edac9c87bce653a07cf6c2
- uses: actions/checkout@v4
with:
persist-credentials: false
submodules: recursive
# Create a GitHub Release while uploading all files to it
- name: "Download GitHub Artifacts"
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
uses: actions/download-artifact@v4
with:
pattern: artifacts-*
path: artifacts

View File

@@ -21,17 +21,15 @@ jobs:
contents: write
pull-requests: write
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v4
name: Checkout Ruff
with:
path: ruff
persist-credentials: true
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v4
name: Checkout typeshed
with:
repository: python/typeshed
path: typeshed
persist-credentials: false
- name: Setup git
run: |
git config --global user.name typeshedbot
@@ -39,13 +37,13 @@ jobs:
- name: Sync typeshed
id: sync
run: |
rm -rf ruff/crates/ty_vendored/vendor/typeshed
mkdir ruff/crates/ty_vendored/vendor/typeshed
cp typeshed/README.md ruff/crates/ty_vendored/vendor/typeshed
cp typeshed/LICENSE ruff/crates/ty_vendored/vendor/typeshed
cp -r typeshed/stdlib ruff/crates/ty_vendored/vendor/typeshed/stdlib
rm -rf ruff/crates/ty_vendored/vendor/typeshed/stdlib/@tests
git -C typeshed rev-parse HEAD > ruff/crates/ty_vendored/vendor/typeshed/source_commit.txt
rm -rf ruff/crates/red_knot_vendored/vendor/typeshed
mkdir ruff/crates/red_knot_vendored/vendor/typeshed
cp typeshed/README.md ruff/crates/red_knot_vendored/vendor/typeshed
cp typeshed/LICENSE ruff/crates/red_knot_vendored/vendor/typeshed
cp -r typeshed/stdlib ruff/crates/red_knot_vendored/vendor/typeshed/stdlib
rm -rf ruff/crates/red_knot_vendored/vendor/typeshed/stdlib/@tests
git -C typeshed rev-parse HEAD > ruff/crates/red_knot_vendored/vendor/typeshed/source_commit.txt
- name: Commit the changes
id: commit
if: ${{ steps.sync.outcome == 'success' }}
@@ -59,7 +57,7 @@ jobs:
run: |
cd ruff
git push --force origin typeshedbot/sync-typeshed
gh pr list --repo "$GITHUB_REPOSITORY" --head typeshedbot/sync-typeshed --json id --jq length | grep 1 && exit 0 # exit if there is existing pr
gh pr list --repo $GITHUB_REPOSITORY --head typeshedbot/sync-typeshed --json id --jq length | grep 1 && exit 0 # exit if there is existing pr
gh pr create --title "Sync vendored typeshed stubs" --body "Close and reopen this PR to trigger CI" --label "internal"
create-issue-on-failure:
@@ -70,7 +68,7 @@ jobs:
permissions:
issues: write
steps:
- uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
- uses: actions/github-script@v7
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
@@ -78,6 +76,5 @@ jobs:
owner: "astral-sh",
repo: "ruff",
title: `Automated typeshed sync failed on ${new Date().toDateString()}`,
body: "Run listed here: https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}",
labels: ["bug", "ty"],
body: "Runs are listed here: https://github.com/astral-sh/ruff/actions/workflows/sync_typeshed.yaml",
})

19 .github/zizmor.yml vendored
View File

@@ -1,19 +0,0 @@
# Configuration for the zizmor static analysis tool, run via pre-commit in CI
# https://woodruffw.github.io/zizmor/configuration/
#
# TODO: can we remove the ignores here so that our workflows are more secure?
rules:
dangerous-triggers:
ignore:
- pr-comment.yaml
cache-poisoning:
ignore:
- build-docker.yml
- publish-playground.yml
excessive-permissions:
# it's hard to test what the impact of removing these ignores would be
# without actually running the release workflow...
ignore:
- build-docker.yml
- publish-playground.yml
- publish-docs.yml

4 .gitignore vendored
View File

@@ -29,10 +29,6 @@ tracing.folded
tracing-flamechart.svg
tracing-flamegraph.svg
# insta
*.rs.pending-snap
###
# Rust.gitignore
###

View File

@@ -1 +0,0 @@
!/.github/

View File

@@ -21,15 +21,3 @@ MD014: false
MD024:
# Allow when nested under different parents e.g. CHANGELOG.md
siblings_only: true
# MD046/code-block-style
#
# Ignore this because it conflicts with the code block style used in content
# tabs of mkdocs-material which is to add a blank line after the content title.
#
# Ref: https://github.com/astral-sh/ruff/pull/15011#issuecomment-2544790854
MD046: false
# Link text should be descriptive
# Disallows link text like *here* which is annoying.
MD059: false

View File

@@ -2,11 +2,8 @@ fail_fast: false
exclude: |
(?x)^(
.github/workflows/release.yml|
crates/ty_vendored/vendor/.*|
crates/ty_project/resources/.*|
crates/ty/docs/(configuration|rules|cli).md|
crates/ruff_benchmark/resources/.*|
crates/red_knot_vendored/vendor/.*|
crates/red_knot_workspace/resources/.*|
crates/ruff_linter/resources/.*|
crates/ruff_linter/src/rules/.*/snapshots/.*|
crates/ruff_notebook/resources/.*|
@@ -19,23 +16,19 @@ exclude: |
)$
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v5.0.0
hooks:
- id: check-merge-conflict
- repo: https://github.com/abravalheri/validate-pyproject
rev: v0.24.1
rev: v0.23
hooks:
- id: validate-pyproject
- repo: https://github.com/executablebooks/mdformat
rev: 0.7.22
rev: 0.7.18
hooks:
- id: mdformat
additional_dependencies:
- mdformat-mkdocs==4.0.0
- mdformat-footnote==0.1.1
- mdformat-mkdocs
- mdformat-admon
- mdformat-footnote
exclude: |
(?x)^(
docs/formatter/black\.md
@@ -43,7 +36,7 @@ repos:
)$
- repo: https://github.com/igorshubovych/markdownlint-cli
rev: v0.45.0
rev: v0.42.0
hooks:
- id: markdownlint-fix
exclude: |
@@ -63,10 +56,10 @@ repos:
.*?invalid(_.+)*_syntax\.md
)$
additional_dependencies:
- black==25.1.0
- black==24.10.0
- repo: https://github.com/crate-ci/typos
rev: v1.32.0
rev: v1.27.3
hooks:
- id: typos
@@ -80,7 +73,7 @@ repos:
pass_filenames: false # This makes it a lot faster
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.11.10
rev: v0.7.4
hooks:
- id: ruff-format
- id: ruff
@@ -90,42 +83,10 @@ repos:
# Prettier
- repo: https://github.com/rbubley/mirrors-prettier
rev: v3.5.3
rev: v3.3.3
hooks:
- id: prettier
types: [yaml]
# zizmor detects security vulnerabilities in GitHub Actions workflows.
# Additional configuration for the tool is found in `.github/zizmor.yml`
- repo: https://github.com/woodruffw/zizmor-pre-commit
rev: v1.7.0
hooks:
- id: zizmor
- repo: https://github.com/python-jsonschema/check-jsonschema
rev: 0.33.0
hooks:
- id: check-github-workflows
# `actionlint` hook, for verifying correct syntax in GitHub Actions workflows.
# Some additional configuration for `actionlint` can be found in `.github/actionlint.yaml`.
- repo: https://github.com/rhysd/actionlint
rev: v1.7.7
hooks:
- id: actionlint
stages:
# This hook is disabled by default, since it's quite slow.
# To run all hooks *including* this hook, use `uvx pre-commit run -a --hook-stage=manual`.
# To run *just* this hook, use `uvx pre-commit run -a actionlint --hook-stage=manual`.
- manual
args:
- "-ignore=SC2129" # ignorable stylistic lint from shellcheck
- "-ignore=SC2016" # another shellcheck lint: seems to have false positives?
additional_dependencies:
# actionlint has a shellcheck integration which extracts shell scripts in `run:` steps from GitHub Actions
# and checks these with shellcheck. This is arguably its most useful feature,
# but the integration only works if shellcheck is installed
- "github.com/wasilibs/go-shellcheck/cmd/shellcheck@v0.10.0"
ci:
skip: [cargo-fmt, dev-generate-all]

View File

@@ -1,84 +1,5 @@
# Breaking Changes
## 0.11.0
This is a follow-up to release 0.10.0. Because of a mistake in the release process, the `requires-python` inference changes were not included in that release. Ruff 0.11.0 now includes this change as well as the stabilization of the preview behavior for `PGH004`.
- **Changes to how the Python version is inferred when a `target-version` is not specified** ([#16319](https://github.com/astral-sh/ruff/pull/16319))
In previous versions of Ruff, you could specify your Python version with:
- The `target-version` option in a `ruff.toml` file or the `[tool.ruff]` section of a pyproject.toml file.
- The `project.requires-python` field in a `pyproject.toml` file with a `[tool.ruff]` section.
These options worked well in most cases, and are still recommended for fine control of the Python version. However, because of the way Ruff discovers config files, `pyproject.toml` files without a `[tool.ruff]` section would be ignored, including the `requires-python` setting. Ruff would then use the default Python version (3.9 as of this writing) instead, which is surprising when you've attempted to request another version.
In v0.10, config discovery has been updated to address this issue:
- If Ruff finds a `ruff.toml` file without a `target-version`, it will check
for a `pyproject.toml` file in the same directory and respect its
`requires-python` version, even if it does not contain a `[tool.ruff]`
section.
- If Ruff finds a user-level configuration, the `requires-python` field of the closest `pyproject.toml` in a parent directory will take precedence.
- If there is no config file (`ruff.toml` or `pyproject.toml` with a
`[tool.ruff]` section) in the directory of the file being checked, Ruff will
search for the closest `pyproject.toml` in the parent directories and use its
`requires-python` setting.
## 0.10.0
- **Changes to how the Python version is inferred when a `target-version` is not specified** ([#16319](https://github.com/astral-sh/ruff/pull/16319))
Because of a mistake in the release process, the `requires-python` inference changes are not included in this release and instead shipped as part of 0.11.0.
You can find a description of this change in the 0.11.0 section.
- **Updated `TYPE_CHECKING` behavior** ([#16669](https://github.com/astral-sh/ruff/pull/16669))
Previously, Ruff only recognized typechecking blocks that tested the `typing.TYPE_CHECKING` symbol. Now, Ruff recognizes any local variable named `TYPE_CHECKING`. This release also removes support for the legacy `if 0:` and `if False:` typechecking checks. Use a local `TYPE_CHECKING` variable instead (see the sketch after this list).
- **More robust noqa parsing** ([#16483](https://github.com/astral-sh/ruff/pull/16483))
The syntax for both file-level and in-line suppression comments has been unified and made more robust to certain errors. In most cases, this will result in more suppression comments being read by Ruff, but there are a few instances where previously read comments will now log an error to the user instead. Please refer to the documentation on [_Error suppression_](https://docs.astral.sh/ruff/linter/#error-suppression) for the full specification. A minimal example appears in the sketch after this list.
- **Avoid unnecessary parentheses around with statements with a single context manager and a trailing comment** ([#14005](https://github.com/astral-sh/ruff/pull/14005))
This change fixes a bug in the formatter where it introduced unnecessary parentheses around with statements with a single context manager and a trailing comment. This change may result in a change in formatting for some users.
- **Bump alpine default tag to 3.21 for derived Docker images** ([#16456](https://github.com/astral-sh/ruff/pull/16456))
Alpine 3.21 was released in Dec 2024 and is used in the official Alpine-based Python images. Now the ruff:alpine image will use 3.21 instead of 3.20 and ruff:alpine3.20 will no longer be updated.
- **\[`unsafe-markup-use`\]: `RUF035` has been recoded to `S704`** ([#15957](https://github.com/astral-sh/ruff/pull/15957))
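A minimal sketch of the `TYPE_CHECKING` and `noqa` items above. The module below is invented purely for illustration, and `E402` (module-level import not at top of file) is used only as an example rule code:

```python
# Any local variable named TYPE_CHECKING is now recognized as a type-checking
# guard; the legacy `if 0:` / `if False:` forms no longer are.
TYPE_CHECKING = False
if TYPE_CHECKING:
    import collections.abc  # evaluated by type checkers only, skipped at runtime

print("module body runs before the import below")

# In-line suppression of a single diagnostic on one line. The file-level form
# would instead be written as `# ruff: noqa: E402` at the top of the file.
import os  # noqa: E402

print(os.name)
```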
## 0.9.0
Ruff now formats your code according to the 2025 style guide. As a result, your code might now get formatted differently. See the [changelog](./CHANGELOG.md#090) for a detailed list of changes.
## 0.8.0
- **Default to Python 3.9**
Ruff now defaults to Python 3.9 instead of 3.8 if no explicit Python version is configured using [`ruff.target-version`](https://docs.astral.sh/ruff/settings/#target-version) or [`project.requires-python`](https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#python-requires) ([#13896](https://github.com/astral-sh/ruff/pull/13896))
- **Changed location of `pydoclint` diagnostics**
[`pydoclint`](https://docs.astral.sh/ruff/rules/#pydoclint-doc) diagnostics now point to the first-line of the problematic docstring. Previously, this was not the case.
If you've opted into these preview rules but have them suppressed using
[`noqa`](https://docs.astral.sh/ruff/linter/#error-suppression) comments in
some places, this change may mean that you need to move the `noqa` suppression
comments (see the sketch after this list). Most users should be unaffected by this change.
- **Use XDG (i.e. `~/.local/bin`) instead of the Cargo home directory in the standalone installer**
Previously, Ruff's installer used `$CARGO_HOME` or `~/.cargo/bin` for its target install directory. Now, Ruff will be installed into `$XDG_BIN_HOME`, `$XDG_DATA_HOME/../bin`, or `~/.local/bin` (in that order).
This change is only relevant to users of the standalone Ruff installer (using the shell or PowerShell script). If you installed Ruff using uv or pip, you should be unaffected.
- **Changes to the line width calculation**
Ruff now uses a new version of the [unicode-width](https://github.com/unicode-rs/unicode-width) Rust crate to calculate the line width. In very rare cases, this may lead to lines containing Unicode characters being reformatted, or being considered too long when they were not before ([`E501`](https://docs.astral.sh/ruff/rules/line-too-long/)).
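A sketch of the `pydoclint` item above: the diagnostic is now reported on the docstring's first line, so any suppression comment may need to move with it. `DOC201` (missing "Returns" documentation) and the function below are used purely as examples:

```python
def parse_port(raw: str) -> int:
    # With the preview pydoclint rules enabled, DOC201 is now reported on the
    # docstring's first line (the line below) rather than at its previous
    # location, so a `noqa` suppression may need to move accordingly.
    """Parse a TCP port number from a string."""
    return int(raw)
```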
## 0.7.0
- The pytest rules `PT001` and `PT023` now default to omitting the decorator parentheses when there are no arguments
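For example, assuming `pytest` is installed (the fixture and mark below are invented), the style now accepted by default is the parenthesis-free form:

```python
import pytest


@pytest.fixture  # PT001: no parentheses by default when there are no arguments
def sample_config():
    return {"retries": 3}


@pytest.mark.slow  # PT023: the same default applies to marks
def test_sample_config(sample_config):
    assert sample_config["retries"] == 3
```

Writing `@pytest.fixture()` or `@pytest.mark.slow()` with empty parentheses is what the two rules now flag by default.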
@@ -246,7 +167,7 @@ flag or `unsafe-fixes` configuration option can be used to enable unsafe fixes.
See the [docs](https://docs.astral.sh/ruff/configuration/#fix-safety) for details.
### Remove formatter-conflicting rules from the default rule set ([#7900](https://github.com/astral-sh/ruff/pull/7900))
### Remove formatter-conflicting rules from the default rule set ([#7900](https://github.com/astral-sh/ruff/pull/7900))
Previously, Ruff enabled all implemented rules in Pycodestyle (`E`) by default. Ruff now only includes the
Pycodestyle prefixes `E4`, `E7`, and `E9` to exclude rules that conflict with automatic formatters. Consequently,
@@ -259,8 +180,8 @@ This change only affects those using Ruff under its default rule set. Users that
### Remove support for emoji identifiers ([#7212](https://github.com/astral-sh/ruff/pull/7212))
Previously, Ruff supported non-standards-compliant emoji identifiers such as `📦 = 1`.
We decided to remove this non-standard language extension. Ruff now reports syntax errors for invalid emoji identifiers in your code, the same as CPython.
Previously, Ruff supported the non-standard compliant emoji identifiers e.g. `📦 = 1`.
We decided to remove this non-standard language extension, and Ruff now reports syntax errors for emoji identifiers in your code, the same as CPython.
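A minimal illustration (the identifier names are invented):

```python
# 📦 = 1  # previously accepted as a non-standard extension; now a syntax error
package_count = 1  # standards-compliant identifiers are unaffected
print(package_count)
```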
### Improved GitLab fingerprints ([#7203](https://github.com/astral-sh/ruff/pull/7203))

File diff suppressed because it is too large

View File

@@ -71,7 +71,8 @@ representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at <hey@astral.sh>.
reported to the community leaders responsible for enforcement at
<charlie.r.marsh@gmail.com>.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the

View File

@@ -2,11 +2,6 @@
Welcome! We're happy to have you here. Thank you in advance for your contribution to Ruff.
> [!NOTE]
>
> This guide is for Ruff. If you're looking to contribute to ty, please see [the ty contributing
> guide](https://github.com/astral-sh/ruff/blob/main/crates/ty/CONTRIBUTING.md).
## The Basics
Ruff welcomes contributions in the form of pull requests.
@@ -144,7 +139,7 @@ At a high level, the steps involved in adding a new lint rule are as follows:
1. Create a file for your rule (e.g., `crates/ruff_linter/src/rules/flake8_bugbear/rules/assert_false.rs`).
1. In that file, define a violation struct (e.g., `pub struct AssertFalse`). You can grep for
`#[derive(ViolationMetadata)]` to see examples.
`#[violation]` to see examples.
1. In that file, define a function that adds the violation to the diagnostic list as appropriate
(e.g., `pub(crate) fn assert_false`) based on whatever inputs are required for the rule (e.g.,
@@ -371,15 +366,6 @@ uvx --from ./python/ruff-ecosystem ruff-ecosystem format ruff "./target/debug/ru
See the [ruff-ecosystem package](https://github.com/astral-sh/ruff/tree/main/python/ruff-ecosystem) for more details.
## Upgrading Rust
1. Change the `channel` in `./rust-toolchain.toml` to the new Rust version (`<latest>`)
1. Change the `rust-version` in the `./Cargo.toml` to `<latest> - 2` (e.g. 1.84 if the latest is 1.86)
1. Run `cargo clippy --fix --allow-dirty --allow-staged` to fix new clippy warnings
1. Create and merge the PR
1. Bump the Rust version in Ruff's conda forge recipe. See [this PR](https://github.com/conda-forge/ruff-feedstock/pull/266) for an example.
1. Enjoy the new Rust version!
## Benchmarking and Profiling
We have several ways of benchmarking and profiling Ruff:
@@ -411,7 +397,7 @@ cargo install hyperfine
To benchmark the release build:
```shell
cargo build --release --bin ruff && hyperfine --warmup 10 \
cargo build --release && hyperfine --warmup 10 \
"./target/release/ruff check ./crates/ruff_linter/resources/test/cpython/ --no-cache -e" \
"./target/release/ruff check ./crates/ruff_linter/resources/test/cpython/ -e"
@@ -481,7 +467,7 @@ cargo build --release && hyperfine --warmup 10 \
"./target/release/ruff check ./crates/ruff_linter/resources/test/cpython/ --no-cache -e --select W505,E501"
```
You can run `uv venv --project ./scripts/benchmarks`, activate the venv and then run `uv sync --project ./scripts/benchmarks` to create a working environment for the
You can run `poetry install` from `./scripts/benchmarks` to create a working environment for the
above. All reported benchmarks were computed using the versions specified by
`./scripts/benchmarks/pyproject.toml` on Python 3.11.
@@ -540,7 +526,7 @@ cargo benchmark
#### Benchmark-driven Development
Ruff uses [Criterion.rs](https://bheisler.github.io/criterion.rs/book/) for benchmarks. You can use
`--save-baseline=<name>` to store an initial baseline benchmark (e.g., on `main`) and then use
`--save-baseline=<name>` to store an initial baseline benchmark (e.g. on `main`) and then use
`--benchmark=<name>` to compare against that benchmark. Criterion will print a message telling you
if the benchmark improved/regressed compared to that baseline.
@@ -610,7 +596,8 @@ Then convert the recorded profile
perf script -F +pid > /tmp/test.perf
```
You can now view the converted file with [firefox profiler](https://profiler.firefox.com/). To learn more about Firefox profiler, read the [Firefox profiler profiling-guide](https://profiler.firefox.com/docs/#/./guide-perf-profiling).
You can now view the converted file with [firefox profiler](https://profiler.firefox.com/), with a
more in-depth guide [here](https://profiler.firefox.com/docs/#/./guide-perf-profiling)
An alternative is to convert the perf data to `flamegraph.svg` using
[flamegraph](https://github.com/flamegraph-rs/flamegraph) (`cargo install flamegraph`):
@@ -691,9 +678,9 @@ utils with it:
23 Newline 24
```
- `cargo dev print-cst <file>`: Print the CST of a Python file using
- `cargo dev print-cst <file>`: Print the CST of a python file using
[LibCST](https://github.com/Instagram/LibCST), which is used in addition to the RustPython parser
in Ruff. For example, for `if True: pass # comment`, everything, including the whitespace, is represented:
in Ruff. E.g. for `if True: pass # comment` everything including the whitespace is represented:
```text
Module {
@@ -876,7 +863,7 @@ each configuration file.
The package root is used to determine a file's "module path". Consider, again, `baz.py`. In that
case, `./my_project/src/foo` was identified as the package root, so the module path for `baz.py`
would resolve to `foo.bar.baz` — as computed by taking the relative path from the package root
would resolve to `foo.bar.baz` — as computed by taking the relative path from the package root
(inclusive of the root itself). The module path can be thought of as "the path you would use to
import the module" (e.g., `import foo.bar.baz`).

2599 Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -3,9 +3,8 @@ members = ["crates/*"]
resolver = "2"
[workspace.package]
# Please update rustfmt.toml when bumping the Rust edition
edition = "2024"
rust-version = "1.85"
edition = "2021"
rust-version = "1.80"
homepage = "https://docs.astral.sh/ruff"
documentation = "https://docs.astral.sh/ruff"
repository = "https://github.com/astral-sh/ruff"
@@ -14,7 +13,6 @@ license = "MIT"
[workspace.dependencies]
ruff = { path = "crates/ruff" }
ruff_annotate_snippets = { path = "crates/ruff_annotate_snippets" }
ruff_cache = { path = "crates/ruff_cache" }
ruff_db = { path = "crates/ruff_db", default-features = false }
ruff_diagnostics = { path = "crates/ruff_diagnostics" }
@@ -24,7 +22,6 @@ ruff_index = { path = "crates/ruff_index" }
ruff_linter = { path = "crates/ruff_linter" }
ruff_macros = { path = "crates/ruff_macros" }
ruff_notebook = { path = "crates/ruff_notebook" }
ruff_options_metadata = { path = "crates/ruff_options_metadata" }
ruff_python_ast = { path = "crates/ruff_python_ast" }
ruff_python_codegen = { path = "crates/ruff_python_codegen" }
ruff_python_formatter = { path = "crates/ruff_python_formatter" }
@@ -37,56 +34,51 @@ ruff_python_trivia = { path = "crates/ruff_python_trivia" }
ruff_server = { path = "crates/ruff_server" }
ruff_source_file = { path = "crates/ruff_source_file" }
ruff_text_size = { path = "crates/ruff_text_size" }
red_knot_vendored = { path = "crates/red_knot_vendored" }
ruff_workspace = { path = "crates/ruff_workspace" }
ty = { path = "crates/ty" }
ty_ide = { path = "crates/ty_ide" }
ty_project = { path = "crates/ty_project", default-features = false }
ty_python_semantic = { path = "crates/ty_python_semantic" }
ty_server = { path = "crates/ty_server" }
ty_test = { path = "crates/ty_test" }
ty_vendored = { path = "crates/ty_vendored" }
red_knot_python_semantic = { path = "crates/red_knot_python_semantic" }
red_knot_server = { path = "crates/red_knot_server" }
red_knot_test = { path = "crates/red_knot_test" }
red_knot_workspace = { path = "crates/red_knot_workspace", default-features = false }
aho-corasick = { version = "1.1.3" }
anstream = { version = "0.6.18" }
anstyle = { version = "1.0.10" }
annotate-snippets = { version = "0.9.2", features = ["color"] }
anyhow = { version = "1.0.80" }
assert_fs = { version = "1.1.0" }
argfile = { version = "0.2.0" }
bincode = { version = "2.0.0" }
bincode = { version = "1.3.3" }
bitflags = { version = "2.5.0" }
bstr = { version = "1.9.1" }
cachedir = { version = "0.3.1" }
camino = { version = "1.1.7" }
chrono = { version = "0.4.35", default-features = false, features = ["clock"] }
clap = { version = "4.5.3", features = ["derive"] }
clap_complete_command = { version = "0.6.0" }
clearscreen = { version = "4.0.0" }
clearscreen = { version = "3.0.0" }
codspeed-criterion-compat = { version = "2.6.0", default-features = false }
colored = { version = "3.0.0" }
colored = { version = "2.1.0" }
console_error_panic_hook = { version = "0.1.7" }
console_log = { version = "1.0.0" }
countme = { version = "3.0.1" }
compact_str = "0.9.0"
criterion = { version = "0.6.0", default-features = false }
compact_str = "0.8.0"
criterion = { version = "0.5.1", default-features = false }
crossbeam = { version = "0.8.4" }
dashmap = { version = "6.0.1" }
dir-test = { version = "0.4.0" }
dir-test = { version = "0.3.0" }
dunce = { version = "1.0.5" }
drop_bomb = { version = "0.1.5" }
env_logger = { version = "0.11.0" }
etcetera = { version = "0.10.0" }
etcetera = { version = "0.8.0" }
fern = { version = "0.7.0" }
filetime = { version = "0.2.23" }
getrandom = { version = "0.3.1" }
glob = { version = "0.3.1" }
globset = { version = "0.4.14" }
globwalk = { version = "0.9.1" }
hashbrown = { version = "0.15.0", default-features = false, features = [
"raw-entry",
"equivalent",
"inline-more",
] }
heck = "0.5.0"
ignore = { version = "0.4.22" }
imara-diff = { version = "0.1.5" }
imperative = { version = "1.0.4" }
@@ -97,10 +89,9 @@ insta = { version = "1.35.1" }
insta-cmd = { version = "0.6.0" }
is-macro = { version = "0.3.5" }
is-wsl = { version = "0.4.0" }
itertools = { version = "0.14.0" }
jiff = { version = "0.2.0" }
itertools = { version = "0.13.0" }
js-sys = { version = "0.3.69" }
jod-thread = { version = "1.0.0" }
jod-thread = { version = "0.1.2" }
libc = { version = "0.2.153" }
libcst = { version = "1.1.0", default-features = false }
log = { version = "0.4.17" }
@@ -112,7 +103,7 @@ matchit = { version = "0.8.1" }
memchr = { version = "2.7.1" }
mimalloc = { version = "0.1.39" }
natord = { version = "1.0.9" }
notify = { version = "8.0.0" }
notify = { version = "7.0.0" }
ordermap = { version = "0.5.0" }
path-absolutize = { version = "3.1.1" }
path-slash = { version = "0.2.1" }
@@ -120,16 +111,14 @@ pathdiff = { version = "0.2.1" }
pep440_rs = { version = "0.7.1" }
pretty_assertions = "1.3.0"
proc-macro2 = { version = "1.0.79" }
pyproject-toml = { version = "0.13.4" }
pyproject-toml = { version = "0.9.0" }
quick-junit = { version = "0.5.0" }
quote = { version = "1.0.23" }
rand = { version = "0.9.0" }
rand = { version = "0.8.5" }
rayon = { version = "1.10.0" }
regex = { version = "1.10.2" }
rustc-hash = { version = "2.0.0" }
rustc-stable-hash = { version = "0.1.2" }
# When updating salsa, make sure to also update the revision in `fuzz/Cargo.toml`
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "7edce6e248f35c8114b4b021cdb474a3fb2813b3" }
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "254c749b02cde2fd29852a7463a33e800b771758" }
schemars = { version = "0.8.16" }
seahash = { version = "4.1.0" }
serde = { version = "1.0.197", features = ["derive"] }
@@ -142,15 +131,9 @@ serde_with = { version = "3.6.0", default-features = false, features = [
shellexpand = { version = "3.0.0" }
similar = { version = "2.4.0", features = ["inline"] }
smallvec = { version = "1.13.2" }
snapbox = { version = "0.6.0", features = [
"diff",
"term-svg",
"cmd",
"examples",
] }
static_assertions = "1.1.0"
strum = { version = "0.27.0", features = ["strum_macros"] }
strum_macros = { version = "0.27.0" }
strum = { version = "0.26.0", features = ["strum_macros"] }
strum_macros = { version = "0.26.0" }
syn = { version = "2.0.55" }
tempfile = { version = "3.9.0" }
test-case = { version = "3.3.1" }
@@ -160,20 +143,18 @@ toml = { version = "0.8.11" }
tracing = { version = "0.1.40" }
tracing-flame = { version = "0.2.0" }
tracing-indicatif = { version = "0.3.6" }
tracing-log = { version = "0.2.0" }
tracing-subscriber = { version = "0.3.18", default-features = false, features = [
"env-filter",
"fmt",
"ansi",
"smallvec"
] }
tryfn = { version = "0.2.1" }
tracing-tree = { version = "0.4.0" }
typed-arena = { version = "2.0.2" }
unic-ucd-category = { version = "0.9" }
unicode-ident = { version = "1.0.12" }
unicode-width = { version = "0.2.0" }
unicode-width = { version = "0.1.11" }
unicode_names2 = { version = "1.2.2" }
unicode-normalization = { version = "0.1.23" }
ureq = { version = "2.9.6" }
url = { version = "2.5.0" }
uuid = { version = "1.6.1", features = [
"v4",
@@ -187,10 +168,6 @@ wasm-bindgen-test = { version = "0.3.42" }
wild = { version = "2" }
zip = { version = "0.6.6", default-features = false }
[workspace.metadata.cargo-shear]
ignored = ["getrandom", "ruff_options_metadata"]
[workspace.lints.rust]
unsafe_code = "warn"
unreachable_pub = "warn"
@@ -215,8 +192,6 @@ must_use_candidate = "allow"
similar_names = "allow"
single_match_else = "allow"
too_many_lines = "allow"
needless_continue = "allow" # An explicit continue can be more readable, especially if the alternative is an empty block.
unnecessary_debug_formatting = "allow" # too many instances, the display also doesn't quote the path which is often desired in logs where we use them the most often.
# Without the hashes we run into a `rustfmt` bug in some snapshot tests, see #13250
needless_raw_string_hashes = "allow"
# Disallowed restriction lints
@@ -235,9 +210,6 @@ redundant_clone = "warn"
debug_assert_with_mut_call = "warn"
unused_peekable = "warn"
# Diagnostics are not actionable: Enable once https://github.com/rust-lang/rust-clippy/issues/13774 is resolved.
large_stack_arrays = "allow"
[profile.release]
# Note that we set these explicitly, and these values
# were chosen based on a trade-off between compile times
@@ -261,9 +233,6 @@ opt-level = 3
[profile.dev.package.similar]
opt-level = 3
[profile.dev.package.salsa]
opt-level = 3
# Reduce complexity of a parser function that would trigger a locals limit in a wasm tool.
# https://github.com/bytecodealliance/wasm-tools/blob/b5c3d98e40590512a3b12470ef358d5c7b983b15/crates/wasmparser/src/limits.rs#L29
[profile.dev.package.ruff_python_parser]
@@ -278,3 +247,64 @@ debug = 1
# The profile that 'cargo dist' will build with.
[profile.dist]
inherits = "release"
# Config for 'cargo dist'
[workspace.metadata.dist]
# The preferred cargo-dist version to use in CI (Cargo.toml SemVer syntax)
cargo-dist-version = "0.22.1"
# CI backends to support
ci = "github"
# The installers to generate for each app
installers = ["shell", "powershell"]
# The archive format to use for windows builds (defaults .zip)
windows-archive = ".zip"
# The archive format to use for non-windows builds (defaults .tar.xz)
unix-archive = ".tar.gz"
# Target platforms to build apps for (Rust target-triple syntax)
targets = [
"aarch64-apple-darwin",
"aarch64-pc-windows-msvc",
"aarch64-unknown-linux-gnu",
"aarch64-unknown-linux-musl",
"arm-unknown-linux-musleabihf",
"armv7-unknown-linux-gnueabihf",
"armv7-unknown-linux-musleabihf",
"i686-pc-windows-msvc",
"i686-unknown-linux-gnu",
"i686-unknown-linux-musl",
"powerpc64-unknown-linux-gnu",
"powerpc64le-unknown-linux-gnu",
"s390x-unknown-linux-gnu",
"x86_64-apple-darwin",
"x86_64-pc-windows-msvc",
"x86_64-unknown-linux-gnu",
"x86_64-unknown-linux-musl",
]
# Whether to auto-include files like READMEs, LICENSEs, and CHANGELOGs (default true)
auto-includes = false
# Whether cargo-dist should create a GitHub Release or use an existing draft
create-release = true
# Which actions to run on pull requests
pr-run-mode = "skip"
# Whether CI should trigger releases with dispatches instead of tag pushes
dispatch-releases = true
# Which phase cargo-dist should use to create the GitHub release
github-release = "announce"
# Whether CI should include auto-generated code to build local artifacts
build-local-artifacts = false
# Local artifacts jobs to run in CI
local-artifacts-jobs = ["./build-binaries", "./build-docker"]
# Publish jobs to run in CI
publish-jobs = ["./publish-pypi", "./publish-wasm"]
# Post-announce jobs to run in CI
post-announce-jobs = [
"./notify-dependents",
"./publish-docs",
"./publish-playground",
]
# Custom permissions for GitHub Jobs
github-custom-job-permissions = { "build-docker" = { packages = "write", contents = "read" }, "publish-wasm" = { contents = "read", id-token = "write", packages = "write" } }
# Whether to install an updater program
install-updater = false
# Path that installers should place binaries in
install-path = "CARGO_HOME"

View File

@@ -116,22 +116,9 @@ For more, see the [documentation](https://docs.astral.sh/ruff/).
### Installation
Ruff is available as [`ruff`](https://pypi.org/project/ruff/) on PyPI.
Invoke Ruff directly with [`uvx`](https://docs.astral.sh/uv/):
Ruff is available as [`ruff`](https://pypi.org/project/ruff/) on PyPI:
```shell
uvx ruff check # Lint all files in the current directory.
uvx ruff format # Format all files in the current directory.
```
Or install Ruff with `uv` (recommended), `pip`, or `pipx`:
```shell
# With uv.
uv tool install ruff@latest # Install Ruff globally.
uv add --dev ruff # Or add Ruff to your project.
# With pip.
pip install ruff
@@ -149,8 +136,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
# For a specific version.
curl -LsSf https://astral.sh/ruff/0.11.10/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.11.10/install.ps1 | iex"
curl -LsSf https://astral.sh/ruff/0.7.4/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.7.4/install.ps1 | iex"
```
You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
@@ -183,7 +170,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.11.10
rev: v0.7.4
hooks:
# Run the linter.
- id: ruff
@@ -205,7 +192,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: astral-sh/ruff-action@v3
- uses: astral-sh/ruff-action@v1
```
### Configuration<a id="configuration"></a>
@@ -251,11 +238,11 @@ exclude = [
line-length = 88
indent-width = 4
# Assume Python 3.9
target-version = "py39"
# Assume Python 3.8
target-version = "py38"
[lint]
# Enable Pyflakes (`F`) and a subset of the pycodestyle (`E`) codes by default.
# Enable Pyflakes (`F`) and a subset of the pycodestyle (`E`) codes by default.
select = ["E4", "E7", "E9", "F"]
ignore = []
@@ -452,7 +439,6 @@ Ruff is used by a number of major open-source projects and companies, including:
- ING Bank ([popmon](https://github.com/ing-bank/popmon), [probatus](https://github.com/ing-bank/probatus))
- [Ibis](https://github.com/ibis-project/ibis)
- [ivy](https://github.com/unifyai/ivy)
- [JAX](https://github.com/jax-ml/jax)
- [Jupyter](https://github.com/jupyter-server/jupyter_server)
- [Kraken Tech](https://kraken.tech/)
- [LangChain](https://github.com/hwchase17/langchain)

View File

@@ -1,15 +0,0 @@
# Security policy
## Reporting a vulnerability
If you have found a possible vulnerability, please email `security at astral dot sh`.
## Bug bounties
While we sincerely appreciate and encourage reports of suspected security problems, please note that
Astral does not currently run any bug bounty programs.
## Vulnerability disclosures
Critical vulnerabilities will be disclosed via GitHub's
[security advisory](https://github.com/astral-sh/ruff/security) system.

View File

@@ -1,9 +1,10 @@
[files]
# https://github.com/crate-ci/typos/issues/868
extend-exclude = [
"crates/ty_vendored/vendor/**/*",
"**/resources/**/*",
"**/snapshots/**/*",
"crates/red_knot_vendored/vendor/**/*",
"**/resources/**/*",
"**/snapshots/**/*",
"crates/red_knot_workspace/src/workspace/pyproject/package_name.rs"
]
[default.extend-words]
@@ -20,14 +21,7 @@ Numer = "Numer" # Library name 'NumerBlox' in "Who's Using Ruff?"
[default]
extend-ignore-re = [
# Line ignore with trailing "spellchecker:disable-line"
"(?Rm)^.*#\\s*spellchecker:disable-line$",
"LICENSEs",
# Various third party dependencies uses `typ` as struct field names (e.g., lsp_types::LogMessageParams)
"typ",
# TODO: Remove this once the `TYP` redirects are removed from `rule_redirects.rs`
"TYP",
# Line ignore with trailing "spellchecker:disable-line"
"(?Rm)^.*#\\s*spellchecker:disable-line$",
"LICENSEs",
]
[default.extend-identifiers]
"FrIeNdLy" = "FrIeNdLy"

View File

@@ -1,25 +1,21 @@
doc-valid-idents = [
"..",
"CodeQL",
"FastAPI",
"IPython",
"LangChain",
"LibCST",
"McCabe",
"NumPy",
"SCREAMING_SNAKE_CASE",
"SQLAlchemy",
"StackOverflow",
"PyCharm",
"SNMPv1",
"SNMPv2",
"SNMPv3",
"PyFlakes"
"..",
"CodeQL",
"FastAPI",
"IPython",
"LangChain",
"LibCST",
"McCabe",
"NumPy",
"SCREAMING_SNAKE_CASE",
"SQLAlchemy",
"StackOverflow",
"PyCharm",
]
ignore-interior-mutability = [
# Interned is read-only. The wrapped `Rc` never gets updated.
"ruff_formatter::format_element::Interned",
# The expression is read-only.
"ruff_python_ast::hashable::HashableExpr",
# Interned is read-only. The wrapped `Rc` never gets updated.
"ruff_formatter::format_element::Interned",
# The expression is read-only.
"ruff_python_ast::hashable::HashableExpr",
]

View File

@@ -0,0 +1,40 @@
[package]
name = "red_knot"
version = "0.0.0"
edition.workspace = true
rust-version.workspace = true
homepage.workspace = true
documentation.workspace = true
repository.workspace = true
authors.workspace = true
license.workspace = true
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
red_knot_python_semantic = { workspace = true }
red_knot_workspace = { workspace = true, features = ["zstd"] }
red_knot_server = { workspace = true }
ruff_db = { workspace = true, features = ["os", "cache"] }
anyhow = { workspace = true }
chrono = { workspace = true }
clap = { workspace = true, features = ["wrap_help"] }
colored = { workspace = true }
countme = { workspace = true, features = ["enable"] }
crossbeam = { workspace = true }
ctrlc = { version = "3.4.4" }
rayon = { workspace = true }
salsa = { workspace = true }
tracing = { workspace = true, features = ["release_max_level_debug"] }
tracing-subscriber = { workspace = true, features = ["env-filter", "fmt"] }
tracing-flame = { workspace = true }
tracing-tree = { workspace = true }
[dev-dependencies]
filetime = { workspace = true }
tempfile = { workspace = true }
ruff_db = { workspace = true, features = ["testing"] }
[lints]
workspace = true

View File

(binary image changed; 40 KiB before and after)

View File

@@ -1,7 +1,6 @@
# Tracing
Traces are a useful tool to narrow down the location of a bug or, at least, to understand why the compiler is doing a
particular thing.
Traces are a useful tool to narrow down the location of a bug or, at least, to understand why the compiler is doing a particular thing.
Note, tracing messages with severity `debug` or greater are user-facing. They should be phrased accordingly.
Tracing spans are only shown when using `-vvv`.
@@ -10,28 +9,20 @@ Tracing spans are only shown when using `-vvv`.
The CLI supports different verbosity levels.
- default: Only show errors and warnings.
- `-v` activates `info!`: Show generally useful information such as paths of configuration files, detected platform,
etc., but it's not a lot of messages, it's something you'll activate in CI by default. cargo build e.g. shows you
which packages are fresh.
- `-vv` activates `debug!` and timestamps: This should be enough information to get to the bottom of bug reports. When
you're processing many packages or files, you'll get pages and pages of output, but each line is linked to a specific
action or state change.
- `-vvv` activates `trace!` (only in debug builds) and shows tracing-spans: At this level, you're logging everything.
Most of this is wasted, it's really slow, we dump e.g. the entire resolution graph. Only useful to developers, and you
almost certainly want to use `TY_LOG` to filter it down to the area you're investigating.
- `-v` activates `info!`: Show generally useful information such as paths of configuration files, detected platform, etc., but it's not a lot of messages, it's something you'll activate in CI by default. cargo build e.g. shows you which packages are fresh.
- `-vv` activates `debug!` and timestamps: This should be enough information to get to the bottom of bug reports. When you're processing many packages or files, you'll get pages and pages of output, but each line is link to a specific action or state change.
- `-vvv` activates `trace!` (only in debug builds) and shows tracing-spans: At this level, you're logging everything. Most of this is wasted, it's really slow, we dump e.g. the entire resolution graph. Only useful to developers, and you almost certainly want to use `RED_KNOT_LOG` to filter it down to the area your investigating.
## Better logging with `TY_LOG` and `TY_MAX_PARALLELISM`
## Better logging with `RED_KNOT_LOG` and `RAYON_NUM_THREADS`
By default, the CLI shows messages from the `ruff` and `ty` crates. Tracing messages from other crates are not shown.
The `TY_LOG` environment variable allows you to customize which messages are shown by specifying one
or
more [filter directives](https://docs.rs/tracing-subscriber/latest/tracing_subscriber/filter/struct.EnvFilter.html#directives).
By default, the CLI shows messages from the `ruff` and `red_knot` crates. Tracing messages from other crates are not shown.
The `RED_KNOT_LOG` environment variable allows you to customize which messages are shown by specifying one
or more [filter directives](https://docs.rs/tracing-subscriber/latest/tracing_subscriber/filter/struct.EnvFilter.html#directives).
The `TY_MAX_PARALLELISM` environment variable, meanwhile, can be used to control the level of parallelism ty uses.
By default, ty will attempt to parallelize its work so that multiple files are checked simultaneously,
but this can result in confused logging output where messages from different threads are intertwined and appear in a
non-deterministic order.
To switch off parallelism entirely and have more readable logs, use `TY_MAX_PARALLELISM=1` (or `RAYON_NUM_THREADS=1`).
The `RAYON_NUM_THREADS` environment variable, meanwhile, can be used to control the level of concurrency red-knot uses.
By default, red-knot will attempt to parallelize its work so that multiple files are checked simultaneously,
but this can result in a confused logging output where messages from different threads are intertwined.
To switch off concurrency entirely and have more readable logs, use `RAYON_NUM_THREADS=1`.
### Examples
@@ -40,23 +31,23 @@ To switch off parallelism entirely and have more readable logs, use `TY_MAX_PARA
Shows debug messages from all crates.
```bash
TY_LOG=debug
RED_KNOT_LOG=debug
```
#### Show salsa query execution messages
Show the salsa `execute: my_query` messages in addition to all ty messages.
Show the salsa `execute: my_query` messages in addition to all red knot messages.
```bash
TY_LOG=ruff=trace,ty=trace,salsa=info
RED_KNOT_LOG=ruff=trace,red_knot=trace,salsa=info
```
#### Show typing traces
Only show traces for the `ty_python_semantic::types` module.
Only show traces for the `red_knot_python_semantic::types` module.
```bash
TY_LOG="ty_python_semantic::types"
RED_KNOT_LOG="red_knot_python_semantic::types"
```
Note: Ensure that you use `-vvv` to see tracing spans.
@@ -66,7 +57,7 @@ Note: Ensure that you use `-vvv` to see tracing spans.
Shows all messages that are inside of a span for a specific file.
```bash
TY_LOG=ty[{file=/home/micha/astral/test/x.py}]=trace
RED_KNOT_LOG=red_knot[{file=/home/micha/astral/test/x.py}]=trace
```
**Note**: Tracing still shows all spans because tracing can't know at the time of entering the span
@@ -88,24 +79,22 @@ query to return the failure as part of the query's result or use a Salsa accumul
## Tracing in tests
You can use `ruff_db::testing::setup_logging` or `ruff_db::testing::setup_logging_with_filter` to set up logging in
tests.
You can use `ruff_db::testing::setup_logging` or `ruff_db::testing::setup_logging_with_filter` to set up logging in tests.
```rust
use ruff_db::testing::setup_logging;
#[test]
fn test() {
let _logging = setup_logging();
let _logging = setup_logging();
tracing::info!("This message will be printed to stderr");
tracing::info!("This message will be printed to stderr");
}
```
Note: Most test runners capture stderr and only show its output when a test fails.
Note also that `setup_logging` only sets up logging for the current thread because
[`set_global_default`](https://docs.rs/tracing/latest/tracing/subscriber/fn.set_global_default.html) can only be
Note also that `setup_logging` only sets up logging for the current thread because [`set_global_default`](https://docs.rs/tracing/latest/tracing/subscriber/fn.set_global_default.html) can only be
called **once**.
## Release builds
@@ -114,11 +103,10 @@ called **once**.
## Profiling
ty generates a folded stack trace to the current directory named `tracing.folded` when setting the environment variable
`TY_LOG_PROFILE` to `1` or `true`.
Red Knot generates a folded stack trace to the current directory named `tracing.folded` when setting the environment variable `RED_KNOT_LOG_PROFILE` to `1` or `true`.
```bash
TY_LOG_PROFILE=1 ty -- --current-directory=../test -vvv
RED_KNOT_LOG_PROFILE=1 red_knot -- --current-directory=../test -vvv
```
You can convert the textual representation into a visual one using `inferno`.

View File

@@ -1,17 +1,16 @@
//! Sets up logging for ty
//! Sets up logging for Red Knot
use crate::args::TerminalColor;
use anyhow::Context;
use colored::Colorize;
use std::fmt;
use std::fs::File;
use std::io::{BufWriter, IsTerminal};
use std::io::BufWriter;
use tracing::{Event, Subscriber};
use tracing_subscriber::EnvFilter;
use tracing_subscriber::filter::LevelFilter;
use tracing_subscriber::fmt::format::Writer;
use tracing_subscriber::fmt::{FmtContext, FormatEvent, FormatFields};
use tracing_subscriber::registry::LookupSpan;
use tracing_subscriber::EnvFilter;
/// Logging flags to `#[command(flatten)]` into your CLI
#[derive(clap::Args, Debug, Clone, Default)]
@@ -43,14 +42,14 @@ impl Verbosity {
#[derive(Debug, Copy, Clone, Eq, PartialEq, Ord, PartialOrd)]
pub(crate) enum VerbosityLevel {
/// Default output level. Only shows Ruff and ty events up to the [`WARN`](tracing::Level::WARN).
/// Default output level. Only shows Ruff and Red Knot events up to the [`WARN`](tracing::Level::WARN).
Default,
/// Enables verbose output. Emits Ruff and ty events up to the [`INFO`](tracing::Level::INFO).
/// Enables verbose output. Emits Ruff and Red Knot events up to the [`INFO`](tracing::Level::INFO).
/// Corresponds to `-v`.
Verbose,
/// Enables a more verbose tracing format and emits Ruff and ty events up to [`DEBUG`](tracing::Level::DEBUG).
/// Enables a more verbose tracing format and emits Ruff and Red Knot events up to [`DEBUG`](tracing::Level::DEBUG).
/// Corresponds to `-vv`
ExtraVerbose,
@@ -77,17 +76,14 @@ impl VerbosityLevel {
}
}
pub(crate) fn setup_tracing(
level: VerbosityLevel,
color: TerminalColor,
) -> anyhow::Result<TracingGuard> {
pub(crate) fn setup_tracing(level: VerbosityLevel) -> anyhow::Result<TracingGuard> {
use tracing_subscriber::prelude::*;
// The `TY_LOG` environment variable overrides the default log level.
let filter = if let Ok(log_env_variable) = std::env::var("TY_LOG") {
// The `RED_KNOT_LOG` environment variable overrides the default log level.
let filter = if let Ok(log_env_variable) = std::env::var("RED_KNOT_LOG") {
EnvFilter::builder()
.parse(log_env_variable)
.context("Failed to parse directives specified in TY_LOG environment variable.")?
.context("Failed to parse directives specified in RED_KNOT_LOG environment variable.")?
} else {
match level {
VerbosityLevel::Default => {
@@ -97,9 +93,9 @@ pub(crate) fn setup_tracing(
level => {
let level_filter = level.level_filter();
// Show info|debug|trace events, but allow `TY_LOG` to override
// Show info|debug|trace events, but allow `RED_KNOT_LOG` to override
let filter = EnvFilter::default().add_directive(
format!("ty={level_filter}")
format!("red_knot={level_filter}")
.parse()
.expect("Hardcoded directive to be valid"),
);
@@ -119,33 +115,27 @@ pub(crate) fn setup_tracing(
.with(filter)
.with(profiling_layer);
let ansi = match color {
TerminalColor::Auto => {
colored::control::SHOULD_COLORIZE.should_colorize() && std::io::stderr().is_terminal()
}
TerminalColor::Always => true,
TerminalColor::Never => false,
};
if level.is_trace() {
let subscriber = registry.with(
tracing_subscriber::fmt::layer()
.event_format(tracing_subscriber::fmt::format().pretty())
tracing_tree::HierarchicalLayer::default()
.with_indent_lines(true)
.with_indent_amount(2)
.with_bracketed_fields(true)
.with_thread_ids(true)
.with_ansi(ansi)
.with_writer(std::io::stderr),
.with_targets(true)
.with_writer(std::io::stderr)
.with_timer(tracing_tree::time::Uptime::default()),
);
subscriber.init();
} else {
let subscriber = registry.with(
tracing_subscriber::fmt::layer()
.event_format(TyFormat {
.event_format(RedKnotFormat {
display_level: true,
display_timestamp: level.is_extra_verbose(),
show_spans: false,
})
.with_ansi(ansi)
.with_writer(std::io::stderr),
);
@@ -157,7 +147,7 @@ pub(crate) fn setup_tracing(
})
}
#[expect(clippy::type_complexity)]
#[allow(clippy::type_complexity)]
fn setup_profile<S>() -> (
Option<tracing_flame::FlameLayer<S, BufWriter<File>>>,
Option<tracing_flame::FlushGuard<BufWriter<File>>>,
@@ -165,7 +155,7 @@ fn setup_profile<S>() -> (
where
S: Subscriber + for<'span> LookupSpan<'span>,
{
if let Ok("1" | "true") = std::env::var("TY_LOG_PROFILE").as_deref() {
if let Ok("1" | "true") = std::env::var("RED_KNOT_LOG_PROFILE").as_deref() {
let (layer, guard) = tracing_flame::FlameLayer::with_file("tracing.folded")
.expect("Flame layer to be created");
(Some(layer), Some(guard))
@@ -178,14 +168,14 @@ pub(crate) struct TracingGuard {
_flame_guard: Option<tracing_flame::FlushGuard<BufWriter<File>>>,
}
struct TyFormat {
struct RedKnotFormat {
display_timestamp: bool,
display_level: bool,
show_spans: bool,
}
/// See <https://docs.rs/tracing-subscriber/0.3.18/src/tracing_subscriber/fmt/format/mod.rs.html#1026-1156>
impl<S, N> FormatEvent<S, N> for TyFormat
impl<S, N> FormatEvent<S, N> for RedKnotFormat
where
S: Subscriber + for<'a> LookupSpan<'a>,
N: for<'a> FormatFields<'a> + 'static,
@@ -200,8 +190,8 @@ where
let ansi = writer.has_ansi_escapes();
if self.display_timestamp {
let timestamp = jiff::Zoned::now()
.strftime("%Y-%m-%d %H:%M:%S.%f")
let timestamp = chrono::Local::now()
.format("%Y-%m-%d %H:%M:%S.%f")
.to_string();
if ansi {
write!(writer, "{} ", timestamp.dimmed())?;
@@ -209,7 +199,7 @@ where
write!(
writer,
"{} ",
jiff::Zoned::now().strftime("%Y-%m-%d %H:%M:%S.%f")
chrono::Local::now().format("%Y-%m-%d %H:%M:%S.%f")
)?;
}
}

388 crates/red_knot/src/main.rs Normal file
View File

@@ -0,0 +1,388 @@
use std::process::{ExitCode, Termination};
use std::sync::Mutex;
use anyhow::{anyhow, Context};
use clap::Parser;
use colored::Colorize;
use crossbeam::channel as crossbeam_channel;
use red_knot_python_semantic::SitePackages;
use red_knot_server::run_server;
use red_knot_workspace::db::RootDatabase;
use red_knot_workspace::watch;
use red_knot_workspace::watch::WorkspaceWatcher;
use red_knot_workspace::workspace::settings::Configuration;
use red_knot_workspace::workspace::WorkspaceMetadata;
use ruff_db::diagnostic::Diagnostic;
use ruff_db::system::{OsSystem, System, SystemPath, SystemPathBuf};
use salsa::plumbing::ZalsaDatabase;
use target_version::TargetVersion;
use crate::logging::{setup_tracing, Verbosity};
mod logging;
mod target_version;
mod verbosity;
#[derive(Debug, Parser)]
#[command(
author,
name = "red-knot",
about = "An extremely fast Python type checker."
)]
#[command(version)]
struct Args {
#[command(subcommand)]
pub(crate) command: Option<Command>,
#[arg(
long,
help = "Changes the current working directory.",
long_help = "Changes the current working directory before any specified operations. This affects the workspace and configuration discovery.",
value_name = "PATH"
)]
current_directory: Option<SystemPathBuf>,
#[arg(
long,
help = "Path to the virtual environment the project uses",
long_help = "\
Path to the virtual environment the project uses. \
If provided, red-knot will use the `site-packages` directory of this virtual environment \
to resolve type information for the project's third-party dependencies.",
value_name = "PATH"
)]
venv_path: Option<SystemPathBuf>,
#[arg(
long,
value_name = "DIRECTORY",
help = "Custom directory to use for stdlib typeshed stubs"
)]
custom_typeshed_dir: Option<SystemPathBuf>,
#[arg(
long,
value_name = "PATH",
help = "Additional path to use as a module-resolution source (can be passed multiple times)"
)]
extra_search_path: Option<Vec<SystemPathBuf>>,
#[arg(
long,
help = "Python version to assume when resolving types",
value_name = "VERSION"
)]
target_version: Option<TargetVersion>,
#[clap(flatten)]
verbosity: Verbosity,
#[arg(
long,
help = "Run in watch mode by re-running whenever files change",
short = 'W'
)]
watch: bool,
}
impl Args {
fn to_configuration(&self, cli_cwd: &SystemPath) -> Configuration {
let mut configuration = Configuration::default();
if let Some(target_version) = self.target_version {
configuration.target_version = Some(target_version.into());
}
if let Some(venv_path) = &self.venv_path {
configuration.search_paths.site_packages = Some(SitePackages::Derived {
venv_path: SystemPath::absolute(venv_path, cli_cwd),
});
}
if let Some(custom_typeshed_dir) = &self.custom_typeshed_dir {
configuration.search_paths.custom_typeshed =
Some(SystemPath::absolute(custom_typeshed_dir, cli_cwd));
}
if let Some(extra_search_paths) = &self.extra_search_path {
configuration.search_paths.extra_paths = extra_search_paths
.iter()
.map(|path| Some(SystemPath::absolute(path, cli_cwd)))
.collect();
}
configuration
}
}
#[derive(Debug, clap::Subcommand)]
pub enum Command {
/// Start the language server
Server,
}
#[allow(clippy::print_stdout, clippy::unnecessary_wraps, clippy::print_stderr)]
pub fn main() -> ExitStatus {
run().unwrap_or_else(|error| {
use std::io::Write;
// Use `writeln` instead of `eprintln` to avoid panicking when the stderr pipe is broken.
let mut stderr = std::io::stderr().lock();
// This communicates that this isn't a linter error but Red Knot itself hard-errored for
// some reason (e.g. failed to resolve the configuration)
writeln!(stderr, "{}", "Red Knot failed".red().bold()).ok();
// Currently we generally only see one error, but e.g. with io errors when resolving
// the configuration it is helpful to chain errors ("resolving configuration failed" ->
// "failed to read file: subdir/pyproject.toml")
for cause in error.chain() {
writeln!(stderr, " {} {cause}", "Cause:".bold()).ok();
}
ExitStatus::Error
})
}
fn run() -> anyhow::Result<ExitStatus> {
let args = Args::parse_from(std::env::args());
if matches!(args.command, Some(Command::Server)) {
return run_server().map(|()| ExitStatus::Success);
}
let verbosity = args.verbosity.level();
countme::enable(verbosity.is_trace());
let _guard = setup_tracing(verbosity)?;
// The base path to which all CLI arguments are relative to.
let cli_base_path = {
let cwd = std::env::current_dir().context("Failed to get the current working directory")?;
SystemPathBuf::from_path_buf(cwd)
.map_err(|path| {
anyhow!(
"The current working directory `{}` contains non-Unicode characters. Red Knot only supports Unicode paths.",
path.display()
)
})?
};
let cwd = args
.current_directory
.as_ref()
.map(|cwd| {
if cwd.as_std_path().is_dir() {
Ok(SystemPath::absolute(cwd, &cli_base_path))
} else {
Err(anyhow!(
"Provided current-directory path `{cwd}` is not a directory"
))
}
})
.transpose()?
.unwrap_or_else(|| cli_base_path.clone());
let system = OsSystem::new(cwd.clone());
let cli_configuration = args.to_configuration(&cwd);
let workspace_metadata = WorkspaceMetadata::discover(
system.current_directory(),
&system,
Some(&cli_configuration),
)?;
// TODO: Use the `program_settings` to compute the key for the database's persistent
// cache and load the cache if it exists.
let mut db = RootDatabase::new(workspace_metadata, system)?;
let (main_loop, main_loop_cancellation_token) = MainLoop::new(cli_configuration);
// Listen to Ctrl+C and abort the watch mode.
let main_loop_cancellation_token = Mutex::new(Some(main_loop_cancellation_token));
ctrlc::set_handler(move || {
let mut lock = main_loop_cancellation_token.lock().unwrap();
if let Some(token) = lock.take() {
token.stop();
}
})?;
let exit_status = if args.watch {
main_loop.watch(&mut db)?
} else {
main_loop.run(&mut db)
};
tracing::trace!("Counts for entire CLI run:\n{}", countme::get_all());
std::mem::forget(db);
Ok(exit_status)
}
#[derive(Copy, Clone)]
pub enum ExitStatus {
/// Checking was successful and there were no errors.
Success = 0,
/// Checking was successful but there were errors.
Failure = 1,
/// Checking failed.
Error = 2,
}
impl Termination for ExitStatus {
fn report(self) -> ExitCode {
ExitCode::from(self as u8)
}
}
struct MainLoop {
/// Sender that can be used to send messages to the main loop.
sender: crossbeam_channel::Sender<MainLoopMessage>,
/// Receiver for the messages sent **to** the main loop.
receiver: crossbeam_channel::Receiver<MainLoopMessage>,
/// The file system watcher, if running in watch mode.
watcher: Option<WorkspaceWatcher>,
cli_configuration: Configuration,
}
impl MainLoop {
fn new(cli_configuration: Configuration) -> (Self, MainLoopCancellationToken) {
let (sender, receiver) = crossbeam_channel::bounded(10);
(
Self {
sender: sender.clone(),
receiver,
watcher: None,
cli_configuration,
},
MainLoopCancellationToken { sender },
)
}
fn watch(mut self, db: &mut RootDatabase) -> anyhow::Result<ExitStatus> {
tracing::debug!("Starting watch mode");
let sender = self.sender.clone();
let watcher = watch::directory_watcher(move |event| {
sender.send(MainLoopMessage::ApplyChanges(event)).unwrap();
})?;
self.watcher = Some(WorkspaceWatcher::new(watcher, db));
self.run(db);
Ok(ExitStatus::Success)
}
fn run(mut self, db: &mut RootDatabase) -> ExitStatus {
self.sender.send(MainLoopMessage::CheckWorkspace).unwrap();
let result = self.main_loop(db);
tracing::debug!("Exiting main loop");
result
}
fn main_loop(&mut self, db: &mut RootDatabase) -> ExitStatus {
// Schedule the first check.
tracing::debug!("Starting main loop");
let mut revision = 0u64;
while let Ok(message) = self.receiver.recv() {
match message {
MainLoopMessage::CheckWorkspace => {
let db = db.snapshot();
let sender = self.sender.clone();
// Spawn a new task that checks the workspace. This needs to be done in a separate thread
// to prevent blocking the main loop here.
rayon::spawn(move || {
if let Ok(result) = db.check() {
// Send the result back to the main loop for printing.
sender
.send(MainLoopMessage::CheckCompleted { result, revision })
.unwrap();
}
});
}
MainLoopMessage::CheckCompleted {
result,
revision: check_revision,
} => {
let has_diagnostics = !result.is_empty();
if check_revision == revision {
#[allow(clippy::print_stdout)]
for diagnostic in result {
println!("{}", diagnostic.display(db));
}
} else {
tracing::debug!(
"Discarding check result for outdated revision: current: {revision}, result revision: {check_revision}"
);
}
if self.watcher.is_none() {
return if has_diagnostics {
ExitStatus::Failure
} else {
ExitStatus::Success
};
}
tracing::trace!("Counts after last check:\n{}", countme::get_all());
}
MainLoopMessage::ApplyChanges(changes) => {
revision += 1;
// Automatically cancels any pending queries and waits for them to complete.
db.apply_changes(changes, Some(&self.cli_configuration));
if let Some(watcher) = self.watcher.as_mut() {
watcher.update(db);
}
self.sender.send(MainLoopMessage::CheckWorkspace).unwrap();
}
MainLoopMessage::Exit => {
// Cancel any pending queries and wait for them to complete.
// TODO: Don't use Salsa internal APIs
// [Zulip-Thread](https://salsa.zulipchat.com/#narrow/stream/333573-salsa-3.2E0/topic/Expose.20an.20API.20to.20cancel.20other.20queries)
let _ = db.zalsa_mut();
return ExitStatus::Success;
}
}
tracing::debug!("Waiting for next main loop message.");
}
ExitStatus::Success
}
}
#[derive(Debug)]
struct MainLoopCancellationToken {
sender: crossbeam_channel::Sender<MainLoopMessage>,
}
impl MainLoopCancellationToken {
fn stop(self) {
self.sender.send(MainLoopMessage::Exit).unwrap();
}
}
/// Message sent from the orchestrator to the main loop.
#[derive(Debug)]
enum MainLoopMessage {
CheckWorkspace,
CheckCompleted {
result: Vec<Box<dyn Diagnostic>>,
revision: u64,
},
ApplyChanges(Vec<watch::ChangeEvent>),
Exit,
}

View File

@@ -0,0 +1,62 @@
/// Enumeration of all supported Python versions
///
/// TODO: unify with the `PythonVersion` enum in the linter/formatter crates?
#[derive(Copy, Clone, Hash, Debug, PartialEq, Eq, PartialOrd, Ord, Default, clap::ValueEnum)]
pub enum TargetVersion {
Py37,
Py38,
#[default]
Py39,
Py310,
Py311,
Py312,
Py313,
}
impl TargetVersion {
const fn as_str(self) -> &'static str {
match self {
Self::Py37 => "py37",
Self::Py38 => "py38",
Self::Py39 => "py39",
Self::Py310 => "py310",
Self::Py311 => "py311",
Self::Py312 => "py312",
Self::Py313 => "py313",
}
}
}
impl std::fmt::Display for TargetVersion {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(self.as_str())
}
}
impl From<TargetVersion> for red_knot_python_semantic::PythonVersion {
fn from(value: TargetVersion) -> Self {
match value {
TargetVersion::Py37 => Self::PY37,
TargetVersion::Py38 => Self::PY38,
TargetVersion::Py39 => Self::PY39,
TargetVersion::Py310 => Self::PY310,
TargetVersion::Py311 => Self::PY311,
TargetVersion::Py312 => Self::PY312,
TargetVersion::Py313 => Self::PY313,
}
}
}
#[cfg(test)]
mod tests {
use crate::target_version::TargetVersion;
use red_knot_python_semantic::PythonVersion;
#[test]
fn same_default_as_python_version() {
assert_eq!(
PythonVersion::from(TargetVersion::default()),
PythonVersion::default()
);
}
}

View File

@@ -0,0 +1 @@

File diff suppressed because it is too large

View File

@@ -1,5 +1,5 @@
[package]
name = "ty_python_semantic"
name = "red_knot_python_semantic"
version = "0.0.0"
publish = false
authors = { workspace = true }
@@ -12,54 +12,43 @@ license = { workspace = true }
[dependencies]
ruff_db = { workspace = true }
ruff_index = { workspace = true, features = ["salsa"] }
ruff_macros = { workspace = true }
ruff_python_ast = { workspace = true, features = ["salsa"] }
ruff_index = { workspace = true }
ruff_python_ast = { workspace = true }
ruff_python_parser = { workspace = true }
ruff_python_stdlib = { workspace = true }
ruff_source_file = { workspace = true }
ruff_text_size = { workspace = true }
ruff_python_literal = { workspace = true }
ruff_python_trivia = { workspace = true }
anyhow = { workspace = true }
bitflags = { workspace = true }
camino = { workspace = true }
compact_str = { workspace = true }
countme = { workspace = true }
drop_bomb = { workspace = true }
indexmap = { workspace = true }
itertools = { workspace = true }
ordermap = { workspace = true }
salsa = { workspace = true, features = ["compact_str"] }
salsa = { workspace = true }
thiserror = { workspace = true }
tracing = { workspace = true }
rustc-hash = { workspace = true }
hashbrown = { workspace = true }
schemars = { workspace = true, optional = true }
serde = { workspace = true, optional = true }
smallvec = { workspace = true }
static_assertions = { workspace = true }
test-case = { workspace = true }
memchr = { workspace = true }
strum = { workspace = true }
strum_macros = { workspace = true }
[dev-dependencies]
ruff_db = { workspace = true, features = ["testing", "os"] }
ruff_db = { workspace = true, features = ["os", "testing"] }
ruff_python_parser = { workspace = true }
ty_test = { workspace = true }
ty_vendored = { workspace = true }
red_knot_test = { workspace = true }
red_knot_vendored = { workspace = true }
anyhow = { workspace = true }
dir-test = { workspace = true }
insta = { workspace = true }
tempfile = { workspace = true }
quickcheck = { version = "1.0.3", default-features = false }
quickcheck_macros = { version = "1.0.0" }
[features]
serde = ["ruff_db/serde", "dep:serde", "ruff_python_ast/serde"]
[lints]
workspace = true

View File

@@ -1,4 +1,4 @@
Markdown files within the `mdtest/` subdirectory are tests of type inference and type checking,
executed by the `tests/mdtest.rs` integration test.
See `crates/ty_test/README.md` for documentation of this test format.
See `crates/red_knot_test/README.md` for documentation of this test format.

View File

@@ -45,13 +45,3 @@ def f():
# revealed: int | None
reveal_type(a)
```
## Invalid
```py
from typing import Optional
# error: [invalid-type-form] "`typing.Optional` requires exactly one argument when used in a type expression"
def f(x: Optional) -> None:
reveal_type(x) # revealed: Unknown
```

View File

@@ -1,10 +1,5 @@
# Starred expression annotations
```toml
[environment]
python-version = "3.11"
```
Type annotations for `*args` can be starred expressions themselves:
```py
@@ -13,10 +8,11 @@ from typing_extensions import TypeVarTuple
Ts = TypeVarTuple("Ts")
def append_int(*args: *Ts) -> tuple[*Ts, int]:
reveal_type(args) # revealed: @Todo(PEP 646)
# TODO: should show some representation of the variadic generic type
reveal_type(args) # revealed: @Todo
return (*args, 1)
# TODO should be tuple[Literal[True], Literal["a"], int]
reveal_type(append_int(True, "a")) # revealed: @Todo(PEP 646)
reveal_type(append_int(True, "a")) # revealed: @Todo
```

View File

@@ -0,0 +1,191 @@
# String annotations
## Simple
```py
def f() -> "int":
return 1
reveal_type(f()) # revealed: int
```
## Nested
```py
def f() -> "'int'":
return 1
reveal_type(f()) # revealed: int
```
## Type expression
```py
def f1() -> "int | str":
return 1
def f2() -> "tuple[int, str]":
return 1
reveal_type(f1()) # revealed: int | str
reveal_type(f2()) # revealed: tuple[int, str]
```
## Partial
```py
def f() -> tuple[int, "str"]:
return 1
reveal_type(f()) # revealed: tuple[int, str]
```
## Deferred
```py
def f() -> "Foo":
return Foo()
class Foo:
pass
reveal_type(f()) # revealed: Foo
```
## Deferred (undefined)
```py
# error: [unresolved-reference]
def f() -> "Foo":
pass
reveal_type(f()) # revealed: Unknown
```
## Partial deferred
```py
def f() -> int | "Foo":
return 1
class Foo:
pass
reveal_type(f()) # revealed: int | Foo
```
## `typing.Literal`
```py
from typing import Literal
def f1() -> Literal["Foo", "Bar"]:
return "Foo"
def f2() -> 'Literal["Foo", "Bar"]':
return "Foo"
class Foo:
pass
reveal_type(f1()) # revealed: Literal["Foo", "Bar"]
reveal_type(f2()) # revealed: Literal["Foo", "Bar"]
```
## Various string kinds
```py
# error: [annotation-raw-string] "Type expressions cannot use raw string literal"
def f1() -> r"int":
return 1
# error: [annotation-f-string] "Type expressions cannot use f-strings"
def f2() -> f"int":
return 1
# error: [annotation-byte-string] "Type expressions cannot use bytes literal"
def f3() -> b"int":
return 1
def f4() -> "int":
return 1
# error: [annotation-implicit-concat] "Type expressions cannot span multiple string literals"
def f5() -> "in" "t":
return 1
# error: [annotation-escape-character] "Type expressions cannot contain escape characters"
def f6() -> "\N{LATIN SMALL LETTER I}nt":
return 1
# error: [annotation-escape-character] "Type expressions cannot contain escape characters"
def f7() -> "\x69nt":
return 1
def f8() -> """int""":
return 1
# error: [annotation-byte-string] "Type expressions cannot use bytes literal"
def f9() -> "b'int'":
return 1
reveal_type(f1()) # revealed: Unknown
reveal_type(f2()) # revealed: Unknown
reveal_type(f3()) # revealed: Unknown
reveal_type(f4()) # revealed: int
reveal_type(f5()) # revealed: Unknown
reveal_type(f6()) # revealed: Unknown
reveal_type(f7()) # revealed: Unknown
reveal_type(f8()) # revealed: int
reveal_type(f9()) # revealed: Unknown
```
## Various string kinds in `typing.Literal`
```py
from typing import Literal
def f() -> Literal["a", r"b", b"c", "d" "e", "\N{LATIN SMALL LETTER F}", "\x67", """h"""]:
return "normal"
reveal_type(f()) # revealed: Literal["a", "b", "de", "f", "g", "h"] | Literal[b"c"]
```
## Class variables
```py
MyType = int
class Aliases:
MyType = str
forward: "MyType"
not_forward: MyType
reveal_type(Aliases.forward) # revealed: str
reveal_type(Aliases.not_forward) # revealed: str
```
## Annotated assignment
```py
a: "int" = 1
b: "'int'" = 1
c: "Foo"
# error: [invalid-assignment] "Object of type `Literal[1]` is not assignable to `Foo`"
d: "Foo" = 1
class Foo:
pass
c = Foo()
reveal_type(a) # revealed: Literal[1]
reveal_type(b) # revealed: Literal[1]
reveal_type(c) # revealed: Foo
reveal_type(d) # revealed: Foo
```
## Parameter
TODO: Add tests once parameter inference is supported

View File

@@ -25,14 +25,7 @@ x = "foo" # error: [invalid-assignment] "Object of type `Literal["foo"]` is not
## Tuple annotations are understood
```toml
[environment]
python-version = "3.12"
```
`module.py`:
```py
```py path=module.py
from typing_extensions import Unpack
a: tuple[()] = ()
@@ -40,6 +33,8 @@ b: tuple[int] = (42,)
c: tuple[str, int] = ("42", 42)
d: tuple[tuple[str, str], tuple[int, int]] = (("foo", "foo"), (42, 42))
e: tuple[str, ...] = ()
# TODO: we should not emit this error
# error: [call-possibly-unbound-method] "Method `__class_getitem__` of type `Literal[tuple]` is possibly unbound"
f: tuple[str, *tuple[int, ...], bytes] = ("42", b"42")
g: tuple[str, Unpack[tuple[int, ...]], bytes] = ("42", b"42")
h: tuple[list[int], list[int]] = ([], [])
@@ -47,21 +42,22 @@ i: tuple[str | int, str | int] = (42, 42)
j: tuple[str | int] = (42,)
```
`script.py`:
```py
```py path=script.py
from module import a, b, c, d, e, f, g, h, i, j
reveal_type(a) # revealed: tuple[()]
reveal_type(b) # revealed: tuple[int]
reveal_type(c) # revealed: tuple[str, int]
reveal_type(d) # revealed: tuple[tuple[str, str], tuple[int, int]]
reveal_type(e) # revealed: tuple[str, ...]
reveal_type(f) # revealed: @Todo(PEP 646)
reveal_type(g) # revealed: @Todo(PEP 646)
# TODO: homogeneous tuples, PEP-646 tuples
reveal_type(e) # revealed: @Todo
reveal_type(f) # revealed: @Todo
reveal_type(g) # revealed: @Todo
# TODO: support more kinds of type expressions in annotations
reveal_type(h) # revealed: @Todo
reveal_type(h) # revealed: tuple[list[int], list[int]]
reveal_type(i) # revealed: tuple[str | int, str | int]
reveal_type(j) # revealed: tuple[str | int]
```
@@ -75,44 +71,27 @@ a: tuple[()] = (1, 2)
# error: [invalid-assignment] "Object of type `tuple[Literal["foo"]]` is not assignable to `tuple[int]`"
b: tuple[int] = ("foo",)
# error: [invalid-assignment] "Object of type `tuple[list[Unknown], Literal["foo"]]` is not assignable to `tuple[str | int, str]`"
# error: [invalid-assignment] "Object of type `tuple[list, Literal["foo"]]` is not assignable to `tuple[str | int, str]`"
c: tuple[str | int, str] = ([], "foo")
```
## PEP-604 annotations are supported
```py
def foo(v: str | int | None, w: str | str | None, x: str | str):
reveal_type(v) # revealed: str | int | None
reveal_type(w) # revealed: str | None
reveal_type(x) # revealed: str
```
def foo() -> str | int | None:
return None
## PEP-604 in non-type-expression context
reveal_type(foo()) # revealed: str | int | None
### In Python 3.10 and later
def bar() -> str | str | None:
return None
```toml
[environment]
python-version = "3.10"
```
reveal_type(bar()) # revealed: str | None
```py
IntOrStr = int | str
```
def baz() -> str | str:
return "Hello, world!"
### Earlier versions
<!-- snapshot-diagnostics -->
```toml
[environment]
python-version = "3.9"
```
```py
# error: [unsupported-operator]
IntOrStr = int | str
reveal_type(baz()) # revealed: str
```
## Attribute expressions in type annotations are understood
@@ -139,7 +118,8 @@ from __future__ import annotations
x: Foo
class Foo: ...
class Foo:
pass
x = Foo()
reveal_type(x) # revealed: Foo
@@ -147,18 +127,12 @@ reveal_type(x) # revealed: Foo
## Annotations in stub files are deferred
```pyi
```pyi path=main.pyi
x: Foo
class Foo: ...
class Foo:
pass
x = Foo()
reveal_type(x) # revealed: Foo
```
## Annotated assignments in stub files are inferred correctly
```pyi
x: int = 1
reveal_type(x) # revealed: Literal[1]
```

View File

@@ -0,0 +1,182 @@
# Augmented assignment
## Basic
```py
x = 3
x -= 1
reveal_type(x) # revealed: Literal[2]
x = 1.0
x /= 2
reveal_type(x) # revealed: float
```
## Dunder methods
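As a rough mental model (not red-knot code; the helper name is illustrative only and
`NotImplemented` handling is elided): an augmented assignment like `x -= y` prefers the in-place
dunder looked up on `type(x)` and rebinds `x` to whatever that call returns, falling back to the
plain binary operator otherwise. This is why the tests below reveal the return type of the dunder
rather than the original type of `x`.
```ignore
def isub(x, y):
    # `x -= y` prefers the in-place dunder, looked up on the type ...
    if hasattr(type(x), "__isub__"):
        return type(x).__isub__(x, y)
    # ... otherwise it falls back to `x - y` (`__sub__` / `__rsub__`).
    return x - y
```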
```py
class C:
def __isub__(self, other: int) -> str:
return "Hello, world!"
x = C()
x -= 1
reveal_type(x) # revealed: str
class C:
def __iadd__(self, other: str) -> float:
return 1.0
x = C()
x += "Hello"
reveal_type(x) # revealed: float
```
## Unsupported types
```py
class C:
def __isub__(self, other: str) -> int:
return 42
x = C()
x -= 1
# TODO: should error, once operand type check is implemented
reveal_type(x) # revealed: int
```
## Method union
```py
def bool_instance() -> bool:
return True
flag = bool_instance()
class Foo:
if bool_instance():
def __iadd__(self, other: int) -> str:
return "Hello, world!"
else:
def __iadd__(self, other: int) -> int:
return 42
f = Foo()
f += 12
reveal_type(f) # revealed: str | int
```
## Partially bound `__iadd__`
```py
def bool_instance() -> bool:
return True
class Foo:
if bool_instance():
def __iadd__(self, other: str) -> int:
return 42
f = Foo()
# TODO: We should emit an `unsupported-operator` error here, possibly with the information
# that `Foo.__iadd__` may be unbound as additional context.
f += "Hello, world!"
reveal_type(f) # revealed: int | Unknown
```
## Partially bound with `__add__`
```py
def bool_instance() -> bool:
return True
class Foo:
def __add__(self, other: str) -> str:
return "Hello, world!"
if bool_instance():
def __iadd__(self, other: str) -> int:
return 42
f = Foo()
f += "Hello, world!"
reveal_type(f) # revealed: int | str
```
## Partially bound target union
```py
def bool_instance() -> bool:
return True
class Foo:
def __add__(self, other: int) -> str:
return "Hello, world!"
if bool_instance():
def __iadd__(self, other: int) -> int:
return 42
if bool_instance():
f = Foo()
else:
f = 42.0
f += 12
reveal_type(f) # revealed: int | str | float
```
## Target union
```py
def bool_instance() -> bool:
return True
flag = bool_instance()
class Foo:
def __iadd__(self, other: int) -> str:
return "Hello, world!"
if flag:
f = Foo()
else:
f = 42.0
f += 12
reveal_type(f) # revealed: str | float
```
## Partially bound target union with `__add__`
```py
def bool_instance() -> bool:
return True
flag = bool_instance()
class Foo:
def __add__(self, other: int) -> str:
return "Hello, world!"
if bool_instance():
def __iadd__(self, other: int) -> int:
return 42
class Bar:
def __add__(self, other: int) -> bytes:
return b"Hello, world!"
def __iadd__(self, other: int) -> float:
return 42.0
if flag:
f = Foo()
else:
f = Bar()
f += 12
reveal_type(f) # revealed: int | str | float
```

View File

@@ -18,3 +18,43 @@ Note: in this particular example, one could argue that the most likely error wou
of the `x`/`foo` definitions, and so it could be desirable to infer `Literal[1]` for the type of
`x`. On the other hand, there might be a variable `fob` a little higher up in this file, and the
actual error might have been just a typo. Inferring `Unknown` thus seems like the safest option.
## Unbound class variable
Name lookups within a class scope fall back to globals, but lookups of class attributes don't.
```py
def bool_instance() -> bool:
return True
flag = bool_instance()
x = 1
class C:
y = x
if flag:
x = 2
# error: [possibly-unbound-attribute] "Attribute `x` on type `Literal[C]` is possibly unbound"
reveal_type(C.x) # revealed: Literal[2]
reveal_type(C.y) # revealed: Literal[1]
```
## Possibly unbound in class and global scope
```py
def bool_instance() -> bool:
return True
if bool_instance():
x = "abc"
class C:
if bool_instance():
x = 1
# error: [possibly-unresolved-reference]
y = x
reveal_type(C.y) # revealed: Literal[1] | Literal["abc"]
```

View File

@@ -0,0 +1,136 @@
# Class attributes
## Union of attributes
```py
def bool_instance() -> bool:
return True
flag = bool_instance()
if flag:
class C1:
x = 1
else:
class C1:
x = 2
class C2:
if flag:
x = 3
else:
x = 4
reveal_type(C1.x) # revealed: Literal[1, 2]
reveal_type(C2.x) # revealed: Literal[3, 4]
```
## Inherited attributes
```py
class A:
X = "foo"
class B(A): ...
class C(B): ...
reveal_type(C.X) # revealed: Literal["foo"]
```
## Inherited attributes (multiple inheritance)
```py
class O: ...
class F(O):
X = 56
class E(O):
X = 42
class D(O): ...
class C(D, F): ...
class B(E, D): ...
class A(B, C): ...
# revealed: tuple[Literal[A], Literal[B], Literal[E], Literal[C], Literal[D], Literal[F], Literal[O], Literal[object]]
reveal_type(A.__mro__)
# `E` is earlier in the MRO than `F`, so we should use the type of `E.X`
reveal_type(A.X) # revealed: Literal[42]
```
## Unions with possibly unbound paths
### Definite boundness within a class
In this example, the `x` attribute is not defined in the `C2` element of the union:
```py
def bool_instance() -> bool:
return True
class C1:
x = 1
class C2: ...
class C3:
x = 3
flag1 = bool_instance()
flag2 = bool_instance()
C = C1 if flag1 else C2 if flag2 else C3
# error: [possibly-unbound-attribute] "Attribute `x` on type `Literal[C1, C2, C3]` is possibly unbound"
reveal_type(C.x) # revealed: Literal[1, 3]
```
### Possibly-unbound within a class
We raise the same diagnostic if the attribute is possibly-unbound in at least one element of the
union:
```py
def bool_instance() -> bool:
return True
class C1:
x = 1
class C2:
if bool_instance():
x = 2
class C3:
x = 3
flag1 = bool_instance()
flag2 = bool_instance()
C = C1 if flag1 else C2 if flag2 else C3
# error: [possibly-unbound-attribute] "Attribute `x` on type `Literal[C1, C2, C3]` is possibly unbound"
reveal_type(C.x) # revealed: Literal[1, 2, 3]
```
## Unions with all paths unbound
If the symbol is unbound in all elements of the union, we detect that:
```py
def bool_instance() -> bool:
return True
class C1: ...
class C2: ...
flag = bool_instance()
C = C1 if flag else C2
# error: [unresolved-attribute] "Type `Literal[C1, C2]` has no attribute `x`"
reveal_type(C.x) # revealed: Unknown
```

View File

@@ -0,0 +1,48 @@
# Binary operations on booleans
## Basic Arithmetic
We try to be precise, and all operations except division will result in a `Literal` type.
```py
a = True
b = False
reveal_type(a + a) # revealed: Literal[2]
reveal_type(a + b) # revealed: Literal[1]
reveal_type(b + a) # revealed: Literal[1]
reveal_type(b + b) # revealed: Literal[0]
reveal_type(a - a) # revealed: Literal[0]
reveal_type(a - b) # revealed: Literal[1]
reveal_type(b - a) # revealed: Literal[-1]
reveal_type(b - b) # revealed: Literal[0]
reveal_type(a * a) # revealed: Literal[1]
reveal_type(a * b) # revealed: Literal[0]
reveal_type(b * a) # revealed: Literal[0]
reveal_type(b * b) # revealed: Literal[0]
reveal_type(a % a) # revealed: Literal[0]
reveal_type(b % a) # revealed: Literal[0]
reveal_type(a // a) # revealed: Literal[1]
reveal_type(b // a) # revealed: Literal[0]
reveal_type(a**a) # revealed: Literal[1]
reveal_type(a**b) # revealed: Literal[1]
reveal_type(b**a) # revealed: Literal[0]
reveal_type(b**b) # revealed: Literal[1]
# Division
reveal_type(a / a) # revealed: float
reveal_type(b / a) # revealed: float
b / b # error: [division-by-zero] "Cannot divide object of type `Literal[False]` by zero"
a / b # error: [division-by-zero] "Cannot divide object of type `Literal[True]` by zero"
# bitwise OR
reveal_type(a | a) # revealed: Literal[True]
reveal_type(a | b) # revealed: Literal[True]
reveal_type(b | a) # revealed: Literal[True]
reveal_type(b | b) # revealed: Literal[False]
```

View File

@@ -14,43 +14,43 @@ We support inference for all Python's binary operators: `+`, `-`, `*`, `@`, `/`,
```py
class A:
def __add__(self, other) -> "A":
def __add__(self, other) -> A:
return self
def __sub__(self, other) -> "A":
def __sub__(self, other) -> A:
return self
def __mul__(self, other) -> "A":
def __mul__(self, other) -> A:
return self
def __matmul__(self, other) -> "A":
def __matmul__(self, other) -> A:
return self
def __truediv__(self, other) -> "A":
def __truediv__(self, other) -> A:
return self
def __floordiv__(self, other) -> "A":
def __floordiv__(self, other) -> A:
return self
def __mod__(self, other) -> "A":
def __mod__(self, other) -> A:
return self
def __pow__(self, other) -> "A":
def __pow__(self, other) -> A:
return self
def __lshift__(self, other) -> "A":
def __lshift__(self, other) -> A:
return self
def __rshift__(self, other) -> "A":
def __rshift__(self, other) -> A:
return self
def __and__(self, other) -> "A":
def __and__(self, other) -> A:
return self
def __xor__(self, other) -> "A":
def __xor__(self, other) -> A:
return self
def __or__(self, other) -> "A":
def __or__(self, other) -> A:
return self
class B: ...
@@ -76,43 +76,43 @@ We also support inference for reflected operations:
```py
class A:
def __radd__(self, other) -> "A":
def __radd__(self, other) -> A:
return self
def __rsub__(self, other) -> "A":
def __rsub__(self, other) -> A:
return self
def __rmul__(self, other) -> "A":
def __rmul__(self, other) -> A:
return self
def __rmatmul__(self, other) -> "A":
def __rmatmul__(self, other) -> A:
return self
def __rtruediv__(self, other) -> "A":
def __rtruediv__(self, other) -> A:
return self
def __rfloordiv__(self, other) -> "A":
def __rfloordiv__(self, other) -> A:
return self
def __rmod__(self, other) -> "A":
def __rmod__(self, other) -> A:
return self
def __rpow__(self, other) -> "A":
def __rpow__(self, other) -> A:
return self
def __rlshift__(self, other) -> "A":
def __rlshift__(self, other) -> A:
return self
def __rrshift__(self, other) -> "A":
def __rrshift__(self, other) -> A:
return self
def __rand__(self, other) -> "A":
def __rand__(self, other) -> A:
return self
def __rxor__(self, other) -> "A":
def __rxor__(self, other) -> A:
return self
def __ror__(self, other) -> "A":
def __ror__(self, other) -> A:
return self
class B: ...
@@ -157,11 +157,11 @@ the right-hand side is not a subtype of the left-hand side, `lhs.__add__` will t
```py
class A:
def __add__(self, other: "B") -> int:
def __add__(self, other: B) -> int:
return 42
class B:
def __radd__(self, other: "A") -> str:
def __radd__(self, other: A) -> str:
return "foo"
reveal_type(A() + B()) # revealed: int
@@ -169,10 +169,10 @@ reveal_type(A() + B()) # revealed: int
# Edge case: C is a subtype of C, *but* if the two sides are of *equal* types,
# the lhs *still* takes precedence
class C:
def __add__(self, other: "C") -> int:
def __add__(self, other: C) -> int:
return 42
def __radd__(self, other: "C") -> str:
def __radd__(self, other: C) -> str:
return "foo"
reveal_type(C() + C()) # revealed: int
@@ -237,14 +237,17 @@ well.
```py
class A:
def __sub__(self, other: "A") -> "A":
def __sub__(self, other: A) -> A:
return A()
class B:
def __rsub__(self, other: A) -> "B":
def __rsub__(self, other: A) -> B:
return B()
reveal_type(A() - B()) # revealed: B
# TODO: this should be `B` (the return annotation of `B.__rsub__`),
# because `A.__sub__` is annotated as only accepting `A`,
# but `B.__rsub__` will accept `A`.
reveal_type(A() - B()) # revealed: A
```
## Callable instances as dunders
@@ -259,38 +262,39 @@ class A:
class B:
__add__ = A()
reveal_type(B() + B()) # revealed: Unknown | int
```
Note that we union with `Unknown` here because `__add__` is not declared. We do infer just `int` if
the callable is declared:
```py
class B2:
__add__: A = A()
reveal_type(B2() + B2()) # revealed: int
reveal_type(B() + B()) # revealed: int
```
## Integration test: numbers from typeshed
We get less precise results from binary operations on float/complex literals due to the special case
for annotations of `float` or `complex`, which applies also to return annotations for typeshed
dunder methods. Perhaps we could have a special-case on the special-case, to exclude these typeshed
return annotations from the widening, and preserve a bit more precision here?
```py
reveal_type(3j + 3.14) # revealed: int | float | complex
reveal_type(4.2 + 42) # revealed: int | float
reveal_type(3j + 3) # revealed: int | float | complex
reveal_type(3.14 + 3j) # revealed: int | float | complex
reveal_type(42 + 4.2) # revealed: int | float
reveal_type(3 + 3j) # revealed: int | float | complex
reveal_type(3j + 3.14) # revealed: complex
reveal_type(4.2 + 42) # revealed: float
reveal_type(3j + 3) # revealed: complex
def _(x: bool, y: int):
reveal_type(x + y) # revealed: int
reveal_type(4.2 + x) # revealed: int | float
reveal_type(y + 4.12) # revealed: int | float
# TODO should be complex, need to check arg type and fall back to `rhs.__radd__`
reveal_type(3.14 + 3j) # revealed: float
# TODO should be float, need to check arg type and fall back to `rhs.__radd__`
reveal_type(42 + 4.2) # revealed: int
# TODO should be complex, need to check arg type and fall back to `rhs.__radd__`
reveal_type(3 + 3j) # revealed: int
def returns_int() -> int:
return 42
def returns_bool() -> bool:
return True
x = returns_bool()
y = returns_int()
reveal_type(x + y) # revealed: int
reveal_type(4.2 + x) # revealed: float
# TODO should be float, need to check arg type and fall back to `rhs.__radd__`
reveal_type(y + 4.12) # revealed: int
```
## With literal types
@@ -300,30 +304,37 @@ its instance super-type.
```py
class A:
def __add__(self, other) -> "A":
def __add__(self, other) -> A:
return self
def __radd__(self, other) -> "A":
def __radd__(self, other) -> A:
return self
reveal_type(A() + 1) # revealed: A
reveal_type(1 + A()) # revealed: A
# TODO should be `A` since `int.__add__` doesn't support `A` instances
reveal_type(1 + A()) # revealed: int
reveal_type(A() + "foo") # revealed: A
reveal_type("foo" + A()) # revealed: A
# TODO should be `A` since `str.__add__` doesn't support `A` instances
# TODO overloads
reveal_type("foo" + A()) # revealed: @Todo
reveal_type(A() + b"foo") # revealed: A
reveal_type(b"foo" + A()) # revealed: A
# TODO should be `A` since `bytes.__add__` doesn't support `A` instances
reveal_type(b"foo" + A()) # revealed: bytes
reveal_type(A() + ()) # revealed: A
reveal_type(() + A()) # revealed: A
# TODO this should be `A`, since `tuple.__add__` doesn't support `A` instances
reveal_type(() + A()) # revealed: @Todo
literal_string_instance = "foo" * 1_000_000_000
# the test is not testing what it's meant to be testing if this isn't a `LiteralString`:
reveal_type(literal_string_instance) # revealed: LiteralString
reveal_type(A() + literal_string_instance) # revealed: A
reveal_type(literal_string_instance + A()) # revealed: A
# TODO should be `A` since `str.__add__` doesn't support `A` instances
# TODO overloads
reveal_type(literal_string_instance + A()) # revealed: @Todo
```
## Operations involving instances of classes inheriting from `Any`
@@ -351,53 +362,6 @@ class Y(Foo): ...
reveal_type(X() + Y()) # revealed: int
```
## Operations involving types with invalid `__bool__` methods
<!-- snapshot-diagnostics -->
```py
class NotBoolable:
__bool__: int = 3
a = NotBoolable()
# error: [unsupported-bool-conversion]
10 and a and True
```
## Operations on class objects
When operating on class objects, the corresponding dunder methods are looked up on the metaclass.
```py
from __future__ import annotations
class Meta(type):
def __add__(self, other: Meta) -> int:
return 1
def __lt__(self, other: Meta) -> bool:
return True
def __getitem__(self, key: int) -> str:
return "a"
class A(metaclass=Meta): ...
class B(metaclass=Meta): ...
reveal_type(A + B) # revealed: int
# error: [unsupported-operator] "Operator `-` is unsupported between objects of type `<class 'A'>` and `<class 'B'>`"
reveal_type(A - B) # revealed: Unknown
reveal_type(A < B) # revealed: bool
reveal_type(A > B) # revealed: bool
# error: [unsupported-operator] "Operator `<=` is not supported for types `<class 'A'>` and `<class 'B'>`"
reveal_type(A <= B) # revealed: Unknown
reveal_type(A[0]) # revealed: str
```
## Unsupported
### Dunder as instance attribute
@@ -433,12 +397,10 @@ A left-hand dunder method doesn't apply for the right-hand operand, or vice vers
```py
class A:
def __add__(self, other) -> int:
return 1
def __add__(self, other) -> int: ...
class B:
def __radd__(self, other) -> int:
return 1
def __radd__(self, other) -> int: ...
class C: ...
@@ -460,7 +422,7 @@ the unreflected dunder of the left-hand operand. For context, see
```py
class Foo:
def __radd__(self, other: "Foo") -> "Foo":
def __radd__(self, other: Foo) -> Foo:
return self
# error: [unsupported-operator]

View File

@@ -9,36 +9,6 @@ reveal_type(3 * -1) # revealed: Literal[-3]
reveal_type(-3 // 3) # revealed: Literal[-1]
reveal_type(-3 / 3) # revealed: float
reveal_type(5 % 3) # revealed: Literal[2]
reveal_type(3 | 4) # revealed: Literal[7]
reveal_type(5 & 6) # revealed: Literal[4]
reveal_type(7 ^ 2) # revealed: Literal[5]
# error: [unsupported-operator] "Operator `+` is unsupported between objects of type `Literal[2]` and `Literal["f"]`"
reveal_type(2 + "f") # revealed: Unknown
def lhs(x: int):
reveal_type(x + 1) # revealed: int
reveal_type(x - 4) # revealed: int
reveal_type(x * -1) # revealed: int
reveal_type(x // 3) # revealed: int
reveal_type(x / 3) # revealed: int | float
reveal_type(x % 3) # revealed: int
def rhs(x: int):
reveal_type(2 + x) # revealed: int
reveal_type(3 - x) # revealed: int
reveal_type(3 * x) # revealed: int
reveal_type(-3 // x) # revealed: int
reveal_type(-3 / x) # revealed: int | float
reveal_type(5 % x) # revealed: int
def both(x: int):
reveal_type(x + x) # revealed: int
reveal_type(x - x) # revealed: int
reveal_type(x * x) # revealed: int
reveal_type(x // x) # revealed: int
reveal_type(x / x) # revealed: int | float
reveal_type(x % x) # revealed: int
```
## Power
@@ -51,23 +21,6 @@ largest_u32 = 4_294_967_295
reveal_type(2**2) # revealed: Literal[4]
reveal_type(1 ** (largest_u32 + 1)) # revealed: int
reveal_type(2**largest_u32) # revealed: int
def variable(x: int):
reveal_type(x**2) # revealed: int
reveal_type(2**x) # revealed: Any
reveal_type(x**x) # revealed: Any
```
If the second argument is \<0, a `float` is returned at runtime. If the first argument is \<0 but
the second argument is >=0, an `int` is still returned:
```py
reveal_type(1**0) # revealed: Literal[1]
reveal_type(0**1) # revealed: Literal[0]
reveal_type(0**0) # revealed: Literal[1]
reveal_type((-1) ** 2) # revealed: Literal[1]
reveal_type(2 ** (-1)) # revealed: float
reveal_type((-1) ** (-1)) # revealed: float
```
## Division by Zero
@@ -94,20 +47,24 @@ c = 3 % 0 # error: "Cannot reduce object of type `Literal[3]` modulo zero"
reveal_type(c) # revealed: int
# error: "Cannot divide object of type `int` by zero"
reveal_type(int() / 0) # revealed: int | float
# revealed: float
reveal_type(int() / 0)
# error: "Cannot divide object of type `Literal[1]` by zero"
reveal_type(1 / False) # revealed: float
# revealed: float
reveal_type(1 / False)
# error: [division-by-zero] "Cannot divide object of type `Literal[True]` by zero"
True / False
# error: [division-by-zero] "Cannot divide object of type `Literal[True]` by zero"
bool(1) / False
# error: "Cannot divide object of type `float` by zero"
reveal_type(1.0 / 0) # revealed: int | float
# revealed: float
reveal_type(1.0 / 0)
class MyInt(int): ...
# No error for a subclass of int
reveal_type(MyInt(3) / 0) # revealed: int | float
# revealed: float
reveal_type(MyInt(3) / 0)
```

View File

@@ -0,0 +1,78 @@
# Short-Circuit Evaluation
## Not all boolean expressions must be evaluated
In `or` expressions, if the left-hand side is truthy, the right-hand side is not evaluated.
Similarly, in `and` expressions, if the left-hand side is falsy, the right-hand side is not
evaluated.
```py
def bool_instance() -> bool:
return True
if bool_instance() or (x := 1):
# error: [possibly-unresolved-reference]
reveal_type(x) # revealed: Literal[1]
if bool_instance() and (x := 1):
# error: [possibly-unresolved-reference]
reveal_type(x) # revealed: Literal[1]
```
## First expression is always evaluated
```py
def bool_instance() -> bool:
return True
if (x := 1) or bool_instance():
reveal_type(x) # revealed: Literal[1]
if (x := 1) and bool_instance():
reveal_type(x) # revealed: Literal[1]
```
## Statically known truthiness
```py
if True or (x := 1):
# TODO: infer that the second arm is never executed, and raise `unresolved-reference`.
# error: [possibly-unresolved-reference]
reveal_type(x) # revealed: Literal[1]
if True and (x := 1):
# TODO: infer that the second arm is always executed, do not raise a diagnostic
# error: [possibly-unresolved-reference]
reveal_type(x) # revealed: Literal[1]
```
## Later expressions can always use variables from earlier expressions
```py
def bool_instance() -> bool:
return True
bool_instance() or (x := 1) or reveal_type(x) # revealed: Literal[1]
# error: [unresolved-reference]
bool_instance() or reveal_type(y) or (y := 1) # revealed: Unknown
```
## Nested expressions
```py
def bool_instance() -> bool:
return True
if bool_instance() or ((x := 1) and bool_instance()):
# error: [possibly-unresolved-reference]
reveal_type(x) # revealed: Literal[1]
if ((y := 1) and bool_instance()) or bool_instance():
reveal_type(y) # revealed: Literal[1]
# error: [possibly-unresolved-reference]
if (bool_instance() and (z := 1)) or reveal_type(z): # revealed: Literal[1]
# error: [possibly-unresolved-reference]
reveal_type(z) # revealed: Literal[1]
```

View File

@@ -0,0 +1,75 @@
# Callable instance
## Dunder call
```py
class Multiplier:
def __init__(self, factor: float):
self.factor = factor
def __call__(self, number: float) -> float:
return number * self.factor
a = Multiplier(2.0)(3.0)
reveal_type(a) # revealed: float
class Unit: ...
b = Unit()(3.0) # error: "Object of type `Unit` is not callable"
reveal_type(b) # revealed: Unknown
```
## Possibly unbound `__call__` method
```py
def flag() -> bool: ...
class PossiblyNotCallable:
if flag():
def __call__(self) -> int: ...
a = PossiblyNotCallable()
result = a() # error: "Object of type `PossiblyNotCallable` is not callable (possibly unbound `__call__` method)"
reveal_type(result) # revealed: int
```
## Possibly unbound callable
```py
def flag() -> bool: ...
if flag():
class PossiblyUnbound:
def __call__(self) -> int: ...
# error: [possibly-unresolved-reference]
a = PossiblyUnbound()
reveal_type(a()) # revealed: int
```
## Non-callable `__call__`
```py
class NonCallable:
__call__ = 1
a = NonCallable()
# error: "Object of type `NonCallable` is not callable"
reveal_type(a()) # revealed: Unknown
```
## Possibly non-callable `__call__`
```py
def flag() -> bool: ...
class NonCallable:
if flag():
__call__ = 1
else:
def __call__(self) -> int: ...
a = NonCallable()
# error: "Object of type `Literal[1] | Literal[__call__]` is not callable (due to union element `Literal[1]`)"
reveal_type(a()) # revealed: Unknown | int
```

View File

@@ -0,0 +1,7 @@
# Constructor
```py
class Foo: ...
reveal_type(Foo()) # revealed: Foo
```

View File

@@ -0,0 +1,68 @@
# Call expression
## Simple
```py
def get_int() -> int:
return 42
reveal_type(get_int()) # revealed: int
```
## Async
```py
async def get_int_async() -> int:
return 42
# TODO: we don't yet support `types.CoroutineType`, should be generic `Coroutine[Any, Any, int]`
reveal_type(get_int_async()) # revealed: @Todo
```
## Generic
```py
def get_int[T]() -> int:
return 42
reveal_type(get_int()) # revealed: int
```
## Decorated
```py
from typing import Callable
def foo() -> int:
return 42
def decorator(func) -> Callable[[], int]:
return foo
@decorator
def bar() -> str:
return "bar"
# TODO: should reveal `int`, as the decorator replaces `bar` with `foo`
reveal_type(bar()) # revealed: @Todo
```
## Invalid callable
```py
nonsense = 123
x = nonsense() # error: "Object of type `Literal[123]` is not callable"
```
## Potentially unbound function
```py
def flag() -> bool: ...
if flag():
def foo() -> int:
return 42
# error: [possibly-unresolved-reference]
reveal_type(foo()) # revealed: int
```

View File

@@ -0,0 +1,104 @@
# Unions in calls
## Union of return types
```py
def bool_instance() -> bool:
return True
flag = bool_instance()
if flag:
def f() -> int:
return 1
else:
def f() -> str:
return "foo"
reveal_type(f()) # revealed: int | str
```
## Calling with an unknown union
```py
from nonexistent import f # error: [unresolved-import] "Cannot resolve import `nonexistent`"
def bool_instance() -> bool:
return True
flag = bool_instance()
if flag:
def f() -> int:
return 1
reveal_type(f()) # revealed: Unknown | int
```
## Non-callable elements in a union
Calling a union with a non-callable element should emit a diagnostic.
```py
def bool_instance() -> bool:
return True
flag = bool_instance()
if flag:
f = 1
else:
def f() -> int:
return 1
x = f() # error: "Object of type `Literal[1] | Literal[f]` is not callable (due to union element `Literal[1]`)"
reveal_type(x) # revealed: Unknown | int
```
## Multiple non-callable elements in a union
Calling a union with multiple non-callable elements should mention all of them in the diagnostic.
```py
def bool_instance() -> bool:
return True
flag, flag2 = bool_instance(), bool_instance()
if flag:
f = 1
elif flag2:
f = "foo"
else:
def f() -> int:
return 1
# error: "Object of type `Literal[1] | Literal["foo"] | Literal[f]` is not callable (due to union elements Literal[1], Literal["foo"])"
# revealed: Unknown | int
reveal_type(f())
```
## All non-callable union elements
Calling a union with no callable elements can emit a simpler diagnostic.
```py
def bool_instance() -> bool:
return True
flag = bool_instance()
if flag:
f = 1
else:
f = "foo"
x = f() # error: "Object of type `Literal[1] | Literal["foo"]` is not callable"
reveal_type(x) # revealed: Unknown
```

View File

@@ -0,0 +1,40 @@
# Identity tests
```py
class A: ...
def get_a() -> A: ...
def get_object() -> object: ...
a1 = get_a()
a2 = get_a()
n1 = None
n2 = None
o = get_object()
reveal_type(a1 is a1) # revealed: bool
reveal_type(a1 is a2) # revealed: bool
reveal_type(n1 is n1) # revealed: Literal[True]
reveal_type(n1 is n2) # revealed: Literal[True]
reveal_type(a1 is n1) # revealed: Literal[False]
reveal_type(n1 is a1) # revealed: Literal[False]
reveal_type(a1 is o) # revealed: bool
reveal_type(n1 is o) # revealed: bool
reveal_type(a1 is not a1) # revealed: bool
reveal_type(a1 is not a2) # revealed: bool
reveal_type(n1 is not n1) # revealed: Literal[False]
reveal_type(n1 is not n2) # revealed: Literal[False]
reveal_type(a1 is not n1) # revealed: Literal[True]
reveal_type(n1 is not a1) # revealed: Literal[True]
reveal_type(a1 is not o) # revealed: bool
reveal_type(n1 is not o) # revealed: bool
```

View File

@@ -21,9 +21,8 @@ class A:
reveal_type("hello" in A()) # revealed: bool
reveal_type("hello" not in A()) # revealed: bool
# error: [unsupported-operator] "Operator `in` is not supported for types `int` and `A`, in comparing `Literal[42]` with `A`"
# TODO: should emit diagnostic, need to check arg type, will fail
reveal_type(42 in A()) # revealed: bool
# error: [unsupported-operator] "Operator `not in` is not supported for types `int` and `A`, in comparing `Literal[42]` with `A`"
reveal_type(42 not in A()) # revealed: bool
```
@@ -127,9 +126,9 @@ class A:
reveal_type(CheckContains() in A()) # revealed: bool
# error: [unsupported-operator] "Operator `in` is not supported for types `CheckIter` and `A`"
# TODO: should emit diagnostic, need to check arg type,
# should not fall back to __iter__ or __getitem__
reveal_type(CheckIter() in A()) # revealed: bool
# error: [unsupported-operator] "Operator `in` is not supported for types `CheckGetItem` and `A`"
reveal_type(CheckGetItem() in A()) # revealed: bool
class B:
@@ -155,50 +154,7 @@ class A:
def __getitem__(self, key: str) -> str:
return "foo"
# error: [unsupported-operator] "Operator `in` is not supported for types `int` and `A`, in comparing `Literal[42]` with `A`"
# TODO should emit a diagnostic
reveal_type(42 in A()) # revealed: bool
# error: [unsupported-operator] "Operator `in` is not supported for types `str` and `A`, in comparing `Literal["hello"]` with `A`"
reveal_type("hello" in A()) # revealed: bool
```
## Return type that doesn't implement `__bool__` correctly
`in` and `not in` operations will fail at runtime if the object on the right-hand side of the
operation has a `__contains__` method that returns a type which is not convertible to `bool`. This
is because of the way these operations are handled by the Python interpreter at runtime. If we
assume that `y` is an object that has a `__contains__` method, the Python expression `x in y`
desugars to a `contains(y, x)` call, where `contains` looks something like this:
```ignore
def contains(y, x):
return bool(type(y).__contains__(y, x))
```
where the `bool()` conversion itself implicitly calls `__bool__` under the hood.
TODO: Ideally the message would explain to the user what's wrong. E.g.:
```ignore
error: [operator] cannot use `in` operator on object of type `WithContains`
note: This is because the `in` operator implicitly calls `WithContains.__contains__`, but `WithContains.__contains__` is invalidly defined
note: `WithContains.__contains__` is invalidly defined because it returns an instance of `NotBoolable`, which cannot be evaluated in a boolean context
note: `NotBoolable` cannot be evaluated in a boolean context because its `__bool__` attribute is not callable
```
It may also be more appropriate to use `unsupported-operator` as the error code.
<!-- snapshot-diagnostics -->
```py
class NotBoolable:
__bool__: int = 3
class WithContains:
def __contains__(self, item) -> NotBoolable:
return NotBoolable()
# error: [unsupported-bool-conversion]
10 in WithContains()
# error: [unsupported-bool-conversion]
10 not in WithContains()
```

View File

@@ -0,0 +1,328 @@
# Comparison: Rich Comparison
Rich comparison operations (`==`, `!=`, `<`, `<=`, `>`, `>=`) in Python are implemented through
double-underscore methods that allow customization of comparison behavior.
For references, see:
- <https://docs.python.org/3/reference/datamodel.html#object.__lt__>
- <https://snarky.ca/unravelling-rich-comparison-operators/>
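As a rough sketch of the runtime lookup order that the sections below exercise (not red-knot code;
the helper name is illustrative, and both `NotImplemented` handling and the check that the subclass
actually *overrides* the reflected method are elided): for `a < b`, the reflected method on `b` is
preferred when `type(b)` is a proper subclass of `type(a)`, otherwise `type(a).__lt__` is tried
before falling back to `type(b).__gt__`.
```ignore
def rich_compare(a, b, op="__lt__", reflected="__gt__"):
    # A right-hand operand whose class is a proper subclass of the left-hand
    # operand's class gets the first chance, via its reflected method ...
    if isinstance(b, type(a)) and type(b) is not type(a):
        return getattr(type(b), reflected)(b, a)
    # ... otherwise the left-hand method is tried before the reflected fallback.
    if hasattr(type(a), op):
        return getattr(type(a), op)(a, b)
    return getattr(type(b), reflected)(b, a)
```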
## Rich Comparison Dunder Implementations For Same Class
Classes can support rich comparison by implementing dunder methods like `__eq__`, `__ne__`, etc. The
most common case involves implementing these methods for the same type:
```py
from __future__ import annotations
class A:
def __eq__(self, other: A) -> int:
return 42
def __ne__(self, other: A) -> float:
return 42.0
def __lt__(self, other: A) -> str:
return "42"
def __le__(self, other: A) -> bytes:
return b"42"
def __gt__(self, other: A) -> list:
return [42]
def __ge__(self, other: A) -> set:
return {42}
reveal_type(A() == A()) # revealed: int
reveal_type(A() != A()) # revealed: float
reveal_type(A() < A()) # revealed: str
reveal_type(A() <= A()) # revealed: bytes
reveal_type(A() > A()) # revealed: list
reveal_type(A() >= A()) # revealed: set
```
## Rich Comparison Dunder Implementations for Other Class
In some cases, classes may implement rich comparison dunder methods for comparisons with a different
type:
```py
from __future__ import annotations
class A:
def __eq__(self, other: B) -> int:
return 42
def __ne__(self, other: B) -> float:
return 42.0
def __lt__(self, other: B) -> str:
return "42"
def __le__(self, other: B) -> bytes:
return b"42"
def __gt__(self, other: B) -> list:
return [42]
def __ge__(self, other: B) -> set:
return {42}
class B: ...
reveal_type(A() == B()) # revealed: int
reveal_type(A() != B()) # revealed: float
reveal_type(A() < B()) # revealed: str
reveal_type(A() <= B()) # revealed: bytes
reveal_type(A() > B()) # revealed: list
reveal_type(A() >= B()) # revealed: set
```
## Reflected Comparisons
Fallback to the right-hand side's comparison methods occurs when the left-hand side does not define
them. Note: class `B` has its own `__eq__` and `__ne__` methods to override those of `object`, but
these methods are ignored here because their operand type does not match.
```py
from __future__ import annotations
class A:
def __eq__(self, other: B) -> int:
return 42
def __ne__(self, other: B) -> float:
return 42.0
def __lt__(self, other: B) -> str:
return "42"
def __le__(self, other: B) -> bytes:
return b"42"
def __gt__(self, other: B) -> list:
return [42]
def __ge__(self, other: B) -> set:
return {42}
class B:
# To override builtins.object.__eq__ and builtins.object.__ne__
# TODO these should emit an invalid override diagnostic
def __eq__(self, other: str) -> B:
return B()
def __ne__(self, other: str) -> B:
return B()
# TODO: should be `int` and `float`.
# Need to check arg type and fall back to `rhs.__eq__` and `rhs.__ne__`.
#
# Because `object.__eq__` and `object.__ne__` accept `object` in typeshed,
# this can only happen with an invalid override of these methods,
# but we still support it.
reveal_type(B() == A()) # revealed: B
reveal_type(B() != A()) # revealed: B
reveal_type(B() < A()) # revealed: list
reveal_type(B() <= A()) # revealed: set
reveal_type(B() > A()) # revealed: str
reveal_type(B() >= A()) # revealed: bytes
class C:
def __gt__(self, other: C) -> int:
return 42
def __ge__(self, other: C) -> float:
return 42.0
reveal_type(C() < C()) # revealed: int
reveal_type(C() <= C()) # revealed: float
```
## Reflected Comparisons with Subclasses
When subclasses override comparison methods, these overridden methods take precedence over those in
the parent class. Class `B` inherits from `A` and redefines comparison methods to return types other
than `A`.
```py
from __future__ import annotations
class A:
def __eq__(self, other: A) -> A:
return A()
def __ne__(self, other: A) -> A:
return A()
def __lt__(self, other: A) -> A:
return A()
def __le__(self, other: A) -> A:
return A()
def __gt__(self, other: A) -> A:
return A()
def __ge__(self, other: A) -> A:
return A()
class B(A):
def __eq__(self, other: A) -> int:
return 42
def __ne__(self, other: A) -> float:
return 42.0
def __lt__(self, other: A) -> str:
return "42"
def __le__(self, other: A) -> bytes:
return b"42"
def __gt__(self, other: A) -> list:
return [42]
def __ge__(self, other: A) -> set:
return {42}
reveal_type(A() == B()) # revealed: int
reveal_type(A() != B()) # revealed: float
reveal_type(A() < B()) # revealed: list
reveal_type(A() <= B()) # revealed: set
reveal_type(A() > B()) # revealed: str
reveal_type(A() >= B()) # revealed: bytes
```
## Reflected Comparisons with Subclass But Falls Back to LHS
In the case of a subclass, the right-hand side has priority. However, if the overridden dunder
method's operand type does not match, the comparison will fall back to the left-hand side.
```py
from __future__ import annotations
class A:
def __lt__(self, other: A) -> A:
return A()
def __gt__(self, other: A) -> A:
return A()
class B(A):
def __lt__(self, other: int) -> B:
return B()
def __gt__(self, other: int) -> B:
return B()
# TODO: should be `A`, need to check argument type and fall back to LHS method
reveal_type(A() < B()) # revealed: B
reveal_type(A() > B()) # revealed: B
```
## Operations involving instances of classes inheriting from `Any`
`Any` and `Unknown` represent a set of possible runtime objects, wherein the bounds of the set are
unknown. Whether the left-hand operand's dunder or the right-hand operand's reflected dunder is
used depends on whether the right-hand operand is an instance of a class that is a subclass of the
left-hand operand's class and overrides the reflected dunder. In the following example, because of the
unknowable nature of `Any`/`Unknown`, we must consider both possibilities: `Any`/`Unknown` might
resolve to an unknown third class that inherits from `X` and overrides `__gt__`; but it also might
not. Thus, the correct answer here for the `reveal_type` is `int | Unknown`.
(This test is referenced from `mdtest/binary/instances.md`)
```py
from does_not_exist import Foo # error: [unresolved-import]
reveal_type(Foo) # revealed: Unknown
class X:
def __lt__(self, other: object) -> int:
return 42
class Y(Foo): ...
# TODO: Should be `int | Unknown`; see above discussion.
reveal_type(X() < Y()) # revealed: int
```
## Equality and Inequality Fallback
This test confirms that `==` and `!=` comparisons default to identity comparisons (`is`, `is not`)
when argument types do not match the method signature.
Please refer to the [docs](https://docs.python.org/3/reference/datamodel.html#object.__eq__)
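A rough sketch of that runtime fallback (not red-knot code; the helper name is illustrative, and
the subclass preference from the sketch above is elided): if neither operand's `__eq__` accepts the
other, the result degrades to an identity check, which always produces a `bool`.
```ignore
def eq(a, b):
    result = type(a).__eq__(a, b)
    if result is NotImplemented:
        result = type(b).__eq__(b, a)
    if result is NotImplemented:
        # Final fallback: identity comparison.
        result = a is b
    return result
```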
```py
from __future__ import annotations
class A:
# TODO both these overrides should emit invalid-override diagnostic
def __eq__(self, other: int) -> A:
return A()
def __ne__(self, other: int) -> A:
return A()
# TODO: it should be `bool`, need to check arg type and fall back to `is` and `is not`
reveal_type(A() == A()) # revealed: A
reveal_type(A() != A()) # revealed: A
```
## Object Comparisons with Typeshed
```py
class A: ...
reveal_type(A() == object()) # revealed: bool
reveal_type(A() != object()) # revealed: bool
reveal_type(object() == A()) # revealed: bool
reveal_type(object() != A()) # revealed: bool
# error: [unsupported-operator] "Operator `<` is not supported for types `A` and `object`"
# revealed: Unknown
reveal_type(A() < object())
```
## Numbers Comparison with typeshed
```py
reveal_type(1 == 1.0) # revealed: bool
reveal_type(1 != 1.0) # revealed: bool
reveal_type(1 < 1.0) # revealed: bool
reveal_type(1 <= 1.0) # revealed: bool
reveal_type(1 > 1.0) # revealed: bool
reveal_type(1 >= 1.0) # revealed: bool
reveal_type(1 == 2j) # revealed: bool
reveal_type(1 != 2j) # revealed: bool
# TODO: should be Unknown and emit diagnostic,
# need to check arg type; these comparisons should fail
reveal_type(1 < 2j) # revealed: bool
reveal_type(1 <= 2j) # revealed: bool
reveal_type(1 > 2j) # revealed: bool
reveal_type(1 >= 2j) # revealed: bool
def bool_instance() -> bool:
return True
def int_instance() -> int:
return 42
x = bool_instance()
y = int_instance()
reveal_type(x < y) # revealed: bool
reveal_type(y < x) # revealed: bool
reveal_type(4.2 < x) # revealed: bool
reveal_type(x < 4.2) # revealed: bool
```

View File

@@ -12,16 +12,18 @@ reveal_type(1 is 1) # revealed: bool
reveal_type(1 is not 1) # revealed: bool
reveal_type(1 is 2) # revealed: Literal[False]
reveal_type(1 is not 7) # revealed: Literal[True]
# error: [unsupported-operator] "Operator `<=` is not supported for types `int` and `str`, in comparing `Literal[1]` with `Literal[""]`"
reveal_type(1 <= "" and 0 < 1) # revealed: (Unknown & ~AlwaysTruthy) | Literal[True]
# TODO: should be Unknown, and emit diagnostic, once we check call argument types
reveal_type(1 <= "" and 0 < 1) # revealed: bool
```
## Integer instance
```py
# TODO: implement lookup of `__eq__` on typeshed `int` stub.
def _(a: int, b: int):
reveal_type(1 == a) # revealed: bool
reveal_type(9 < a) # revealed: bool
reveal_type(a < b) # revealed: bool
def int_instance() -> int:
return 42
reveal_type(1 == int_instance()) # revealed: bool
reveal_type(9 < int_instance()) # revealed: bool
reveal_type(int_instance() < int_instance()) # revealed: bool
```

View File

@@ -0,0 +1,155 @@
# Comparison: Intersections
## Positive contributions
If we have an intersection type `A & B` and we get a definitive true/false answer for one of the
types, we can infer that the result for the intersection type is also true/false:
```py
class Base: ...
class Child1(Base):
def __eq__(self, other) -> Literal[True]:
return True
class Child2(Base): ...
def get_base() -> Base: ...
x = get_base()
c1 = Child1()
# Create an intersection type through narrowing:
if isinstance(x, Child1):
if isinstance(x, Child2):
reveal_type(x) # revealed: Child1 & Child2
reveal_type(x == 1) # revealed: Literal[True]
# Other comparison operators fall back to the base type:
reveal_type(x > 1) # revealed: bool
reveal_type(x is c1) # revealed: bool
```
## Negative contributions
Negative contributions to the intersection type only allow simplifications in a few special cases
(equality and identity comparisons).
### Equality comparisons
#### Literal strings
```py
x = "x" * 1_000_000_000
y = "y" * 1_000_000_000
reveal_type(x) # revealed: LiteralString
if x != "abc":
reveal_type(x) # revealed: LiteralString & ~Literal["abc"]
reveal_type(x == "abc") # revealed: Literal[False]
reveal_type("abc" == x) # revealed: Literal[False]
reveal_type(x == "something else") # revealed: bool
reveal_type("something else" == x) # revealed: bool
reveal_type(x != "abc") # revealed: Literal[True]
reveal_type("abc" != x) # revealed: Literal[True]
reveal_type(x != "something else") # revealed: bool
reveal_type("something else" != x) # revealed: bool
reveal_type(x == y) # revealed: bool
reveal_type(y == x) # revealed: bool
reveal_type(x != y) # revealed: bool
reveal_type(y != x) # revealed: bool
reveal_type(x >= "abc") # revealed: bool
reveal_type("abc" >= x) # revealed: bool
reveal_type(x in "abc") # revealed: bool
reveal_type("abc" in x) # revealed: bool
```
#### Integers
```py
def get_int() -> int: ...
x = get_int()
if x != 1:
reveal_type(x) # revealed: int & ~Literal[1]
reveal_type(x != 1) # revealed: Literal[True]
reveal_type(x != 2) # revealed: bool
reveal_type(x == 1) # revealed: Literal[False]
reveal_type(x == 2) # revealed: bool
```
### Identity comparisons
```py
class A: ...
def get_object() -> object: ...
o = object()
a = A()
n = None
if o is not None:
reveal_type(o) # revealed: object & ~None
reveal_type(o is n) # revealed: Literal[False]
reveal_type(o is not n) # revealed: Literal[True]
```
## Diagnostics
### Unsupported operators for positive contributions
Raise an error if any of the positive contributions to the intersection type are unsupported for the
given operator:
```py
class Container:
def __contains__(self, x) -> bool: ...
class NonContainer: ...
def get_object() -> object: ...
x = get_object()
if isinstance(x, Container):
if isinstance(x, NonContainer):
reveal_type(x) # revealed: Container & NonContainer
# error: [unsupported-operator] "Operator `in` is not supported for types `int` and `NonContainer`"
reveal_type(2 in x) # revealed: bool
```
### Unsupported operators for negative contributions
Do *not* raise an error if any of the negative contributions to the intersection type are
unsupported for the given operator:
```py
class Container:
def __contains__(self, x) -> bool: ...
class NonContainer: ...
def get_object() -> object: ...
x = get_object()
if isinstance(x, Container):
if not isinstance(x, NonContainer):
reveal_type(x) # revealed: Container & ~NonContainer
# No error here!
reveal_type(2 in x) # revealed: bool
```

View File

@@ -22,25 +22,19 @@ Walking through examples:
from __future__ import annotations
class A:
def __lt__(self, other) -> A:
return self
def __gt__(self, other) -> bool:
return False
def __lt__(self, other) -> A: ...
class B:
def __lt__(self, other) -> B:
return self
def __lt__(self, other) -> B: ...
class C:
def __lt__(self, other) -> C:
return self
def __lt__(self, other) -> C: ...
x = A() < B() < C()
reveal_type(x) # revealed: (A & ~AlwaysTruthy) | B
reveal_type(x) # revealed: A | B
y = 0 < 1 < A() < 3
reveal_type(y) # revealed: Literal[False] | A
reveal_type(y) # revealed: bool | A
z = 10 < 0 < A() < B() < C()
reveal_type(z) # revealed: Literal[False]

View File

@@ -0,0 +1,20 @@
# Comparison: Strings
## String literals
```py
def str_instance() -> str: ...
reveal_type("abc" == "abc") # revealed: Literal[True]
reveal_type("ab_cd" <= "ab_ce") # revealed: Literal[True]
reveal_type("abc" in "ab cd") # revealed: Literal[False]
reveal_type("" not in "hello") # revealed: Literal[False]
reveal_type("--" is "--") # revealed: bool
reveal_type("A" is "B") # revealed: Literal[False]
reveal_type("--" is not "--") # revealed: bool
reveal_type("A" is not "B") # revealed: Literal[True]
reveal_type(str_instance() < "...") # revealed: bool
# ensure we're not comparing the interned salsa symbols, which compare by order of declaration.
reveal_type("ab" < "ab_cd") # revealed: Literal[True]
```

View File

@@ -0,0 +1,204 @@
# Comparison: Tuples
## Heterogeneous
For tuples like `tuple[int, str, Literal[1]]`
### Value Comparisons
"Value Comparisons" refers to the operators: `==`, `!=`, `<`, `<=`, `>`, `>=`
#### Results without Ambiguity
Cases where the result can be definitively inferred as a `BooleanLiteral`.
```py
a = (1, "test", (3, 13), True)
b = (1, "test", (3, 14), False)
reveal_type(a == a) # revealed: Literal[True]
reveal_type(a != a) # revealed: Literal[False]
reveal_type(a < a) # revealed: Literal[False]
reveal_type(a <= a) # revealed: Literal[True]
reveal_type(a > a) # revealed: Literal[False]
reveal_type(a >= a) # revealed: Literal[True]
reveal_type(a == b) # revealed: Literal[False]
reveal_type(a != b) # revealed: Literal[True]
reveal_type(a < b) # revealed: Literal[True]
reveal_type(a <= b) # revealed: Literal[True]
reveal_type(a > b) # revealed: Literal[False]
reveal_type(a >= b) # revealed: Literal[False]
```
Even when the tuples have different lengths, comparisons are still inferred precisely: a tuple that is a strict prefix of another compares less than it.
```py path=different_length.py
a = (1, 2, 3)
b = (1, 2, 3, 4)
reveal_type(a == b) # revealed: Literal[False]
reveal_type(a != b) # revealed: Literal[True]
reveal_type(a < b) # revealed: Literal[True]
reveal_type(a <= b) # revealed: Literal[True]
reveal_type(a > b) # revealed: Literal[False]
reveal_type(a >= b) # revealed: Literal[False]
c = ("a", "b", "c", "d")
d = ("a", "b", "c")
reveal_type(c == d) # revealed: Literal[False]
reveal_type(c != d) # revealed: Literal[True]
reveal_type(c < d) # revealed: Literal[False]
reveal_type(c <= d) # revealed: Literal[False]
reveal_type(c > d) # revealed: Literal[True]
reveal_type(c >= d) # revealed: Literal[True]
```
#### Results with Ambiguity
```py
def bool_instance() -> bool: ...
def int_instance() -> int:
return 42
a = (bool_instance(),)
b = (int_instance(),)
reveal_type(a == a) # revealed: bool
reveal_type(a != a) # revealed: bool
reveal_type(a < a) # revealed: bool
reveal_type(a <= a) # revealed: bool
reveal_type(a > a) # revealed: bool
reveal_type(a >= a) # revealed: bool
reveal_type(a == b) # revealed: bool
reveal_type(a != b) # revealed: bool
reveal_type(a < b) # revealed: bool
reveal_type(a <= b) # revealed: bool
reveal_type(a > b) # revealed: bool
reveal_type(a >= b) # revealed: bool
```
#### Comparison Unsupported
If two tuples contain types that do not support comparison, the result may be `Unknown`. However,
`==` and `!=` are exceptions and can still provide definite results.
```py
a = (1, 2)
b = (1, "hello")
# TODO: should be Literal[False], once we implement (in)equality for mismatched literals
reveal_type(a == b) # revealed: bool
# TODO: should be Literal[True], once we implement (in)equality for mismatched literals
reveal_type(a != b) # revealed: bool
# TODO: should be Unknown and add more informative diagnostics
reveal_type(a < b) # revealed: bool
reveal_type(a <= b) # revealed: bool
reveal_type(a > b) # revealed: bool
reveal_type(a >= b) # revealed: bool
```
However, if the lexicographic comparison completes without reaching a point where `str` and `int` are
compared, Python will still produce a result based on the prior elements.
```py path=short_circuit.py
a = (1, 2)
b = (999999, "hello")
reveal_type(a == b) # revealed: Literal[False]
reveal_type(a != b) # revealed: Literal[True]
reveal_type(a < b) # revealed: Literal[True]
reveal_type(a <= b) # revealed: Literal[True]
reveal_type(a > b) # revealed: Literal[False]
reveal_type(a >= b) # revealed: Literal[False]
```
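As a plain runtime check of the short-circuit behaviour described above (illustrative only, not one of the fixtures): the first elements already differ, so CPython never compares `2` with `"hello"` and no `TypeError` is raised.
```py
# Illustrative only: the comparison is decided by the first pair of elements,
# so the int/str mismatch in the second position is never evaluated.
a = (1, 2)
b = (999999, "hello")
print(a < b)  # True
print(a == b)  # False
```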
#### Matryoshka Tuples
```py
a = (1, True, "Hello")
b = (a, a, a)
c = (b, b, b)
reveal_type(c == c) # revealed: Literal[True]
reveal_type(c != c) # revealed: Literal[False]
reveal_type(c < c) # revealed: Literal[False]
reveal_type(c <= c) # revealed: Literal[True]
reveal_type(c > c) # revealed: Literal[False]
reveal_type(c >= c) # revealed: Literal[True]
```
#### Non-Boolean Rich Comparisons
```py
class A:
def __eq__(self, o) -> str: ...
def __ne__(self, o) -> int: ...
def __lt__(self, o) -> float: ...
def __le__(self, o) -> object: ...
def __gt__(self, o) -> tuple: ...
def __ge__(self, o) -> list: ...
a = (A(), A())
reveal_type(a == a) # revealed: bool
reveal_type(a != a) # revealed: bool
reveal_type(a < a) # revealed: bool
reveal_type(a <= a) # revealed: bool
reveal_type(a > a) # revealed: bool
reveal_type(a >= a) # revealed: bool
```
### Membership Test Comparisons
"Membership Test Comparisons" refers to the operators `in` and `not in`.
```py
def int_instance() -> int:
return 42
a = (1, 2)
b = ((3, 4), (1, 2))
c = ((1, 2, 3), (4, 5, 6))
d = ((int_instance(), int_instance()), (int_instance(), int_instance()))
reveal_type(a in b) # revealed: Literal[True]
reveal_type(a not in b) # revealed: Literal[False]
reveal_type(a in c) # revealed: Literal[False]
reveal_type(a not in c) # revealed: Literal[True]
reveal_type(a in d) # revealed: bool
reveal_type(a not in d) # revealed: bool
```
### Identity Comparisons
"Identity Comparisons" refers to `is` and `is not`.
```py
a = (1, 2)
b = ("a", "b")
c = (1, 2, 3)
reveal_type(a is (1, 2)) # revealed: bool
reveal_type(a is not (1, 2)) # revealed: bool
# TODO should be Literal[False] once we implement comparison of mismatched literal types
reveal_type(a is b) # revealed: bool
# TODO should be Literal[True] once we implement comparison of mismatched literal types
reveal_type(a is not b) # revealed: bool
reveal_type(a is c) # revealed: Literal[False]
reveal_type(a is not c) # revealed: Literal[True]
```
## Homogeneous
For tuples like `tuple[int, ...]`, `tuple[Any, ...]`
// TODO
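Support for homogeneous tuples is not implemented yet; as a placeholder, here is a hypothetical sketch of what such a test might look like. The `tuple[int, ...]` return annotation and the expectation that comparisons fall back to `bool` are assumptions, not asserted behaviour of the checker:
```py
# Hypothetical sketch (assumed, not implemented): with an unknown element
# count, comparisons between homogeneous tuples presumably fall back to `bool`.
def homogeneous() -> tuple[int, ...]:
    return (1, 2, 3)
a = homogeneous()
b = homogeneous()
reveal_type(a == b)  # presumably: bool
reveal_type(a < b)  # presumably: bool
```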

View File

@@ -0,0 +1,88 @@
# Comparison: Unions
## Union on one side of the comparison
Comparisons on union types need to consider all possible cases:
```py
def bool_instance() -> bool:
return True
flag = bool_instance()
one_or_two = 1 if flag else 2
reveal_type(one_or_two <= 2) # revealed: Literal[True]
reveal_type(one_or_two <= 1) # revealed: bool
reveal_type(one_or_two <= 0) # revealed: Literal[False]
reveal_type(2 >= one_or_two) # revealed: Literal[True]
reveal_type(1 >= one_or_two) # revealed: bool
reveal_type(0 >= one_or_two) # revealed: Literal[False]
reveal_type(one_or_two < 1) # revealed: Literal[False]
reveal_type(one_or_two < 2) # revealed: bool
reveal_type(one_or_two < 3) # revealed: Literal[True]
reveal_type(one_or_two > 0) # revealed: Literal[True]
reveal_type(one_or_two > 1) # revealed: bool
reveal_type(one_or_two > 2) # revealed: Literal[False]
reveal_type(one_or_two == 3) # revealed: Literal[False]
reveal_type(one_or_two == 1) # revealed: bool
reveal_type(one_or_two != 3) # revealed: Literal[True]
reveal_type(one_or_two != 1) # revealed: bool
a_or_ab = "a" if flag else "ab"
reveal_type(a_or_ab in "ab") # revealed: Literal[True]
reveal_type("a" in a_or_ab) # revealed: Literal[True]
reveal_type("c" not in a_or_ab) # revealed: Literal[True]
reveal_type("a" not in a_or_ab) # revealed: Literal[False]
reveal_type("b" in a_or_ab) # revealed: bool
reveal_type("b" not in a_or_ab) # revealed: bool
one_or_none = 1 if flag else None
reveal_type(one_or_none is None) # revealed: bool
reveal_type(one_or_none is not None) # revealed: bool
```
## Union on both sides of the comparison
With unions on both sides, we need to consider the full cross product of options when building the
resulting (union) type:
```py
def bool_instance() -> bool:
return True
flag_s, flag_l = bool_instance(), bool_instance()
small = 1 if flag_s else 2
large = 2 if flag_l else 3
reveal_type(small <= large) # revealed: Literal[True]
reveal_type(small >= large) # revealed: bool
reveal_type(small < large) # revealed: bool
reveal_type(small > large) # revealed: Literal[False]
```
## Unsupported operations
Make sure we emit a diagnostic if *any* of the possible comparisons is unsupported. For now, we fall
back to `bool` for the result type instead of trying to infer something more precise from the other
(supported) variants:
```py
def bool_instance() -> bool:
return True
flag = bool_instance()
x = [1, 2] if flag else 1
result = 1 in x # error: "Operator `in` is not supported"
reveal_type(result) # revealed: bool
```

View File

@@ -0,0 +1,36 @@
# Comparison: Unsupported operators
```py
def bool_instance() -> bool:
return True
a = 1 in 7 # error: "Operator `in` is not supported for types `Literal[1]` and `Literal[7]`"
reveal_type(a) # revealed: bool
b = 0 not in 10 # error: "Operator `not in` is not supported for types `Literal[0]` and `Literal[10]`"
reveal_type(b) # revealed: bool
# TODO: should error, once operand type check is implemented
# ("Operator `<` is not supported for types `object` and `int`")
c = object() < 5
# TODO: should be Unknown, once operand type check is implemented
reveal_type(c) # revealed: bool
# TODO: should error, once operand type check is implemented
# ("Operator `<` is not supported for types `int` and `object`")
d = 5 < object()
# TODO: should be Unknown, once operand type check is implemented
reveal_type(d) # revealed: bool
flag = bool_instance()
int_literal_or_str_literal = 1 if flag else "foo"
# error: "Operator `in` is not supported for types `Literal[42]` and `Literal[1]`, in comparing `Literal[42]` with `Literal[1] | Literal["foo"]`"
e = 42 in int_literal_or_str_literal
reveal_type(e) # revealed: bool
# TODO: should error, need to check if __lt__ signature is valid for right operand
# error may be "Operator `<` is not supported for types `int` and `str`, in comparing `tuple[Literal[1], Literal[2]]` with `tuple[Literal[1], Literal["hello"]]`
f = (1, 2) < (1, "hello")
# TODO: should be Unknown, once operand type check is implemented
reveal_type(f) # revealed: bool
```

View File

@@ -0,0 +1,49 @@
# If expressions
## Simple if-expression
```py
def bool_instance() -> bool:
return True
flag = bool_instance()
x = 1 if flag else 2
reveal_type(x) # revealed: Literal[1, 2]
```
## If-expression with walrus operator
```py
def bool_instance() -> bool:
return True
flag = bool_instance()
y = 0
z = 0
x = (y := 1) if flag else (z := 2)
reveal_type(x) # revealed: Literal[1, 2]
reveal_type(y) # revealed: Literal[0, 1]
reveal_type(z) # revealed: Literal[0, 2]
```
## Nested if-expression
```py
def bool_instance() -> bool:
return True
flag, flag2 = bool_instance(), bool_instance()
x = 1 if flag else 2 if flag2 else 3
reveal_type(x) # revealed: Literal[1, 2, 3]
```
## None
```py
def bool_instance() -> bool:
return True
flag = bool_instance()
x = 1 if flag else None
reveal_type(x) # revealed: Literal[1] | None
```

View File

@@ -0,0 +1,130 @@
# If statements
## Simple if
```py
def bool_instance() -> bool:
return True
flag = bool_instance()
y = 1
y = 2
if flag:
y = 3
reveal_type(y) # revealed: Literal[2, 3]
```
## Simple if-elif-else
```py
def bool_instance() -> bool:
return True
flag, flag2 = bool_instance(), bool_instance()
y = 1
y = 2
if flag:
y = 3
elif flag2:
y = 4
else:
r = y
y = 5
s = y
x = y
reveal_type(x) # revealed: Literal[3, 4, 5]
# revealed: Literal[2]
# error: [possibly-unresolved-reference]
reveal_type(r)
# revealed: Literal[5]
# error: [possibly-unresolved-reference]
reveal_type(s)
```
## Single symbol across if-elif-else
```py
def bool_instance() -> bool:
return True
flag, flag2 = bool_instance(), bool_instance()
if flag:
y = 1
elif flag2:
y = 2
else:
y = 3
reveal_type(y) # revealed: Literal[1, 2, 3]
```
## if-elif-else without else assignment
```py
def bool_instance() -> bool:
return True
flag, flag2 = bool_instance(), bool_instance()
y = 0
if flag:
y = 1
elif flag2:
y = 2
else:
pass
reveal_type(y) # revealed: Literal[0, 1, 2]
```
## if-elif-else with intervening assignment
```py
def bool_instance() -> bool:
return True
flag, flag2 = bool_instance(), bool_instance()
y = 0
if flag:
y = 1
z = 3
elif flag2:
y = 2
else:
pass
reveal_type(y) # revealed: Literal[0, 1, 2]
```
## Nested if statement
```py
def bool_instance() -> bool:
return True
flag, flag2 = bool_instance(), bool_instance()
y = 0
if flag:
if flag2:
y = 1
reveal_type(y) # revealed: Literal[0, 1]
```
## if-elif without else
```py
def bool_instance() -> bool:
return True
flag, flag2 = bool_instance(), bool_instance()
y = 1
y = 2
if flag:
y = 3
elif flag2:
y = 4
reveal_type(y) # revealed: Literal[2, 3, 4]
```

View File

@@ -0,0 +1,41 @@
# Pattern matching
## With wildcard
```py
match 0:
case 1:
y = 2
case _:
y = 3
reveal_type(y) # revealed: Literal[2, 3]
```
## Without wildcard
```py
match 0:
case 1:
y = 2
case 2:
y = 3
# revealed: Literal[2, 3]
# error: [possibly-unresolved-reference]
reveal_type(y)
```
## Basic match
```py
y = 1
y = 2
match 0:
case 1:
y = 3
case 2:
y = 4
reveal_type(y) # revealed: Literal[2, 3, 4]
```

View File

@@ -0,0 +1,51 @@
# Errors while declaring
## Violates previous assignment
```py
x = 1
x: str # error: [invalid-declaration] "Cannot declare type `str` for inferred type `Literal[1]`"
```
## Incompatible declarations
```py
def bool_instance() -> bool:
return True
flag = bool_instance()
if flag:
x: str
else:
x: int
x = 1 # error: [conflicting-declarations] "Conflicting declared types for `x`: str, int"
```
## Partial declarations
```py
def bool_instance() -> bool:
return True
flag = bool_instance()
if flag:
x: int
x = 1 # error: [conflicting-declarations] "Conflicting declared types for `x`: Unknown, int"
```
## Incompatible declarations with bad assignment
```py
def bool_instance() -> bool:
return True
flag = bool_instance()
if flag:
x: str
else:
x: int
# error: [conflicting-declarations]
# error: [invalid-assignment]
x = b"foo"
```

View File

@@ -0,0 +1,60 @@
# Exception Handling
## Single Exception
```py
import re
try:
help()
except NameError as e:
reveal_type(e) # revealed: NameError
except re.error as f:
reveal_type(f) # revealed: error
```
## Unknown type in except handler does not cause spurious diagnostic
```py
from nonexistent_module import foo # error: [unresolved-import]
try:
help()
except foo as e:
reveal_type(foo) # revealed: Unknown
reveal_type(e) # revealed: Unknown
```
## Multiple Exceptions in a Tuple
```py
EXCEPTIONS = (AttributeError, TypeError)
try:
help()
except (RuntimeError, OSError) as e:
reveal_type(e) # revealed: RuntimeError | OSError
except EXCEPTIONS as f:
reveal_type(f) # revealed: AttributeError | TypeError
```
## Dynamic exception types
```py
def foo(
x: type[AttributeError],
y: tuple[type[OSError], type[RuntimeError]],
z: tuple[type[BaseException], ...],
):
try:
help()
except x as e:
# TODO: should be `AttributeError`
reveal_type(e) # revealed: @Todo
except y as f:
# TODO: should be `OSError | RuntimeError`
reveal_type(f) # revealed: @Todo
except z as g:
# TODO: should be `BaseException`
reveal_type(g) # revealed: @Todo
```

View File

@@ -29,7 +29,7 @@ completing. The type of `x` at the beginning of the `except` suite in this examp
`x = could_raise_returns_str()` redefinition, but we *also* could have jumped to the `except` suite
*after* that redefinition.
```py
```py path=union_type_inferred.py
def could_raise_returns_str() -> str:
return "foo"
@@ -50,7 +50,10 @@ reveal_type(x) # revealed: str | Literal[2]
If `x` has the same type at the end of both branches, however, the branches unify and `x` is not
inferred as having a union type following the `try`/`except` block:
```py
```py path=branches_unify_to_non_union_type.py
def could_raise_returns_str() -> str:
return "foo"
x = 1
try:
@@ -130,7 +133,7 @@ the `except` suite:
- At the end of `else`, `x == 3`
- At the end of `except`, `x == 2`
```py
```py path=single_except.py
def could_raise_returns_str() -> str:
return "foo"
@@ -158,6 +161,9 @@ been executed in its entirety, or the `try` suite and the `else` suite must both
in their entireties:
```py
def could_raise_returns_str() -> str:
return "foo"
x = 1
try:
@@ -186,7 +192,7 @@ A `finally` suite is *always* executed. As such, if we reach the `reveal_type` c
this example, we know that `x` *must* have been reassigned to `2` during the `finally` suite. The
type of `x` at the end of the example is therefore `Literal[2]`:
```py
```py path=redef_in_finally.py
def could_raise_returns_str() -> str:
return "foo"
@@ -211,7 +217,10 @@ at this point than there were when we were inside the `finally` block.
(Our current model does *not* correctly infer the types *inside* `finally` suites, however; this is
still a TODO item for us.)
```py
```py path=no_redef_in_finally.py
def could_raise_returns_str() -> str:
return "foo"
x = 1
try:
@@ -240,35 +249,31 @@ suites:
exception raised in the `except` suite to cause us to jump to the `finally` suite before the
`except` suite ran to completion
```py
class A: ...
class B: ...
class C: ...
```py path=redef_in_finally.py
def could_raise_returns_str() -> str:
return "foo"
def could_raise_returns_A() -> A:
return A()
def could_raise_returns_bytes() -> bytes:
return b"foo"
def could_raise_returns_B() -> B:
return B()
def could_raise_returns_C() -> C:
return C()
def could_raise_returns_bool() -> bool:
return True
x = 1
try:
reveal_type(x) # revealed: Literal[1]
x = could_raise_returns_A()
reveal_type(x) # revealed: A
x = could_raise_returns_str()
reveal_type(x) # revealed: str
except TypeError:
reveal_type(x) # revealed: Literal[1] | A
x = could_raise_returns_B()
reveal_type(x) # revealed: B
x = could_raise_returns_C()
reveal_type(x) # revealed: C
reveal_type(x) # revealed: Literal[1] | str
x = could_raise_returns_bytes()
reveal_type(x) # revealed: bytes
x = could_raise_returns_bool()
reveal_type(x) # revealed: bool
finally:
# TODO: should be `Literal[1] | A | B | C`
reveal_type(x) # revealed: A | C
# TODO: should be `Literal[1] | str | bytes | bool`
reveal_type(x) # revealed: str | bool
x = 2
reveal_type(x) # revealed: Literal[2]
@@ -281,61 +286,76 @@ itself. (In some control-flow possibilities, some exceptions were merely *suspen
`finally` suite; these lead to the scope's termination following the conclusion of the `finally`
suite.)
```py
```py path=no_redef_in_finally.py
def could_raise_returns_str() -> str:
return "foo"
def could_raise_returns_bytes() -> bytes:
return b"foo"
def could_raise_returns_bool() -> bool:
return True
x = 1
try:
reveal_type(x) # revealed: Literal[1]
x = could_raise_returns_A()
reveal_type(x) # revealed: A
x = could_raise_returns_str()
reveal_type(x) # revealed: str
except TypeError:
reveal_type(x) # revealed: Literal[1] | A
x = could_raise_returns_B()
reveal_type(x) # revealed: B
x = could_raise_returns_C()
reveal_type(x) # revealed: C
reveal_type(x) # revealed: Literal[1] | str
x = could_raise_returns_bytes()
reveal_type(x) # revealed: bytes
x = could_raise_returns_bool()
reveal_type(x) # revealed: bool
finally:
# TODO: should be `Literal[1] | A | B | C`
reveal_type(x) # revealed: A | C
# TODO: should be `Literal[1] | str | bytes | bool`
reveal_type(x) # revealed: str | bool
reveal_type(x) # revealed: A | C
reveal_type(x) # revealed: str | bool
```
An example with multiple `except` branches and a `finally` branch:
```py
class D: ...
class E: ...
```py path=multiple_except_branches.py
def could_raise_returns_str() -> str:
return "foo"
def could_raise_returns_D() -> D:
return D()
def could_raise_returns_bytes() -> bytes:
return b"foo"
def could_raise_returns_E() -> E:
return E()
def could_raise_returns_bool() -> bool:
return True
def could_raise_returns_memoryview() -> memoryview:
return memoryview(b"")
def could_raise_returns_float() -> float:
return 3.14
x = 1
try:
reveal_type(x) # revealed: Literal[1]
x = could_raise_returns_A()
reveal_type(x) # revealed: A
x = could_raise_returns_str()
reveal_type(x) # revealed: str
except TypeError:
reveal_type(x) # revealed: Literal[1] | A
x = could_raise_returns_B()
reveal_type(x) # revealed: B
x = could_raise_returns_C()
reveal_type(x) # revealed: C
reveal_type(x) # revealed: Literal[1] | str
x = could_raise_returns_bytes()
reveal_type(x) # revealed: bytes
x = could_raise_returns_bool()
reveal_type(x) # revealed: bool
except ValueError:
reveal_type(x) # revealed: Literal[1] | A
x = could_raise_returns_D()
reveal_type(x) # revealed: D
x = could_raise_returns_E()
reveal_type(x) # revealed: E
reveal_type(x) # revealed: Literal[1] | str
x = could_raise_returns_memoryview()
reveal_type(x) # revealed: memoryview
x = could_raise_returns_float()
reveal_type(x) # revealed: float
finally:
# TODO: should be `Literal[1] | A | B | C | D | E`
reveal_type(x) # revealed: A | C | E
# TODO: should be `Literal[1] | str | bytes | bool | memoryview | float`
reveal_type(x) # revealed: str | bool | float
reveal_type(x) # revealed: A | C | E
reveal_type(x) # revealed: str | bool | float
```
## Combining `except`, `else` and `finally` branches
@@ -344,94 +364,100 @@ If the exception handler has an `else` branch, we must also take into account th
control flow could have jumped to the `finally` suite from partway through the `else` suite due to
an exception raised *there*.
```py
class A: ...
class B: ...
class C: ...
class D: ...
class E: ...
```py path=single_except_branch.py
def could_raise_returns_str() -> str:
return "foo"
def could_raise_returns_A() -> A:
return A()
def could_raise_returns_bytes() -> bytes:
return b"foo"
def could_raise_returns_B() -> B:
return B()
def could_raise_returns_bool() -> bool:
return True
def could_raise_returns_C() -> C:
return C()
def could_raise_returns_memoryview() -> memoryview:
return memoryview(b"")
def could_raise_returns_D() -> D:
return D()
def could_raise_returns_E() -> E:
return E()
def could_raise_returns_float() -> float:
return 3.14
x = 1
try:
reveal_type(x) # revealed: Literal[1]
x = could_raise_returns_A()
reveal_type(x) # revealed: A
x = could_raise_returns_str()
reveal_type(x) # revealed: str
except TypeError:
reveal_type(x) # revealed: Literal[1] | A
x = could_raise_returns_B()
reveal_type(x) # revealed: B
x = could_raise_returns_C()
reveal_type(x) # revealed: C
reveal_type(x) # revealed: Literal[1] | str
x = could_raise_returns_bytes()
reveal_type(x) # revealed: bytes
x = could_raise_returns_bool()
reveal_type(x) # revealed: bool
else:
reveal_type(x) # revealed: A
x = could_raise_returns_D()
reveal_type(x) # revealed: D
x = could_raise_returns_E()
reveal_type(x) # revealed: E
reveal_type(x) # revealed: str
x = could_raise_returns_memoryview()
reveal_type(x) # revealed: memoryview
x = could_raise_returns_float()
reveal_type(x) # revealed: float
finally:
# TODO: should be `Literal[1] | A | B | C | D | E`
reveal_type(x) # revealed: C | E
# TODO: should be `Literal[1] | str | bytes | bool | memoryview | float`
reveal_type(x) # revealed: bool | float
reveal_type(x) # revealed: C | E
reveal_type(x) # revealed: bool | float
```
The same again, this time with multiple `except` branches:
```py
class F: ...
class G: ...
```py path=multiple_except_branches.py
def could_raise_returns_str() -> str:
return "foo"
def could_raise_returns_F() -> F:
return F()
def could_raise_returns_bytes() -> bytes:
return b"foo"
def could_raise_returns_G() -> G:
return G()
def could_raise_returns_bool() -> bool:
return True
def could_raise_returns_memoryview() -> memoryview:
return memoryview(b"")
def could_raise_returns_float() -> float:
return 3.14
def could_raise_returns_range() -> range:
return range(42)
def could_raise_returns_slice() -> slice:
return slice(None)
x = 1
try:
reveal_type(x) # revealed: Literal[1]
x = could_raise_returns_A()
reveal_type(x) # revealed: A
x = could_raise_returns_str()
reveal_type(x) # revealed: str
except TypeError:
reveal_type(x) # revealed: Literal[1] | A
x = could_raise_returns_B()
reveal_type(x) # revealed: B
x = could_raise_returns_C()
reveal_type(x) # revealed: C
reveal_type(x) # revealed: Literal[1] | str
x = could_raise_returns_bytes()
reveal_type(x) # revealed: bytes
x = could_raise_returns_bool()
reveal_type(x) # revealed: bool
except ValueError:
reveal_type(x) # revealed: Literal[1] | A
x = could_raise_returns_D()
reveal_type(x) # revealed: D
x = could_raise_returns_E()
reveal_type(x) # revealed: E
reveal_type(x) # revealed: Literal[1] | str
x = could_raise_returns_memoryview()
reveal_type(x) # revealed: memoryview
x = could_raise_returns_float()
reveal_type(x) # revealed: float
else:
reveal_type(x) # revealed: A
x = could_raise_returns_F()
reveal_type(x) # revealed: F
x = could_raise_returns_G()
reveal_type(x) # revealed: G
reveal_type(x) # revealed: str
x = could_raise_returns_range()
reveal_type(x) # revealed: range
x = could_raise_returns_slice()
reveal_type(x) # revealed: slice
finally:
# TODO: should be `Literal[1] | A | B | C | D | E | F | G`
reveal_type(x) # revealed: C | E | G
# TODO: should be `Literal[1] | str | bytes | bool | memoryview | float | range | slice`
reveal_type(x) # revealed: bool | float | slice
reveal_type(x) # revealed: C | E | G
reveal_type(x) # revealed: bool | float | slice
```
## Nested `try`/`except` blocks
@@ -445,101 +471,92 @@ a suite containing statements that could possibly raise exceptions, which would
jumping out of that suite prior to the suite running to completion.
```py
class A: ...
class B: ...
class C: ...
class D: ...
class E: ...
class F: ...
class G: ...
class H: ...
class I: ...
class J: ...
class K: ...
def could_raise_returns_str() -> str:
return "foo"
def could_raise_returns_A() -> A:
return A()
def could_raise_returns_bytes() -> bytes:
return b"foo"
def could_raise_returns_B() -> B:
return B()
def could_raise_returns_bool() -> bool:
return True
def could_raise_returns_C() -> C:
return C()
def could_raise_returns_memoryview() -> memoryview:
return memoryview(b"")
def could_raise_returns_D() -> D:
return D()
def could_raise_returns_float() -> float:
return 3.14
def could_raise_returns_E() -> E:
return E()
def could_raise_returns_range() -> range:
return range(42)
def could_raise_returns_F() -> F:
return F()
def could_raise_returns_slice() -> slice:
return slice(None)
def could_raise_returns_G() -> G:
return G()
def could_raise_returns_complex() -> complex:
return 3j
def could_raise_returns_H() -> H:
return H()
def could_raise_returns_bytearray() -> bytearray:
return bytearray()
def could_raise_returns_I() -> I:
return I()
class Foo: ...
class Bar: ...
def could_raise_returns_J() -> J:
return J()
def could_raise_returns_Foo() -> Foo:
return Foo()
def could_raise_returns_K() -> K:
return K()
def could_raise_returns_Bar() -> Bar:
return Bar()
x = 1
try:
try:
reveal_type(x) # revealed: Literal[1]
x = could_raise_returns_A()
reveal_type(x) # revealed: A
x = could_raise_returns_str()
reveal_type(x) # revealed: str
except TypeError:
reveal_type(x) # revealed: Literal[1] | A
x = could_raise_returns_B()
reveal_type(x) # revealed: B
x = could_raise_returns_C()
reveal_type(x) # revealed: C
reveal_type(x) # revealed: Literal[1] | str
x = could_raise_returns_bytes()
reveal_type(x) # revealed: bytes
x = could_raise_returns_bool()
reveal_type(x) # revealed: bool
except ValueError:
reveal_type(x) # revealed: Literal[1] | A
x = could_raise_returns_D()
reveal_type(x) # revealed: D
x = could_raise_returns_E()
reveal_type(x) # revealed: E
reveal_type(x) # revealed: Literal[1] | str
x = could_raise_returns_memoryview()
reveal_type(x) # revealed: memoryview
x = could_raise_returns_float()
reveal_type(x) # revealed: float
else:
reveal_type(x) # revealed: A
x = could_raise_returns_F()
reveal_type(x) # revealed: F
x = could_raise_returns_G()
reveal_type(x) # revealed: G
reveal_type(x) # revealed: str
x = could_raise_returns_range()
reveal_type(x) # revealed: range
x = could_raise_returns_slice()
reveal_type(x) # revealed: slice
finally:
# TODO: should be `Literal[1] | A | B | C | D | E | F | G`
reveal_type(x) # revealed: C | E | G
# TODO: should be `Literal[1] | str | bytes | bool | memoryview | float | range | slice`
reveal_type(x) # revealed: bool | float | slice
x = 2
reveal_type(x) # revealed: Literal[2]
reveal_type(x) # revealed: Literal[2]
except:
reveal_type(x) # revealed: Literal[1, 2] | A | B | C | D | E | F | G
x = could_raise_returns_H()
reveal_type(x) # revealed: H
x = could_raise_returns_I()
reveal_type(x) # revealed: I
reveal_type(x) # revealed: Literal[1, 2] | str | bytes | bool | memoryview | float | range | slice
x = could_raise_returns_complex()
reveal_type(x) # revealed: complex
x = could_raise_returns_bytearray()
reveal_type(x) # revealed: bytearray
else:
reveal_type(x) # revealed: Literal[2]
x = could_raise_returns_J()
reveal_type(x) # revealed: J
x = could_raise_returns_K()
reveal_type(x) # revealed: K
x = could_raise_returns_Foo()
reveal_type(x) # revealed: Foo
x = could_raise_returns_Bar()
reveal_type(x) # revealed: Bar
finally:
# TODO: should be `Literal[1, 2] | A | B | C | D | E | F | G | H | I | J | K`
reveal_type(x) # revealed: I | K
# TODO: should be `Literal[1, 2] | str | bytes | bool | memoryview | float | range | slice | complex | bytearray | Foo | Bar`
reveal_type(x) # revealed: bytearray | Bar
# Either one `except` branch or the `else`
# must have been taken and completed to get here:
reveal_type(x) # revealed: I | K
reveal_type(x) # revealed: bytearray | Bar
```
## Nested scopes inside `try` blocks
@@ -548,64 +565,58 @@ Shadowing a variable in an inner scope has no effect on type inference of the va
in the outer scope:
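A minimal runtime illustration of the underlying Python scoping rule (not one of the fixtures below): an assignment in a nested function or class body creates a new binding in that inner scope and never rebinds the outer variable.
```py
x = 1
def f():
    x = "inner"  # binds a new local `x`; the module-level `x` is untouched
    return x
class C:
    x = b"class-body"  # class-body scope is likewise separate
f()
print(x)  # 1
```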
```py
class A: ...
class B: ...
class C: ...
class D: ...
class E: ...
def could_raise_returns_str() -> str:
return "foo"
def could_raise_returns_A() -> A:
return A()
def could_raise_returns_bytes() -> bytes:
return b"foo"
def could_raise_returns_B() -> B:
return B()
def could_raise_returns_range() -> range:
return range(42)
def could_raise_returns_C() -> C:
return C()
def could_raise_returns_bytearray() -> bytearray:
return bytearray()
def could_raise_returns_D() -> D:
return D()
def could_raise_returns_E() -> E:
return E()
def could_raise_returns_float() -> float:
return 3.14
x = 1
try:
def foo(param=could_raise_returns_A()):
x = could_raise_returns_A()
def foo(param=could_raise_returns_str()):
x = could_raise_returns_str()
try:
reveal_type(x) # revealed: A
x = could_raise_returns_B()
reveal_type(x) # revealed: B
reveal_type(x) # revealed: str
x = could_raise_returns_bytes()
reveal_type(x) # revealed: bytes
except:
reveal_type(x) # revealed: A | B
x = could_raise_returns_C()
reveal_type(x) # revealed: C
x = could_raise_returns_D()
reveal_type(x) # revealed: D
reveal_type(x) # revealed: str | bytes
x = could_raise_returns_bytearray()
reveal_type(x) # revealed: bytearray
x = could_raise_returns_float()
reveal_type(x) # revealed: float
finally:
# TODO: should be `A | B | C | D`
reveal_type(x) # revealed: B | D
reveal_type(x) # revealed: B | D
# TODO: should be `str | bytes | bytearray | float`
reveal_type(x) # revealed: bytes | float
reveal_type(x) # revealed: bytes | float
x = foo
reveal_type(x) # revealed: def foo(param=A) -> Unknown
reveal_type(x) # revealed: Literal[foo]
except:
reveal_type(x) # revealed: Literal[1] | (def foo(param=A) -> Unknown)
reveal_type(x) # revealed: Literal[1] | Literal[foo]
class Bar:
x = could_raise_returns_E()
reveal_type(x) # revealed: E
x = could_raise_returns_range()
reveal_type(x) # revealed: range
x = Bar
reveal_type(x) # revealed: <class 'Bar'>
reveal_type(x) # revealed: Literal[Bar]
finally:
# TODO: should be `Literal[1] | <class 'foo'> | <class 'Bar'>`
reveal_type(x) # revealed: (def foo(param=A) -> Unknown) | <class 'Bar'>
# TODO: should be `Literal[1] | Literal[foo] | Literal[Bar]`
reveal_type(x) # revealed: Literal[foo] | Literal[Bar]
reveal_type(x) # revealed: (def foo(param=A) -> Unknown) | <class 'Bar'>
reveal_type(x) # revealed: Literal[foo] | Literal[Bar]
```
[1]: https://astral-sh.notion.site/Exception-handler-control-flow-11348797e1ca80bb8ce1e9aedbbe439d

View File

@@ -0,0 +1,30 @@
# Except star
## Except\* with BaseException
```py
try:
help()
except* BaseException as e:
reveal_type(e) # revealed: BaseExceptionGroup
```
## Except\* with specific exception
```py
try:
help()
except* OSError as e:
# TODO(Alex): more precise would be `ExceptionGroup[OSError]`
reveal_type(e) # revealed: BaseExceptionGroup
```
## Except\* with multiple exceptions
```py
try:
help()
except* (TypeError, AttributeError) as e:
# TODO(Alex): more precise would be `ExceptionGroup[TypeError | AttributeError]`.
reveal_type(e) # revealed: BaseExceptionGroup
```
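As background for the TODOs above, a runtime illustration (not one of the fixtures; requires Python 3.11+): `except*` always binds the name to an exception group containing the matched exceptions, which is why `ExceptionGroup[OSError]` would be the more precise type.
```py
# Runtime illustration only (Python 3.11+): `except*` binds an ExceptionGroup,
# even when a single exception type is matched.
try:
    raise ExceptionGroup("eg", [OSError("boom"), OSError("crash")])
except* OSError as e:
    print(type(e).__name__)  # ExceptionGroup
    print([type(exc).__name__ for exc in e.exceptions])  # ['OSError', 'OSError']
```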

View File

@@ -9,4 +9,5 @@ try:
print
except as e: # error: [invalid-syntax]
reveal_type(e) # revealed: Unknown
```

View File

@@ -0,0 +1,28 @@
# Attribute access
## Boundness
```py
def flag() -> bool: ...
class A:
always_bound = 1
if flag():
union = 1
else:
union = "abc"
if flag():
possibly_unbound = "abc"
reveal_type(A.always_bound) # revealed: Literal[1]
reveal_type(A.union) # revealed: Literal[1] | Literal["abc"]
# error: [possibly-unbound-attribute] "Attribute `possibly_unbound` on type `Literal[A]` is possibly unbound"
reveal_type(A.possibly_unbound) # revealed: Literal["abc"]
# error: [unresolved-attribute] "Type `Literal[A]` has no attribute `non_existent`"
reveal_type(A.non_existent) # revealed: Unknown
```

View File

@@ -0,0 +1,110 @@
# Expressions
## OR
```py
def foo() -> str:
pass
reveal_type(True or False) # revealed: Literal[True]
reveal_type("x" or "y" or "z") # revealed: Literal["x"]
reveal_type("" or "y" or "z") # revealed: Literal["y"]
reveal_type(False or "z") # revealed: Literal["z"]
reveal_type(False or True) # revealed: Literal[True]
reveal_type(False or False) # revealed: Literal[False]
reveal_type(foo() or False) # revealed: str | Literal[False]
reveal_type(foo() or True) # revealed: str | Literal[True]
```
## AND
```py
def foo() -> str:
pass
reveal_type(True and False) # revealed: Literal[False]
reveal_type(False and True) # revealed: Literal[False]
reveal_type(foo() and False) # revealed: str | Literal[False]
reveal_type(foo() and True) # revealed: str | Literal[True]
reveal_type("x" and "y" and "z") # revealed: Literal["z"]
reveal_type("x" and "y" and "") # revealed: Literal[""]
reveal_type("" and "y") # revealed: Literal[""]
```
## Simple function calls to bool
```py
def returns_bool() -> bool:
return True
if returns_bool():
x = True
else:
x = False
reveal_type(x) # revealed: bool
```
## Complex
```py
def foo() -> str:
pass
reveal_type("x" and "y" or "z") # revealed: Literal["y"]
reveal_type("x" or "y" and "z") # revealed: Literal["x"]
reveal_type("" and "y" or "z") # revealed: Literal["z"]
reveal_type("" or "y" and "z") # revealed: Literal["z"]
reveal_type("x" and "y" or "") # revealed: Literal["y"]
reveal_type("x" or "y" and "") # revealed: Literal["x"]
```
## `bool()` function
## Evaluates to builtin
```py path=a.py
redefined_builtin_bool = bool
def my_bool(x) -> bool:
return True
```
```py
from a import redefined_builtin_bool, my_bool
reveal_type(redefined_builtin_bool(0)) # revealed: Literal[False]
reveal_type(my_bool(0)) # revealed: bool
```
## Truthy values
```py
reveal_type(bool(1)) # revealed: Literal[True]
reveal_type(bool((0,))) # revealed: Literal[True]
reveal_type(bool("NON EMPTY")) # revealed: Literal[True]
reveal_type(bool(True)) # revealed: Literal[True]
def foo(): ...
reveal_type(bool(foo)) # revealed: Literal[True]
```
## Falsy values
```py
reveal_type(bool(0)) # revealed: Literal[False]
reveal_type(bool(())) # revealed: Literal[False]
reveal_type(bool(None)) # revealed: Literal[False]
reveal_type(bool("")) # revealed: Literal[False]
reveal_type(bool(False)) # revealed: Literal[False]
reveal_type(bool()) # revealed: Literal[False]
```
## Ambiguous values
```py
reveal_type(bool([])) # revealed: bool
reveal_type(bool({})) # revealed: bool
reveal_type(bool(set())) # revealed: bool
```

View File

@@ -0,0 +1,24 @@
# If expression
## Union
```py
def bool_instance() -> bool:
return True
reveal_type(1 if bool_instance() else 2) # revealed: Literal[1, 2]
```
## Statically known branches
```py
reveal_type(1 if True else 2) # revealed: Literal[1]
reveal_type(1 if "not empty" else 2) # revealed: Literal[1]
reveal_type(1 if (1,) else 2) # revealed: Literal[1]
reveal_type(1 if 1 else 2) # revealed: Literal[1]
reveal_type(1 if False else 2) # revealed: Literal[2]
reveal_type(1 if None else 2) # revealed: Literal[2]
reveal_type(1 if "" else 2) # revealed: Literal[2]
reveal_type(1 if 0 else 2) # revealed: Literal[2]
```

View File

@@ -0,0 +1,111 @@
# PEP 695 Generics
## Class Declarations
Basic PEP 695 generics
```py
class MyBox[T]:
data: T
box_model_number = 695
def __init__(self, data: T):
self.data = data
box: MyBox[int] = MyBox(5)
# TODO should emit a diagnostic here (str is not assignable to int)
wrong_innards: MyBox[int] = MyBox("five")
# TODO reveal int
reveal_type(box.data) # revealed: @Todo
reveal_type(MyBox.box_model_number) # revealed: Literal[695]
```
## Subclassing
```py
class MyBox[T]:
data: T
def __init__(self, data: T):
self.data = data
# TODO not error on the subscripting
# error: [non-subscriptable]
class MySecureBox[T](MyBox[T]): ...
secure_box: MySecureBox[int] = MySecureBox(5)
reveal_type(secure_box) # revealed: MySecureBox
# TODO reveal int
reveal_type(secure_box.data) # revealed: @Todo
```
## Cyclical class definition
In type stubs, classes can reference themselves in their base class definitions. For example, in
`typeshed`, we have `class str(Sequence[str]): ...`.
This should hold true even with generics at play.
```py path=a.pyi
class Seq[T]: ...
# TODO not error on the subscripting
class S[T](Seq[S]): ... # error: [non-subscriptable]
reveal_type(S) # revealed: Literal[S]
```
## Type params
A PEP 695 type variable defines a value of type `typing.TypeVar` with attributes `__name__`,
`__bound__`, `__constraints__`, and `__default__` (the latter three all lazily evaluated):
```py
def f[T, U: A, V: (A, B), W = A, X: A = A1]():
reveal_type(T) # revealed: T
reveal_type(T.__name__) # revealed: Literal["T"]
reveal_type(T.__bound__) # revealed: None
reveal_type(T.__constraints__) # revealed: tuple[()]
reveal_type(T.__default__) # revealed: NoDefault
reveal_type(U) # revealed: U
reveal_type(U.__name__) # revealed: Literal["U"]
reveal_type(U.__bound__) # revealed: type[A]
reveal_type(U.__constraints__) # revealed: tuple[()]
reveal_type(U.__default__) # revealed: NoDefault
reveal_type(V) # revealed: V
reveal_type(V.__name__) # revealed: Literal["V"]
reveal_type(V.__bound__) # revealed: None
reveal_type(V.__constraints__) # revealed: tuple[type[A], type[B]]
reveal_type(V.__default__) # revealed: NoDefault
reveal_type(W) # revealed: W
reveal_type(W.__name__) # revealed: Literal["W"]
reveal_type(W.__bound__) # revealed: None
reveal_type(W.__constraints__) # revealed: tuple[()]
reveal_type(W.__default__) # revealed: type[A]
reveal_type(X) # revealed: X
reveal_type(X.__name__) # revealed: Literal["X"]
reveal_type(X.__bound__) # revealed: type[A]
reveal_type(X.__constraints__) # revealed: tuple[()]
reveal_type(X.__default__) # revealed: type[A1]
class A: ...
class B: ...
class A1(A): ...
```
## Minimum two constraints
A typevar with fewer than two constraints emits a diagnostic and is treated as unconstrained:
```py
# error: [invalid-typevar-constraints] "TypeVar must have at least two constrained types"
def f[T: (int,)]():
reveal_type(T.__constraints__) # revealed: tuple[()]
```

View File

@@ -0,0 +1,27 @@
# Structures
## Class import following
```py
from b import C as D
E = D
reveal_type(E) # revealed: Literal[C]
```
```py path=b.py
class C: ...
```
## Module member resolution
```py
import b
D = b.C
reveal_type(D) # revealed: Literal[C]
```
```py path=b.py
class C: ...
```

View File

@@ -0,0 +1,8 @@
# Importing builtin module
```py
import builtins
x = builtins.copyright
reveal_type(x) # revealed: Literal[copyright]
```

View File

@@ -2,13 +2,12 @@
## Maybe unbound
`maybe_unbound.py`:
```py
def coinflip() -> bool:
```py path=maybe_unbound.py
def bool_instance() -> bool:
return True
if coinflip():
flag = bool_instance()
if flag:
y = 3
x = y # error: [possibly-unresolved-reference]
@@ -25,21 +24,20 @@ reveal_type(y)
# error: [possibly-unbound-import] "Member `y` of module `maybe_unbound` is possibly unbound"
from maybe_unbound import x, y
reveal_type(x) # revealed: Unknown | Literal[3]
reveal_type(y) # revealed: Unknown | Literal[3]
reveal_type(x) # revealed: Literal[3]
reveal_type(y) # revealed: Literal[3]
```
## Maybe unbound annotated
`maybe_unbound_annotated.py`:
```py
def coinflip() -> bool:
```py path=maybe_unbound_annotated.py
def bool_instance() -> bool:
return True
if coinflip():
y: int = 3
flag = bool_instance()
if flag:
y: int = 3
x = y # error: [possibly-unresolved-reference]
# revealed: Literal[3]
@@ -56,7 +54,7 @@ Importing an annotated name prefers the declared type over the inferred type:
# error: [possibly-unbound-import] "Member `y` of module `maybe_unbound_annotated` is possibly unbound"
from maybe_unbound_annotated import x, y
reveal_type(x) # revealed: Unknown | Literal[3]
reveal_type(x) # revealed: Literal[3]
reveal_type(y) # revealed: int
```
@@ -64,13 +62,11 @@ reveal_type(y) # revealed: int
Importing a possibly undeclared name still gives us its declared type:
`maybe_undeclared.py`:
```py
def coinflip() -> bool:
```py path=maybe_undeclared.py
def bool_instance() -> bool:
return True
if coinflip():
if bool_instance():
x: int
```
@@ -82,29 +78,27 @@ reveal_type(x) # revealed: int
## Reimport
`c.py`:
```py
```py path=c.py
def f(): ...
```
`b.py`:
```py
def coinflip() -> bool:
```py path=b.py
def bool_instance() -> bool:
return True
if coinflip():
flag = bool_instance()
if flag:
from c import f
else:
def f(): ...
```
```py
from b import f
# TODO: We should disambiguate in such cases between `b.f` and `c.f`.
reveal_type(f) # revealed: (def f() -> Unknown) | (def f() -> Unknown)
# TODO: We should disambiguate in such cases, showing `Literal[b.f, c.f]`.
reveal_type(f) # revealed: Literal[f, f]
```
## Reimport with stub declaration
@@ -112,19 +106,16 @@ reveal_type(f) # revealed: (def f() -> Unknown) | (def f() -> Unknown)
When we have a declared type in one path and only an inferred-from-definition type in the other, we
should still be able to unify those:
`c.pyi`:
```pyi
```py path=c.pyi
x: int
```
`b.py`:
```py
def coinflip() -> bool:
```py path=b.py
def bool_instance() -> bool:
return True
if coinflip():
flag = bool_instance()
if flag:
from c import x
else:
x = 1

View File

@@ -3,7 +3,7 @@
## Unresolved import statement
```py
import bar # error: "Cannot resolve imported module `bar`"
import bar # error: "Cannot resolve import `bar`"
reveal_type(bar) # revealed: Unknown
```
@@ -11,16 +11,14 @@ reveal_type(bar) # revealed: Unknown
## Unresolved import from statement
```py
from bar import baz # error: "Cannot resolve imported module `bar`"
from bar import baz # error: "Cannot resolve import `bar`"
reveal_type(baz) # revealed: Unknown
```
## Unresolved import from resolved module
`a.py`:
```py
```py path=a.py
```
```py
@@ -31,10 +29,8 @@ reveal_type(thing) # revealed: Unknown
## Resolved import of symbol from unresolved import
`a.py`:
```py
import foo as foo # error: "Cannot resolve imported module `foo`"
```py path=a.py
import foo as foo # error: "Cannot resolve import `foo`"
reveal_type(foo) # revealed: Unknown
```
@@ -50,9 +46,7 @@ reveal_type(foo) # revealed: Unknown
## No implicit shadowing
`b.py`:
```py
```py path=b.py
x: int
```
@@ -61,28 +55,3 @@ from b import x
x = "foo" # error: [invalid-assignment] "Object of type `Literal["foo"]"
```
## Import cycle
`a.py`:
```py
class A: ...
reveal_type(A.__mro__) # revealed: tuple[<class 'A'>, <class 'object'>]
import b
class C(b.B): ...
reveal_type(C.__mro__) # revealed: tuple[<class 'C'>, <class 'B'>, <class 'A'>, <class 'object'>]
```
`b.py`:
```py
from a import A
class B(A): ...
reveal_type(B.__mro__) # revealed: tuple[<class 'B'>, <class 'A'>, <class 'object'>]
```

View File

@@ -0,0 +1,143 @@
# Relative
## Non-existent
```py path=package/__init__.py
```
```py path=package/bar.py
from .foo import X # error: [unresolved-import]
reveal_type(X) # revealed: Unknown
```
## Simple
```py path=package/__init__.py
```
```py path=package/foo.py
X = 42
```
```py path=package/bar.py
from .foo import X
reveal_type(X) # revealed: Literal[42]
```
## Dotted
```py path=package/__init__.py
```
```py path=package/foo/bar/baz.py
X = 42
```
```py path=package/bar.py
from .foo.bar.baz import X
reveal_type(X) # revealed: Literal[42]
```
## Bare to package
```py path=package/__init__.py
X = 42
```
```py path=package/bar.py
from . import X
reveal_type(X) # revealed: Literal[42]
```
## Non-existent + bare to package
```py path=package/bar.py
from . import X # error: [unresolved-import]
reveal_type(X) # revealed: Unknown
```
## Dunder init
```py path=package/__init__.py
from .foo import X
reveal_type(X) # revealed: Literal[42]
```
```py path=package/foo.py
X = 42
```
## Non-existent + dunder init
```py path=package/__init__.py
from .foo import X # error: [unresolved-import]
reveal_type(X) # revealed: Unknown
```
## Long relative import
```py path=package/__init__.py
```
```py path=package/foo.py
X = 42
```
```py path=package/subpackage/subsubpackage/bar.py
from ...foo import X
reveal_type(X) # revealed: Literal[42]
```
## Unbound symbol
```py path=package/__init__.py
```
```py path=package/foo.py
x # error: [unresolved-reference]
```
```py path=package/bar.py
from .foo import x # error: [unresolved-import]
reveal_type(x) # revealed: Unknown
```
## Bare to module
```py path=package/__init__.py
```
```py path=package/foo.py
X = 42
```
```py path=package/bar.py
# TODO: support submodule imports
from . import foo # error: [unresolved-import]
y = foo.X
# TODO: should be `Literal[42]`
reveal_type(y) # revealed: Unknown
```
## Non-existent + bare to module
```py path=package/__init__.py
```
```py path=package/bar.py
# TODO: support submodule imports
from . import foo # error: [unresolved-import]
reveal_type(foo) # revealed: Unknown
```

Some files were not shown because too many files have changed in this diff Show More