Compare commits


6 Commits

Author SHA1 Message Date
Amethyst Reese
f0fa410a02 Minimal prototype using regex 2026-01-12 18:20:12 -08:00
Amethyst Reese
06440dc5ba Update source kind from path mapping 2026-01-09 15:45:22 -08:00
Amethyst Reese
64fd7e900d Create new source types for markdown files 2026-01-08 17:08:23 -08:00
Dylan
c920cf8cdb Bump 0.14.11 (#22462) 2026-01-08 12:51:47 -06:00
Micha Reiser
bb757b5a79 [ty] Don't show diagnostics for excluded files (#22455) 2026-01-08 18:27:28 +01:00
Charlie Marsh
1f49e8ef51 Include configured src directories when resolving graphs (#22451)
## Summary

This PR augments the detected source paths with the user-configured
`src` when computing roots for `ruff analyze graph`.
2026-01-08 15:19:15 +00:00
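As a sketch of the setting this PR wires in, a monorepo might configure something like the following in `ruff.toml` (paths are illustrative, not from this repository):

```toml
# Illustrative ruff.toml: these directories are merged into the roots
# used by `ruff analyze graph` for module resolution.
src = ["lib", "libs/*"]
```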
34 changed files with 547 additions and 322 deletions

View File

@@ -1,5 +1,63 @@
# Changelog
## 0.14.11
Released on 2026-01-08.
### Preview features
- Consolidate diagnostics for matched disable/enable suppression comments ([#22099](https://github.com/astral-sh/ruff/pull/22099))
- Report diagnostics for invalid/unmatched range suppression comments ([#21908](https://github.com/astral-sh/ruff/pull/21908))
- \[`airflow`\] Passing positional argument into `airflow.lineage.hook.HookLineageCollector.create_asset` is not allowed (`AIR303`) ([#22046](https://github.com/astral-sh/ruff/pull/22046))
- \[`refurb`\] Mark `FURB192` fix as always unsafe ([#22210](https://github.com/astral-sh/ruff/pull/22210))
- \[`ruff`\] Add `non-empty-init-module` (`RUF067`) ([#22143](https://github.com/astral-sh/ruff/pull/22143))
### Bug fixes
- Fix GitHub format for multi-line diagnostics ([#22108](https://github.com/astral-sh/ruff/pull/22108))
- \[`flake8-unused-arguments`\] Mark `**kwargs` in `TypeVar` as used (`ARG001`) ([#22214](https://github.com/astral-sh/ruff/pull/22214))
### Rule changes
- Add `help:` subdiagnostics for several Ruff rules that can sometimes appear to disagree with `ty` ([#22331](https://github.com/astral-sh/ruff/pull/22331))
- \[`pylint`\] Demote `PLW1510` fix to display-only ([#22318](https://github.com/astral-sh/ruff/pull/22318))
- \[`pylint`\] Ignore identical members (`PLR1714`) ([#22220](https://github.com/astral-sh/ruff/pull/22220))
- \[`pylint`\] Improve diagnostic range for `PLC0206` ([#22312](https://github.com/astral-sh/ruff/pull/22312))
- \[`ruff`\] Improve fix title for `RUF102` invalid rule code ([#22100](https://github.com/astral-sh/ruff/pull/22100))
- \[`flake8-simplify`\] Avoid unnecessary builtins import for `SIM105` ([#22358](https://github.com/astral-sh/ruff/pull/22358))
### Configuration
- Allow Python 3.15 as valid `target-version` value in preview ([#22419](https://github.com/astral-sh/ruff/pull/22419))
- Check `required-version` before parsing rules ([#22410](https://github.com/astral-sh/ruff/pull/22410))
- Include configured `src` directories when resolving graphs ([#22451](https://github.com/astral-sh/ruff/pull/22451))
### Documentation
- Update `T201` suggestion to not use root logger to satisfy `LOG015` ([#22059](https://github.com/astral-sh/ruff/pull/22059))
- Fix `iter` example in unsafe fixes doc ([#22118](https://github.com/astral-sh/ruff/pull/22118))
- \[`flake8_print`\] better suggestion for `basicConfig` in `T201` docs ([#22101](https://github.com/astral-sh/ruff/pull/22101))
- \[`pylint`\] Restore the fix safety docs for `PLW0133` ([#22211](https://github.com/astral-sh/ruff/pull/22211))
- Fix Jupyter notebook discovery info for editors ([#22447](https://github.com/astral-sh/ruff/pull/22447))
### Contributors
- [@charliermarsh](https://github.com/charliermarsh)
- [@ntBre](https://github.com/ntBre)
- [@cenviity](https://github.com/cenviity)
- [@njhearp](https://github.com/njhearp)
- [@cbachhuber](https://github.com/cbachhuber)
- [@jelle-openai](https://github.com/jelle-openai)
- [@AlexWaygood](https://github.com/AlexWaygood)
- [@ValdonVitija](https://github.com/ValdonVitija)
- [@BurntSushi](https://github.com/BurntSushi)
- [@Jkhall81](https://github.com/Jkhall81)
- [@PeterJCLaw](https://github.com/PeterJCLaw)
- [@harupy](https://github.com/harupy)
- [@amyreese](https://github.com/amyreese)
- [@sjyangkevin](https://github.com/sjyangkevin)
- [@woodruffw](https://github.com/woodruffw)
## 0.14.10
Released on 2025-12-18.

Cargo.lock generated
View File

@@ -2912,7 +2912,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.14.10"
version = "0.14.11"
dependencies = [
"anyhow",
"argfile",
@@ -2928,6 +2928,7 @@ dependencies = [
"filetime",
"globwalk",
"ignore",
"indexmap",
"indoc",
"insta",
"insta-cmd",
@@ -3171,7 +3172,7 @@ dependencies = [
[[package]]
name = "ruff_linter"
version = "0.14.10"
version = "0.14.11"
dependencies = [
"aho-corasick",
"anyhow",
@@ -3529,7 +3530,7 @@ dependencies = [
[[package]]
name = "ruff_wasm"
version = "0.14.10"
version = "0.14.11"
dependencies = [
"console_error_panic_hook",
"console_log",

View File

@@ -150,8 +150,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
# For a specific version.
curl -LsSf https://astral.sh/ruff/0.14.10/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.14.10/install.ps1 | iex"
curl -LsSf https://astral.sh/ruff/0.14.11/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.14.11/install.ps1 | iex"
```
You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
@@ -184,7 +184,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.14.10
rev: v0.14.11
hooks:
# Run the linter.
- id: ruff-check

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.14.10"
version = "0.14.11"
publish = true
authors = { workspace = true }
edition = { workspace = true }
@@ -31,6 +31,7 @@ ruff_options_metadata = { workspace = true, features = ["serde"] }
ruff_python_ast = { workspace = true }
ruff_python_formatter = { workspace = true }
ruff_python_parser = { workspace = true }
ruff_python_trivia = { workspace = true }
ruff_server = { workspace = true }
ruff_source_file = { workspace = true }
ruff_text_size = { workspace = true }
@@ -48,6 +49,7 @@ colored = { workspace = true }
filetime = { workspace = true }
globwalk = { workspace = true }
ignore = { workspace = true }
indexmap = { workspace = true }
is-macro = { workspace = true }
itertools = { workspace = true }
jiff = { workspace = true }

View File

@@ -2,6 +2,7 @@ use crate::args::{AnalyzeGraphArgs, ConfigArguments};
use crate::resolve::resolve;
use crate::{ExitStatus, resolve_default_files};
use anyhow::Result;
use indexmap::IndexSet;
use log::{debug, warn};
use path_absolutize::CWD;
use ruff_db::system::{SystemPath, SystemPathBuf};
@@ -11,7 +12,7 @@ use ruff_linter::source_kind::SourceKind;
use ruff_linter::{warn_user, warn_user_once};
use ruff_python_ast::{PySourceType, SourceType};
use ruff_workspace::resolver::{ResolvedFile, match_exclusion, python_files_in_path};
use rustc_hash::FxHashMap;
use rustc_hash::{FxBuildHasher, FxHashMap};
use std::io::Write;
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex};
@@ -59,17 +60,34 @@ pub(crate) fn analyze_graph(
})
.collect::<FxHashMap<_, _>>();
// Create a database from the source roots.
let src_roots = package_roots
.values()
.filter_map(|package| package.as_deref())
.filter_map(|package| package.parent())
.map(Path::to_path_buf)
.filter_map(|path| SystemPathBuf::from_path_buf(path).ok())
.collect();
// Create a database from the source roots, combining configured `src` paths with detected
// package roots. Configured paths are added first so they take precedence, and duplicates
// are removed.
let mut src_roots: IndexSet<SystemPathBuf, FxBuildHasher> = IndexSet::default();
// Add configured `src` paths first (for precedence), filtering to only include existing
// directories.
src_roots.extend(
pyproject_config
.settings
.linter
.src
.iter()
.filter(|path| path.is_dir())
.filter_map(|path| SystemPathBuf::from_path_buf(path.clone()).ok()),
);
// Add detected package roots.
src_roots.extend(
package_roots
.values()
.filter_map(|package| package.as_deref())
.filter_map(|path| path.parent())
.filter_map(|path| SystemPathBuf::from_path_buf(path.to_path_buf()).ok()),
);
let db = ModuleDb::from_src_roots(
src_roots,
src_roots.into_iter().collect(),
pyproject_config
.settings
.analyze
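The precedence-plus-deduplication behavior of the `IndexSet` above can be sketched in Python (an illustrative stand-in, not the actual API): configured `src` paths are inserted first, so a duplicate arriving later from the detected package roots keeps the earlier position.

```python
def merge_src_roots(configured, detected):
    """Ordered, de-duplicated merge: configured `src` paths come first."""
    roots = dict.fromkeys(configured)  # dicts preserve insertion order
    for root in detected:
        roots.setdefault(root, None)   # duplicates keep their first position
    return list(roots)
```

A configured path that is also a detected package root appears once, in its configured position.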

View File

@@ -11,6 +11,7 @@ use itertools::Itertools;
use log::{error, warn};
use rayon::iter::Either::{Left, Right};
use rayon::iter::{IntoParallelRefIterator, ParallelIterator};
use regex::{Captures, Regex};
use ruff_db::diagnostic::{
Annotation, Diagnostic, DiagnosticId, DisplayDiagnosticConfig, Severity, Span,
};
@@ -18,6 +19,7 @@ use ruff_linter::message::{EmitterContext, create_panic_diagnostic, render_diagn
use ruff_linter::settings::types::OutputFormat;
use ruff_notebook::NotebookIndex;
use ruff_python_parser::ParseError;
use ruff_python_trivia::textwrap::{dedent, indent};
use rustc_hash::{FxHashMap, FxHashSet};
use thiserror::Error;
use tracing::debug;
@@ -489,6 +491,66 @@ pub(crate) fn format_source(
formatted,
)))
}
SourceKind::Markdown(unformatted_document) => {
// adapted from blacken-docs
// https://github.com/adamchainz/blacken-docs/blob/fb107c1dce25f9206e29297aaa1ed7afc2980a5a/src/blacken_docs/__init__.py#L17
let code_block_regex = Regex::new(
r"(?imsx)
(?<before>
^(?<indent>\ *)```[^\S\r\n]*
(?:python|py|python3|py3)
(?:\ .*?)?\n
)
(?<code>.*?)
(?<after>
^\ *```[^\S\r\n]*$
)
",
)
.unwrap();
let mut changed = false;
let formatted_document =
code_block_regex.replace_all(unformatted_document, |capture: &Captures| {
let (original, [before, code_indent, unformatted_code, after]) =
capture.extract();
let unformatted_code = dedent(unformatted_code);
let options = settings.to_format_options(source_type, &unformatted_code, path);
let formatted_code = if let Some(_range) = range {
unimplemented!()
} else {
// Using `Printed::into_code` requires adding `ruff_formatter` as a direct dependency, and I suspect that Rust can optimize the closure away regardless.
#[expect(clippy::redundant_closure_for_method_calls)]
format_module_source(&unformatted_code, options)
.map(|formatted| formatted.into_code())
};
// TODO: figure out how to properly raise errors from inside closure
if let Ok(formatted_code) = formatted_code {
if formatted_code.len() == unformatted_code.len()
&& formatted_code == *unformatted_code
{
original.to_string()
} else {
changed = true;
let formatted_code = indent(formatted_code.as_str(), code_indent);
format!("{before}{formatted_code}{after}")
}
} else {
original.to_string()
}
});
if changed {
Ok(FormattedSource::Formatted(SourceKind::Markdown(
formatted_document.to_string(),
)))
} else {
Ok(FormattedSource::Unchanged)
}
}
}
}
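A minimal Python sketch of the fence-matching idea above (simplified from the Rust regex: the case-insensitive and extended flags and the trailing info-string handling are reduced, and all names are illustrative):

```python
import re

# Match ```python ... ``` fences, capturing the indent and the code body.
CODE_BLOCK = re.compile(
    r"(?ms)"                                         # multiline + dotall
    r"^(?P<indent> *)```(?:python|py|python3|py3) *\n"
    r"(?P<code>.*?)"
    r"^ *``` *$"
)

doc = "# Title\n```python\nx = 1\n```\n"
match = CODE_BLOCK.search(doc)
```

The real implementation then dedents `code`, formats it, and re-indents it by `indent` before splicing it back between the fences.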

View File

@@ -714,6 +714,121 @@ fn notebook_basic() -> Result<()> {
Ok(())
}
/// Test that the `src` configuration option is respected.
///
/// This is useful for monorepos where there are multiple source directories that need to be
/// included in the module resolution search path.
#[test]
fn src_option() -> Result<()> {
let tempdir = TempDir::new()?;
let root = ChildPath::new(tempdir.path());
// Create a lib directory with a package.
root.child("lib")
.child("mylib")
.child("__init__.py")
.write_str("def helper(): pass")?;
// Create an app directory with a file that imports from mylib.
root.child("app").child("__init__.py").write_str("")?;
root.child("app")
.child("main.py")
.write_str("from mylib import helper")?;
// Without src configured, the import from mylib won't resolve.
insta::with_settings!({
filters => INSTA_FILTERS.to_vec(),
}, {
assert_cmd_snapshot!(command().arg("app").current_dir(&root), @r#"
success: true
exit_code: 0
----- stdout -----
{
"app/__init__.py": [],
"app/main.py": []
}
----- stderr -----
"#);
});
// With src = ["lib"], the import should resolve.
root.child("ruff.toml").write_str(indoc::indoc! {r#"
src = ["lib"]
"#})?;
insta::with_settings!({
filters => INSTA_FILTERS.to_vec(),
}, {
assert_cmd_snapshot!(command().arg("app").current_dir(&root), @r#"
success: true
exit_code: 0
----- stdout -----
{
"app/__init__.py": [],
"app/main.py": [
"lib/mylib/__init__.py"
]
}
----- stderr -----
"#);
});
Ok(())
}
/// Test that glob patterns in `src` are expanded.
#[test]
fn src_glob_expansion() -> Result<()> {
let tempdir = TempDir::new()?;
let root = ChildPath::new(tempdir.path());
// Create multiple lib directories with packages.
root.child("libs")
.child("lib_a")
.child("pkg_a")
.child("__init__.py")
.write_str("def func_a(): pass")?;
root.child("libs")
.child("lib_b")
.child("pkg_b")
.child("__init__.py")
.write_str("def func_b(): pass")?;
// Create an app that imports from both packages.
root.child("app").child("__init__.py").write_str("")?;
root.child("app")
.child("main.py")
.write_str("from pkg_a import func_a\nfrom pkg_b import func_b")?;
// Use a glob pattern to include all lib directories.
root.child("ruff.toml").write_str(indoc::indoc! {r#"
src = ["libs/*"]
"#})?;
insta::with_settings!({
filters => INSTA_FILTERS.to_vec(),
}, {
assert_cmd_snapshot!(command().arg("app").current_dir(&root), @r#"
success: true
exit_code: 0
----- stdout -----
{
"app/__init__.py": [],
"app/main.py": [
"libs/lib_a/pkg_a/__init__.py",
"libs/lib_b/pkg_b/__init__.py"
]
}
----- stderr -----
"#);
});
Ok(())
}
#[test]
fn notebook_with_magic() -> Result<()> {
let tempdir = TempDir::new()?;

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.14.10"
version = "0.14.11"
publish = false
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -22,6 +22,8 @@ pub enum SourceKind {
Python(String),
/// The source contains a Jupyter notebook.
IpyNotebook(Box<Notebook>),
/// The source contains Markdown text.
Markdown(String),
}
impl SourceKind {
@@ -33,6 +35,7 @@ impl SourceKind {
match self {
SourceKind::IpyNotebook(notebook) => Some(notebook),
SourceKind::Python(_) => None,
SourceKind::Markdown(_) => None,
}
}
@@ -40,20 +43,36 @@ impl SourceKind {
match self {
SourceKind::Python(code) => Some(code),
SourceKind::IpyNotebook(_) => None,
SourceKind::Markdown(_) => None,
}
}
pub fn as_markdown(&self) -> Option<&str> {
match self {
SourceKind::Markdown(code) => Some(code),
SourceKind::Python(_) => None,
SourceKind::IpyNotebook(_) => None,
}
}
pub fn expect_python(self) -> String {
match self {
SourceKind::Python(code) => code,
SourceKind::IpyNotebook(_) => panic!("expected python code"),
_ => panic!("expected python code"),
}
}
pub fn expect_ipy_notebook(self) -> Notebook {
match self {
SourceKind::IpyNotebook(notebook) => *notebook,
SourceKind::Python(_) => panic!("expected ipy notebook"),
_ => panic!("expected ipy notebook"),
}
}
pub fn expect_markdown(self) -> String {
match self {
SourceKind::Markdown(code) => code,
_ => panic!("expected markdown text"),
}
}
@@ -66,6 +85,7 @@ impl SourceKind {
SourceKind::IpyNotebook(cloned)
}
SourceKind::Python(_) => SourceKind::Python(new_source),
SourceKind::Markdown(_) => SourceKind::Markdown(new_source),
}
}
@@ -74,20 +94,28 @@ impl SourceKind {
match self {
SourceKind::Python(source) => source,
SourceKind::IpyNotebook(notebook) => notebook.source_code(),
SourceKind::Markdown(source) => source,
}
}
/// Read the [`SourceKind`] from the given path. Returns `None` if the source is not a Python
/// source file.
pub fn from_path(path: &Path, source_type: PySourceType) -> Result<Option<Self>, SourceError> {
if source_type.is_ipynb() {
let notebook = Notebook::from_path(path)?;
Ok(notebook
.is_python_notebook()
.then_some(Self::IpyNotebook(Box::new(notebook))))
} else {
let contents = std::fs::read_to_string(path)?;
Ok(Some(Self::Python(contents)))
match source_type {
PySourceType::Ipynb => {
let notebook = Notebook::from_path(path)?;
Ok(notebook
.is_python_notebook()
.then_some(Self::IpyNotebook(Box::new(notebook))))
}
PySourceType::Markdown => {
let contents = std::fs::read_to_string(path)?;
Ok(Some(Self::Markdown(contents)))
}
PySourceType::Python | PySourceType::Stub => {
let contents = std::fs::read_to_string(path)?;
Ok(Some(Self::Python(contents)))
}
}
}
@@ -120,6 +148,10 @@ impl SourceKind {
notebook.write(writer)?;
Ok(())
}
SourceKind::Markdown(source) => {
writer.write_all(source.as_bytes())?;
Ok(())
}
}
}
@@ -140,6 +172,10 @@ impl SourceKind {
kind: DiffKind::IpyNotebook(src, dst),
path,
}),
(SourceKind::Markdown(src), SourceKind::Markdown(dst)) => Some(SourceKindDiff {
kind: DiffKind::Markdown(src, dst),
path,
}),
_ => None,
}
}
@@ -212,6 +248,17 @@ impl std::fmt::Display for SourceKindDiff<'_> {
writeln!(f)?;
}
DiffKind::Markdown(original, modified) => {
let mut diff = CodeDiff::new(original, modified);
let relative_path = self.path.map(fs::relativize_path);
if let Some(relative_path) = &relative_path {
diff.header(relative_path, relative_path);
}
writeln!(f, "{diff}")?;
}
}
Ok(())
@@ -222,6 +269,7 @@ impl std::fmt::Display for SourceKindDiff<'_> {
enum DiffKind<'a> {
Python(&'a str, &'a str),
IpyNotebook(&'a Notebook, &'a Notebook),
Markdown(&'a str, &'a str),
}
struct CodeDiff<'a> {

View File

@@ -89,6 +89,8 @@ pub enum PySourceType {
Stub,
/// The source is a Jupyter notebook (`.ipynb`).
Ipynb,
/// The source is a Markdown file (`.md`).
Markdown,
}
impl PySourceType {
@@ -106,6 +108,7 @@ impl PySourceType {
"pyi" => Self::Stub,
"pyw" => Self::Python,
"ipynb" => Self::Ipynb,
"md" => Self::Markdown,
_ => return None,
};
@@ -134,6 +137,10 @@ impl PySourceType {
pub const fn is_ipynb(self) -> bool {
matches!(self, Self::Ipynb)
}
pub const fn is_markdown(self) -> bool {
matches!(self, Self::Markdown)
}
}
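The extension dispatch above, including the new `md` arm, amounts to a simple lookup (an illustrative Python stand-in for `PySourceType::from_extension`):

```python
# Maps file extensions to source kinds, mirroring the match arms above.
SOURCE_KINDS = {
    "py": "Python",
    "pyi": "Stub",
    "pyw": "Python",
    "ipynb": "Ipynb",
    "md": "Markdown",  # the arm added in this change
}

def source_kind(extension):
    return SOURCE_KINDS.get(extension)  # None for unknown extensions
```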
impl<P: AsRef<Path>> From<P> for PySourceType {

View File

@@ -334,7 +334,7 @@ impl Format<PyFormatContext<'_>> for FormatEmptyLines {
PySourceType::Stub => {
write!(f, [empty_line()])
}
PySourceType::Python | PySourceType::Ipynb => {
PySourceType::Python | PySourceType::Ipynb | PySourceType::Markdown => {
write!(f, [empty_line(), empty_line()])
}
},

View File

@@ -283,7 +283,9 @@ impl FormatRule<Suite, PyFormatContext<'_>> for FormatSuite {
PySourceType::Stub => {
empty_line().fmt(f)?;
}
PySourceType::Python | PySourceType::Ipynb => {
PySourceType::Python
| PySourceType::Ipynb
| PySourceType::Markdown => {
write!(f, [empty_line(), empty_line()])?;
}
},
@@ -324,7 +326,7 @@ impl FormatRule<Suite, PyFormatContext<'_>> for FormatSuite {
PySourceType::Stub => {
empty_line().fmt(f)?;
}
PySourceType::Python | PySourceType::Ipynb => {
PySourceType::Python | PySourceType::Ipynb | PySourceType::Markdown => {
write!(f, [empty_line(), empty_line()])?;
}
},
@@ -376,7 +378,7 @@ impl FormatRule<Suite, PyFormatContext<'_>> for FormatSuite {
PySourceType::Stub => {
empty_line().fmt(f)?;
}
PySourceType::Python | PySourceType::Ipynb => {
PySourceType::Python | PySourceType::Ipynb | PySourceType::Markdown => {
write!(f, [empty_line(), empty_line()])?;
}
},

View File

@@ -525,7 +525,7 @@ pub trait AsMode {
impl AsMode for PySourceType {
fn as_mode(&self) -> Mode {
match self {
PySourceType::Python | PySourceType::Stub => Mode::Module,
PySourceType::Python | PySourceType::Stub | PySourceType::Markdown => Mode::Module,
PySourceType::Ipynb => Mode::Ipython,
}
}

View File

@@ -1,6 +1,6 @@
[package]
name = "ruff_wasm"
version = "0.14.10"
version = "0.14.11"
publish = false
authors = { workspace = true }
edition = { workspace = true }

View File

@@ -40,25 +40,22 @@ impl ExcludeFilter {
}
fn matches(&self, path: &SystemPath, mode: GlobFilterCheckMode, directory: bool) -> bool {
// If the path is excluded, return `ignore`
if self.ignore.matched(path, directory).is_ignore() {
return true;
}
match mode {
GlobFilterCheckMode::TopDown => {
match self.ignore.matched(path, directory) {
// No hit or an allow hit means the file or directory is not excluded.
Match::None | Match::Allow => false,
Match::Ignore => true,
}
// No hit or an allow hit means the file or directory is not excluded.
false
}
GlobFilterCheckMode::Adhoc => {
for ancestor in path.ancestors() {
match self.ignore.matched(ancestor, directory) {
// If the path is allowlisted or there's no hit, try the parent to ensure we don't return false
// for a folder where there's an exclude for a parent.
Match::None | Match::Allow => {}
Match::Ignore => return true,
}
}
false
// If the path is allowlisted or there's no hit, try the parent to ensure we don't return false
// for a folder where there's an exclude for a parent.
path.ancestors()
.skip(1)
.any(|ancestor| self.ignore.matched(ancestor, true).is_ignore())
}
}
}
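The refactored ad-hoc check above (the path itself first, then every strict ancestor) can be sketched as follows, with glob matching reduced to set membership for illustration:

```python
def is_excluded(path, ignored):
    """True if the path or any strict ancestor matches an ignore entry."""
    parts = path.split("/")
    # The path itself, then its ancestors from deepest to shallowest,
    # mirroring the early return plus `ancestors().skip(1)` above.
    candidates = ("/".join(parts[:i]) for i in range(len(parts), 0, -1))
    return any(c in ignored for c in candidates)
```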
@@ -187,6 +184,12 @@ enum Match {
Allow,
}
impl Match {
const fn is_ignore(self) -> bool {
matches!(self, Match::Ignore)
}
}
#[derive(Debug, Clone, PartialEq, Eq, get_size2::GetSize)]
struct IgnoreGlob {
/// The pattern that was originally parsed.

View File

@@ -17,7 +17,8 @@ from datetime import time
t = time(12, 0, 0)
t = replace(t, minute=30)
reveal_type(t) # revealed: time
# TODO: this should be `time`, once we support specialization of generic protocols
reveal_type(t) # revealed: Unknown
```
## The `__replace__` protocol
@@ -47,7 +48,8 @@ b = a.__replace__(x=3, y=4)
reveal_type(b) # revealed: Point
b = replace(a, x=3, y=4)
reveal_type(b) # revealed: Point
# TODO: this should be `Point`, once we support specialization of generic protocols
reveal_type(b) # revealed: Unknown
```
A call to `replace` does not require all keyword arguments:
@@ -57,7 +59,8 @@ c = a.__replace__(y=4)
reveal_type(c) # revealed: Point
d = replace(a, y=4)
reveal_type(d) # revealed: Point
# TODO: this should be `Point`, once we support specialization of generic protocols
reveal_type(d) # revealed: Unknown
```
Invalid calls to `__replace__` or `replace` will raise an error:
@@ -93,7 +96,8 @@ b = a.__replace__(x=3, y=4)
reveal_type(b) # revealed: Point
b = replace(a, x=3, y=4)
reveal_type(b) # revealed: Point
# TODO: this should be `Point`, once we support specialization of generic protocols
reveal_type(b) # revealed: Unknown
```
Invalid calls to `__replace__` will raise an error:

View File

@@ -561,7 +561,7 @@ from typing_extensions import overload, Generic, TypeVar
from ty_extensions import generic_context, into_callable
T = TypeVar("T")
U = TypeVar("U", covariant=True)
U = TypeVar("U")
class C(Generic[T]):
@overload
@@ -611,9 +611,9 @@ reveal_type(generic_context(D))
# revealed: ty_extensions.GenericContext[T@D, U@D]
reveal_type(generic_context(into_callable(D)))
reveal_type(D("string")) # revealed: D[str, Literal["string"]]
reveal_type(D(1)) # revealed: D[str, Literal[1]]
reveal_type(D(1, "string")) # revealed: D[int, Literal["string"]]
reveal_type(D("string")) # revealed: D[str, str]
reveal_type(D(1)) # revealed: D[str, int]
reveal_type(D(1, "string")) # revealed: D[int, str]
```
### Synthesized methods with dataclasses

View File

@@ -90,11 +90,13 @@ def takes_in_protocol(x: CanIndex[T]) -> T:
def deep_list(x: list[str]) -> None:
reveal_type(takes_in_list(x)) # revealed: list[str]
reveal_type(takes_in_protocol(x)) # revealed: str
# TODO: revealed: str
reveal_type(takes_in_protocol(x)) # revealed: Unknown
def deeper_list(x: list[set[str]]) -> None:
reveal_type(takes_in_list(x)) # revealed: list[set[str]]
reveal_type(takes_in_protocol(x)) # revealed: set[str]
# TODO: revealed: set[str]
reveal_type(takes_in_protocol(x)) # revealed: Unknown
def deep_explicit(x: ExplicitlyImplements[str]) -> None:
reveal_type(takes_in_protocol(x)) # revealed: str
@@ -127,10 +129,12 @@ class Sub(list[int]): ...
class GenericSub(list[T]): ...
reveal_type(takes_in_list(Sub())) # revealed: list[int]
reveal_type(takes_in_protocol(Sub())) # revealed: int
# TODO: revealed: int
reveal_type(takes_in_protocol(Sub())) # revealed: Unknown
reveal_type(takes_in_list(GenericSub[str]())) # revealed: list[str]
reveal_type(takes_in_protocol(GenericSub[str]())) # revealed: str
# TODO: revealed: str
reveal_type(takes_in_protocol(GenericSub[str]())) # revealed: Unknown
class ExplicitSub(ExplicitlyImplements[int]): ...
class ExplicitGenericSub(ExplicitlyImplements[T]): ...

View File

@@ -538,10 +538,6 @@ C[None](b"bytes") # error: [no-matching-overload]
C[None](12)
class D[T, U]:
# we need to use the type variable or else the class is bivariant in T, and
# specializations become meaningless
x: T
@overload
def __init__(self: "D[str, U]", u: U) -> None: ...
@overload
@@ -555,7 +551,7 @@ reveal_type(generic_context(into_callable(D)))
reveal_type(D("string")) # revealed: D[str, Literal["string"]]
reveal_type(D(1)) # revealed: D[str, Literal[1]]
reveal_type(D(1, "string")) # revealed: D[int, Literal["string"]]
reveal_type(D(1, "string")) # revealed: D[Literal[1], Literal["string"]]
```
### Synthesized methods with dataclasses

View File

@@ -85,11 +85,13 @@ def takes_in_protocol[T](x: CanIndex[T]) -> T:
def deep_list(x: list[str]) -> None:
reveal_type(takes_in_list(x)) # revealed: list[str]
reveal_type(takes_in_protocol(x)) # revealed: str
# TODO: revealed: str
reveal_type(takes_in_protocol(x)) # revealed: Unknown
def deeper_list(x: list[set[str]]) -> None:
reveal_type(takes_in_list(x)) # revealed: list[set[str]]
reveal_type(takes_in_protocol(x)) # revealed: set[str]
# TODO: revealed: set[str]
reveal_type(takes_in_protocol(x)) # revealed: Unknown
def deep_explicit(x: ExplicitlyImplements[str]) -> None:
reveal_type(takes_in_protocol(x)) # revealed: str
@@ -122,10 +124,12 @@ class Sub(list[int]): ...
class GenericSub[T](list[T]): ...
reveal_type(takes_in_list(Sub())) # revealed: list[int]
reveal_type(takes_in_protocol(Sub())) # revealed: int
# TODO: revealed: int
reveal_type(takes_in_protocol(Sub())) # revealed: Unknown
reveal_type(takes_in_list(GenericSub[str]())) # revealed: list[str]
reveal_type(takes_in_protocol(GenericSub[str]())) # revealed: str
# TODO: revealed: str
reveal_type(takes_in_protocol(GenericSub[str]())) # revealed: Unknown
class ExplicitSub(ExplicitlyImplements[int]): ...
class ExplicitGenericSub[T](ExplicitlyImplements[T]): ...

View File

@@ -1369,31 +1369,6 @@ impl<'db> Type<'db> {
}
}
/// Returns the number of union clauses in this type. If the type is not a union, returns 1.
pub(crate) fn union_size(self, db: &'db dyn Db) -> usize {
self.as_union()
.map(|union_type| union_type.elements(db).len())
.unwrap_or(1)
}
/// Returns the number of intersection clauses in this type. If the type is a union, this is
/// the maximum of the `intersection_size` of each union element. If the type is neither a union
/// nor an intersection, returns 1.
pub(crate) fn intersection_size(self, db: &'db dyn Db) -> usize {
match self {
Type::Intersection(intersection) => {
intersection.positive(db).len() + intersection.negative(db).len()
}
Type::Union(union_type) => union_type
.elements(db)
.iter()
.map(|element| element.intersection_size(db))
.max()
.unwrap_or(1),
_ => 1,
}
}
pub(crate) const fn as_function_literal(self) -> Option<FunctionType<'db>> {
match self {
Type::FunctionLiteral(function_type) => Some(function_type),
@@ -7920,16 +7895,6 @@ impl<'db> TypeVarInstance<'db> {
})
}
/// Returns the bounds or constraints of this typevar. If the typevar is unbounded, returns
/// `object` as its upper bound.
pub(crate) fn require_bound_or_constraints(
self,
db: &'db dyn Db,
) -> TypeVarBoundOrConstraints<'db> {
self.bound_or_constraints(db)
.unwrap_or_else(|| TypeVarBoundOrConstraints::UpperBound(Type::object()))
}
pub(crate) fn default_type(self, db: &'db dyn Db) -> Option<Type<'db>> {
self._default(db).and_then(|d| match d {
TypeVarDefaultEvaluation::Eager(ty) => Some(ty),

View File

@@ -4489,6 +4489,7 @@ impl<'db> BindingError<'db> {
return;
};
let typevar = error.bound_typevar().typevar(context.db());
let argument_type = error.argument_type();
let argument_ty_display = argument_type.display(context.db());
@@ -4501,51 +4502,21 @@ impl<'db> BindingError<'db> {
}
));
let typevar_name = typevar.name(context.db());
match error {
SpecializationError::NoSolution { parameter, .. } => {
diag.set_primary_message(format_args!(
"Argument type `{argument_ty_display}` does not \
satisfy generic parameter annotation `{}`",
parameter.display(context.db()),
));
}
SpecializationError::MismatchedBound { .. } => {
diag.set_primary_message(format_args!("Argument type `{argument_ty_display}` does not satisfy upper bound `{}` of type variable `{typevar_name}`",
typevar.upper_bound(context.db()).expect("type variable should have an upper bound if this error occurs").display(context.db())
));
}
SpecializationError::MismatchedBound { bound_typevar, .. } => {
let typevar = bound_typevar.typevar(context.db());
let typevar_name = typevar.name(context.db());
diag.set_primary_message(format_args!(
"Argument type `{argument_ty_display}` does not \
satisfy upper bound `{}` of type variable `{typevar_name}`",
typevar
.upper_bound(context.db())
.expect(
"type variable should have an upper bound if this error occurs"
)
.display(context.db())
));
}
SpecializationError::MismatchedConstraint { bound_typevar, .. } => {
let typevar = bound_typevar.typevar(context.db());
let typevar_name = typevar.name(context.db());
diag.set_primary_message(format_args!(
"Argument type `{argument_ty_display}` does not \
satisfy constraints ({}) of type variable `{typevar_name}`",
typevar
.constraints(context.db())
.expect(
"type variable should have constraints if this error occurs"
)
.iter()
.format_with(", ", |ty, f| f(&format_args!(
"`{}`",
ty.display(context.db())
)))
));
SpecializationError::MismatchedConstraint { .. } => {
diag.set_primary_message(format_args!("Argument type `{argument_ty_display}` does not satisfy constraints ({}) of type variable `{typevar_name}`",
typevar.constraints(context.db()).expect("type variable should have constraints if this error occurs").iter().map(|ty| format!("`{}`", ty.display(context.db()))).join(", ")
));
}
}
if let Some(typevar_definition) = error.bound_typevar().and_then(|bound_typevar| {
bound_typevar.typevar(context.db()).definition(context.db())
}) {
if let Some(typevar_definition) = typevar.definition(context.db()) {
let module = parsed_module(context.db(), typevar_definition.file(context.db()))
.load(context.db());
let typevar_range = typevar_definition.full_range(context.db(), &module);

View File

@@ -72,7 +72,6 @@ use std::fmt::Display;
use std::ops::Range;
use itertools::Itertools;
use ordermap::map::Entry;
use rustc_hash::{FxHashMap, FxHashSet};
use salsa::plumbing::AsId;
@@ -758,25 +757,9 @@ impl<'db> ConstrainedTypeVar<'db> {
/// Returns the intersection of two range constraints, or `None` if the intersection is empty.
fn intersect(self, db: &'db dyn Db, other: Self) -> IntersectionResult<'db> {
// TODO: For now, we treat some upper bounds as unsimplifiable if they become "too big".
// When intersecting constraints, the upper bounds are also intersected together. If the
// lhs and rhs upper bounds are unions of intersections (e.g. `(a & b) | (c & d)`), then
// intersecting them together will require distributing across every pair of union
// elements. That can quickly balloon in size. We are looking at a better representation
// that would let us model this case more directly, but for now, we punt.
const MAX_UPPER_BOUND_SIZE: usize = 4;
let self_upper = self.upper(db);
let other_upper = other.upper(db);
let estimated_upper_bound_size = self_upper.union_size(db)
* other_upper.union_size(db)
* (self_upper.intersection_size(db) + other_upper.intersection_size(db));
if estimated_upper_bound_size >= MAX_UPPER_BOUND_SIZE {
return IntersectionResult::CannotSimplify;
}
// (s₁ ≤ α ≤ t₁) ∧ (s₂ ≤ α ≤ t₂) = (s₁ ∪ s₂) ≤ α ≤ (t₁ ∩ t₂)
let lower = UnionType::from_elements(db, [self.lower(db), other.lower(db)]);
let upper = IntersectionType::from_elements(db, [self_upper, other_upper]);
let upper = IntersectionType::from_elements(db, [self.upper(db), other.upper(db)]);
// If `lower ≰ upper`, then the intersection is empty, since there is no type that is both
// greater than `lower`, and less than `upper`.
@@ -784,8 +767,6 @@ impl<'db> ConstrainedTypeVar<'db> {
return IntersectionResult::Disjoint;
}
// We do not create lower bounds that are unions, or upper bounds that are intersections,
// since those can be broken apart into BDDs over simpler constraints.
if lower.is_union() || upper.is_nontrivial_intersection(db) {
return IntersectionResult::CannotSimplify;
}
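The comment above describes the rule for intersecting two range constraints on a typevar: lower bounds are unioned, upper bounds are intersected, and the result is empty (disjoint) when the combined lower bound is not below the combined upper bound. A minimal sketch of that rule, using integer intervals as a stand-in for the type lattice (so union of lowers becomes `max`, intersection of uppers becomes `min`); all names here are illustrative, not ty's actual API:

```rust
// Stand-in for a `lower <= a <= upper` range constraint on a typevar,
// with integers playing the role of types.
#[derive(Debug, PartialEq)]
struct RangeConstraint {
    lower: i64,
    upper: i64,
}

fn intersect(a: &RangeConstraint, b: &RangeConstraint) -> Option<RangeConstraint> {
    // (s1 <= a <= t1) and (s2 <= a <= t2)  ==  (s1 v s2) <= a <= (t1 ^ t2)
    let lower = a.lower.max(b.lower); // union of lower bounds
    let upper = a.upper.min(b.upper); // intersection of upper bounds
    // If `lower` is not <= `upper`, no value satisfies both constraints.
    if lower <= upper {
        Some(RangeConstraint { lower, upper })
    } else {
        None // disjoint
    }
}

fn main() {
    assert_eq!(
        intersect(
            &RangeConstraint { lower: 1, upper: 10 },
            &RangeConstraint { lower: 4, upper: 12 }
        ),
        Some(RangeConstraint { lower: 4, upper: 10 })
    );
    assert!(intersect(
        &RangeConstraint { lower: 8, upper: 10 },
        &RangeConstraint { lower: 1, upper: 5 }
    )
    .is_none());
    println!("ok");
}
```

The real implementation additionally bails out (`CannotSimplify`) before computing the intersection when the estimated size of the combined upper bound would exceed `MAX_UPPER_BOUND_SIZE`, since distributing unions of intersections can balloon combinatorially.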
@@ -3456,11 +3437,9 @@ impl<'db> PathAssignments<'db> {
);
return Err(PathAssignmentConflict);
}
match self.assignments.entry(assignment) {
Entry::Vacant(entry) => entry.insert(source_order),
Entry::Occupied(_) => return Ok(()),
};
if self.assignments.insert(assignment, source_order).is_some() {
return Ok(());
}
// Then use our sequents to add additional facts that we know to be true. We currently
// reuse the `source_order` of the "real" constraint passed into `walk_edge` when we add
@@ -3762,11 +3741,6 @@ impl<'db> BoundTypeVarInstance<'db> {
/// when this typevar is in inferable position, where we only need _some_ specialization to
/// satisfy the constraint set.
fn valid_specializations(self, db: &'db dyn Db) -> Node<'db> {
if self.paramspec_attr(db).is_some() {
// P.args and P.kwargs are variadic, and do not have an upper bound or constraints.
return Node::AlwaysTrue;
}
// For gradual upper bounds and constraints, we are free to choose any materialization that
// makes the check succeed. In inferable positions, it is most helpful to choose a
// materialization that is as permissive as possible, since that maximizes the number of

View File

@@ -13,6 +13,7 @@ use crate::semantic_index::{SemanticIndex, semantic_index};
use crate::types::class::ClassType;
use crate::types::class_base::ClassBase;
use crate::types::constraints::{ConstraintSet, IteratorConstraintsExtension};
use crate::types::instance::{Protocol, ProtocolInstanceType};
use crate::types::relation::{
HasRelationToVisitor, IsDisjointVisitor, IsEquivalentVisitor, TypeRelation,
};
@@ -1633,43 +1634,12 @@ impl<'db> SpecializationBuilder<'db> {
upper: FxOrderSet<Type<'db>>,
}
impl<'db> Bounds<'db> {
fn add_lower(&mut self, _db: &'db dyn Db, ty: Type<'db>) {
// Lower bounds are unioned. Our type representation is in DNF, so unioning a new
// element is typically cheap (in that it does not involve a combinatorial
// explosion from distributing the clause through an existing disjunction). So we
// don't need to be as clever here as in `add_upper`.
self.lower.insert(ty);
}
fn add_upper(&mut self, db: &'db dyn Db, ty: Type<'db>) {
// Upper bounds are intersectioned. If `ty` is a union, that involves distributing
// the union elements through the existing type. That makes it worth checking first
// whether any of the types in the upper bound are redundant.
// First check if there's an existing upper bound clause that is a subtype of the
// new type. If so, adding the new type does nothing to the intersection.
if self
.upper
.iter()
.any(|existing| existing.is_subtype_of(db, ty))
{
return;
}
// Otherwise remove any existing clauses that are a supertype of the new type,
// since the intersection will clip them to the new type.
self.upper
.retain(|existing| !ty.is_subtype_of(db, *existing));
self.upper.insert(ty);
}
}
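The `add_upper` comment above explains the redundancy pruning: because upper bounds are intersected, a new bound is a no-op if some existing bound is already a subtype of it, and it supersedes any existing bound that is a supertype of it. A hedged sketch of that check, using bitmasks as a stand-in lattice where "subtype of" means "subset of the bits"; the names are illustrative, not ty's actual API:

```rust
// In this toy lattice, `a` is a subtype of `b` iff a's bits are contained in b's.
fn is_subtype(a: u32, b: u32) -> bool {
    a & b == a
}

#[derive(Default, Debug)]
struct Bounds {
    upper: Vec<u32>,
}

impl Bounds {
    fn add_upper(&mut self, ty: u32) {
        // An existing, stricter bound already clips the intersection below
        // `ty`, so adding `ty` would change nothing.
        if self.upper.iter().any(|&existing| is_subtype(existing, ty)) {
            return;
        }
        // The new bound clips any looser existing bounds out of the
        // intersection, so drop them before inserting it.
        self.upper.retain(|&existing| !is_subtype(ty, existing));
        self.upper.push(ty);
    }
}

fn main() {
    let mut bounds = Bounds::default();
    bounds.add_upper(0b1111); // broad bound
    bounds.add_upper(0b0011); // stricter: replaces 0b1111
    bounds.add_upper(0b0111); // redundant: 0b0011 is already a subtype of it
    assert_eq!(bounds.upper, vec![0b0011]);
    println!("ok");
}
```

Lower bounds need no such pruning: they are unioned, and with a DNF type representation adding a union element is cheap, which is why `add_lower` is a plain insert.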
// Sort the constraints in each path by their `source_order`s, to ensure that we construct
// any unions or intersections in our type mappings in a stable order. Constraints might
// come out of `PathAssignment`s with identical `source_order`s, but if they do, those
// "tied" constraints will still be ordered in a stable way. So we need a stable sort to
// retain that stable per-tie ordering.
let constraints = constraints.limit_to_valid_specializations(self.db);
let mut sorted_paths = Vec::new();
constraints.for_each_path(self.db, |path| {
let mut path: Vec<_> = path.positive_constraints().collect();
@@ -1690,68 +1660,33 @@ impl<'db> SpecializationBuilder<'db> {
let lower = constraint.lower(self.db);
let upper = constraint.upper(self.db);
let bounds = mappings.entry(typevar).or_default();
bounds.add_lower(self.db, lower);
bounds.add_upper(self.db, upper);
bounds.lower.insert(lower);
bounds.upper.insert(upper);
if let Type::TypeVar(lower_bound_typevar) = lower {
let bounds = mappings.entry(lower_bound_typevar).or_default();
bounds.add_upper(self.db, Type::TypeVar(typevar));
bounds.upper.insert(Type::TypeVar(typevar));
}
if let Type::TypeVar(upper_bound_typevar) = upper {
let bounds = mappings.entry(upper_bound_typevar).or_default();
bounds.add_lower(self.db, Type::TypeVar(typevar));
bounds.lower.insert(Type::TypeVar(typevar));
}
}
for (bound_typevar, bounds) in mappings.drain() {
let variance = formal.variance_of(self.db, bound_typevar);
match bound_typevar
.typevar(self.db)
.require_bound_or_constraints(self.db)
{
TypeVarBoundOrConstraints::UpperBound(bound) => {
let bound = bound.top_materialization(self.db);
let lower = UnionType::from_elements(self.db, bounds.lower);
if !lower.is_assignable_to(self.db, bound) {
// This path does not satisfy the typevar's upper bound, and is
// therefore not a valid specialization.
continue;
}
let upper = IntersectionType::from_elements(
self.db,
bounds.upper.into_iter().chain([bound]),
);
if upper != bound {
self.add_type_mapping(bound_typevar, upper, variance, &mut f);
} else if !lower.is_never() {
self.add_type_mapping(bound_typevar, lower, variance, &mut f);
}
}
TypeVarBoundOrConstraints::Constraints(constraints) => {
// Filter out the typevar constraints that aren't satisfied by this path.
let lower = UnionType::from_elements(self.db, bounds.lower);
let upper = IntersectionType::from_elements(self.db, bounds.upper);
let compatible_constraints =
constraints.elements(self.db).iter().filter(|constraint| {
let constraint_lower = constraint.bottom_materialization(self.db);
let constraint_upper = constraint.top_materialization(self.db);
lower.is_assignable_to(self.db, constraint_lower)
&& constraint_upper.is_assignable_to(self.db, upper)
});
// If only one constraint remains, that's our specialization for this path.
if let Ok(compatible_constraint) = compatible_constraints.exactly_one() {
self.add_type_mapping(
bound_typevar,
*compatible_constraint,
variance,
&mut f,
);
}
}
// Prefer the lower bound (often the concrete actual type seen) over the
// upper bound (which may include TypeVar bounds/constraints). The upper bound
// should only be used as a fallback when no concrete type was inferred.
let lower = UnionType::from_elements(self.db, bounds.lower);
if !lower.is_never() {
self.add_type_mapping(bound_typevar, lower, variance, &mut f);
continue;
}
let upper = IntersectionType::from_elements(self.db, bounds.upper);
if !upper.is_object() {
self.add_type_mapping(bound_typevar, upper, variance, &mut f);
}
}
}
@@ -2032,31 +1967,14 @@ impl<'db> SpecializationBuilder<'db> {
Type::NominalInstance(formal_nominal) => {
formal_nominal.class(self.db).into_generic_alias()
}
Type::ProtocolInstance(_) => {
// TODO: For protocols, we use the new constraint set implementation, which
// will handle implicitly implemented protocols and generic protocols. We
// eventually want this logic to be used for _all_ nominal instances
// (replacing the logic below).
let when = actual.when_constraint_set_assignable_to(
self.db,
formal,
self.inferable,
);
if when
.limit_to_valid_specializations(self.db)
.is_never_satisfied(self.db)
&& (formal.has_typevar(self.db) || actual.has_typevar(self.db))
{
return Err(SpecializationError::NoSolution {
parameter: formal,
argument: actual,
});
}
self.add_type_mappings_from_constraint_set(formal, when, &mut f);
return Ok(());
}
// TODO: This will only handle classes that explicitly implement a generic protocol
// by listing it as a base class. To handle classes that implicitly implement a
// generic protocol, we will need to check the types of the protocol members to be
// able to infer the specialization of the protocol that the class implements.
Type::ProtocolInstance(ProtocolInstanceType {
inner: Protocol::FromClass(class),
..
}) => class.into_generic_alias(),
_ => None,
};
@@ -2214,10 +2132,6 @@ impl<'db> SpecializationBuilder<'db> {
#[derive(Clone, Debug, Eq, PartialEq)]
pub(crate) enum SpecializationError<'db> {
NoSolution {
parameter: Type<'db>,
argument: Type<'db>,
},
MismatchedBound {
bound_typevar: BoundTypeVarInstance<'db>,
argument: Type<'db>,
@@ -2229,17 +2143,15 @@ pub(crate) enum SpecializationError<'db> {
}
impl<'db> SpecializationError<'db> {
pub(crate) fn bound_typevar(&self) -> Option<BoundTypeVarInstance<'db>> {
pub(crate) fn bound_typevar(&self) -> BoundTypeVarInstance<'db> {
match self {
Self::NoSolution { .. } => None,
Self::MismatchedBound { bound_typevar, .. } => Some(*bound_typevar),
Self::MismatchedConstraint { bound_typevar, .. } => Some(*bound_typevar),
Self::MismatchedBound { bound_typevar, .. } => *bound_typevar,
Self::MismatchedConstraint { bound_typevar, .. } => *bound_typevar,
}
}
pub(crate) fn argument_type(&self) -> Type<'db> {
match self {
Self::NoSolution { argument, .. } => *argument,
Self::MismatchedBound { argument, .. } => *argument,
Self::MismatchedConstraint { argument, .. } => *argument,
}

View File

@@ -134,29 +134,14 @@ impl<'db> Type<'db> {
disjointness_visitor: &IsDisjointVisitor<'db>,
) -> ConstraintSet<'db> {
let structurally_satisfied = if let Type::ProtocolInstance(self_protocol) = self {
let self_as_nominal = self_protocol.to_nominal_instance();
let other_as_nominal = protocol.to_nominal_instance();
let nominal_match = match self_as_nominal.zip(other_as_nominal) {
Some((self_as_nominal, other_as_nominal)) => self_as_nominal.has_relation_to_impl(
db,
other_as_nominal,
inferable,
relation,
relation_visitor,
disjointness_visitor,
),
_ => ConstraintSet::from(false),
};
nominal_match.or(db, || {
self_protocol.interface(db).has_relation_to_impl(
db,
protocol.interface(db),
inferable,
relation,
relation_visitor,
disjointness_visitor,
)
})
self_protocol.interface(db).has_relation_to_impl(
db,
protocol.interface(db),
inferable,
relation,
relation_visitor,
disjointness_visitor,
)
} else {
protocol
.inner

View File

@@ -910,7 +910,15 @@ impl Session {
let db = self.project_db_mut(path);
match system_path_to_file(db, system_path) {
Ok(file) => db.project().open_file(db, file),
Ok(file) => {
let project = db.project();
// Only mark this file as open if it's part of the project.
// This ensures that we don't show diagnostics for files outside the project.
if project.is_file_included(db, system_path) {
project.open_file(db, file);
}
}
Err(err) => tracing::warn!("Failed to open file {system_path}: {err}"),
}
}
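The change above only registers a file as open when it is part of the project, so excluded files produce no diagnostics. A minimal sketch of the inclusion check, modeling exclusion as a list of path prefixes; `Project`, its fields, and the prefix matching are illustrative stand-ins for ty's real glob-based exclude handling, not its API:

```rust
// Toy project model: a file is included unless its path falls under an
// excluded prefix (real ty uses configured glob patterns, not prefixes).
struct Project {
    excludes: Vec<String>,
}

impl Project {
    fn is_file_included(&self, path: &str) -> bool {
        !self.excludes.iter().any(|prefix| path.starts_with(prefix))
    }
}

fn main() {
    let project = Project {
        excludes: vec!["src/excluded/".to_string()],
    };
    // Mirrors the `pull_excluded_file` test below: `src/foo.py` is tracked,
    // `src/excluded/lib.py` is not, so it gets no diagnostics.
    assert!(project.is_file_included("src/foo.py"));
    assert!(!project.is_file_included("src/excluded/lib.py"));
    println!("ok");
}
```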

View File

@@ -151,6 +151,41 @@ reveal_type(total)
Ok(())
}
#[test]
fn pull_excluded_file() -> Result<()> {
let _filter = filter_result_id();
let main_path = SystemPath::new("src/foo.py");
let main_content = r#"reveal_type("included")"#;
let excluded_path = SystemPath::new("src/excluded/lib.py");
let excluded_content = r#"reveal_type("Excluded")"#;
let config = r#"
[src]
exclude = ["src/excluded/"]
"#;
let mut server = TestServerBuilder::new()?
.with_workspace(SystemPath::new("src"), None)?
.with_file(main_path, main_content)?
.with_file(excluded_path, excluded_content)?
.with_file(SystemPath::new("ty.toml"), config)?
.enable_pull_diagnostics(true)
.build()
.wait_until_workspaces_are_initialized();
server.open_text_document(main_path, main_content, 1);
let main_diagnostics = server.document_diagnostic_request(main_path, None);
assert_compact_json_snapshot!("main", main_diagnostics);
server.open_text_document(excluded_path, excluded_content, 1);
let excluded_diagnostics = server.document_diagnostic_request(excluded_path, None);
assert_compact_json_snapshot!("excluded", excluded_diagnostics);
Ok(())
}
#[test]
fn document_diagnostic_caching_unchanged() -> Result<()> {
let _filter = filter_result_id();

View File

@@ -0,0 +1,5 @@
---
source: crates/ty_server/tests/e2e/pull_diagnostics.rs
expression: excluded_diagnostics
---
{"kind": "full", "items": []}

View File

@@ -0,0 +1,45 @@
---
source: crates/ty_server/tests/e2e/pull_diagnostics.rs
expression: main_diagnostics
---
{
"kind": "full",
"resultId": "[RESULT_ID]",
"items": [
{
"range": {
"start": {
"line": 0,
"character": 0
},
"end": {
"line": 0,
"character": 11
}
},
"severity": 2,
"code": "undefined-reveal",
"codeDescription": {
"href": "https://ty.dev/rules#undefined-reveal"
},
"source": "ty",
"message": "`reveal_type` used without importing it"
},
{
"range": {
"start": {
"line": 0,
"character": 12
},
"end": {
"line": 0,
"character": 22
}
},
"severity": 3,
"code": "revealed-type",
"source": "ty",
"message": "Revealed type: `Literal[/"included/"]`"
}
]
}

View File

@@ -318,6 +318,7 @@ impl EmbeddedFilePath<'_> {
EmbeddedFilePath::Autogenerated(PySourceType::Python) => "mdtest_snippet.py",
EmbeddedFilePath::Autogenerated(PySourceType::Stub) => "mdtest_snippet.pyi",
EmbeddedFilePath::Autogenerated(PySourceType::Ipynb) => "mdtest_snippet.ipynb",
EmbeddedFilePath::Autogenerated(PySourceType::Markdown) => "mdtest_snippet.md",
EmbeddedFilePath::Explicit(path) => path,
}
}

View File

@@ -80,7 +80,7 @@ You can add the following configuration to `.gitlab-ci.yml` to run a `ruff forma
stage: build
interruptible: true
image:
name: ghcr.io/astral-sh/ruff:0.14.10-alpine
name: ghcr.io/astral-sh/ruff:0.14.11-alpine
before_script:
- cd $CI_PROJECT_DIR
- ruff --version
@@ -106,7 +106,7 @@ Ruff can be used as a [pre-commit](https://pre-commit.com) hook via [`ruff-pre-c
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.14.10
rev: v0.14.11
hooks:
# Run the linter.
- id: ruff-check
@@ -119,7 +119,7 @@ To enable lint fixes, add the `--fix` argument to the lint hook:
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.14.10
rev: v0.14.11
hooks:
# Run the linter.
- id: ruff-check
@@ -133,7 +133,7 @@ To avoid running on Jupyter Notebooks, remove `jupyter` from the list of allowed
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.14.10
rev: v0.14.11
hooks:
# Run the linter.
- id: ruff-check

View File

@@ -369,7 +369,7 @@ This tutorial has focused on Ruff's command-line interface, but Ruff can also be
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.14.10
rev: v0.14.11
hooks:
# Run the linter.
- id: ruff-check

View File

@@ -4,7 +4,7 @@ build-backend = "maturin"
[project]
name = "ruff"
version = "0.14.10"
version = "0.14.11"
description = "An extremely fast Python linter and code formatter, written in Rust."
authors = [{ name = "Astral Software Inc.", email = "hey@astral.sh" }]
readme = "README.md"

View File

@@ -1,6 +1,6 @@
[project]
name = "scripts"
version = "0.14.10"
version = "0.14.11"
description = ""
authors = ["Charles Marsh <charlie.r.marsh@gmail.com>"]