Compare commits


14 Commits

Author SHA1 Message Date
Micha Reiser
1fa5e1fbbc Update salsa 2025-05-08 19:28:35 +02:00
Brent Westbrook
981bd70d39 Convert Message::SyntaxError to use Diagnostic internally (#17784)
## Summary

This PR is a first step toward integration of the new `Diagnostic` type
into ruff. There are two main changes:
- A new `UnifiedFile` enum wrapping `File` for red-knot and a
`SourceFile` for ruff
- ruff's `Message::SyntaxError` variant is now a `Diagnostic` instead of
a `SyntaxErrorMessage`

The second of these changes was mostly just a proof of concept for the
first, and it went pretty smoothly. Converting `DiagnosticMessage`s will
be most of the work in replacing `Message` entirely.
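The shape of the `UnifiedFile` change can be sketched roughly as follows. This is a simplified, hypothetical stand-in: the real `File` is a `Copy`-able salsa-interned id and the real `SourceFile` owns its name and text, and path resolution goes through a resolver rather than a plain slice.

```rust
// Hypothetical, simplified stand-ins for the two file types.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct File(u32);

#[derive(Debug, Clone, PartialEq, Eq)]
struct SourceFile {
    name: String,
}

/// One wrapper type so a diagnostic `Span` can point at either kind of file.
#[derive(Debug, Clone, PartialEq, Eq)]
enum UnifiedFile {
    Ty(File),
    Ruff(SourceFile),
}

impl UnifiedFile {
    // Only the interned variant needs a resolver (modeled here as a slice
    // of paths); a `SourceFile` already knows its own name.
    fn path<'a>(&'a self, resolver: &'a [&'a str]) -> &'a str {
        match self {
            UnifiedFile::Ty(file) => resolver[file.0 as usize],
            UnifiedFile::Ruff(file) => &file.name,
        }
    }
}

fn main() {
    let paths = ["src/lib.rs"];
    let ty_file = UnifiedFile::Ty(File(0));
    let ruff_file = UnifiedFile::Ruff(SourceFile {
        name: "src/main.rs".to_string(),
    });
    assert_eq!(ty_file.path(&paths), "src/lib.rs");
    assert_eq!(ruff_file.path(&paths), "src/main.rs");
    println!("ok");
}
```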

## Test Plan

Existing tests, which show no changes.

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-05-08 12:45:51 -04:00
Alex Waygood
0763331f7f [ty] Support extending __all__ with a literal tuple or set as well as a literal list (#17948) 2025-05-08 17:37:25 +01:00
Alex Waygood
da8540862d [ty] Make unused-ignore-comment disabled by default for now (#17955) 2025-05-08 17:21:34 +01:00
Micha Reiser
6a5533c44c [ty] Change default severity for unbound-reference to error (#17936) 2025-05-08 17:54:46 +02:00
Micha Reiser
d608eae126 [ty] Ignore possibly-unresolved-reference by default (#17934) 2025-05-08 17:44:56 +02:00
Micha Reiser
067a8ac574 [ty] Default to latest supported python version (#17938) 2025-05-08 16:58:35 +02:00
Micha Reiser
5eb215e8e5 [ty] Generate and add rules table (#17953) 2025-05-08 16:55:39 +02:00
Zanie Blue
91aa853b9c Update the schemastore script to match changes in ty (#17952)
See https://github.com/astral-sh/ty/pull/273
2025-05-08 09:31:52 -05:00
Brent Westbrook
57bf7dfbd9 [ty] Implement global handling and load-before-global-declaration syntax error (#17637)
Summary
--

This PR resolves both the typing-related and syntax error TODOs added in
#17563 by tracking a set of `global` bindings for each scope. As
discussed below, we avoid the additional AST traversal from ruff by
collecting `Name`s from `global` statements while building the semantic
index and emitting a syntax error if the `Name` is already bound in the
current scope at the point of the `global` statement. This has the
downside of separating the error from the `SemanticSyntaxChecker`, but I
plan to explore using this approach in the `SemanticSyntaxChecker`
itself as a follow-up. It seems like this may be a better approach for
ruff as well.
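The bookkeeping described above can be modeled as follows. This is a hypothetical, simplified sketch (the real implementation walks the AST while building the semantic index, and the error message wording here is illustrative):

```rust
use std::collections::HashSet;

// Simplified model of one scope's statements.
#[derive(Debug)]
enum Stmt<'a> {
    Bind(&'a str),   // e.g. `x = 1`
    Global(&'a str), // e.g. `global x`
}

fn check_scope(stmts: &[Stmt]) -> Vec<String> {
    let mut bound: HashSet<&str> = HashSet::new();
    let mut globals: HashSet<&str> = HashSet::new();
    let mut errors = Vec::new();
    for stmt in stmts {
        match stmt {
            Stmt::Bind(name) => {
                bound.insert(*name);
            }
            Stmt::Global(name) => {
                // `global x` after `x` was already bound in this scope is a
                // load-before-global-declaration style syntax error.
                if bound.contains(*name) {
                    errors.push(format!(
                        "name `{name}` is used prior to global declaration"
                    ));
                }
                globals.insert(*name);
            }
        }
    }
    errors
}

fn main() {
    // `def f(): x = 1; global x` is a syntax error...
    assert_eq!(check_scope(&[Stmt::Bind("x"), Stmt::Global("x")]).len(), 1);
    // ...while declaring `global x` before the first binding is fine.
    assert!(check_scope(&[Stmt::Global("x"), Stmt::Bind("x")]).is_empty());
    println!("ok");
}
```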

Test Plan
--

Updated all of the related mdtests to remove the TODOs (and add quotes I
forgot on the messages).

There is one remaining TODO, but it requires `nonlocal` support, which
isn't even incorporated into the `SemanticSyntaxChecker` yet.

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Carl Meyer <carl@astral.sh>
2025-05-08 10:30:04 -04:00
Alex Waygood
67cd94ed64 [ty] Add missing bitwise-operator branches for boolean and integer arithmetic (#17949) 2025-05-08 14:10:35 +01:00
Wei Lee
aac862822f [airflow] Fix SQLTableCheckOperator typo (AIR302) (#17946) 2025-05-08 14:34:55 +02:00
Micha Reiser
3755ac9fac Update ty metadata (#17943) 2025-05-08 13:24:31 +02:00
David Peter
4f890b2867 [ty] Update salsa (#17937)
## Summary

* Update salsa to pull in https://github.com/salsa-rs/salsa/pull/850.
* Some refactoring of salsa event callbacks in various `Db`s due to
https://github.com/salsa-rs/salsa/pull/849

closes https://github.com/astral-sh/ty/issues/108

## Test Plan

Ran `cargo run --bin ty -- -vvv` on a test file to make sure that salsa
Events are still logged.
2025-05-08 12:02:53 +02:00
69 changed files with 2522 additions and 395 deletions

.github/mypy-primer-ty.toml (vendored, new file)

@@ -0,0 +1,7 @@
#:schema ../ty.schema.json
# Configuration overrides for the mypy primer run
# Enable off-by-default rules.
[rules]
possibly-unresolved-reference = "warn"
unused-ignore-comment = "warn"


@@ -50,6 +50,10 @@ jobs:
run: |
cd ruff
echo "Enabling mypy primer specific configuration overrides (see .github/mypy-primer-ty.toml)"
mkdir -p ~/.config/ty
cp .github/mypy-primer-ty.toml ~/.config/ty/ty.toml
PRIMER_SELECTOR="$(paste -s -d'|' crates/ty_python_semantic/resources/primer/good.txt)"
echo "new commit"


@@ -5,6 +5,7 @@ exclude: |
.github/workflows/release.yml|
crates/ty_vendored/vendor/.*|
crates/ty_project/resources/.*|
crates/ty/docs/rules.md|
crates/ruff_benchmark/resources/.*|
crates/ruff_linter/resources/.*|
crates/ruff_linter/src/rules/.*/snapshots/.*|

Cargo.lock (generated)

@@ -2726,6 +2726,7 @@ dependencies = [
"tracing-indicatif",
"tracing-subscriber",
"ty_project",
"url",
]
[[package]]
@@ -2813,6 +2814,7 @@ dependencies = [
"regex",
"ruff_annotate_snippets",
"ruff_cache",
"ruff_db",
"ruff_diagnostics",
"ruff_macros",
"ruff_notebook",
@@ -3237,7 +3239,7 @@ checksum = "28d3b2b1366ec20994f1fd18c3c594f05c5dd4bc44d8bb0c1c632c8d6829481f"
[[package]]
name = "salsa"
version = "0.21.1"
source = "git+https://github.com/salsa-rs/salsa.git?rev=ef93d3830ffb8dc621fef1ba7b234ccef9dc3639#ef93d3830ffb8dc621fef1ba7b234ccef9dc3639"
source = "git+https://github.com/salsa-rs/salsa.git?rev=af69cc11146352ec2eb6d972537e99c473ac3748#af69cc11146352ec2eb6d972537e99c473ac3748"
dependencies = [
"boxcar",
"compact_str",
@@ -3260,12 +3262,12 @@ dependencies = [
[[package]]
name = "salsa-macro-rules"
version = "0.21.1"
source = "git+https://github.com/salsa-rs/salsa.git?rev=ef93d3830ffb8dc621fef1ba7b234ccef9dc3639#ef93d3830ffb8dc621fef1ba7b234ccef9dc3639"
source = "git+https://github.com/salsa-rs/salsa.git?rev=af69cc11146352ec2eb6d972537e99c473ac3748#af69cc11146352ec2eb6d972537e99c473ac3748"
[[package]]
name = "salsa-macros"
version = "0.21.1"
source = "git+https://github.com/salsa-rs/salsa.git?rev=ef93d3830ffb8dc621fef1ba7b234ccef9dc3639#ef93d3830ffb8dc621fef1ba7b234ccef9dc3639"
source = "git+https://github.com/salsa-rs/salsa.git?rev=af69cc11146352ec2eb6d972537e99c473ac3748#af69cc11146352ec2eb6d972537e99c473ac3748"
dependencies = [
"heck",
"proc-macro2",


@@ -124,7 +124,7 @@ rayon = { version = "1.10.0" }
regex = { version = "1.10.2" }
rustc-hash = { version = "2.0.0" }
# When updating salsa, make sure to also update the revision in `fuzz/Cargo.toml`
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "ef93d3830ffb8dc621fef1ba7b234ccef9dc3639" }
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "af69cc11146352ec2eb6d972537e99c473ac3748" }
schemars = { version = "0.8.16" }
seahash = { version = "4.1.0" }
serde = { version = "1.0.197", features = ["derive"] }


@@ -439,7 +439,7 @@ impl LintCacheData {
.map(|msg| {
// Make sure that all messages use the same source file.
assert_eq!(
&msg.file,
msg.file,
messages.first().unwrap().source_file(),
"message uses a different source file"
);


@@ -15,7 +15,7 @@ use rustc_hash::FxHashMap;
use ruff_diagnostics::Diagnostic;
use ruff_linter::codes::Rule;
use ruff_linter::linter::{lint_fix, lint_only, FixTable, FixerResult, LinterResult, ParseSource};
use ruff_linter::message::{Message, SyntaxErrorMessage};
use ruff_linter::message::Message;
use ruff_linter::package::PackageRoot;
use ruff_linter::pyproject_toml::lint_pyproject_toml;
use ruff_linter::settings::types::UnsafeFixes;
@@ -102,11 +102,7 @@ impl Diagnostics {
let name = path.map_or_else(|| "-".into(), Path::to_string_lossy);
let dummy = SourceFileBuilder::new(name, "").finish();
Self::new(
vec![Message::SyntaxError(SyntaxErrorMessage {
message: err.to_string(),
range: TextRange::default(),
file: dummy,
})],
vec![Message::syntax_error(err, TextRange::default(), dummy)],
FxHashMap::default(),
)
}


@@ -59,13 +59,7 @@ type KeyDiagnosticFields = (
Severity,
);
static EXPECTED_TOMLLIB_DIAGNOSTICS: &[KeyDiagnosticFields] = &[(
DiagnosticId::lint("unused-ignore-comment"),
Some("/src/tomllib/_parser.py"),
Some(22299..22333),
"Unused blanket `type: ignore` directive",
Severity::Warning,
)];
static EXPECTED_TOMLLIB_DIAGNOSTICS: &[KeyDiagnosticFields] = &[];
fn tomllib_path(file: &TestFile) -> SystemPathBuf {
SystemPathBuf::from("src").join(file.name())
@@ -203,7 +197,7 @@ fn assert_diagnostics(db: &dyn Db, diagnostics: &[Diagnostic], expected: &[KeyDi
diagnostic.id(),
diagnostic
.primary_span()
.map(|span| span.file())
.map(|span| span.expect_ty_file())
.map(|file| file.path(db).as_str()),
diagnostic
.primary_span()


@@ -1,15 +1,15 @@
use std::{fmt::Formatter, sync::Arc};
use render::{FileResolver, Input};
use ruff_source_file::{SourceCode, SourceFile};
use thiserror::Error;
use ruff_annotate_snippets::Level as AnnotateLevel;
use ruff_text_size::{Ranged, TextRange};
pub use self::render::DisplayDiagnostic;
use crate::files::File;
use crate::Db;
use crate::{files::File, Db};
use self::render::FileResolver;
mod render;
mod stylesheet;
@@ -115,10 +115,9 @@ impl Diagnostic {
/// callers should prefer using this with `write!` instead of `writeln!`.
pub fn display<'a>(
&'a self,
db: &'a dyn Db,
resolver: &'a dyn FileResolver,
config: &'a DisplayDiagnosticConfig,
) -> DisplayDiagnostic<'a> {
let resolver = FileResolver::new(db);
DisplayDiagnostic::new(resolver, config, self)
}
@@ -233,6 +232,16 @@ impl Diagnostic {
self.primary_annotation().map(|ann| ann.tags.as_slice())
}
/// Returns the "primary" span of this diagnostic, panicking if it does not exist.
///
/// This should typically only be used when working with diagnostics in Ruff, where diagnostics
/// are currently required to have a primary span.
///
/// See [`Diagnostic::primary_span`] for more details.
pub fn expect_primary_span(&self) -> Span {
self.primary_span().expect("Expected a primary span")
}
/// Returns a key that can be used to sort two diagnostics into the canonical order
/// in which they should appear when rendered.
pub fn rendering_sort_key<'a>(&'a self, db: &'a dyn Db) -> impl Ord + 'a {
@@ -267,11 +276,7 @@ impl Ord for RenderingSortKey<'_> {
self.diagnostic.primary_span(),
other.diagnostic.primary_span(),
) {
let order = span1
.file()
.path(self.db)
.as_str()
.cmp(span2.file().path(self.db).as_str());
let order = span1.file().path(&self.db).cmp(span2.file().path(&self.db));
if order.is_ne() {
return order;
}
@@ -643,6 +648,10 @@ impl DiagnosticId {
DiagnosticId::UnknownRule => "unknown-rule",
})
}
pub fn is_invalid_syntax(&self) -> bool {
matches!(self, Self::InvalidSyntax)
}
}
#[derive(Copy, Clone, Debug, Eq, PartialEq, Error)]
@@ -668,6 +677,62 @@ impl std::fmt::Display for DiagnosticId {
}
}
/// A unified file representation for both ruff and ty.
///
/// Such a representation is needed for rendering [`Diagnostic`]s that can optionally contain
/// [`Annotation`]s with [`Span`]s that need to refer to the text of a file. However, ty and ruff
/// use very different file types: a `Copy`-able salsa-interned [`File`], and a heavier-weight
/// [`SourceFile`], respectively.
///
/// This enum presents a unified interface to these two types for the sake of creating [`Span`]s and
/// emitting diagnostics from both ty and ruff.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum UnifiedFile {
Ty(File),
Ruff(SourceFile),
}
impl UnifiedFile {
pub fn path<'a>(&'a self, resolver: &'a dyn FileResolver) -> &'a str {
match self {
UnifiedFile::Ty(file) => resolver.path(*file),
UnifiedFile::Ruff(file) => file.name(),
}
}
fn diagnostic_source(&self, resolver: &dyn FileResolver) -> DiagnosticSource {
match self {
UnifiedFile::Ty(file) => DiagnosticSource::Ty(resolver.input(*file)),
UnifiedFile::Ruff(file) => DiagnosticSource::Ruff(file.clone()),
}
}
}
/// A unified wrapper for types that can be converted to a [`SourceCode`].
///
/// As with [`UnifiedFile`], ruff and ty use slightly different representations for source code.
/// [`DiagnosticSource`] wraps both of these and provides the single
/// [`DiagnosticSource::as_source_code`] method to produce a [`SourceCode`] with the appropriate
/// lifetimes.
///
/// See [`UnifiedFile::diagnostic_source`] for a way to obtain a [`DiagnosticSource`] from a file
/// and [`FileResolver`].
#[derive(Clone, Debug)]
enum DiagnosticSource {
Ty(Input),
Ruff(SourceFile),
}
impl DiagnosticSource {
/// Returns this input as a `SourceCode` for convenient querying.
fn as_source_code(&self) -> SourceCode {
match self {
DiagnosticSource::Ty(input) => SourceCode::new(input.text.as_str(), &input.line_index),
DiagnosticSource::Ruff(source) => SourceCode::new(source.source_text(), source.index()),
}
}
}
/// A span represents the source of a diagnostic.
///
/// It consists of a `File` and an optional range into that file. When the
@@ -675,14 +740,14 @@ impl std::fmt::Display for DiagnosticId {
/// the entire file. For example, when the file should be executable but isn't.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Span {
file: File,
file: UnifiedFile,
range: Option<TextRange>,
}
impl Span {
/// Returns the `File` attached to this `Span`.
pub fn file(&self) -> File {
self.file
/// Returns the `UnifiedFile` attached to this `Span`.
pub fn file(&self) -> &UnifiedFile {
&self.file
}
/// Returns the range, if available, attached to this `Span`.
@@ -703,10 +768,38 @@ impl Span {
pub fn with_optional_range(self, range: Option<TextRange>) -> Span {
Span { range, ..self }
}
/// Returns the [`File`] attached to this [`Span`].
///
/// Panics if the file is a [`UnifiedFile::Ruff`] instead of a [`UnifiedFile::Ty`].
pub fn expect_ty_file(&self) -> File {
match self.file {
UnifiedFile::Ty(file) => file,
UnifiedFile::Ruff(_) => panic!("Expected a ty `File`, found a ruff `SourceFile`"),
}
}
/// Returns the [`SourceFile`] attached to this [`Span`].
///
/// Panics if the file is a [`UnifiedFile::Ty`] instead of a [`UnifiedFile::Ruff`].
pub fn expect_ruff_file(&self) -> &SourceFile {
match &self.file {
UnifiedFile::Ty(_) => panic!("Expected a ruff `SourceFile`, found a ty `File`"),
UnifiedFile::Ruff(file) => file,
}
}
}
impl From<File> for Span {
fn from(file: File) -> Span {
let file = UnifiedFile::Ty(file);
Span { file, range: None }
}
}
impl From<SourceFile> for Span {
fn from(file: SourceFile) -> Self {
let file = UnifiedFile::Ruff(file);
Span { file, range: None }
}
}


@@ -16,7 +16,8 @@ use crate::{
};
use super::{
Annotation, Diagnostic, DiagnosticFormat, DisplayDiagnosticConfig, Severity, SubDiagnostic,
Annotation, Diagnostic, DiagnosticFormat, DiagnosticSource, DisplayDiagnosticConfig, Severity,
SubDiagnostic,
};
/// A type that implements `std::fmt::Display` for diagnostic rendering.
@@ -30,17 +31,16 @@ use super::{
/// values. When using Salsa, this most commonly corresponds to the lifetime
/// of a Salsa `Db`.
/// * The lifetime of the diagnostic being rendered.
#[derive(Debug)]
pub struct DisplayDiagnostic<'a> {
config: &'a DisplayDiagnosticConfig,
resolver: FileResolver<'a>,
resolver: &'a dyn FileResolver,
annotate_renderer: AnnotateRenderer,
diag: &'a Diagnostic,
}
impl<'a> DisplayDiagnostic<'a> {
pub(crate) fn new(
resolver: FileResolver<'a>,
resolver: &'a dyn FileResolver,
config: &'a DisplayDiagnosticConfig,
diag: &'a Diagnostic,
) -> DisplayDiagnostic<'a> {
@@ -86,11 +86,13 @@ impl std::fmt::Display for DisplayDiagnostic<'_> {
write!(
f,
" {path}",
path = fmt_styled(self.resolver.path(span.file()), stylesheet.emphasis)
path = fmt_styled(span.file().path(self.resolver), stylesheet.emphasis)
)?;
if let Some(range) = span.range() {
let input = self.resolver.input(span.file());
let start = input.as_source_code().line_column(range.start());
let diagnostic_source = span.file().diagnostic_source(self.resolver);
let start = diagnostic_source
.as_source_code()
.line_column(range.start());
write!(
f,
@@ -115,7 +117,7 @@ impl std::fmt::Display for DisplayDiagnostic<'_> {
.emphasis(stylesheet.emphasis)
.none(stylesheet.none);
let resolved = Resolved::new(&self.resolver, self.diag);
let resolved = Resolved::new(self.resolver, self.diag);
let renderable = resolved.to_renderable(self.config.context);
for diag in renderable.diagnostics.iter() {
writeln!(f, "{}", renderer.render(diag.to_annotate()))?;
@@ -144,7 +146,7 @@ struct Resolved<'a> {
impl<'a> Resolved<'a> {
/// Creates a new resolved set of diagnostics.
fn new(resolver: &FileResolver<'a>, diag: &'a Diagnostic) -> Resolved<'a> {
fn new(resolver: &'a dyn FileResolver, diag: &'a Diagnostic) -> Resolved<'a> {
let mut diagnostics = vec![];
diagnostics.push(ResolvedDiagnostic::from_diagnostic(resolver, diag));
for sub in &diag.inner.subs {
@@ -182,7 +184,7 @@ struct ResolvedDiagnostic<'a> {
impl<'a> ResolvedDiagnostic<'a> {
/// Resolve a single diagnostic.
fn from_diagnostic(
resolver: &FileResolver<'a>,
resolver: &'a dyn FileResolver,
diag: &'a Diagnostic,
) -> ResolvedDiagnostic<'a> {
let annotations: Vec<_> = diag
@@ -190,9 +192,9 @@ impl<'a> ResolvedDiagnostic<'a> {
.annotations
.iter()
.filter_map(|ann| {
let path = resolver.path(ann.span.file);
let input = resolver.input(ann.span.file);
ResolvedAnnotation::new(path, &input, ann)
let path = ann.span.file.path(resolver);
let diagnostic_source = ann.span.file.diagnostic_source(resolver);
ResolvedAnnotation::new(path, &diagnostic_source, ann)
})
.collect();
let message = if diag.inner.message.as_str().is_empty() {
@@ -216,7 +218,7 @@ impl<'a> ResolvedDiagnostic<'a> {
/// Resolve a single sub-diagnostic.
fn from_sub_diagnostic(
resolver: &FileResolver<'a>,
resolver: &'a dyn FileResolver,
diag: &'a SubDiagnostic,
) -> ResolvedDiagnostic<'a> {
let annotations: Vec<_> = diag
@@ -224,9 +226,9 @@ impl<'a> ResolvedDiagnostic<'a> {
.annotations
.iter()
.filter_map(|ann| {
let path = resolver.path(ann.span.file);
let input = resolver.input(ann.span.file);
ResolvedAnnotation::new(path, &input, ann)
let path = ann.span.file.path(resolver);
let diagnostic_source = ann.span.file.diagnostic_source(resolver);
ResolvedAnnotation::new(path, &diagnostic_source, ann)
})
.collect();
ResolvedDiagnostic {
@@ -259,10 +261,18 @@ impl<'a> ResolvedDiagnostic<'a> {
continue;
};
let prev_context_ends =
context_after(&prev.input.as_source_code(), context, prev.line_end).get();
let this_context_begins =
context_before(&ann.input.as_source_code(), context, ann.line_start).get();
let prev_context_ends = context_after(
&prev.diagnostic_source.as_source_code(),
context,
prev.line_end,
)
.get();
let this_context_begins = context_before(
&ann.diagnostic_source.as_source_code(),
context,
ann.line_start,
)
.get();
// The boundary case here is when `prev_context_ends`
// is exactly one less than `this_context_begins`. In
// that case, the context windows are adjacent and we
@@ -304,7 +314,7 @@ impl<'a> ResolvedDiagnostic<'a> {
#[derive(Debug)]
struct ResolvedAnnotation<'a> {
path: &'a str,
input: Input,
diagnostic_source: DiagnosticSource,
range: TextRange,
line_start: OneIndexed,
line_end: OneIndexed,
@@ -318,8 +328,12 @@ impl<'a> ResolvedAnnotation<'a> {
/// `path` is the path of the file that this annotation points to.
///
/// `input` is the contents of the file that this annotation points to.
fn new(path: &'a str, input: &Input, ann: &'a Annotation) -> Option<ResolvedAnnotation<'a>> {
let source = input.as_source_code();
fn new(
path: &'a str,
diagnostic_source: &DiagnosticSource,
ann: &'a Annotation,
) -> Option<ResolvedAnnotation<'a>> {
let source = diagnostic_source.as_source_code();
let (range, line_start, line_end) = match (ann.span.range(), ann.message.is_some()) {
// An annotation with no range AND no message is probably(?)
// meaningless, but we should try to render it anyway.
@@ -345,7 +359,7 @@ impl<'a> ResolvedAnnotation<'a> {
};
Some(ResolvedAnnotation {
path,
input: input.clone(),
diagnostic_source: diagnostic_source.clone(),
range,
line_start,
line_end,
@@ -510,8 +524,8 @@ impl<'r> RenderableSnippet<'r> {
!anns.is_empty(),
"creating a renderable snippet requires a non-zero number of annotations",
);
let input = &anns[0].input;
let source = input.as_source_code();
let diagnostic_source = &anns[0].diagnostic_source;
let source = diagnostic_source.as_source_code();
let has_primary = anns.iter().any(|ann| ann.is_primary);
let line_start = context_before(
@@ -527,7 +541,7 @@ impl<'r> RenderableSnippet<'r> {
let snippet_start = source.line_start(line_start);
let snippet_end = source.line_end(line_end);
let snippet = input
let snippet = diagnostic_source
.as_source_code()
.slice(TextRange::new(snippet_start, snippet_end));
@@ -613,7 +627,7 @@ impl<'r> RenderableAnnotation<'r> {
}
}
/// A type that facilitates the retrieval of source code from a `Span`.
/// A trait that facilitates the retrieval of source code from a `Span`.
///
/// At present, this is tightly coupled with a Salsa database. In the future,
/// it is intended for this resolver to become an abstraction providing a
@@ -628,36 +642,24 @@ impl<'r> RenderableAnnotation<'r> {
/// callers will need to pass in a different "resolver" for turning `Span`s
/// into actual file paths/contents. The infrastructure for this isn't fully in
/// place, but this type serves to demarcate the intended abstraction boundary.
pub(crate) struct FileResolver<'a> {
db: &'a dyn Db,
}
impl<'a> FileResolver<'a> {
/// Creates a new resolver from a Salsa database.
pub(crate) fn new(db: &'a dyn Db) -> FileResolver<'a> {
FileResolver { db }
}
pub trait FileResolver {
/// Returns the path associated with the file given.
fn path(&self, file: File) -> &'a str {
relativize_path(
self.db.system().current_directory(),
file.path(self.db).as_str(),
)
}
fn path(&self, file: File) -> &str;
/// Returns the input contents associated with the file given.
fn input(&self, file: File) -> Input {
Input {
text: source_text(self.db, file),
line_index: line_index(self.db, file),
}
}
fn input(&self, file: File) -> Input;
}
impl std::fmt::Debug for FileResolver<'_> {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(f, "<salsa based file resolver>")
impl FileResolver for &dyn Db {
fn path(&self, file: File) -> &str {
relativize_path(self.system().current_directory(), file.path(*self).as_str())
}
fn input(&self, file: File) -> Input {
Input {
text: source_text(*self, file),
line_index: line_index(*self, file),
}
}
}
@@ -667,16 +669,9 @@ impl std::fmt::Debug for FileResolver<'_> {
/// This contains the actual content of that input as well as a
/// line index for efficiently querying its contents.
#[derive(Clone, Debug)]
struct Input {
text: SourceText,
line_index: LineIndex,
}
impl Input {
/// Returns this input as a `SourceCode` for convenient querying.
fn as_source_code(&self) -> SourceCode<'_, '_> {
SourceCode::new(self.text.as_str(), &self.line_index)
}
pub struct Input {
pub(crate) text: SourceText,
pub(crate) line_index: LineIndex,
}
/// Returns the line number accounting for the given `len`
@@ -730,6 +725,7 @@ mod tests {
use crate::files::system_path_to_file;
use crate::system::{DbWithWritableSystem, SystemPath};
use crate::tests::TestDb;
use crate::Upcast;
use super::*;
@@ -2174,8 +2170,9 @@ watermelon
fn span(&self, path: &str, line_offset_start: &str, line_offset_end: &str) -> Span {
let span = self.path(path);
let text = source_text(&self.db, span.file());
let line_index = line_index(&self.db, span.file());
let file = span.expect_ty_file();
let text = source_text(&self.db, file);
let line_index = line_index(&self.db, file);
let source = SourceCode::new(text.as_str(), &line_index);
let (line_start, offset_start) = parse_line_offset(line_offset_start);
@@ -2237,7 +2234,7 @@ watermelon
///
/// (This will set the "printed" flag on `Diagnostic`.)
fn render(&self, diag: &Diagnostic) -> String {
diag.display(&self.db, &self.config).to_string()
diag.display(&self.db.upcast(), &self.config).to_string()
}
}


@@ -67,7 +67,7 @@ mod tests {
use crate::system::TestSystem;
use crate::system::{DbWithTestSystem, System};
use crate::vendored::VendoredFileSystem;
use crate::Db;
use crate::{Db, Upcast};
type Events = Arc<Mutex<Vec<salsa::Event>>>;
@@ -136,7 +136,16 @@ mod tests {
}
fn python_version(&self) -> ruff_python_ast::PythonVersion {
ruff_python_ast::PythonVersion::latest()
ruff_python_ast::PythonVersion::latest_ty()
}
}
impl Upcast<dyn Db> for TestDb {
fn upcast(&self) -> &(dyn Db + 'static) {
self
}
fn upcast_mut(&mut self) -> &mut (dyn Db + 'static) {
self
}
}


@@ -44,6 +44,7 @@ toml = { workspace = true, features = ["parse"] }
tracing = { workspace = true }
tracing-indicatif = { workspace = true }
tracing-subscriber = { workspace = true, features = ["env-filter"] }
url = { workspace = true }
[dev-dependencies]
indoc = { workspace = true }


@@ -2,7 +2,9 @@
use anyhow::Result;
use crate::{generate_cli_help, generate_docs, generate_json_schema, generate_ty_schema};
use crate::{
generate_cli_help, generate_docs, generate_json_schema, generate_ty_rules, generate_ty_schema,
};
pub(crate) const REGENERATE_ALL_COMMAND: &str = "cargo dev generate-all";
@@ -38,5 +40,6 @@ pub(crate) fn main(args: &Args) -> Result<()> {
generate_docs::main(&generate_docs::Args {
dry_run: args.mode.is_dry_run(),
})?;
generate_ty_rules::main(&generate_ty_rules::Args { mode: args.mode })?;
Ok(())
}


@@ -0,0 +1,144 @@
//! Generates the rules table for ty
#![allow(clippy::print_stdout, clippy::print_stderr)]
use std::borrow::Cow;
use std::fmt::Write as _;
use std::fs;
use std::path::PathBuf;
use anyhow::{bail, Result};
use itertools::Itertools as _;
use pretty_assertions::StrComparison;
use crate::generate_all::{Mode, REGENERATE_ALL_COMMAND};
use crate::ROOT_DIR;
#[derive(clap::Args)]
pub(crate) struct Args {
/// Write the generated table to stdout (rather than to `crates/ty/docs/rules.md`).
#[arg(long, default_value_t, value_enum)]
pub(crate) mode: Mode,
}
pub(crate) fn main(args: &Args) -> Result<()> {
let markdown = generate_markdown();
let filename = "crates/ty/docs/rules.md";
let schema_path = PathBuf::from(ROOT_DIR).join(filename);
match args.mode {
Mode::DryRun => {
println!("{markdown}");
}
Mode::Check => {
let current = fs::read_to_string(schema_path)?;
if current == markdown {
println!("Up-to-date: {filename}");
} else {
let comparison = StrComparison::new(&current, &markdown);
bail!("{filename} changed, please run `{REGENERATE_ALL_COMMAND}`:\n{comparison}");
}
}
Mode::Write => {
let current = fs::read_to_string(&schema_path)?;
if current == markdown {
println!("Up-to-date: {filename}");
} else {
println!("Updating: {filename}");
fs::write(schema_path, markdown.as_bytes())?;
}
}
}
Ok(())
}
fn generate_markdown() -> String {
let registry = &*ty_project::DEFAULT_LINT_REGISTRY;
let mut output = String::new();
let _ = writeln!(&mut output, "# Rules\n");
let mut lints: Vec<_> = registry.lints().iter().collect();
lints.sort_by(|a, b| {
a.default_level()
.cmp(&b.default_level())
.reverse()
.then_with(|| a.name().cmp(&b.name()))
});
for lint in lints {
let _ = writeln!(&mut output, "## `{rule_name}`\n", rule_name = lint.name());
// Increase the header-level by one
let documentation = lint
.documentation_lines()
.map(|line| {
if line.starts_with('#') {
Cow::Owned(format!("#{line}"))
} else {
Cow::Borrowed(line)
}
})
.join("\n");
let _ = writeln!(
&mut output,
r#"**Default level**: {level}
<details>
<summary>{summary}</summary>
{documentation}
### Links
* [Related issues](https://github.com/astral-sh/ty/issues?q=sort%3Aupdated-desc%20is%3Aissue%20is%3Aopen%20{encoded_name})
* [View source](https://github.com/astral-sh/ruff/blob/main/{file}#L{line})
</details>
"#,
level = lint.default_level(),
// GitHub doesn't support markdown in `summary` headers
summary = replace_inline_code(lint.summary()),
encoded_name = url::form_urlencoded::byte_serialize(lint.name().as_str().as_bytes())
.collect::<String>(),
file = url::form_urlencoded::byte_serialize(lint.file().replace('\\', "/").as_bytes())
.collect::<String>(),
line = lint.line(),
);
}
output
}
/// Replaces inline code blocks (`code`) with `<code>code</code>`
fn replace_inline_code(input: &str) -> String {
let mut output = String::new();
let mut parts = input.split('`');
while let Some(before) = parts.next() {
if let Some(between) = parts.next() {
output.push_str(before);
output.push_str("<code>");
output.push_str(between);
output.push_str("</code>");
} else {
output.push_str(before);
}
}
output
}
#[cfg(test)]
mod tests {
use anyhow::Result;
use crate::generate_all::Mode;
use super::{main, Args};
#[test]
fn ty_rules_up_to_date() -> Result<()> {
main(&Args { mode: Mode::Check })
}
}


@@ -15,6 +15,7 @@ mod generate_docs;
mod generate_json_schema;
mod generate_options;
mod generate_rules_table;
mod generate_ty_rules;
mod generate_ty_schema;
mod print_ast;
mod print_cst;
@@ -44,6 +45,7 @@ enum Command {
GenerateTySchema(generate_ty_schema::Args),
/// Generate a Markdown-compatible table of supported lint rules.
GenerateRulesTable,
GenerateTyRules(generate_ty_rules::Args),
/// Generate a Markdown-compatible listing of configuration options.
GenerateOptions,
/// Generate CLI help.
@@ -88,6 +90,7 @@ fn main() -> Result<ExitCode> {
Command::GenerateJSONSchema(args) => generate_json_schema::main(&args)?,
Command::GenerateTySchema(args) => generate_ty_schema::main(&args)?,
Command::GenerateRulesTable => println!("{}", generate_rules_table::generate()),
Command::GenerateTyRules(args) => generate_ty_rules::main(&args)?,
Command::GenerateOptions => println!("{}", generate_options::generate()),
Command::GenerateCliHelp(args) => generate_cli_help::main(&args)?,
Command::GenerateDocs(args) => generate_docs::main(&args)?,


@@ -15,6 +15,7 @@ license = { workspace = true }
[dependencies]
ruff_annotate_snippets = { workspace = true }
ruff_cache = { workspace = true }
ruff_db = { workspace = true }
ruff_diagnostics = { workspace = true, features = ["serde"] }
ruff_notebook = { workspace = true }
ruff_macros = { workspace = true }


@@ -91,14 +91,14 @@ from airflow.operators.sql import (
BaseSQLOperator,
BranchSQLOperator,
SQLColumnCheckOperator,
SQLTablecheckOperator,
SQLTableCheckOperator,
_convert_to_float_if_possible,
parse_boolean,
)
BaseSQLOperator()
BranchSQLOperator()
SQLTablecheckOperator()
SQLTableCheckOperator()
SQLColumnCheckOperator()
_convert_to_float_if_possible()
parse_boolean()


@@ -17,7 +17,7 @@ impl Emitter for AzureEmitter {
context: &EmitterContext,
) -> anyhow::Result<()> {
for message in messages {
let location = if context.is_notebook(message.filename()) {
let location = if context.is_notebook(&message.filename()) {
// We can't give a reasonable location for the structured formats,
// so we show one that's clearly a fallback
LineColumn::default()


@@ -22,7 +22,7 @@ use crate::text_helpers::ShowNonprinting;
/// * Compute the diff from the [`Edit`] because diff calculation is expensive.
pub(super) struct Diff<'a> {
fix: &'a Fix,
source_code: &'a SourceFile,
source_code: SourceFile,
}
impl<'a> Diff<'a> {


@@ -19,7 +19,7 @@ impl Emitter for GithubEmitter {
) -> anyhow::Result<()> {
for message in messages {
let source_location = message.compute_start_location();
let location = if context.is_notebook(message.filename()) {
let location = if context.is_notebook(&message.filename()) {
// We can't give a reasonable location for the structured formats,
// so we show one that's clearly a fallback
LineColumn::default()
@@ -43,7 +43,7 @@ impl Emitter for GithubEmitter {
write!(
writer,
"{path}:{row}:{column}:",
path = relativize_path(message.filename()),
path = relativize_path(&*message.filename()),
row = location.line,
column = location.column,
)?;


@@ -62,7 +62,7 @@ impl Serialize for SerializedMessages<'_> {
let start_location = message.compute_start_location();
let end_location = message.compute_end_location();
let lines = if self.context.is_notebook(message.filename()) {
let lines = if self.context.is_notebook(&message.filename()) {
// We can't give a reasonable location for the structured formats,
// so we show one that's clearly a fallback
json!({
@@ -77,8 +77,8 @@ impl Serialize for SerializedMessages<'_> {
};
let path = self.project_dir.as_ref().map_or_else(
|| relativize_path(message.filename()),
|project_dir| relativize_path_to(message.filename(), project_dir),
|| relativize_path(&*message.filename()),
|project_dir| relativize_path_to(&*message.filename(), project_dir),
);
let mut message_fingerprint = fingerprint(message, &path, 0);

View File

@@ -65,7 +65,7 @@ impl Emitter for GroupedEmitter {
let column_length = calculate_print_width(max_column_length);
// Print the filename.
writeln!(writer, "{}:", relativize_path(filename).underline())?;
writeln!(writer, "{}:", relativize_path(&*filename).underline())?;
// Print each message.
for message in messages {
@@ -73,7 +73,7 @@ impl Emitter for GroupedEmitter {
writer,
"{}",
DisplayGroupedMessage {
notebook_index: context.notebook_index(message.filename()),
notebook_index: context.notebook_index(&message.filename()),
message,
show_fix_status: self.show_fix_status,
unsafe_fixes: self.unsafe_fixes,

View File

@@ -49,8 +49,9 @@ impl Serialize for ExpandedMessages<'_> {
}
pub(crate) fn message_to_json_value(message: &Message, context: &EmitterContext) -> Value {
let source_code = message.source_file().to_source_code();
let notebook_index = context.notebook_index(message.filename());
let source_file = message.source_file();
let source_code = source_file.to_source_code();
let notebook_index = context.notebook_index(&message.filename());
let fix = message.fix().map(|fix| {
json!({

View File

@@ -32,7 +32,7 @@ impl Emitter for JunitEmitter {
report.add_test_suite(test_suite);
} else {
for (filename, messages) in group_messages_by_filename(messages) {
let mut test_suite = TestSuite::new(filename);
let mut test_suite = TestSuite::new(&filename);
test_suite
.extra
.insert(XmlString::new("package"), XmlString::new("org.ruff"));
@@ -44,7 +44,7 @@ impl Emitter for JunitEmitter {
} = message;
let mut status = TestCaseStatus::non_success(NonSuccessKind::Failure);
status.set_message(message.body());
let location = if context.is_notebook(message.filename()) {
let location = if context.is_notebook(&message.filename()) {
// We can't give a reasonable location for the structured formats,
// so we show one that's clearly a fallback
LineColumn::default()
@@ -66,7 +66,7 @@ impl Emitter for JunitEmitter {
},
status,
);
let file_path = Path::new(filename);
let file_path = Path::new(&*filename);
let file_stem = file_path.file_stem().unwrap().to_str().unwrap();
let classname = file_path.parent().unwrap().join(file_stem);
case.set_classname(classname.to_str().unwrap());

View File

@@ -1,8 +1,10 @@
use std::borrow::Cow;
use std::cmp::Ordering;
use std::collections::BTreeMap;
use std::io::Write;
use std::ops::Deref;
use ruff_db::diagnostic::{self as db, Annotation, DiagnosticId, Severity, Span};
use ruff_python_parser::semantic_errors::SemanticSyntaxError;
use rustc_hash::FxHashMap;
@@ -45,7 +47,7 @@ mod text;
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum Message {
Diagnostic(DiagnosticMessage),
SyntaxError(SyntaxErrorMessage),
SyntaxError(db::Diagnostic),
}
/// A diagnostic message corresponding to a rule violation.
@@ -59,14 +61,6 @@ pub struct DiagnosticMessage {
pub noqa_offset: TextSize,
}
/// A syntax error message raised by the parser.
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct SyntaxErrorMessage {
pub message: String,
pub range: TextRange,
pub file: SourceFile,
}
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub enum MessageKind {
Diagnostic(Rule),
@@ -83,6 +77,17 @@ impl MessageKind {
}
impl Message {
pub fn syntax_error(
message: impl std::fmt::Display,
range: TextRange,
file: SourceFile,
) -> Message {
let mut diag = db::Diagnostic::new(DiagnosticId::InvalidSyntax, Severity::Error, "");
let span = Span::from(file).with_range(range);
diag.annotate(Annotation::primary(span).message(message));
Self::SyntaxError(diag)
}
/// Create a [`Message`] from the given [`Diagnostic`] corresponding to a rule violation.
pub fn from_diagnostic(
diagnostic: Diagnostic,
@@ -114,14 +119,14 @@ impl Message {
.next()
.map_or(TextSize::new(0), TextLen::text_len);
Message::SyntaxError(SyntaxErrorMessage {
message: format!(
Message::syntax_error(
format_args!(
"SyntaxError: {}",
DisplayParseErrorType::new(&parse_error.error)
),
range: TextRange::at(parse_error.location.start(), len),
TextRange::at(parse_error.location.start(), len),
file,
})
)
}
/// Create a [`Message`] from the given [`UnsupportedSyntaxError`].
@@ -129,11 +134,11 @@ impl Message {
unsupported_syntax_error: &UnsupportedSyntaxError,
file: SourceFile,
) -> Message {
Message::SyntaxError(SyntaxErrorMessage {
message: format!("SyntaxError: {unsupported_syntax_error}"),
range: unsupported_syntax_error.range,
Message::syntax_error(
format_args!("SyntaxError: {unsupported_syntax_error}"),
unsupported_syntax_error.range,
file,
})
)
}
/// Create a [`Message`] from the given [`SemanticSyntaxError`].
@@ -141,11 +146,11 @@ impl Message {
semantic_syntax_error: &SemanticSyntaxError,
file: SourceFile,
) -> Message {
Message::SyntaxError(SyntaxErrorMessage {
message: format!("SyntaxError: {semantic_syntax_error}"),
range: semantic_syntax_error.range,
Message::syntax_error(
format_args!("SyntaxError: {semantic_syntax_error}"),
semantic_syntax_error.range,
file,
})
)
}
pub const fn as_diagnostic_message(&self) -> Option<&DiagnosticMessage> {
@@ -168,8 +173,11 @@ impl Message {
}
/// Returns `true` if `self` is a syntax error message.
pub const fn is_syntax_error(&self) -> bool {
matches!(self, Message::SyntaxError(_))
pub fn is_syntax_error(&self) -> bool {
match self {
Message::Diagnostic(_) => false,
Message::SyntaxError(diag) => diag.id().is_invalid_syntax(),
}
}
/// Returns a message kind.
@@ -192,7 +200,11 @@ impl Message {
pub fn body(&self) -> &str {
match self {
Message::Diagnostic(m) => &m.kind.body,
Message::SyntaxError(m) => &m.message,
Message::SyntaxError(m) => m
.primary_annotation()
.expect("Expected a primary annotation for a ruff diagnostic")
.get_message()
.expect("Expected a message for a ruff diagnostic"),
}
}
@@ -234,27 +246,47 @@ impl Message {
}
/// Returns the filename for the message.
pub fn filename(&self) -> &str {
self.source_file().name()
pub fn filename(&self) -> Cow<'_, str> {
match self {
Message::Diagnostic(m) => Cow::Borrowed(m.file.name()),
Message::SyntaxError(diag) => Cow::Owned(
diag.expect_primary_span()
.expect_ruff_file()
.name()
.to_string(),
),
}
}
/// Computes the start source location for the message.
pub fn compute_start_location(&self) -> LineColumn {
self.source_file()
.to_source_code()
.line_column(self.start())
match self {
Message::Diagnostic(m) => m.file.to_source_code().line_column(m.range.start()),
Message::SyntaxError(diag) => diag
.expect_primary_span()
.expect_ruff_file()
.to_source_code()
.line_column(self.start()),
}
}
/// Computes the end source location for the message.
pub fn compute_end_location(&self) -> LineColumn {
self.source_file().to_source_code().line_column(self.end())
match self {
Message::Diagnostic(m) => m.file.to_source_code().line_column(m.range.end()),
Message::SyntaxError(diag) => diag
.expect_primary_span()
.expect_ruff_file()
.to_source_code()
.line_column(self.end()),
}
}
/// Returns the [`SourceFile`] which the message belongs to.
pub fn source_file(&self) -> &SourceFile {
pub fn source_file(&self) -> SourceFile {
match self {
Message::Diagnostic(m) => &m.file,
Message::SyntaxError(m) => &m.file,
Message::Diagnostic(m) => m.file.clone(),
Message::SyntaxError(m) => m.expect_primary_span().expect_ruff_file().clone(),
}
}
}
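The switch from `&str` to `Cow<'_, str>` in `filename()` above lets the `Diagnostic` variant keep borrowing its stored name while the `SyntaxError` variant, whose name comes from a span lookup, returns an owned `String`. A minimal, std-only sketch of that pattern (the types and fields here are illustrative, not ruff's actual ones):

```rust
use std::borrow::Cow;

enum Message {
    // Stores the name directly, so it can be borrowed.
    Diagnostic { filename: String },
    // Pretend the name must be assembled on demand (as with a span lookup).
    SyntaxError { dir: String, file: String },
}

impl Message {
    fn filename(&self) -> Cow<'_, str> {
        match self {
            Message::Diagnostic { filename } => Cow::Borrowed(filename),
            Message::SyntaxError { dir, file } => Cow::Owned(format!("{dir}/{file}")),
        }
    }
}

fn main() {
    let a = Message::Diagnostic { filename: "lib.rs".to_string() };
    let b = Message::SyntaxError { dir: "src".to_string(), file: "main.rs".to_string() };
    // Call sites that need a `&str` deref through the `Cow`, e.g. `&*a.filename()`,
    // which is exactly the `&*message.filename()` adjustment seen throughout this diff.
    assert_eq!(&*a.filename(), "lib.rs");
    assert_eq!(&*b.filename(), "src/main.rs");
}
```

This is why the emitter call sites in this commit change from `message.filename()` to `&message.filename()` or `&*message.filename()`: the `Cow` must be dereferenced to reach the underlying `&str`.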
@@ -275,7 +307,10 @@ impl Ranged for Message {
fn range(&self) -> TextRange {
match self {
Message::Diagnostic(m) => m.range,
Message::SyntaxError(m) => m.range,
Message::SyntaxError(m) => m
.expect_primary_span()
.range()
.expect("Expected range for ruff span"),
}
}
}
@@ -293,11 +328,11 @@ impl Deref for MessageWithLocation<'_> {
}
}
fn group_messages_by_filename(messages: &[Message]) -> BTreeMap<&str, Vec<MessageWithLocation>> {
fn group_messages_by_filename(messages: &[Message]) -> BTreeMap<String, Vec<MessageWithLocation>> {
let mut grouped_messages = BTreeMap::default();
for message in messages {
grouped_messages
.entry(message.filename())
.entry(message.filename().to_string())
.or_insert_with(Vec::new)
.push(MessageWithLocation {
message,

View File

@@ -18,7 +18,7 @@ impl Emitter for PylintEmitter {
context: &EmitterContext,
) -> anyhow::Result<()> {
for message in messages {
let row = if context.is_notebook(message.filename()) {
let row = if context.is_notebook(&message.filename()) {
// We can't give a reasonable location for the structured formats,
// so we show one that's clearly a fallback
OneIndexed::from_zero_indexed(0)
@@ -39,7 +39,7 @@ impl Emitter for PylintEmitter {
writeln!(
writer,
"{path}:{row}: {body}",
path = relativize_path(message.filename()),
path = relativize_path(&*message.filename()),
)?;
}

View File

@@ -57,7 +57,8 @@ impl Serialize for ExpandedMessages<'_> {
}
fn message_to_rdjson_value(message: &Message) -> Value {
let source_code = message.source_file().to_source_code();
let source_file = message.source_file();
let source_code = source_file.to_source_code();
let start_location = source_code.line_column(message.start());
let end_location = source_code.line_column(message.end());

View File

@@ -121,7 +121,7 @@ impl SarifResult {
fn from_message(message: &Message) -> Result<Self> {
let start_location = message.compute_start_location();
let end_location = message.compute_end_location();
let path = normalize_path(message.filename());
let path = normalize_path(&*message.filename());
Ok(Self {
rule: message.rule(),
level: "error".to_string(),
@@ -141,7 +141,7 @@ impl SarifResult {
fn from_message(message: &Message) -> Result<Self> {
let start_location = message.compute_start_location();
let end_location = message.compute_end_location();
let path = normalize_path(message.filename());
let path = normalize_path(&*message.filename());
Ok(Self {
rule: message.rule(),
level: "error".to_string(),

View File

@@ -73,12 +73,12 @@ impl Emitter for TextEmitter {
write!(
writer,
"{path}{sep}",
path = relativize_path(message.filename()).bold(),
path = relativize_path(&*message.filename()).bold(),
sep = ":".cyan(),
)?;
let start_location = message.compute_start_location();
let notebook_index = context.notebook_index(message.filename());
let notebook_index = context.notebook_index(&message.filename());
// Check if we're working on a jupyter notebook and translate positions with cell accordingly
let diagnostic_location = if let Some(notebook_index) = notebook_index {
@@ -191,7 +191,8 @@ impl Display for MessageCodeFrame<'_> {
Vec::new()
};
let source_code = self.message.source_file().to_source_code();
let source_file = self.message.source_file();
let source_code = source_file.to_source_code();
let content_start_index = source_code.line_index(self.message.start());
let mut start_index = content_start_index.saturating_sub(2);

View File

@@ -288,7 +288,7 @@ fn check_names_moved_to_provider(checker: &Checker, expr: &Expr, ranged: TextRan
}
}
["airflow", "operators", "sql", rest] => match *rest {
"BaseSQLOperator" | "BranchSQLOperator" | "SQLTablecheckOperator" => {
"BaseSQLOperator" | "BranchSQLOperator" | "SQLTableCheckOperator" => {
ProviderReplacement::SourceModuleMovedToProvider {
name: (*rest).to_string(),
module: "airflow.providers.common.sql.operators.sql",

View File

@@ -216,7 +216,7 @@ AIR302_common_sql.py:99:1: AIR302 `airflow.operators.sql.BaseSQLOperator` is mov
99 | BaseSQLOperator()
| ^^^^^^^^^^^^^^^ AIR302
100 | BranchSQLOperator()
101 | SQLTablecheckOperator()
101 | SQLTableCheckOperator()
|
= help: Install `apache-airflow-providers-common-sql>=1.1.0` and use `airflow.providers.common.sql.operators.sql.BaseSQLOperator` instead.
@@ -225,26 +225,26 @@ AIR302_common_sql.py:100:1: AIR302 `airflow.operators.sql.BranchSQLOperator` is
99 | BaseSQLOperator()
100 | BranchSQLOperator()
| ^^^^^^^^^^^^^^^^^ AIR302
101 | SQLTablecheckOperator()
101 | SQLTableCheckOperator()
102 | SQLColumnCheckOperator()
|
= help: Install `apache-airflow-providers-common-sql>=1.1.0` and use `airflow.providers.common.sql.operators.sql.BranchSQLOperator` instead.
AIR302_common_sql.py:101:1: AIR302 `airflow.operators.sql.SQLTablecheckOperator` is moved into `common-sql` provider in Airflow 3.0;
AIR302_common_sql.py:101:1: AIR302 `airflow.operators.sql.SQLTableCheckOperator` is moved into `common-sql` provider in Airflow 3.0;
|
99 | BaseSQLOperator()
100 | BranchSQLOperator()
101 | SQLTablecheckOperator()
101 | SQLTableCheckOperator()
| ^^^^^^^^^^^^^^^^^^^^^ AIR302
102 | SQLColumnCheckOperator()
103 | _convert_to_float_if_possible()
|
= help: Install `apache-airflow-providers-common-sql>=1.1.0` and use `airflow.providers.common.sql.operators.sql.SQLTablecheckOperator` instead.
= help: Install `apache-airflow-providers-common-sql>=1.1.0` and use `airflow.providers.common.sql.operators.sql.SQLTableCheckOperator` instead.
AIR302_common_sql.py:102:1: AIR302 `airflow.operators.sql.SQLColumnCheckOperator` is moved into `common-sql` provider in Airflow 3.0;
|
100 | BranchSQLOperator()
101 | SQLTablecheckOperator()
101 | SQLTableCheckOperator()
102 | SQLColumnCheckOperator()
| ^^^^^^^^^^^^^^^^^^^^^^ AIR302
103 | _convert_to_float_if_possible()
@@ -254,7 +254,7 @@ AIR302_common_sql.py:102:1: AIR302 `airflow.operators.sql.SQLColumnCheckOperator
AIR302_common_sql.py:103:1: AIR302 `airflow.operators.sql._convert_to_float_if_possible` is moved into `common-sql` provider in Airflow 3.0;
|
101 | SQLTablecheckOperator()
101 | SQLTableCheckOperator()
102 | SQLColumnCheckOperator()
103 | _convert_to_float_if_possible()
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AIR302

View File

@@ -59,6 +59,10 @@ impl PythonVersion {
Self::PY313
}
pub const fn latest_ty() -> Self {
Self::PY313
}
pub const fn as_tuple(self) -> (u8, u8) {
(self.major, self.minor)
}

View File

@@ -14,7 +14,7 @@ use ruff_linter::{
directives::{extract_directives, Flags},
generate_noqa_edits,
linter::check_path,
message::{DiagnosticMessage, Message, SyntaxErrorMessage},
message::{DiagnosticMessage, Message},
package::PackageRoot,
packaging::detect_package_root,
registry::AsRule,
@@ -173,10 +173,10 @@ pub(crate) fn check(
locator.to_index(),
encoding,
)),
Message::SyntaxError(syntax_error_message) => {
Message::SyntaxError(_) => {
if show_syntax_errors {
Some(syntax_error_to_lsp_diagnostic(
syntax_error_message,
&message,
&source_kind,
locator.to_index(),
encoding,
@@ -322,7 +322,7 @@ fn to_lsp_diagnostic(
}
fn syntax_error_to_lsp_diagnostic(
syntax_error: SyntaxErrorMessage,
syntax_error: &Message,
source_kind: &SourceKind,
index: &LineIndex,
encoding: PositionEncoding,
@@ -331,7 +331,7 @@ fn syntax_error_to_lsp_diagnostic(
let cell: usize;
if let Some(notebook_index) = source_kind.as_ipy_notebook().map(Notebook::index) {
NotebookRange { cell, range } = syntax_error.range.to_notebook_range(
NotebookRange { cell, range } = syntax_error.range().to_notebook_range(
source_kind.source_code(),
index,
notebook_index,
@@ -340,7 +340,7 @@ fn syntax_error_to_lsp_diagnostic(
} else {
cell = usize::default();
range = syntax_error
.range
.range()
.to_range(source_kind.source_code(), index, encoding);
}
@@ -353,7 +353,7 @@ fn syntax_error_to_lsp_diagnostic(
code: None,
code_description: None,
source: Some(DIAGNOSTIC_NAME.into()),
message: syntax_error.message,
message: syntax_error.body().to_string(),
related_information: None,
data: None,
},

View File

@@ -195,7 +195,7 @@ impl SourceFile {
}
}
fn index(&self) -> &LineIndex {
pub fn index(&self) -> &LineIndex {
self.inner
.line_index
.get_or_init(|| LineIndex::from_source_text(self.source_text()))

View File

@@ -1,7 +1,7 @@
use std::path::Path;
use js_sys::Error;
use ruff_linter::message::{DiagnosticMessage, Message, SyntaxErrorMessage};
use ruff_linter::message::{DiagnosticMessage, Message};
use ruff_linter::settings::types::PythonVersion;
use serde::{Deserialize, Serialize};
use wasm_bindgen::prelude::*;
@@ -230,15 +230,13 @@ impl Workspace {
.collect(),
}),
},
Message::SyntaxError(SyntaxErrorMessage { message, range, .. }) => {
ExpandedMessage {
code: None,
message,
start_location: source_code.line_column(range.start()).into(),
end_location: source_code.line_column(range.end()).into(),
fix: None,
}
}
Message::SyntaxError(_) => ExpandedMessage {
code: None,
message: message.body().to_string(),
start_location: source_code.line_column(message.range().start()).into(),
end_location: source_code.line_column(message.range().end()).into(),
fix: None,
},
})
.collect();

View File

@@ -1,12 +1,13 @@
[package]
name = "ty"
version = "0.0.0"
edition.workspace = true
rust-version.workspace = true
homepage.workspace = true
documentation.workspace = true
# required for correct pypi metadata
homepage = "https://github.com/astral-sh/ty/"
documentation = "https://github.com/astral-sh/ty/"
# Releases occur in this other repository!
repository = "https://github.com/astral-sh/ty/"
edition.workspace = true
rust-version.workspace = true
authors.workspace = true
license.workspace = true

crates/ty/docs/rules.md (new file, 1554 lines; diff suppressed because it is too large)

View File

@@ -14,6 +14,7 @@ use rayon::ThreadPoolBuilder;
use ruff_db::diagnostic::{Diagnostic, DisplayDiagnosticConfig, Severity};
use ruff_db::max_parallelism;
use ruff_db::system::{OsSystem, SystemPath, SystemPathBuf};
use ruff_db::Upcast;
use salsa::plumbing::ZalsaDatabase;
use ty_project::metadata::options::Options;
use ty_project::watch::ProjectWatcher;
@@ -298,7 +299,11 @@ impl MainLoop {
let diagnostics_count = result.len();
for diagnostic in result {
write!(stdout, "{}", diagnostic.display(db, &display_config))?;
write!(
stdout,
"{}",
diagnostic.display(&db.upcast(), &display_config)
)?;
max_severity = max_severity.max(diagnostic.severity());
}

View File

@@ -1,6 +1,7 @@
use anyhow::Context;
use insta::internals::SettingsBindDropGuard;
use insta_cmd::{assert_cmd_snapshot, get_cargo_bin};
use ruff_python_ast::PythonVersion;
use std::fmt::Write;
use std::path::{Path, PathBuf};
use std::process::Command;
@@ -368,12 +369,12 @@ fn configuration_rule_severity() -> anyhow::Result<()> {
for a in range(0, int(y)):
x = a
print(x) # possibly-unresolved-reference
prin(x) # unresolved-reference
"#,
)?;
// Assert that there's a possibly unresolved reference diagnostic
// and that division-by-zero has a severity of error by default.
// Assert that there's an `unresolved-reference` diagnostic (error)
// and a `division-by-zero` diagnostic (error).
assert_cmd_snapshot!(case.command(), @r"
success: false
exit_code: 1
@@ -388,15 +389,15 @@ fn configuration_rule_severity() -> anyhow::Result<()> {
|
info: `lint:division-by-zero` is enabled by default
warning: lint:possibly-unresolved-reference: Name `x` used when possibly not defined
--> test.py:7:7
error: lint:unresolved-reference: Name `prin` used when not defined
--> test.py:7:1
|
5 | x = a
6 |
7 | print(x) # possibly-unresolved-reference
| ^
7 | prin(x) # unresolved-reference
| ^^^^
|
info: `lint:possibly-unresolved-reference` is enabled by default
info: `lint:unresolved-reference` is enabled by default
Found 2 diagnostics
@@ -408,7 +409,7 @@ fn configuration_rule_severity() -> anyhow::Result<()> {
r#"
[tool.ty.rules]
division-by-zero = "warn" # demote to warn
possibly-unresolved-reference = "ignore"
unresolved-reference = "ignore"
"#,
)?;
@@ -447,12 +448,12 @@ fn cli_rule_severity() -> anyhow::Result<()> {
for a in range(0, int(y)):
x = a
print(x) # possibly-unresolved-reference
prin(x) # unresolved-reference
"#,
)?;
// Assert that there's a possibly unresolved reference diagnostic
// and that division-by-zero has a severity of error by default.
// Assert that there's an `unresolved-reference` diagnostic (error),
// a `division-by-zero` (error) and an `unresolved-import` (error) diagnostic by default.
assert_cmd_snapshot!(case.command(), @r"
success: false
exit_code: 1
@@ -479,15 +480,15 @@ fn cli_rule_severity() -> anyhow::Result<()> {
|
info: `lint:division-by-zero` is enabled by default
warning: lint:possibly-unresolved-reference: Name `x` used when possibly not defined
--> test.py:9:7
error: lint:unresolved-reference: Name `prin` used when not defined
--> test.py:9:1
|
7 | x = a
8 |
9 | print(x) # possibly-unresolved-reference
| ^
9 | prin(x) # unresolved-reference
| ^^^^
|
info: `lint:possibly-unresolved-reference` is enabled by default
info: `lint:unresolved-reference` is enabled by default
Found 3 diagnostics
@@ -498,7 +499,7 @@ fn cli_rule_severity() -> anyhow::Result<()> {
case
.command()
.arg("--ignore")
.arg("possibly-unresolved-reference")
.arg("unresolved-reference")
.arg("--warn")
.arg("division-by-zero")
.arg("--warn")
@@ -550,12 +551,12 @@ fn cli_rule_severity_precedence() -> anyhow::Result<()> {
for a in range(0, int(y)):
x = a
print(x) # possibly-unresolved-reference
prin(x) # unresolved-reference
"#,
)?;
// Assert that there's a possibly unresolved reference diagnostic
// and that division-by-zero has a severity of error by default.
// Assert that there's an `unresolved-reference` diagnostic (error)
// and a `division-by-zero` (error) by default.
assert_cmd_snapshot!(case.command(), @r"
success: false
exit_code: 1
@@ -570,15 +571,15 @@ fn cli_rule_severity_precedence() -> anyhow::Result<()> {
|
info: `lint:division-by-zero` is enabled by default
warning: lint:possibly-unresolved-reference: Name `x` used when possibly not defined
--> test.py:7:7
error: lint:unresolved-reference: Name `prin` used when not defined
--> test.py:7:1
|
5 | x = a
6 |
7 | print(x) # possibly-unresolved-reference
| ^
7 | prin(x) # unresolved-reference
| ^^^^
|
info: `lint:possibly-unresolved-reference` is enabled by default
info: `lint:unresolved-reference` is enabled by default
Found 2 diagnostics
@@ -588,13 +589,13 @@ fn cli_rule_severity_precedence() -> anyhow::Result<()> {
assert_cmd_snapshot!(
case
.command()
.arg("--error")
.arg("possibly-unresolved-reference")
.arg("--warn")
.arg("unresolved-reference")
.arg("--warn")
.arg("division-by-zero")
// Override the error severity with warning
.arg("--ignore")
.arg("possibly-unresolved-reference"),
.arg("unresolved-reference"),
@r"
success: true
exit_code: 0
@@ -675,7 +676,7 @@ fn cli_unknown_rules() -> anyhow::Result<()> {
fn exit_code_only_warnings() -> anyhow::Result<()> {
let case = TestCase::with_file("test.py", r"print(x) # [unresolved-reference]")?;
assert_cmd_snapshot!(case.command(), @r"
assert_cmd_snapshot!(case.command().arg("--warn").arg("unresolved-reference"), @r"
success: true
exit_code: 0
----- stdout -----
@@ -685,7 +686,7 @@ fn exit_code_only_warnings() -> anyhow::Result<()> {
1 | print(x) # [unresolved-reference]
| ^
|
info: `lint:unresolved-reference` is enabled by default
info: `lint:unresolved-reference` was selected on the command line
Found 1 diagnostic
@@ -759,7 +760,7 @@ fn exit_code_only_info_and_error_on_warning_is_true() -> anyhow::Result<()> {
fn exit_code_no_errors_but_error_on_warning_is_true() -> anyhow::Result<()> {
let case = TestCase::with_file("test.py", r"print(x) # [unresolved-reference]")?;
assert_cmd_snapshot!(case.command().arg("--error-on-warning"), @r"
assert_cmd_snapshot!(case.command().arg("--error-on-warning").arg("--warn").arg("unresolved-reference"), @r"
success: false
exit_code: 1
----- stdout -----
@@ -769,7 +770,7 @@ fn exit_code_no_errors_but_error_on_warning_is_true() -> anyhow::Result<()> {
1 | print(x) # [unresolved-reference]
| ^
|
info: `lint:unresolved-reference` is enabled by default
info: `lint:unresolved-reference` was selected on the command line
Found 1 diagnostic
@@ -792,7 +793,7 @@ fn exit_code_no_errors_but_error_on_warning_is_enabled_in_configuration() -> any
),
])?;
assert_cmd_snapshot!(case.command(), @r"
assert_cmd_snapshot!(case.command().arg("--warn").arg("unresolved-reference"), @r"
success: false
exit_code: 1
----- stdout -----
@@ -802,7 +803,7 @@ fn exit_code_no_errors_but_error_on_warning_is_enabled_in_configuration() -> any
1 | print(x) # [unresolved-reference]
| ^
|
info: `lint:unresolved-reference` is enabled by default
info: `lint:unresolved-reference` was selected on the command line
Found 1 diagnostic
@@ -822,7 +823,7 @@ fn exit_code_both_warnings_and_errors() -> anyhow::Result<()> {
"#,
)?;
assert_cmd_snapshot!(case.command(), @r"
assert_cmd_snapshot!(case.command().arg("--warn").arg("unresolved-reference"), @r"
success: false
exit_code: 1
----- stdout -----
@@ -833,7 +834,7 @@ fn exit_code_both_warnings_and_errors() -> anyhow::Result<()> {
| ^
3 | print(4[1]) # [non-subscriptable]
|
info: `lint:unresolved-reference` is enabled by default
info: `lint:unresolved-reference` was selected on the command line
error: lint:non-subscriptable: Cannot subscript object of type `Literal[4]` with no `__getitem__` method
--> test.py:3:7
@@ -862,7 +863,7 @@ fn exit_code_both_warnings_and_errors_and_error_on_warning_is_true() -> anyhow::
"###,
)?;
assert_cmd_snapshot!(case.command().arg("--error-on-warning"), @r"
assert_cmd_snapshot!(case.command().arg("--warn").arg("unresolved-reference").arg("--error-on-warning"), @r"
success: false
exit_code: 1
----- stdout -----
@@ -873,7 +874,7 @@ fn exit_code_both_warnings_and_errors_and_error_on_warning_is_true() -> anyhow::
| ^
3 | print(4[1]) # [non-subscriptable]
|
info: `lint:unresolved-reference` is enabled by default
info: `lint:unresolved-reference` was selected on the command line
error: lint:non-subscriptable: Cannot subscript object of type `Literal[4]` with no `__getitem__` method
--> test.py:3:7
@@ -902,7 +903,7 @@ fn exit_code_exit_zero_is_true() -> anyhow::Result<()> {
"#,
)?;
assert_cmd_snapshot!(case.command().arg("--exit-zero"), @r"
assert_cmd_snapshot!(case.command().arg("--exit-zero").arg("--warn").arg("unresolved-reference"), @r"
success: true
exit_code: 0
----- stdout -----
@@ -913,7 +914,7 @@ fn exit_code_exit_zero_is_true() -> anyhow::Result<()> {
| ^
3 | print(4[1]) # [non-subscriptable]
|
info: `lint:unresolved-reference` is enabled by default
info: `lint:unresolved-reference` was selected on the command line
error: lint:non-subscriptable: Cannot subscript object of type `Literal[4]` with no `__getitem__` method
--> test.py:3:7
@@ -950,7 +951,7 @@ fn user_configuration() -> anyhow::Result<()> {
for a in range(0, int(y)):
x = a
print(x)
prin(x)
"#,
),
])?;
@@ -962,50 +963,6 @@ fn user_configuration() -> anyhow::Result<()> {
"XDG_CONFIG_HOME"
};
assert_cmd_snapshot!(
case.command().current_dir(case.root().join("project")).env(config_env_var, config_directory.as_os_str()),
@r"
success: true
exit_code: 0
----- stdout -----
warning: lint:division-by-zero: Cannot divide object of type `Literal[4]` by zero
--> main.py:2:5
|
2 | y = 4 / 0
| ^^^^^
3 |
4 | for a in range(0, int(y)):
|
info: `lint:division-by-zero` was selected in the configuration file
warning: lint:possibly-unresolved-reference: Name `x` used when possibly not defined
--> main.py:7:7
|
5 | x = a
6 |
7 | print(x)
| ^
|
info: `lint:possibly-unresolved-reference` is enabled by default
Found 2 diagnostics
----- stderr -----
"
);
// The user-level configuration promotes `possibly-unresolved-reference` to an error.
// Changing the level for `division-by-zero` has no effect, because the project-level configuration
// has higher precedence.
case.write_file(
config_directory.join("ty/ty.toml"),
r#"
[rules]
division-by-zero = "error"
possibly-unresolved-reference = "error"
"#,
)?;
assert_cmd_snapshot!(
case.command().current_dir(case.root().join("project")).env(config_env_var, config_directory.as_os_str()),
@r"
@@ -1022,15 +979,59 @@ fn user_configuration() -> anyhow::Result<()> {
|
info: `lint:division-by-zero` was selected in the configuration file
error: lint:possibly-unresolved-reference: Name `x` used when possibly not defined
--> main.py:7:7
error: lint:unresolved-reference: Name `prin` used when not defined
--> main.py:7:1
|
5 | x = a
6 |
7 | print(x)
| ^
7 | prin(x)
| ^^^^
|
info: `lint:possibly-unresolved-reference` was selected in the configuration file
info: `lint:unresolved-reference` is enabled by default
Found 2 diagnostics
----- stderr -----
"
);
// The user-level configuration sets the severity for `unresolved-reference` to warn.
// Changing the level for `division-by-zero` has no effect, because the project-level configuration
// has higher precedence.
case.write_file(
config_directory.join("ty/ty.toml"),
r#"
[rules]
division-by-zero = "error"
unresolved-reference = "warn"
"#,
)?;
assert_cmd_snapshot!(
case.command().current_dir(case.root().join("project")).env(config_env_var, config_directory.as_os_str()),
@r"
success: true
exit_code: 0
----- stdout -----
warning: lint:division-by-zero: Cannot divide object of type `Literal[4]` by zero
--> main.py:2:5
|
2 | y = 4 / 0
| ^^^^^
3 |
4 | for a in range(0, int(y)):
|
info: `lint:division-by-zero` was selected in the configuration file
warning: lint:unresolved-reference: Name `prin` used when not defined
--> main.py:7:1
|
5 | x = a
6 |
7 | prin(x)
| ^^^^
|
info: `lint:unresolved-reference` was selected in the configuration file
Found 2 diagnostics
@@ -1180,7 +1181,7 @@ fn concise_diagnostics() -> anyhow::Result<()> {
"#,
)?;
assert_cmd_snapshot!(case.command().arg("--output-format=concise"), @r"
assert_cmd_snapshot!(case.command().arg("--output-format=concise").arg("--warn").arg("unresolved-reference"), @r"
success: false
exit_code: 1
----- stdout -----
@@ -1263,6 +1264,80 @@ fn can_handle_large_binop_expressions() -> anyhow::Result<()> {
Ok(())
}
#[test]
fn defaults_to_a_new_python_version() -> anyhow::Result<()> {
let case = TestCase::with_files([
(
"ty.toml",
&*format!(
r#"
[environment]
python-version = "{}"
python-platform = "linux"
"#,
PythonVersion::default()
),
),
(
"main.py",
r#"
import os
os.grantpt(1) # only available on unix, Python 3.13 or newer
"#,
),
])?;
assert_cmd_snapshot!(case.command(), @r"
success: false
exit_code: 1
----- stdout -----
error: lint:unresolved-attribute: Type `<module 'os'>` has no attribute `grantpt`
--> main.py:4:1
|
2 | import os
3 |
4 | os.grantpt(1) # only available on unix, Python 3.13 or newer
| ^^^^^^^^^^
|
info: `lint:unresolved-attribute` is enabled by default
Found 1 diagnostic
----- stderr -----
");
// Use default (which should be latest supported)
let case = TestCase::with_files([
(
"ty.toml",
r#"
[environment]
python-platform = "linux"
"#,
),
(
"main.py",
r#"
import os
os.grantpt(1) # only available on unix, Python 3.13 or newer
"#,
),
])?;
assert_cmd_snapshot!(case.command(), @r"
success: true
exit_code: 0
----- stdout -----
All checks passed!
----- stderr -----
");
Ok(())
}
struct TestCase {
_temp_dir: TempDir,
_settings_scope: SettingsBindDropGuard,

View File

@@ -136,6 +136,7 @@ mod tests {
Annotation, Diagnostic, DiagnosticFormat, DiagnosticId, DisplayDiagnosticConfig, LintName,
Severity, Span,
};
use ruff_db::Upcast;
use ruff_text_size::{Ranged, TextRange};
#[test]
@@ -773,7 +774,7 @@ mod tests {
.message("Cursor offset"),
);
write!(buf, "{}", diagnostic.display(&self.db, &config)).unwrap();
write!(buf, "{}", diagnostic.display(&self.db.upcast(), &config)).unwrap();
buf
}

View File

@@ -191,7 +191,7 @@ mod tests {
Program::from_settings(
&db,
ProgramSettings {
python_version: PythonVersion::latest(),
python_version: PythonVersion::latest_ty(),
python_platform: PythonPlatform::default(),
search_paths: SearchPathSettings {
extra_paths: vec![],

View File

@@ -204,6 +204,7 @@ mod tests {
use ruff_db::diagnostic::{Diagnostic, DiagnosticFormat, DisplayDiagnosticConfig};
use ruff_db::files::{system_path_to_file, File};
use ruff_db::system::{DbWithWritableSystem, SystemPath, SystemPathBuf};
use ruff_db::Upcast;
use ruff_python_ast::PythonVersion;
use ruff_text_size::TextSize;
use ty_python_semantic::{
@@ -227,7 +228,7 @@ mod tests {
Program::from_settings(
&db,
ProgramSettings {
python_version: PythonVersion::latest(),
python_version: PythonVersion::latest_ty(),
python_platform: PythonPlatform::default(),
search_paths: SearchPathSettings {
extra_paths: vec![],
@@ -285,7 +286,7 @@ mod tests {
.format(DiagnosticFormat::Full);
for diagnostic in diagnostics {
let diag = diagnostic.into_diagnostic();
write!(buf, "{}", diag.display(&self.db, &config)).unwrap();
write!(buf, "{}", diag.display(&self.db.upcast(), &config)).unwrap();
}
buf

View File

@@ -126,6 +126,16 @@ impl Upcast<dyn IdeDb> for ProjectDatabase {
}
}
impl Upcast<dyn Db> for ProjectDatabase {
fn upcast(&self) -> &(dyn Db + 'static) {
self
}
fn upcast_mut(&mut self) -> &mut (dyn Db + 'static) {
self
}
}
#[salsa::db]
impl IdeDb for ProjectDatabase {}

View File

@@ -61,7 +61,7 @@ impl Options {
.environment
.as_ref()
.and_then(|env| env.python_version.as_deref().copied())
.unwrap_or_default();
.unwrap_or(PythonVersion::latest_ty());
let python_platform = self
.environment
.as_ref()

View File

@@ -103,7 +103,7 @@ impl Project {
let major =
u8::try_from(major).map_err(|_| ResolveRequiresPythonError::TooLargeMajor(major))?;
let minor =
u8::try_from(minor).map_err(|_| ResolveRequiresPythonError::TooLargeMajor(minor))?;
u8::try_from(minor).map_err(|_| ResolveRequiresPythonError::TooLargeMinor(minor))?;
Ok(Some(
requires_python

View File

@@ -45,6 +45,18 @@ reveal_type(a | a) # revealed: Literal[True]
reveal_type(a | b) # revealed: Literal[True]
reveal_type(b | a) # revealed: Literal[True]
reveal_type(b | b) # revealed: Literal[False]
# bitwise AND
reveal_type(a & a) # revealed: Literal[True]
reveal_type(a & b) # revealed: Literal[False]
reveal_type(b & a) # revealed: Literal[False]
reveal_type(b & b) # revealed: Literal[False]
# bitwise XOR
reveal_type(a ^ a) # revealed: Literal[False]
reveal_type(a ^ b) # revealed: Literal[True]
reveal_type(b ^ a) # revealed: Literal[True]
reveal_type(b ^ b) # revealed: Literal[False]
```
## Arithmetic with a variable

View File
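The `Literal[True]`/`Literal[False]` results in the mdtest above mirror CPython's runtime behaviour for `bool` operands; a quick sanity sketch:

```py
# Python's bitwise operators on bools return bools, which is exactly what
# the Literal[True]/Literal[False] inferences encode.
a, b = True, False

or_results = (a | a, a | b, b | a, b | b)   # (True, True, True, False)
and_results = (a & a, a & b, b & a, b & b)  # (True, False, False, False)
xor_results = (a ^ a, a ^ b, b ^ a, b ^ b)  # (False, True, True, False)
```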

@@ -9,6 +9,9 @@ reveal_type(3 * -1) # revealed: Literal[-3]
reveal_type(-3 // 3) # revealed: Literal[-1]
reveal_type(-3 / 3) # revealed: float
reveal_type(5 % 3) # revealed: Literal[2]
reveal_type(3 | 4) # revealed: Literal[7]
reveal_type(5 & 6) # revealed: Literal[4]
reveal_type(7 ^ 2) # revealed: Literal[5]
# error: [unsupported-operator] "Operator `+` is unsupported between objects of type `Literal[2]` and `Literal["f"]`"
reveal_type(2 + "f") # revealed: Unknown

View File
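The folded `Literal` values above are plain integer bitwise arithmetic, easily checked at runtime:

```py
# 3 | 4 -> 0b011 | 0b100 = 0b111 = 7
# 5 & 6 -> 0b101 & 0b110 = 0b100 = 4
# 7 ^ 2 -> 0b111 ^ 0b010 = 0b101 = 5
results = (3 | 4, 5 & 6, 7 ^ 2)
```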

@@ -252,12 +252,8 @@ def _():
## Load before `global` declaration
This should be an error, but it's not yet.
TODO implement `SemanticSyntaxContext::global`
```py
def f():
x = 1
global x
global x # error: [invalid-syntax] "name `x` is used prior to global declaration"
```

View File

@@ -269,6 +269,8 @@ import subexporter
__all__ = []
__all__.extend(["C", "D"])
__all__.extend(("E", "F"))
__all__.extend({"G", "H"})
__all__.extend(subexporter.__all__)
class C: ...
@@ -281,7 +283,7 @@ class D: ...
import exporter
from ty_extensions import dunder_all_names
# revealed: tuple[Literal["A"], Literal["B"], Literal["C"], Literal["D"]]
# revealed: tuple[Literal["A"], Literal["B"], Literal["C"], Literal["D"], Literal["E"], Literal["F"], Literal["G"], Literal["H"]]
reveal_type(dunder_all_names(exporter))
```

View File

@@ -32,8 +32,14 @@ def f():
y = ""
global x
# TODO: error: [invalid-assignment] "Object of type `Literal[""]` is not assignable to `int`"
# error: [invalid-assignment] "Object of type `Literal[""]` is not assignable to `int`"
x = ""
global z
# error: [invalid-assignment] "Object of type `Literal[""]` is not assignable to `int`"
z = ""
z: int
```
## Nested intervening scope
@@ -48,8 +54,7 @@ def outer():
def inner():
global x
# TODO: revealed: int
reveal_type(x) # revealed: str
reveal_type(x) # revealed: int
```
## Narrowing
@@ -87,8 +92,7 @@ def f():
```py
def f():
global x
# TODO this should also not be an error
y = x # error: [unresolved-reference] "Name `x` used when not defined"
y = x
x = 1 # No error.
x = 2
@@ -99,79 +103,111 @@ x = 2
Using a name prior to its `global` declaration in the same scope is a syntax error.
```py
x = 1
def f():
print(x) # TODO: error: [invalid-syntax] name `x` is used prior to global declaration
global x
print(x)
global x # error: [invalid-syntax] "name `x` is used prior to global declaration"
print(x)
def f():
global x
print(x) # TODO: error: [invalid-syntax] name `x` is used prior to global declaration
global x
print(x)
global x # error: [invalid-syntax] "name `x` is used prior to global declaration"
print(x)
def f():
print(x) # TODO: error: [invalid-syntax] name `x` is used prior to global declaration
global x, y
print(x)
global x, y # error: [invalid-syntax] "name `x` is used prior to global declaration"
print(x)
def f():
global x, y
print(x) # TODO: error: [invalid-syntax] name `x` is used prior to global declaration
global x, y
print(x)
global x, y # error: [invalid-syntax] "name `x` is used prior to global declaration"
print(x)
def f():
x = 1 # TODO: error: [invalid-syntax] name `x` is used prior to global declaration
global x
x = 1
global x # error: [invalid-syntax] "name `x` is used prior to global declaration"
x = 1
def f():
global x
x = 1 # TODO: error: [invalid-syntax] name `x` is used prior to global declaration
global x
x = 1
global x # error: [invalid-syntax] "name `x` is used prior to global declaration"
x = 1
def f():
del x # TODO: error: [invalid-syntax] name `x` is used prior to global declaration
global x, y
del x
global x, y # error: [invalid-syntax] "name `x` is used prior to global declaration"
del x
def f():
global x, y
del x # TODO: error: [invalid-syntax] name `x` is used prior to global declaration
global x, y
del x
global x, y # error: [invalid-syntax] "name `x` is used prior to global declaration"
del x
def f():
del x # TODO: error: [invalid-syntax] name `x` is used prior to global declaration
global x
del x
global x # error: [invalid-syntax] "name `x` is used prior to global declaration"
del x
def f():
global x
del x # TODO: error: [invalid-syntax] name `x` is used prior to global declaration
global x
del x
global x # error: [invalid-syntax] "name `x` is used prior to global declaration"
del x
def f():
del x # TODO: error: [invalid-syntax] name `x` is used prior to global declaration
global x, y
del x
global x, y # error: [invalid-syntax] "name `x` is used prior to global declaration"
del x
def f():
global x, y
del x # TODO: error: [invalid-syntax] name `x` is used prior to global declaration
global x, y
del x
global x, y # error: [invalid-syntax] "name `x` is used prior to global declaration"
del x
def f():
print(f"{x=}") # TODO: error: [invalid-syntax] name `x` is used prior to global declaration
global x
print(f"{x=}")
global x # error: [invalid-syntax] "name `x` is used prior to global declaration"
# still an error in module scope
x = None # TODO: error: [invalid-syntax] name `x` is used prior to global declaration
global x
x = None
global x # error: [invalid-syntax] "name `x` is used prior to global declaration"
```
## Local bindings override preceding `global` bindings
```py
x = 42
def f():
global x
reveal_type(x) # revealed: Unknown | Literal[42]
x = "56"
reveal_type(x) # revealed: Literal["56"]
```
## Local assignment prevents falling back to the outer scope
```py
x = 42
def f():
# error: [unresolved-reference] "Name `x` used when not defined"
reveal_type(x) # revealed: Unknown
x = "56"
reveal_type(x) # revealed: Literal["56"]
```
## Annotating a `global` binding is a syntax error
```py
x: int = 1
def f():
global x
x: str = "foo" # TODO: error: [invalid-syntax] "annotated name 'x' can't be global"
```

View File
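The `invalid-syntax` diagnostics above correspond to a check CPython itself performs at compile time; a minimal reproduction:

```py
# CPython rejects a load that precedes the `global` declaration for the
# same name in the same scope.
src = """
def f():
    print(x)
    global x
"""
try:
    compile(src, "<demo>", "exec")
    raised = False
except SyntaxError as err:
    raised = True
    message = str(err)  # "name 'x' is used prior to global declaration ..."
```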

@@ -59,7 +59,7 @@ info: revealed-type: Revealed type
```
```
warning: lint:possibly-unresolved-reference: Name `x` used when possibly not defined
info: lint:possibly-unresolved-reference: Name `x` used when possibly not defined
--> src/mdtest_snippet.py:16:17
|
14 | # revealed: Unknown

View File

@@ -345,7 +345,7 @@ info: `lint:duplicate-base` is enabled by default
```
```
warning: lint:unused-ignore-comment
info: lint:unused-ignore-comment
--> src/mdtest_snippet.py:72:9
|
70 | A,
@@ -388,7 +388,7 @@ info: `lint:duplicate-base` is enabled by default
```
```
warning: lint:unused-ignore-comment
info: lint:unused-ignore-comment
--> src/mdtest_snippet.py:81:13
|
79 | ):

View File

@@ -94,14 +94,16 @@ impl<'db> DunderAllNamesCollector<'db> {
}
/// Extends the current set of names with the names from the given expression which can be
/// either a list of names or a module's `__all__` variable.
/// either a list/tuple/set of string-literal names or a module's `__all__` variable.
///
/// Returns `true` if the expression is a valid list or module `__all__`, `false` otherwise.
fn extend_from_list_or_module(&mut self, expr: &ast::Expr) -> bool {
/// Returns `true` if the expression is a valid list/tuple/set or module `__all__`, `false` otherwise.
fn extend(&mut self, expr: &ast::Expr) -> bool {
match expr {
// `__all__ += [...]`
// `__all__.extend([...])`
ast::Expr::List(ast::ExprList { elts, .. }) => self.add_names(elts),
ast::Expr::List(ast::ExprList { elts, .. })
| ast::Expr::Tuple(ast::ExprTuple { elts, .. })
| ast::Expr::Set(ast::ExprSet { elts, .. }) => self.add_names(elts),
// `__all__ += module.__all__`
// `__all__.extend(module.__all__)`
@@ -155,7 +157,7 @@ impl<'db> DunderAllNamesCollector<'db> {
// `__all__.extend([...])`
// `__all__.extend(module.__all__)`
"extend" => {
if !self.extend_from_list_or_module(argument) {
if !self.extend(argument) {
return false;
}
}
@@ -330,7 +332,7 @@ impl<'db> StatementVisitor<'db> for DunderAllNamesCollector<'db> {
if !is_dunder_all(target) {
return;
}
if !self.extend_from_list_or_module(value) {
if !self.extend(value) {
self.invalid = true;
}
}

View File

@@ -475,12 +475,26 @@ impl RuleSelection {
/// Creates a new rule selection from all known lints in the registry that are enabled
/// according to their default severity.
pub fn from_registry(registry: &LintRegistry) -> Self {
Self::from_registry_with_default(registry, None)
}
/// Creates a new rule selection from all known lints in the registry, including lints that are disabled by default.
/// Lints that are disabled by default use the `default_severity`.
pub fn all(registry: &LintRegistry, default_severity: Severity) -> Self {
Self::from_registry_with_default(registry, Some(default_severity))
}
fn from_registry_with_default(
registry: &LintRegistry,
default_severity: Option<Severity>,
) -> Self {
let lints = registry
.lints()
.iter()
.filter_map(|lint| {
Severity::try_from(lint.default_level())
.ok()
.or(default_severity)
.map(|severity| (*lint, (severity, LintSource::Default)))
})
.collect();

View File

@@ -176,6 +176,9 @@ pub(crate) struct SemanticIndex<'db> {
/// Map from the file-local [`FileScopeId`] to the salsa-ingredient [`ScopeId`].
scope_ids_by_scope: IndexVec<FileScopeId, ScopeId<'db>>,
/// Map from the file-local [`FileScopeId`] to the set of explicit-global symbols it contains.
globals_by_scope: FxHashMap<FileScopeId, FxHashSet<ScopedSymbolId>>,
/// Use-def map for each scope in this file.
use_def_maps: IndexVec<FileScopeId, Arc<UseDefMap<'db>>>,
@@ -255,6 +258,16 @@ impl<'db> SemanticIndex<'db> {
self.scope_ids_by_scope.iter().copied()
}
pub(crate) fn symbol_is_global_in_scope(
&self,
symbol: ScopedSymbolId,
scope: FileScopeId,
) -> bool {
self.globals_by_scope
.get(&scope)
.is_some_and(|globals| globals.contains(&symbol))
}
/// Returns the id of the parent scope.
pub(crate) fn parent_scope_id(&self, scope_id: FileScopeId) -> Option<FileScopeId> {
let scope = self.scope(scope_id);

View File
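A simplified Python sketch of the `globals_by_scope` bookkeeping (illustrative names, not ty's API): each scope records the symbols declared `global` in it, and membership queries fall through to an empty set.

```py
from collections import defaultdict

# Map each scope id to the set of names declared `global` in that scope.
globals_by_scope: defaultdict[int, set[str]] = defaultdict(set)

def record_global(scope_id: int, name: str) -> None:
    # Called once per name in a `global` statement.
    globals_by_scope[scope_id].add(name)

def symbol_is_global_in_scope(name: str, scope_id: int) -> bool:
    # `.get` avoids inserting empty entries for scopes with no globals.
    return name in globals_by_scope.get(scope_id, set())

record_global(1, "x")
```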

@@ -12,7 +12,7 @@ use ruff_python_ast::name::Name;
use ruff_python_ast::visitor::{walk_expr, walk_pattern, walk_stmt, Visitor};
use ruff_python_ast::{self as ast, PySourceType, PythonVersion};
use ruff_python_parser::semantic_errors::{
SemanticSyntaxChecker, SemanticSyntaxContext, SemanticSyntaxError,
SemanticSyntaxChecker, SemanticSyntaxContext, SemanticSyntaxError, SemanticSyntaxErrorKind,
};
use ruff_text_size::TextRange;
@@ -106,6 +106,7 @@ pub(super) struct SemanticIndexBuilder<'db> {
use_def_maps: IndexVec<FileScopeId, UseDefMapBuilder<'db>>,
scopes_by_node: FxHashMap<NodeWithScopeKey, FileScopeId>,
scopes_by_expression: FxHashMap<ExpressionNodeKey, FileScopeId>,
globals_by_scope: FxHashMap<FileScopeId, FxHashSet<ScopedSymbolId>>,
definitions_by_node: FxHashMap<DefinitionNodeKey, Definitions<'db>>,
expressions_by_node: FxHashMap<ExpressionNodeKey, Expression<'db>>,
imported_modules: FxHashSet<ModuleName>,
@@ -144,6 +145,7 @@ impl<'db> SemanticIndexBuilder<'db> {
scopes_by_node: FxHashMap::default(),
definitions_by_node: FxHashMap::default(),
expressions_by_node: FxHashMap::default(),
globals_by_scope: FxHashMap::default(),
imported_modules: FxHashSet::default(),
generator_functions: FxHashSet::default(),
@@ -1085,6 +1087,7 @@ impl<'db> SemanticIndexBuilder<'db> {
self.scopes_by_node.shrink_to_fit();
self.generator_functions.shrink_to_fit();
self.eager_snapshots.shrink_to_fit();
self.globals_by_scope.shrink_to_fit();
SemanticIndex {
symbol_tables,
@@ -1093,6 +1096,7 @@ impl<'db> SemanticIndexBuilder<'db> {
definitions_by_node: self.definitions_by_node,
expressions_by_node: self.expressions_by_node,
scope_ids_by_scope: self.scope_ids_by_scope,
globals_by_scope: self.globals_by_scope,
ast_ids,
scopes_by_expression: self.scopes_by_expression,
scopes_by_node: self.scopes_by_node,
@@ -1898,7 +1902,38 @@ where
// Everything in the current block after a terminal statement is unreachable.
self.mark_unreachable();
}
ast::Stmt::Global(ast::StmtGlobal { range: _, names }) => {
for name in names {
let symbol_id = self.add_symbol(name.id.clone());
let symbol_table = self.current_symbol_table();
let symbol = symbol_table.symbol(symbol_id);
if symbol.is_bound() || symbol.is_declared() || symbol.is_used() {
self.report_semantic_error(SemanticSyntaxError {
kind: SemanticSyntaxErrorKind::LoadBeforeGlobalDeclaration {
name: name.to_string(),
start: name.range.start(),
},
range: name.range,
python_version: self.python_version,
});
}
let scope_id = self.current_scope();
self.globals_by_scope
.entry(scope_id)
.or_default()
.insert(symbol_id);
}
walk_stmt(self, stmt);
}
ast::Stmt::Delete(ast::StmtDelete { targets, range: _ }) => {
for target in targets {
if let ast::Expr::Name(ast::ExprName { id, .. }) = target {
let symbol_id = self.add_symbol(id.clone());
self.current_symbol_table().mark_symbol_used(symbol_id);
}
}
walk_stmt(self, stmt);
}
_ => {
walk_stmt(self, stmt);
}
@@ -2387,7 +2422,8 @@ impl SemanticSyntaxContext for SemanticIndexBuilder<'_> {
self.source_text().as_str()
}
// TODO(brent) handle looking up `global` bindings
// We handle the one syntax error that relies on this method (`LoadBeforeGlobalDeclaration`)
// directly in `visit_stmt`, so this just returns a placeholder value.
fn global(&self, _name: &str) -> Option<TextRange> {
None
}

View File

@@ -33,7 +33,7 @@ declare_lint! {
pub(crate) static UNUSED_IGNORE_COMMENT = {
summary: "detects unused `type: ignore` comments",
status: LintStatus::preview("1.0.0"),
default_level: Level::Warn,
default_level: Level::Ignore,
}
}

View File

@@ -2734,7 +2734,7 @@ mod tests {
Program::get(&db)
.set_python_version(&mut db)
.to(PythonVersion::latest());
.to(PythonVersion::latest_ty());
for class in KnownClass::iter() {
assert_ne!(

View File

@@ -521,7 +521,7 @@ impl Drop for DiagnosticGuard<'_, '_> {
};
let expected_file = self.ctx.file();
let got_file = ann.get_span().file();
let got_file = ann.get_span().expect_ty_file();
assert_eq!(
expected_file,
got_file,

View File

@@ -646,7 +646,7 @@ declare_lint! {
/// must be assigned the value `False` at runtime; the type checker will consider its value to
/// be `True`. If annotated, it must be annotated as a type that can accept `bool` values.
pub(crate) static INVALID_TYPE_CHECKING_CONSTANT = {
summary: "detects invalid TYPE_CHECKING constant assignments",
summary: "detects invalid `TYPE_CHECKING` constant assignments",
status: LintStatus::preview("1.0.0"),
default_level: Level::Error,
}
@@ -851,7 +851,7 @@ declare_lint! {
pub(crate) static POSSIBLY_UNRESOLVED_REFERENCE = {
summary: "detects references to possibly undefined names",
status: LintStatus::preview("1.0.0"),
default_level: Level::Warn,
default_level: Level::Ignore,
}
}
@@ -1039,7 +1039,7 @@ declare_lint! {
pub(crate) static UNRESOLVED_REFERENCE = {
summary: "detects references to names that are not defined",
status: LintStatus::preview("1.0.0"),
default_level: Level::Warn,
default_level: Level::Error,
}
}

View File

@@ -56,7 +56,7 @@ use crate::semantic_index::definition::{
use crate::semantic_index::expression::{Expression, ExpressionKind};
use crate::semantic_index::narrowing_constraints::ConstraintKey;
use crate::semantic_index::symbol::{
FileScopeId, NodeWithScopeKind, NodeWithScopeRef, ScopeId, ScopeKind,
FileScopeId, NodeWithScopeKind, NodeWithScopeRef, ScopeId, ScopeKind, ScopedSymbolId,
};
use crate::semantic_index::{semantic_index, EagerSnapshotResult, SemanticIndex};
use crate::symbol::{
@@ -1387,9 +1387,29 @@ impl<'db> TypeInferenceBuilder<'db> {
.kind(self.db())
.category(self.context.in_stub())
.is_binding());
let use_def = self.index.use_def_map(binding.file_scope(self.db()));
let declarations = use_def.declarations_at_binding(binding);
let file_scope_id = binding.file_scope(self.db());
let symbol_table = self.index.symbol_table(file_scope_id);
let use_def = self.index.use_def_map(file_scope_id);
let mut bound_ty = ty;
let symbol_id = binding.symbol(self.db());
let global_use_def_map = self.index.use_def_map(FileScopeId::global());
let declarations = if self.skip_non_global_scopes(file_scope_id, symbol_id) {
let symbol_name = symbol_table.symbol(symbol_id).name();
match self
.index
.symbol_table(FileScopeId::global())
.symbol_id_by_name(symbol_name)
{
Some(id) => global_use_def_map.public_declarations(id),
// This case is a syntax error (load before global declaration) but ignore that here
None => use_def.declarations_at_binding(binding),
}
} else {
use_def.declarations_at_binding(binding)
};
let declared_ty = symbol_from_declarations(self.db(), declarations)
.map(|SymbolAndQualifiers { symbol, .. }| {
symbol.ignore_possibly_unbound().unwrap_or(Type::unknown())
@@ -1415,6 +1435,19 @@ impl<'db> TypeInferenceBuilder<'db> {
self.types.bindings.insert(binding, bound_ty);
}
/// Returns `true` if `symbol_id` should be looked up in the global scope, skipping intervening
/// local scopes.
fn skip_non_global_scopes(
&self,
file_scope_id: FileScopeId,
symbol_id: ScopedSymbolId,
) -> bool {
!file_scope_id.is_global()
&& self
.index
.symbol_is_global_in_scope(symbol_id, file_scope_id)
}
fn add_declaration(
&mut self,
node: AnyNodeRef,
@@ -5256,6 +5289,20 @@ impl<'db> TypeInferenceBuilder<'db> {
}
};
let current_file = self.file();
let skip_non_global_scopes = symbol_table
.symbol_id_by_name(symbol_name)
.is_some_and(|symbol_id| self.skip_non_global_scopes(file_scope_id, symbol_id));
if skip_non_global_scopes {
return symbol(
db,
FileScopeId::global().to_scope_id(db, current_file),
symbol_name,
);
}
// If it's a function-like scope and there is one or more binding in this scope (but
// none of those bindings are visible from where we are in the control flow), we cannot
// fallback to any bindings in enclosing scopes. As such, we can immediately short-circuit
@@ -5273,8 +5320,6 @@ impl<'db> TypeInferenceBuilder<'db> {
constraint_keys.push((file_scope_id, ConstraintKey::UseId(use_id)));
}
let current_file = self.file();
// Walk up parent scopes looking for a possible enclosing scope that may have a
// definition of this name visible to us (would be `LOAD_DEREF` at runtime.)
// Note that we skip the scope containing the use that we are resolving, since we
@@ -5773,6 +5818,18 @@ impl<'db> TypeInferenceBuilder<'db> {
}
}),
(Type::IntLiteral(n), Type::IntLiteral(m), ast::Operator::BitOr) => {
Some(Type::IntLiteral(n | m))
}
(Type::IntLiteral(n), Type::IntLiteral(m), ast::Operator::BitAnd) => {
Some(Type::IntLiteral(n & m))
}
(Type::IntLiteral(n), Type::IntLiteral(m), ast::Operator::BitXor) => {
Some(Type::IntLiteral(n ^ m))
}
(Type::BytesLiteral(lhs), Type::BytesLiteral(rhs), ast::Operator::Add) => {
let bytes = [&**lhs.value(self.db()), &**rhs.value(self.db())].concat();
Some(Type::bytes_literal(self.db(), &bytes))
@@ -5828,6 +5885,14 @@ impl<'db> TypeInferenceBuilder<'db> {
Some(Type::BooleanLiteral(b1 | b2))
}
(Type::BooleanLiteral(b1), Type::BooleanLiteral(b2), ast::Operator::BitAnd) => {
Some(Type::BooleanLiteral(b1 & b2))
}
(Type::BooleanLiteral(b1), Type::BooleanLiteral(b2), ast::Operator::BitXor) => {
Some(Type::BooleanLiteral(b1 ^ b2))
}
(Type::BooleanLiteral(bool_value), right, op) => self.infer_binary_expression_type(
node,
emitted_division_by_zero_diagnostic,

View File
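The inference changes above track what actually happens at runtime: an assignment under a `global` declaration rebinds the module-level name, skipping the local scope entirely.

```py
x = 42

def f():
    global x
    x = "56"  # rebinds module-level x, not a local

f()
# x is now "56" at module scope
```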

@@ -76,8 +76,9 @@ fn to_lsp_diagnostic(
encoding: crate::PositionEncoding,
) -> Diagnostic {
let range = if let Some(span) = diagnostic.primary_span() {
let index = line_index(db.upcast(), span.file());
let source = source_text(db.upcast(), span.file());
let file = span.expect_ty_file();
let index = line_index(db.upcast(), file);
let source = source_text(db.upcast(), file);
span.range()
.map(|range| range.to_lsp_range(&source, &index, encoding))

View File

@@ -1,4 +1,5 @@
use camino::{Utf8Component, Utf8PathBuf};
use ruff_db::diagnostic::Severity;
use ruff_db::files::{File, Files};
use ruff_db::system::{
CaseSensitivity, DbWithWritableSystem, InMemorySystem, OsSystem, System, SystemPath,
@@ -25,7 +26,7 @@ pub(crate) struct Db {
impl Db {
pub(crate) fn setup() -> Self {
let rule_selection = RuleSelection::from_registry(default_lint_registry());
let rule_selection = RuleSelection::all(default_lint_registry(), Severity::Info);
Self {
system: MdtestSystem::in_memory(),

View File

@@ -13,6 +13,7 @@ use ruff_db::panic::catch_unwind;
use ruff_db::parsed::parsed_module;
use ruff_db::system::{DbWithWritableSystem as _, SystemPath, SystemPathBuf};
use ruff_db::testing::{setup_logging, setup_logging_with_filter};
use ruff_db::Upcast;
use ruff_source_file::{LineIndex, OneIndexed};
use std::backtrace::BacktraceStatus;
use std::fmt::Write;
@@ -464,7 +465,7 @@ fn create_diagnostic_snapshot(
writeln!(snapshot).unwrap();
}
writeln!(snapshot, "```").unwrap();
write!(snapshot, "{}", diag.display(db, &display_config)).unwrap();
write!(snapshot, "{}", diag.display(&db.upcast(), &display_config)).unwrap();
writeln!(snapshot, "```").unwrap();
}
snapshot

View File

@@ -373,7 +373,7 @@ impl Diagnostic {
self.inner.primary_span().and_then(|span| {
Some(Range::from_file_range(
&workspace.db,
FileRange::new(span.file(), span.range()?),
FileRange::new(span.expect_ty_file(), span.range()?),
workspace.position_encoding,
))
})
@@ -383,7 +383,7 @@ impl Diagnostic {
pub fn display(&self, workspace: &Workspace) -> JsString {
let config = DisplayDiagnosticConfig::default().color(false);
self.inner
.display(workspace.db.upcast(), &config)
.display(&workspace.db.upcast(), &config)
.to_string()
.into()
}

View File

@@ -30,7 +30,7 @@ ty_python_semantic = { path = "../crates/ty_python_semantic" }
ty_vendored = { path = "../crates/ty_vendored" }
libfuzzer-sys = { git = "https://github.com/rust-fuzz/libfuzzer", default-features = false }
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "2c869364a9592d06fdf45c422e1e4a7265a8fe8a" }
salsa = { git = "https://github.com/salsa-rs/salsa.git", rev = "af69cc11146352ec2eb6d972537e99c473ac3748" }
similar = { version = "2.5.0" }
tracing = { version = "0.1.40" }

View File

@@ -32,17 +32,19 @@ struct TestDb {
files: Files,
system: TestSystem,
vendored: VendoredFileSystem,
events: Arc<Mutex<Vec<salsa::Event>>>,
rule_selection: Arc<RuleSelection>,
}
impl TestDb {
fn new() -> Self {
Self {
storage: salsa::Storage::default(),
storage: salsa::Storage::new(Some(Box::new({
move |event| {
tracing::trace!("event: {:?}", event);
}
}))),
system: TestSystem::default(),
vendored: ty_vendored::file_system().clone(),
events: Arc::default(),
files: Files::default(),
rule_selection: RuleSelection::from_registry(default_lint_registry()).into(),
}
@@ -103,14 +105,7 @@ impl SemanticDb for TestDb {
}
#[salsa::db]
impl salsa::Database for TestDb {
fn salsa_event(&self, event: &dyn Fn() -> salsa::Event) {
let event = event();
tracing::trace!("event: {:?}", event);
let mut events = self.events.lock().unwrap();
events.push(event);
}
}
impl salsa::Database for TestDb {}
fn setup_db() -> TestDb {
let db = TestDb::new();

View File

@@ -1,8 +1,12 @@
"""Update ruff.json in schemastore.
This script will clone astral-sh/schemastore, update the schema and push the changes
This script will clone `astral-sh/schemastore`, update the schema and push the changes
to a new branch tagged with the ruff git hash. You should see a URL to create the PR
to schemastore in the CLI.
Usage:
uv run --only-dev scripts/update_schemastore.py
"""
from __future__ import annotations
@@ -14,11 +18,17 @@ from subprocess import check_call, check_output
from tempfile import TemporaryDirectory
from typing import NamedTuple, assert_never
ruff_repo = "https://github.com/astral-sh/ruff"
root = Path(
check_output(["git", "rev-parse", "--show-toplevel"], text=True).strip(),
)
ruff_json = Path("schemas/json/ruff.json")
# The remote URL for the `ruff` repository.
RUFF_REPO = "https://github.com/astral-sh/ruff"
# The path to the root of the `ruff` repository.
RUFF_ROOT = Path(__file__).parent.parent
# The path to the JSON schema in the `ruff` repository.
RUFF_SCHEMA = RUFF_ROOT / "ruff.schema.json"
# The path to the JSON schema in the `schemastore` repository.
RUFF_JSON = Path("schemas/json/ruff.json")
class SchemastoreRepos(NamedTuple):
@@ -49,7 +59,7 @@ class GitProtocol(enum.Enum):
def update_schemastore(
schemastore_path: Path, schemastore_repos: SchemastoreRepos
) -> None:
if not schemastore_path.is_dir():
if not (schemastore_path / ".git").is_dir():
check_call(
["git", "clone", schemastore_repos.fork, schemastore_path, "--depth=1"]
)
@@ -78,13 +88,13 @@ def update_schemastore(
)
# Run npm install
src = schemastore_path.joinpath("src")
src = schemastore_path / "src"
check_call(["npm", "install"], cwd=schemastore_path)
# Update the schema and format appropriately
schema = json.loads(root.joinpath("ruff.schema.json").read_text())
schema = json.loads(RUFF_SCHEMA.read_text())
schema["$id"] = "https://json.schemastore.org/ruff.json"
src.joinpath(ruff_json).write_text(
(src / RUFF_JSON).write_text(
json.dumps(dict(schema.items()), indent=2, ensure_ascii=False),
)
check_call(
@@ -93,7 +103,7 @@ def update_schemastore(
"--plugin",
"prettier-plugin-sort-json",
"--write",
ruff_json,
RUFF_JSON,
],
cwd=src,
)
@@ -102,7 +112,7 @@ def update_schemastore(
# https://stackoverflow.com/a/9393642/3549270
if check_output(["git", "status", "-s"], cwd=schemastore_path).strip():
# Schema has changed, commit and push
commit_url = f"{ruff_repo}/commit/{current_sha}"
commit_url = f"{RUFF_REPO}/commit/{current_sha}"
commit_body = (
f"This updates ruff's JSON schema to [{current_sha}]({commit_url})"
)
@@ -146,14 +156,12 @@ def determine_git_protocol(argv: list[str] | None = None) -> GitProtocol:
def main() -> None:
schemastore_repos = determine_git_protocol().schemastore_repos()
schemastore_existing = root.joinpath("schemastore")
schemastore_existing = RUFF_ROOT / "schemastore"
if schemastore_existing.is_dir():
update_schemastore(schemastore_existing, schemastore_repos)
else:
with TemporaryDirectory() as temp_dir:
update_schemastore(
Path(temp_dir).joinpath("schemastore"), schemastore_repos
)
with TemporaryDirectory(prefix="ruff-schemastore-") as temp_dir:
update_schemastore(Path(temp_dir), schemastore_repos)
if __name__ == "__main__":

View File

@@ -551,7 +551,7 @@
]
},
"invalid-type-checking-constant": {
"title": "detects invalid TYPE_CHECKING constant assignments",
"title": "detects invalid `TYPE_CHECKING` constant assignments",
"description": "## What it does\nChecks for a value other than `False` assigned to the `TYPE_CHECKING` variable, or an\nannotation not assignable from `bool`.\n\n## Why is this bad?\nThe name `TYPE_CHECKING` is reserved for a flag that can be used to provide conditional\ncode seen only by the type checker, and not at runtime. Normally this flag is imported from\n`typing` or `typing_extensions`, but it can also be defined locally. If defined locally, it\nmust be assigned the value `False` at runtime; the type checker will consider its value to\nbe `True`. If annotated, it must be annotated as a type that can accept `bool` values.",
"default": "error",
"oneOf": [
@@ -653,7 +653,7 @@
"possibly-unresolved-reference": {
"title": "detects references to possibly undefined names",
"description": "## What it does\nChecks for references to names that are possibly not defined.\n\n## Why is this bad?\nUsing an undefined variable will raise a `NameError` at runtime.\n\n## Example\n\n```python\nfor i in range(0):\n x = i\n\nprint(x) # NameError: name 'x' is not defined\n```",
"default": "warn",
"default": "ignore",
"oneOf": [
{
"$ref": "#/definitions/Level"
@@ -783,7 +783,7 @@
"unresolved-reference": {
"title": "detects references to names that are not defined",
"description": "## What it does\nChecks for references to names that are not defined.\n\n## Why is this bad?\nUsing an undefined variable will raise a `NameError` at runtime.\n\n## Example\n\n```python\nprint(x) # NameError: name 'x' is not defined\n```",
"default": "warn",
"default": "error",
"oneOf": [
{
"$ref": "#/definitions/Level"
@@ -813,7 +813,7 @@
"unused-ignore-comment": {
"title": "detects unused `type: ignore` comments",
"description": "## What it does\nChecks for `type: ignore` or `ty: ignore` directives that are no longer applicable.\n\n## Why is this bad?\nA `type: ignore` directive that no longer matches any diagnostic violations is likely\nincluded by mistake, and should be removed to avoid confusion.\n\n## Examples\n```py\na = 20 / 2 # ty: ignore[division-by-zero]\n```\n\nUse instead:\n\n```py\na = 20 / 2\n```",
"default": "warn",
"default": "ignore",
"oneOf": [
{
"$ref": "#/definitions/Level"