Compare commits

...

10 Commits

Author SHA1 Message Date
Dhruv Manilawala
2b47794278 Do not emit Unknown token 2024-03-11 09:56:19 +05:30
Dhruv Manilawala
e9d5ca2fe1 Remove Pattern::Invalid variant (#10294)
## Summary

This PR removes the `Pattern::Invalid` variant. There are no references
to it in the parser.
2024-03-08 23:29:11 +05:30
Dhruv Manilawala
07057c6a35 Remove skip_until parser method (#10293)
## Summary

This PR removes the `skip_until` parser method. Its main use case was
error recovery, which we want to isolate to list parsing.

There are two references which are removed:
1. Parsing a list of match arguments in a class pattern. Take the
following code snippet as an example:

	```python
	match foo:
		case Foo(bar.z=1, baz):
			pass
	```
This is a syntax error because a keyword argument pattern can only have an
identifier, but here it's an attribute node. Previously, to move on to the
next argument (`baz`), the parser would skip until the end of the argument to
recover. Now we parse the value as a pattern (per the spec), which moves the
parser ahead, and add the node with an empty identifier.

	The above code will produce the following AST:

	<details><summary><b>AST</b></summary>
	<p>
	
	```rs
	Module(
	    ModModule {
	        range: 0..52,
	        body: [
	            Match(
	                StmtMatch {
	                    range: 0..51,
	                    subject: Name(
	                        ExprName {
	                            range: 6..9,
	                            id: "foo",
	                            ctx: Load,
	                        },
	                    ),
	                    cases: [
	                        MatchCase {
	                            range: 15..51,
	                            pattern: MatchClass(
	                                PatternMatchClass {
	                                    range: 20..37,
	                                    cls: Name(
	                                        ExprName {
	                                            range: 20..23,
	                                            id: "Foo",
	                                            ctx: Load,
	                                        },
	                                    ),
	                                    arguments: PatternArguments {
	                                        range: 24..37,
	                                        patterns: [
	                                            MatchAs(
	                                                PatternMatchAs {
	                                                    range: 33..36,
	                                                    pattern: None,
	                                                    name: Some(
	                                                        Identifier {
	                                                            id: "baz",
	                                                            range: 33..36,
	                                                        },
	                                                    ),
	                                                },
	                                            ),
	                                        ],
	                                        keywords: [
	                                            PatternKeyword {
	                                                range: 24..31,
	                                                attr: Identifier {
	                                                    id: "",
	                                                    range: 31..31,
	                                                },
	                                                pattern: MatchValue(
	                                                    PatternMatchValue {
	                                                        range: 30..31,
	                                                        value: NumberLiteral(
	                                                            ExprNumberLiteral {
	                                                                range: 30..31,
	                                                                value: Int(
	                                                                    1,
	                                                                ),
	                                                            },
	                                                        ),
	                                                    },
	                                                ),
	                                            },
	                                        ],
	                                    },
	                                },
	                            ),
	                            guard: None,
	                            body: [
	                                Pass(
	                                    StmtPass {
	                                        range: 47..51,
	                                    },
	                                ),
	                            ],
	                        },
	                    ],
	                },
	            ),
	        ],
	    },
	)
	```
	
	</p>
	</details> 

2. Parsing a list of parameters. Here, the list parsing method makes sure to
only call the parse-element function when the current token can start a valid
list element. A parameter can start only with a `Star`, `DoubleStar`, or
`Name` token, which corresponds to the 3 `if` conditions. Thus, the `else`
block is not required, as the list parsing will recover without it (see the
sketch below).
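
A minimal standalone sketch of that recovery behaviour (the token kinds and
function below are invented for illustration, not the actual ruff parser API):
the loop only invokes the parse-element step for tokens that can start a
parameter and simply steps over anything else, so no dedicated
`skip_until`/`else` branch is needed.

```rs
// Not all variants are exercised in the tiny example below.
#[allow(dead_code)]
#[derive(Debug, Clone, Copy, PartialEq)]
enum TokenKind {
    Star,
    DoubleStar,
    Name,
    Comma,
    Rpar,
    Unknown,
}

/// Parse a (heavily simplified) parameter list, recovering from unexpected
/// tokens by stepping over them instead of calling a `skip_until` helper.
fn parse_parameters(tokens: &[TokenKind]) -> Vec<&'static str> {
    let mut parameters = Vec::new();
    let mut i = 0;
    while i < tokens.len() && tokens[i] != TokenKind::Rpar {
        match tokens[i] {
            // Only these token kinds can start a parameter, matching the
            // three `if` conditions mentioned above.
            TokenKind::Star => parameters.push("*args"),
            TokenKind::DoubleStar => parameters.push("**kwargs"),
            TokenKind::Name => parameters.push("param"),
            // Any other token (including the separating comma) is skipped;
            // the list loop itself provides the recovery.
            _ => {}
        }
        i += 1;
    }
    parameters
}

fn main() {
    // `def f(a, ?, b)` — the unknown token is skipped and parsing continues.
    let tokens = [
        TokenKind::Name,
        TokenKind::Comma,
        TokenKind::Unknown,
        TokenKind::Comma,
        TokenKind::Name,
        TokenKind::Rpar,
    ];
    assert_eq!(parse_parameters(&tokens), ["param", "param"]);
}
```
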
2024-03-08 23:28:08 +05:30
Dhruv Manilawala
3af851109f Improve various assignment target errors (#10288)
## Summary

This PR improves error related things around assignment nodes, mainly
the following:
1. Rename parse error variants:
	a. `AssignmentError` -> `InvalidAssignmentTarget`
	b. `NamedAssignmentError` -> `InvalidNamedAssignmentTarget`
	c. `AugAssignmentError` -> `InvalidAugmentedAssignmentTarget`
2. Add `InvalidDeleteTarget` for invalid `del` targets
	a. Add a helper function to check whether an expression is a valid delete
	target, similar to the other target-check functions (a rough sketch follows
	below).
3. Fix: a named assignment target can only be a `Name` node
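
A rough standalone sketch of the shape of such a helper (the expression enum
below is a simplified stand-in; the real function works on
`ruff_python_ast::Expr`):

```rs
// Simplified stand-in for the AST; not all variants are exercised below.
#[allow(dead_code)]
#[derive(Debug)]
enum Expr {
    Name(String),
    Attribute { value: Box<Expr>, attr: String },
    Subscript { value: Box<Expr>, index: Box<Expr> },
    Tuple(Vec<Expr>),
    List(Vec<Expr>),
    NumberLiteral(i64),
}

/// `del` only accepts names, attributes, subscripts, and (possibly nested)
/// tuples/lists of those; everything else is an invalid delete target.
fn is_valid_delete_target(expr: &Expr) -> bool {
    match expr {
        Expr::Name(_) | Expr::Attribute { .. } | Expr::Subscript { .. } => true,
        Expr::Tuple(elements) | Expr::List(elements) => {
            elements.iter().all(is_valid_delete_target)
        }
        Expr::NumberLiteral(_) => false,
    }
}

fn main() {
    // `del (x, obj.field)` is a valid delete target...
    let ok = Expr::Tuple(vec![
        Expr::Name("x".to_string()),
        Expr::Attribute {
            value: Box::new(Expr::Name("obj".to_string())),
            attr: "field".to_string(),
        },
    ]);
    // ...but `del 42` should be reported as an invalid delete target.
    let bad = Expr::NumberLiteral(42);
    assert!(is_valid_delete_target(&ok));
    assert!(!is_valid_delete_target(&bad));
}
```
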

## Test Plan

Various test cases locally. As mentioned in my previous PR, I want to
keep the testing part separate.
2024-03-08 23:24:08 +05:30
Dhruv Manilawala
c0c065f1ca Remove deprecated parsing list functions (#10271)
## Summary

This PR removes the deprecated parsing list functions and updates the
references to use the new functions.

There are now 4 functions to accommodate this pattern. They are divided
into 2 groups: one to parse a sequence of elements and the other to
parse a sequence of elements _separated_ by a comma. Each group contains
2 functions: one collects and returns all the parsed elements as a
vector, and the other delegates the collection to the caller. This
separation is achieved by using `Fn` and `FnMut` to allow mutation in
the latter case.
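
For illustration, a self-contained sketch of how such a pair of signatures
might look (the names and bodies here are invented, not the actual ruff API):

```rs
struct Parser {
    remaining: u32, // stand-in for "tokens left in the list"
}

impl Parser {
    fn at_list_end(&self) -> bool {
        self.remaining == 0
    }

    fn bump(&mut self) {
        self.remaining -= 1;
    }

    /// Collecting variant: the element parser only reads the parser it is
    /// given, so `Fn` is enough, and the elements come back as a `Vec`.
    fn parse_comma_separated_list<T>(
        &mut self,
        parse_element: impl Fn(&mut Parser) -> T,
    ) -> Vec<T> {
        let mut elements = Vec::new();
        self.parse_comma_separated_list_into(|parser| elements.push(parse_element(parser)));
        elements
    }

    /// Delegating variant: the closure typically mutates caller-owned state
    /// (e.g. pushes into a collection it captures), hence `FnMut`.
    fn parse_comma_separated_list_into(&mut self, mut parse_element: impl FnMut(&mut Parser)) {
        while !self.at_list_end() {
            parse_element(self);
            // A real implementation would also expect/recover a `,` here.
        }
    }
}

fn main() {
    let mut parser = Parser { remaining: 3 };
    let elements = parser.parse_comma_separated_list(|parser| {
        parser.bump();
        "element"
    });
    println!("parsed {} elements", elements.len());
}
```
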

The error recovery context has been updated to accommodate the new
sequence kind. Currently, the terminator token kinds only contain the
tokens necessary to end the list and not necessarily the ones that might
help in error recovery. This will be updated as I go through the testing
phase, which is basically coming up with a bunch of invalid programs to
check how the parser behaves and how we can help in the recovery phase.


## Test Plan

Currently, my plan is to keep the testing part separate from the actual
update. This doesn't mean I'm not testing locally, but it's not
thorough. The main reason is to keep the diffs to a minimum; writing
test cases will require some effort which I want to decouple from the
actual change. This is fine here as it's not getting merged into `main`
but into the parser PR.
2024-03-08 23:22:34 +05:30
Dhruv Manilawala
e837888c37 Rename to include "token" in method name (#10287)
Small quality of life improvement to rename the following methods:
1. `current_kind` -> `current_token_kind`
2. `current_range` -> `current_token_range`

It's a PR for visibility.
2024-03-08 07:33:18 +05:30
Dhruv Manilawala
79861da8f6 Encapsulate Program fields (#10270)
## Summary

This PR makes the fields in the `Program` struct private and exposes
methods to get their values. The motivation behind this is to encapsulate
the internal representation of the parsed program, which we could alter
in the future.
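
For illustration, a minimal sketch of the encapsulation idea (the field and
method names below are stand-ins, not the exact ruff API):

```rs
pub struct Program {
    ast: Vec<String>,          // stand-in for the parsed module body
    parse_errors: Vec<String>, // stand-in for the collected parse errors
}

impl Program {
    pub fn new(ast: Vec<String>, parse_errors: Vec<String>) -> Self {
        Self { ast, parse_errors }
    }

    /// Borrow the parsed AST.
    pub fn ast(&self) -> &[String] {
        &self.ast
    }

    /// Borrow the syntax errors collected while parsing.
    pub fn parse_errors(&self) -> &[String] {
        &self.parse_errors
    }

    /// Consume the program and hand ownership of the AST to the caller.
    pub fn into_ast(self) -> Vec<String> {
        self.ast
    }
}

fn main() {
    let program = Program::new(vec!["pass".to_string()], Vec::new());
    println!(
        "{} statements, {} errors",
        program.ast().len(),
        program.parse_errors().len()
    );
    let _ast = program.into_ast(); // ownership can still be taken when needed
}
```
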
2024-03-07 21:49:10 +05:30
Dhruv Manilawala
035ac75fae Assert the parser is at augmented assign token (#10269)
## Summary

This PR fixes one of the `FIXME` comments by asserting that the parser
is at one of the possible augmented assignment tokens when parsing an
augmented assignment statement.
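
For illustration, a standalone sketch of the assertion pattern (token kinds
abbreviated and names invented; the real parser checks the full set of
augmented assignment operators):

```rs
// Not all variants are exercised in the tiny example below.
#[allow(dead_code)]
#[derive(Debug, Clone, Copy, PartialEq)]
enum TokenKind {
    PlusEqual,
    MinusEqual,
    StarEqual,
    Name,
}

impl TokenKind {
    fn is_augmented_assign(self) -> bool {
        matches!(
            self,
            TokenKind::PlusEqual | TokenKind::MinusEqual | TokenKind::StarEqual
        )
    }
}

fn parse_augmented_assignment(current_token_kind: TokenKind) {
    // Instead of silently tolerating any token (the old `FIXME`), the parser
    // now asserts the precondition its caller is supposed to guarantee.
    debug_assert!(
        current_token_kind.is_augmented_assign(),
        "expected an augmented assignment token, found {current_token_kind:?}"
    );
    // ... consume the operator and parse the right-hand side ...
}

fn main() {
    parse_augmented_assignment(TokenKind::PlusEqual);
}
```
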

## Test Plan

1. Add valid test cases for all the possible augmented assignment tokens
2. Add invalid test cases similar to assignment statement
2024-03-07 18:37:09 +05:30
Dhruv Manilawala
b5cc384bb1 Fix tests and clippy warnings 2024-03-07 18:23:34 +05:30
Victor Hugo Gomes
78ee6441a7 Replace LALRPOP parser with hand-written parser
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-03-07 16:44:46 +05:30
254 changed files with 40006 additions and 5513 deletions

Cargo.lock generated
View File

@@ -2296,9 +2296,11 @@ dependencies = [
name = "ruff_python_parser"
version = "0.0.0"
dependencies = [
"annotate-snippets 0.9.2",
"anyhow",
"bitflags 2.4.2",
"bstr",
"drop_bomb",
"insta",
"is-macro",
"itertools 0.12.1",
@@ -2306,6 +2308,7 @@ dependencies = [
"lalrpop-util",
"memchr",
"ruff_python_ast",
"ruff_source_file",
"ruff_text_size",
"rustc-hash",
"static_assertions",

View File

@@ -523,7 +523,7 @@ from module import =
----- stdout -----
----- stderr -----
error: Failed to parse main.py:2:20: Unexpected token '='
error: Failed to parse main.py:2:20: Unexpected token =
"###);
Ok(())

View File

@@ -728,11 +728,11 @@ fn stdin_parse_error() {
success: false
exit_code: 1
----- stdout -----
-:1:17: E999 SyntaxError: Unexpected token '='
-:1:17: E999 SyntaxError: Unexpected token =
Found 1 error.
----- stderr -----
error: Failed to parse at 1:17: Unexpected token '='
error: Failed to parse at 1:17: Unexpected token =
"###);
}

View File

@@ -52,6 +52,7 @@ file_resolver.exclude = [
file_resolver.extend_exclude = [
"crates/ruff_linter/resources/",
"crates/ruff_python_formatter/resources/",
"crates/ruff_python_parser/resources/",
]
file_resolver.force_exclude = false
file_resolver.include = [

View File

@@ -194,7 +194,7 @@ impl DisplayParseError {
// Translate the byte offset to a location in the originating source.
let location =
if let Some(jupyter_index) = source_kind.as_ipy_notebook().map(Notebook::index) {
let source_location = source_code.source_location(error.offset);
let source_location = source_code.source_location(error.location.start());
ErrorLocation::Cell(
jupyter_index
@@ -208,7 +208,7 @@ impl DisplayParseError {
},
)
} else {
ErrorLocation::File(source_code.source_location(error.offset))
ErrorLocation::File(source_code.source_location(error.location.start()))
};
Self {
@@ -275,27 +275,7 @@ impl<'a> DisplayParseErrorType<'a> {
impl Display for DisplayParseErrorType<'_> {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self.0 {
ParseErrorType::Eof => write!(f, "Expected token but reached end of file."),
ParseErrorType::ExtraToken(ref tok) => write!(
f,
"Got extraneous token: {tok}",
tok = TruncateAtNewline(&tok)
),
ParseErrorType::InvalidToken => write!(f, "Got invalid token"),
ParseErrorType::UnrecognizedToken(ref tok, ref expected) => {
if let Some(expected) = expected.as_ref() {
write!(
f,
"Expected '{expected}', but got {tok}",
tok = TruncateAtNewline(&tok)
)
} else {
write!(f, "Unexpected token {tok}", tok = TruncateAtNewline(&tok))
}
}
ParseErrorType::Lexical(ref error) => write!(f, "{error}"),
}
write!(f, "{}", TruncateAtNewline(&self.0))
}
}

View File

@@ -77,6 +77,8 @@ fn is_empty_or_null_fstring_element(element: &ast::FStringElement) -> bool {
ast::FStringElement::Expression(ast::FStringExpressionElement { expression, .. }) => {
is_empty_or_null_string(expression)
}
#[allow(deprecated)]
ast::FStringElement::Invalid(_) => false,
}
}

View File

@@ -14,5 +14,3 @@ bom_unsorted.py:1:1: I001 [*] Import block is un-sorted or un-formatted
2 |-import bar
1 |+import bar
2 |+import foo

View File

@@ -81,7 +81,7 @@ pub(crate) fn syntax_error(
parse_error: &ParseError,
locator: &Locator,
) {
let rest = locator.after(parse_error.offset);
let rest = locator.after(parse_error.location.start());
// Try to create a non-empty range so that the diagnostic can print a caret at the
// right position. This requires that we retrieve the next character, if any, and take its length
@@ -95,6 +95,6 @@ pub(crate) fn syntax_error(
SyntaxError {
message: format!("{}", DisplayParseErrorType::new(&parse_error.error)),
},
TextRange::at(parse_error.offset, len),
TextRange::at(parse_error.location.start(), len),
));
}

View File

@@ -8,5 +8,3 @@ E999.py:3:1: E999 SyntaxError: unindent does not match any outer indentation lev
| ^ E999
4 |
|

View File

@@ -77,7 +77,10 @@ pub(crate) fn assert_on_string_literal(checker: &mut Checker, test: &Expr) {
ast::FStringElement::Literal(ast::FStringLiteralElement {
value, ..
}) => value.is_empty(),
ast::FStringElement::Expression(_) => false,
#[allow(deprecated)]
ast::FStringElement::Expression(_) | ast::FStringElement::Invalid(_) => {
false
}
})
}
}) {
@@ -89,7 +92,10 @@ pub(crate) fn assert_on_string_literal(checker: &mut Checker, test: &Expr) {
ast::FStringElement::Literal(ast::FStringLiteralElement {
value, ..
}) => !value.is_empty(),
ast::FStringElement::Expression(_) => false,
#[allow(deprecated)]
ast::FStringElement::Expression(_) | ast::FStringElement::Invalid(_) => {
false
}
})
}
}) {

View File

@@ -200,5 +200,7 @@ fn is_allowed_value(expr: &Expr) -> bool {
| Expr::Starred(_)
| Expr::Slice(_)
| Expr::IpyEscapeCommand(_) => false,
#[allow(deprecated)]
Expr::Invalid(_) => false,
}
}

View File

@@ -110,5 +110,3 @@ UP027.py:10:17: UP027 [*] Replace unpacked list comprehension with a generator e
14 14 |
15 15 | # Should not change
16 16 | foo = [fn(x) for x in items]

View File

@@ -160,6 +160,8 @@ pub(crate) fn bit_count(checker: &mut Checker, call: &ExprCall) {
| Expr::EllipsisLiteral(_)
| Expr::Attribute(_)
| Expr::Subscript(_) => false,
#[allow(deprecated)]
Expr::Invalid(_) => false,
};
let replacement = if parenthesize {

View File

@@ -339,5 +339,6 @@ const fn is_valid_enclosing_node(node: AnyNodeRef) -> bool {
| AnyNodeRef::FString(_)
| AnyNodeRef::StringLiteral(_)
| AnyNodeRef::BytesLiteral(_) => false,
AnyNodeRef::FStringInvalidElement(_) | AnyNodeRef::ExprInvalid(_) => unreachable!(),
}
}

View File

@@ -234,6 +234,7 @@ pub enum ComparablePattern<'a> {
MatchStar(PatternMatchStar<'a>),
MatchAs(PatternMatchAs<'a>),
MatchOr(PatternMatchOr<'a>),
Invalid,
}
impl<'a> From<&'a ast::Pattern> for ComparablePattern<'a> {
@@ -513,6 +514,7 @@ impl<'a> From<&'a ast::ExceptHandler> for ComparableExceptHandler<'a> {
pub enum ComparableFStringElement<'a> {
Literal(&'a str),
FStringExpressionElement(FStringExpressionElement<'a>),
Invalid,
}
#[derive(Debug, PartialEq, Eq, Hash)]
@@ -540,6 +542,8 @@ impl<'a> From<&'a ast::FStringElement> for ComparableFStringElement<'a> {
.map(|spec| spec.elements.iter().map(Into::into).collect()),
})
}
#[allow(deprecated)]
ast::FStringElement::Invalid(_) => Self::Invalid,
}
}
}
@@ -864,6 +868,7 @@ pub enum ComparableExpr<'a> {
Tuple(ExprTuple<'a>),
Slice(ExprSlice<'a>),
IpyEscapeCommand(ExprIpyEscapeCommand<'a>),
Invalid,
}
impl<'a> From<&'a Box<ast::Expr>> for Box<ComparableExpr<'a>> {
@@ -1092,6 +1097,8 @@ impl<'a> From<&'a ast::Expr> for ComparableExpr<'a> {
value,
range: _,
}) => Self::IpyEscapeCommand(ExprIpyEscapeCommand { kind: *kind, value }),
#[allow(deprecated)]
ast::Expr::Invalid(_) => Self::Invalid,
}
}
}

View File

@@ -38,6 +38,7 @@ pub enum ExpressionRef<'a> {
Tuple(&'a ast::ExprTuple),
Slice(&'a ast::ExprSlice),
IpyEscapeCommand(&'a ast::ExprIpyEscapeCommand),
Invalid(&'a ast::ExprInvalid),
}
impl<'a> From<&'a Box<Expr>> for ExpressionRef<'a> {
@@ -81,6 +82,8 @@ impl<'a> From<&'a Expr> for ExpressionRef<'a> {
Expr::Tuple(value) => ExpressionRef::Tuple(value),
Expr::Slice(value) => ExpressionRef::Slice(value),
Expr::IpyEscapeCommand(value) => ExpressionRef::IpyEscapeCommand(value),
#[allow(deprecated)]
Expr::Invalid(value) => ExpressionRef::Invalid(value),
}
}
}
@@ -285,6 +288,7 @@ impl<'a> From<ExpressionRef<'a>> for AnyNodeRef<'a> {
ExpressionRef::IpyEscapeCommand(expression) => {
AnyNodeRef::ExprIpyEscapeCommand(expression)
}
ExpressionRef::Invalid(expression) => AnyNodeRef::ExprInvalid(expression),
}
}
}
@@ -324,6 +328,7 @@ impl Ranged for ExpressionRef<'_> {
ExpressionRef::Tuple(expression) => expression.range(),
ExpressionRef::Slice(expression) => expression.range(),
ExpressionRef::IpyEscapeCommand(expression) => expression.range(),
ExpressionRef::Invalid(expression) => expression.range(),
}
}
}

View File

@@ -259,6 +259,9 @@ pub fn any_over_expr(expr: &Expr, func: &dyn Fn(&Expr) -> bool) -> bool {
| Expr::NoneLiteral(_)
| Expr::EllipsisLiteral(_)
| Expr::IpyEscapeCommand(_) => false,
#[allow(deprecated)]
Expr::Invalid(_) => false,
}
}
@@ -313,7 +316,8 @@ pub fn any_over_f_string_element(
func: &dyn Fn(&Expr) -> bool,
) -> bool {
match element {
ast::FStringElement::Literal(_) => false,
#[allow(deprecated)]
ast::FStringElement::Literal(_) | ast::FStringElement::Invalid(_) => false,
ast::FStringElement::Expression(ast::FStringExpressionElement {
expression,
format_spec,
@@ -1288,6 +1292,8 @@ fn is_non_empty_f_string(expr: &ast::ExprFString) -> bool {
Expr::Name(_) => false,
Expr::Slice(_) => false,
Expr::IpyEscapeCommand(_) => false,
#[allow(deprecated)]
Expr::Invalid(_) => false,
// These literals may or may not be empty.
Expr::FString(f_string) => is_non_empty_f_string(f_string),
@@ -1302,6 +1308,8 @@ fn is_non_empty_f_string(expr: &ast::ExprFString) -> bool {
f_string.elements.iter().all(|element| match element {
FStringElement::Literal(string_literal) => !string_literal.is_empty(),
FStringElement::Expression(f_string) => inner(&f_string.expression),
#[allow(deprecated)]
FStringElement::Invalid(_) => false,
})
}
})
@@ -1325,6 +1333,8 @@ fn is_empty_f_string(expr: &ast::ExprFString) -> bool {
expression,
..
}) => inner(expression),
#[allow(deprecated)]
FStringElement::Invalid(_) => false,
})
}
_ => false,
@@ -1337,6 +1347,8 @@ fn is_empty_f_string(expr: &ast::ExprFString) -> bool {
f_string.elements.iter().all(|element| match element {
FStringElement::Literal(string_literal) => string_literal.is_empty(),
FStringElement::Expression(f_string) => inner(&f_string.expression),
#[allow(deprecated)]
FStringElement::Invalid(_) => false,
})
}
})
@@ -1602,7 +1614,7 @@ mod tests {
fn any_over_stmt_type_alias() {
let seen = RefCell::new(Vec::new());
let name = Expr::Name(ExprName {
id: "x".to_string(),
id: "x".into(),
range: TextRange::default(),
ctx: ExprContext::Load,
});

View File

@@ -86,9 +86,11 @@ pub enum AnyNode {
ExprTuple(ast::ExprTuple),
ExprSlice(ast::ExprSlice),
ExprIpyEscapeCommand(ast::ExprIpyEscapeCommand),
ExprInvalid(ast::ExprInvalid),
ExceptHandlerExceptHandler(ast::ExceptHandlerExceptHandler),
FStringExpressionElement(ast::FStringExpressionElement),
FStringLiteralElement(ast::FStringLiteralElement),
FStringInvalidElement(ast::FStringInvalidElement),
FStringFormatSpec(ast::FStringFormatSpec),
PatternMatchValue(ast::PatternMatchValue),
PatternMatchSingleton(ast::PatternMatchSingleton),
@@ -170,6 +172,7 @@ impl AnyNode {
| AnyNode::ExprCall(_)
| AnyNode::FStringExpressionElement(_)
| AnyNode::FStringLiteralElement(_)
| AnyNode::FStringInvalidElement(_)
| AnyNode::FStringFormatSpec(_)
| AnyNode::ExprFString(_)
| AnyNode::ExprStringLiteral(_)
@@ -186,6 +189,7 @@ impl AnyNode {
| AnyNode::ExprTuple(_)
| AnyNode::ExprSlice(_)
| AnyNode::ExprIpyEscapeCommand(_)
| AnyNode::ExprInvalid(_)
| AnyNode::ExceptHandlerExceptHandler(_)
| AnyNode::PatternMatchValue(_)
| AnyNode::PatternMatchSingleton(_)
@@ -252,6 +256,8 @@ impl AnyNode {
AnyNode::ExprTuple(node) => Some(Expr::Tuple(node)),
AnyNode::ExprSlice(node) => Some(Expr::Slice(node)),
AnyNode::ExprIpyEscapeCommand(node) => Some(Expr::IpyEscapeCommand(node)),
#[allow(deprecated)]
AnyNode::ExprInvalid(range) => Some(Expr::Invalid(range)),
AnyNode::ModModule(_)
| AnyNode::ModExpression(_)
@@ -283,6 +289,7 @@ impl AnyNode {
| AnyNode::ExceptHandlerExceptHandler(_)
| AnyNode::FStringExpressionElement(_)
| AnyNode::FStringLiteralElement(_)
| AnyNode::FStringInvalidElement(_)
| AnyNode::FStringFormatSpec(_)
| AnyNode::PatternMatchValue(_)
| AnyNode::PatternMatchSingleton(_)
@@ -364,6 +371,7 @@ impl AnyNode {
| AnyNode::ExprCall(_)
| AnyNode::FStringExpressionElement(_)
| AnyNode::FStringLiteralElement(_)
| AnyNode::FStringInvalidElement(_)
| AnyNode::FStringFormatSpec(_)
| AnyNode::ExprFString(_)
| AnyNode::ExprStringLiteral(_)
@@ -380,6 +388,7 @@ impl AnyNode {
| AnyNode::ExprTuple(_)
| AnyNode::ExprSlice(_)
| AnyNode::ExprIpyEscapeCommand(_)
| AnyNode::ExprInvalid(_)
| AnyNode::ExceptHandlerExceptHandler(_)
| AnyNode::PatternMatchValue(_)
| AnyNode::PatternMatchSingleton(_)
@@ -469,6 +478,7 @@ impl AnyNode {
| AnyNode::ExprCall(_)
| AnyNode::FStringExpressionElement(_)
| AnyNode::FStringLiteralElement(_)
| AnyNode::FStringInvalidElement(_)
| AnyNode::FStringFormatSpec(_)
| AnyNode::ExprFString(_)
| AnyNode::ExprStringLiteral(_)
@@ -485,6 +495,7 @@ impl AnyNode {
| AnyNode::ExprTuple(_)
| AnyNode::ExprSlice(_)
| AnyNode::ExprIpyEscapeCommand(_)
| AnyNode::ExprInvalid(_)
| AnyNode::ExceptHandlerExceptHandler(_)
| AnyNode::PatternArguments(_)
| AnyNode::PatternKeyword(_)
@@ -559,6 +570,7 @@ impl AnyNode {
| AnyNode::ExprCall(_)
| AnyNode::FStringExpressionElement(_)
| AnyNode::FStringLiteralElement(_)
| AnyNode::FStringInvalidElement(_)
| AnyNode::FStringFormatSpec(_)
| AnyNode::ExprFString(_)
| AnyNode::ExprStringLiteral(_)
@@ -575,6 +587,7 @@ impl AnyNode {
| AnyNode::ExprTuple(_)
| AnyNode::ExprSlice(_)
| AnyNode::ExprIpyEscapeCommand(_)
| AnyNode::ExprInvalid(_)
| AnyNode::PatternMatchValue(_)
| AnyNode::PatternMatchSingleton(_)
| AnyNode::PatternMatchSequence(_)
@@ -674,6 +687,7 @@ impl AnyNode {
Self::ExprCall(node) => AnyNodeRef::ExprCall(node),
Self::FStringExpressionElement(node) => AnyNodeRef::FStringExpressionElement(node),
Self::FStringLiteralElement(node) => AnyNodeRef::FStringLiteralElement(node),
Self::FStringInvalidElement(node) => AnyNodeRef::FStringInvalidElement(node),
Self::FStringFormatSpec(node) => AnyNodeRef::FStringFormatSpec(node),
Self::ExprFString(node) => AnyNodeRef::ExprFString(node),
Self::ExprStringLiteral(node) => AnyNodeRef::ExprStringLiteral(node),
@@ -690,6 +704,7 @@ impl AnyNode {
Self::ExprTuple(node) => AnyNodeRef::ExprTuple(node),
Self::ExprSlice(node) => AnyNodeRef::ExprSlice(node),
Self::ExprIpyEscapeCommand(node) => AnyNodeRef::ExprIpyEscapeCommand(node),
Self::ExprInvalid(node) => AnyNodeRef::ExprInvalid(node),
Self::ExceptHandlerExceptHandler(node) => AnyNodeRef::ExceptHandlerExceptHandler(node),
Self::PatternMatchValue(node) => AnyNodeRef::PatternMatchValue(node),
Self::PatternMatchSingleton(node) => AnyNodeRef::PatternMatchSingleton(node),
@@ -3354,6 +3369,40 @@ impl AstNode for ast::ExprIpyEscapeCommand {
} = self;
}
}
impl AstNode for ast::ExprInvalid {
fn cast(kind: AnyNode) -> Option<Self>
where
Self: Sized,
{
if let AnyNode::ExprInvalid(node) = kind {
Some(node)
} else {
None
}
}
fn cast_ref(kind: AnyNodeRef) -> Option<&Self> {
if let AnyNodeRef::ExprInvalid(node) = kind {
Some(node)
} else {
None
}
}
fn as_any_node_ref(&self) -> AnyNodeRef {
AnyNodeRef::from(self)
}
fn into_any_node(self) -> AnyNode {
AnyNode::from(self)
}
fn visit_preorder<'a, V>(&'a self, _visitor: &mut V)
where
V: PreorderVisitor<'a> + ?Sized,
{
}
}
impl AstNode for ast::ExceptHandlerExceptHandler {
fn cast(kind: AnyNode) -> Option<Self>
where
@@ -4573,6 +4622,8 @@ impl From<Expr> for AnyNode {
Expr::Tuple(node) => AnyNode::ExprTuple(node),
Expr::Slice(node) => AnyNode::ExprSlice(node),
Expr::IpyEscapeCommand(node) => AnyNode::ExprIpyEscapeCommand(node),
#[allow(deprecated)]
Expr::Invalid(range) => AnyNode::ExprInvalid(range),
}
}
}
@@ -4591,6 +4642,8 @@ impl From<FStringElement> for AnyNode {
match element {
FStringElement::Literal(node) => AnyNode::FStringLiteralElement(node),
FStringElement::Expression(node) => AnyNode::FStringExpressionElement(node),
#[allow(deprecated)]
FStringElement::Invalid(node) => AnyNode::FStringInvalidElement(node),
}
}
}
@@ -4996,6 +5049,12 @@ impl From<ast::ExprIpyEscapeCommand> for AnyNode {
}
}
impl From<ast::ExprInvalid> for AnyNode {
fn from(node: ast::ExprInvalid) -> Self {
AnyNode::ExprInvalid(node)
}
}
impl From<ast::ExceptHandlerExceptHandler> for AnyNode {
fn from(node: ast::ExceptHandlerExceptHandler) -> Self {
AnyNode::ExceptHandlerExceptHandler(node)
@@ -5202,6 +5261,7 @@ impl Ranged for AnyNode {
AnyNode::ExprCall(node) => node.range(),
AnyNode::FStringExpressionElement(node) => node.range(),
AnyNode::FStringLiteralElement(node) => node.range(),
AnyNode::FStringInvalidElement(node) => node.range(),
AnyNode::FStringFormatSpec(node) => node.range(),
AnyNode::ExprFString(node) => node.range(),
AnyNode::ExprStringLiteral(node) => node.range(),
@@ -5218,6 +5278,7 @@ impl Ranged for AnyNode {
AnyNode::ExprTuple(node) => node.range(),
AnyNode::ExprSlice(node) => node.range(),
AnyNode::ExprIpyEscapeCommand(node) => node.range(),
AnyNode::ExprInvalid(node) => node.range(),
AnyNode::ExceptHandlerExceptHandler(node) => node.range(),
AnyNode::PatternMatchValue(node) => node.range(),
AnyNode::PatternMatchSingleton(node) => node.range(),
@@ -5299,6 +5360,7 @@ pub enum AnyNodeRef<'a> {
ExprCall(&'a ast::ExprCall),
FStringExpressionElement(&'a ast::FStringExpressionElement),
FStringLiteralElement(&'a ast::FStringLiteralElement),
FStringInvalidElement(&'a ast::FStringInvalidElement),
FStringFormatSpec(&'a ast::FStringFormatSpec),
ExprFString(&'a ast::ExprFString),
ExprStringLiteral(&'a ast::ExprStringLiteral),
@@ -5315,6 +5377,7 @@ pub enum AnyNodeRef<'a> {
ExprTuple(&'a ast::ExprTuple),
ExprSlice(&'a ast::ExprSlice),
ExprIpyEscapeCommand(&'a ast::ExprIpyEscapeCommand),
ExprInvalid(&'a ast::ExprInvalid),
ExceptHandlerExceptHandler(&'a ast::ExceptHandlerExceptHandler),
PatternMatchValue(&'a ast::PatternMatchValue),
PatternMatchSingleton(&'a ast::PatternMatchSingleton),
@@ -5395,6 +5458,7 @@ impl<'a> AnyNodeRef<'a> {
AnyNodeRef::ExprCall(node) => NonNull::from(*node).cast(),
AnyNodeRef::FStringExpressionElement(node) => NonNull::from(*node).cast(),
AnyNodeRef::FStringLiteralElement(node) => NonNull::from(*node).cast(),
AnyNodeRef::FStringInvalidElement(node) => NonNull::from(*node).cast(),
AnyNodeRef::FStringFormatSpec(node) => NonNull::from(*node).cast(),
AnyNodeRef::ExprFString(node) => NonNull::from(*node).cast(),
AnyNodeRef::ExprStringLiteral(node) => NonNull::from(*node).cast(),
@@ -5411,6 +5475,7 @@ impl<'a> AnyNodeRef<'a> {
AnyNodeRef::ExprTuple(node) => NonNull::from(*node).cast(),
AnyNodeRef::ExprSlice(node) => NonNull::from(*node).cast(),
AnyNodeRef::ExprIpyEscapeCommand(node) => NonNull::from(*node).cast(),
AnyNodeRef::ExprInvalid(node) => NonNull::from(*node).cast(),
AnyNodeRef::ExceptHandlerExceptHandler(node) => NonNull::from(*node).cast(),
AnyNodeRef::PatternMatchValue(node) => NonNull::from(*node).cast(),
AnyNodeRef::PatternMatchSingleton(node) => NonNull::from(*node).cast(),
@@ -5497,6 +5562,7 @@ impl<'a> AnyNodeRef<'a> {
AnyNodeRef::ExprCall(_) => NodeKind::ExprCall,
AnyNodeRef::FStringExpressionElement(_) => NodeKind::FStringExpressionElement,
AnyNodeRef::FStringLiteralElement(_) => NodeKind::FStringLiteralElement,
AnyNodeRef::FStringInvalidElement(_) => NodeKind::FStringInvalidElement,
AnyNodeRef::FStringFormatSpec(_) => NodeKind::FStringFormatSpec,
AnyNodeRef::ExprFString(_) => NodeKind::ExprFString,
AnyNodeRef::ExprStringLiteral(_) => NodeKind::ExprStringLiteral,
@@ -5513,6 +5579,7 @@ impl<'a> AnyNodeRef<'a> {
AnyNodeRef::ExprTuple(_) => NodeKind::ExprTuple,
AnyNodeRef::ExprSlice(_) => NodeKind::ExprSlice,
AnyNodeRef::ExprIpyEscapeCommand(_) => NodeKind::ExprIpyEscapeCommand,
AnyNodeRef::ExprInvalid(_) => NodeKind::ExprInvalid,
AnyNodeRef::ExceptHandlerExceptHandler(_) => NodeKind::ExceptHandlerExceptHandler,
AnyNodeRef::PatternMatchValue(_) => NodeKind::PatternMatchValue,
AnyNodeRef::PatternMatchSingleton(_) => NodeKind::PatternMatchSingleton,
@@ -5594,6 +5661,7 @@ impl<'a> AnyNodeRef<'a> {
| AnyNodeRef::ExprCall(_)
| AnyNodeRef::FStringExpressionElement(_)
| AnyNodeRef::FStringLiteralElement(_)
| AnyNodeRef::FStringInvalidElement(_)
| AnyNodeRef::FStringFormatSpec(_)
| AnyNodeRef::ExprFString(_)
| AnyNodeRef::ExprStringLiteral(_)
@@ -5610,6 +5678,7 @@ impl<'a> AnyNodeRef<'a> {
| AnyNodeRef::ExprTuple(_)
| AnyNodeRef::ExprSlice(_)
| AnyNodeRef::ExprIpyEscapeCommand(_)
| AnyNodeRef::ExprInvalid(_)
| AnyNodeRef::ExceptHandlerExceptHandler(_)
| AnyNodeRef::PatternMatchValue(_)
| AnyNodeRef::PatternMatchSingleton(_)
@@ -5675,7 +5744,8 @@ impl<'a> AnyNodeRef<'a> {
| AnyNodeRef::ExprList(_)
| AnyNodeRef::ExprTuple(_)
| AnyNodeRef::ExprSlice(_)
| AnyNodeRef::ExprIpyEscapeCommand(_) => true,
| AnyNodeRef::ExprIpyEscapeCommand(_)
| AnyNodeRef::ExprInvalid(_) => true,
AnyNodeRef::ModModule(_)
| AnyNodeRef::ModExpression(_)
@@ -5707,6 +5777,7 @@ impl<'a> AnyNodeRef<'a> {
| AnyNodeRef::ExceptHandlerExceptHandler(_)
| AnyNodeRef::FStringExpressionElement(_)
| AnyNodeRef::FStringLiteralElement(_)
| AnyNodeRef::FStringInvalidElement(_)
| AnyNodeRef::FStringFormatSpec(_)
| AnyNodeRef::PatternMatchValue(_)
| AnyNodeRef::PatternMatchSingleton(_)
@@ -5787,6 +5858,7 @@ impl<'a> AnyNodeRef<'a> {
| AnyNodeRef::ExprCall(_)
| AnyNodeRef::FStringExpressionElement(_)
| AnyNodeRef::FStringLiteralElement(_)
| AnyNodeRef::FStringInvalidElement(_)
| AnyNodeRef::FStringFormatSpec(_)
| AnyNodeRef::ExprFString(_)
| AnyNodeRef::ExprStringLiteral(_)
@@ -5803,6 +5875,7 @@ impl<'a> AnyNodeRef<'a> {
| AnyNodeRef::ExprTuple(_)
| AnyNodeRef::ExprSlice(_)
| AnyNodeRef::ExprIpyEscapeCommand(_)
| AnyNodeRef::ExprInvalid(_)
| AnyNodeRef::ExceptHandlerExceptHandler(_)
| AnyNodeRef::PatternMatchValue(_)
| AnyNodeRef::PatternMatchSingleton(_)
@@ -5892,6 +5965,7 @@ impl<'a> AnyNodeRef<'a> {
| AnyNodeRef::ExprCall(_)
| AnyNodeRef::FStringExpressionElement(_)
| AnyNodeRef::FStringLiteralElement(_)
| AnyNodeRef::FStringInvalidElement(_)
| AnyNodeRef::FStringFormatSpec(_)
| AnyNodeRef::ExprFString(_)
| AnyNodeRef::ExprStringLiteral(_)
@@ -5908,6 +5982,7 @@ impl<'a> AnyNodeRef<'a> {
| AnyNodeRef::ExprTuple(_)
| AnyNodeRef::ExprSlice(_)
| AnyNodeRef::ExprIpyEscapeCommand(_)
| AnyNodeRef::ExprInvalid(_)
| AnyNodeRef::PatternArguments(_)
| AnyNodeRef::PatternKeyword(_)
| AnyNodeRef::ExceptHandlerExceptHandler(_)
@@ -5982,6 +6057,7 @@ impl<'a> AnyNodeRef<'a> {
| AnyNodeRef::ExprCall(_)
| AnyNodeRef::FStringExpressionElement(_)
| AnyNodeRef::FStringLiteralElement(_)
| AnyNodeRef::FStringInvalidElement(_)
| AnyNodeRef::FStringFormatSpec(_)
| AnyNodeRef::ExprFString(_)
| AnyNodeRef::ExprStringLiteral(_)
@@ -5998,6 +6074,7 @@ impl<'a> AnyNodeRef<'a> {
| AnyNodeRef::ExprTuple(_)
| AnyNodeRef::ExprSlice(_)
| AnyNodeRef::ExprIpyEscapeCommand(_)
| AnyNodeRef::ExprInvalid(_)
| AnyNodeRef::PatternMatchValue(_)
| AnyNodeRef::PatternMatchSingleton(_)
| AnyNodeRef::PatternMatchSequence(_)
@@ -6136,6 +6213,7 @@ impl<'a> AnyNodeRef<'a> {
AnyNodeRef::StringLiteral(node) => node.visit_preorder(visitor),
AnyNodeRef::BytesLiteral(node) => node.visit_preorder(visitor),
AnyNodeRef::ElifElseClause(node) => node.visit_preorder(visitor),
AnyNodeRef::ExprInvalid(_) | AnyNodeRef::FStringInvalidElement(_) => {}
}
}
@@ -6692,6 +6770,12 @@ impl<'a> From<&'a ast::ExprIpyEscapeCommand> for AnyNodeRef<'a> {
}
}
impl<'a> From<&'a ast::ExprInvalid> for AnyNodeRef<'a> {
fn from(node: &'a ast::ExprInvalid) -> Self {
AnyNodeRef::ExprInvalid(node)
}
}
impl<'a> From<&'a ast::ExceptHandlerExceptHandler> for AnyNodeRef<'a> {
fn from(node: &'a ast::ExceptHandlerExceptHandler) -> Self {
AnyNodeRef::ExceptHandlerExceptHandler(node)
@@ -6872,6 +6956,8 @@ impl<'a> From<&'a Expr> for AnyNodeRef<'a> {
Expr::Tuple(node) => AnyNodeRef::ExprTuple(node),
Expr::Slice(node) => AnyNodeRef::ExprSlice(node),
Expr::IpyEscapeCommand(node) => AnyNodeRef::ExprIpyEscapeCommand(node),
#[allow(deprecated)]
Expr::Invalid(node) => AnyNodeRef::ExprInvalid(node),
}
}
}
@@ -6890,6 +6976,8 @@ impl<'a> From<&'a FStringElement> for AnyNodeRef<'a> {
match element {
FStringElement::Expression(node) => AnyNodeRef::FStringExpressionElement(node),
FStringElement::Literal(node) => AnyNodeRef::FStringLiteralElement(node),
#[allow(deprecated)]
FStringElement::Invalid(node) => AnyNodeRef::FStringInvalidElement(node),
}
}
}
@@ -7024,6 +7112,7 @@ impl Ranged for AnyNodeRef<'_> {
AnyNodeRef::ExprCall(node) => node.range(),
AnyNodeRef::FStringExpressionElement(node) => node.range(),
AnyNodeRef::FStringLiteralElement(node) => node.range(),
AnyNodeRef::FStringInvalidElement(node) => node.range(),
AnyNodeRef::FStringFormatSpec(node) => node.range(),
AnyNodeRef::ExprFString(node) => node.range(),
AnyNodeRef::ExprStringLiteral(node) => node.range(),
@@ -7040,6 +7129,7 @@ impl Ranged for AnyNodeRef<'_> {
AnyNodeRef::ExprTuple(node) => node.range(),
AnyNodeRef::ExprSlice(node) => node.range(),
AnyNodeRef::ExprIpyEscapeCommand(node) => node.range(),
AnyNodeRef::ExprInvalid(node) => node.range(),
AnyNodeRef::ExceptHandlerExceptHandler(node) => node.range(),
AnyNodeRef::PatternMatchValue(node) => node.range(),
AnyNodeRef::PatternMatchSingleton(node) => node.range(),
@@ -7123,6 +7213,7 @@ pub enum NodeKind {
ExprCall,
FStringExpressionElement,
FStringLiteralElement,
FStringInvalidElement,
FStringFormatSpec,
ExprFString,
ExprStringLiteral,
@@ -7139,6 +7230,7 @@ pub enum NodeKind {
ExprTuple,
ExprSlice,
ExprIpyEscapeCommand,
ExprInvalid,
ExceptHandlerExceptHandler,
PatternMatchValue,
PatternMatchSingleton,

View File

@@ -1,6 +1,7 @@
#![allow(clippy::derive_partial_eq_without_eq)]
use std::cell::OnceCell;
use std::fmt;
use std::fmt::Debug;
use std::ops::Deref;
@@ -622,6 +623,10 @@ pub enum Expr {
// Jupyter notebook specific
#[is(name = "ipy_escape_command_expr")]
IpyEscapeCommand(ExprIpyEscapeCommand),
// TODO(dhruvmanila): Remove this variant
#[is(name = "invalid_expr")]
Invalid(ExprInvalid),
}
impl Expr {
@@ -655,6 +660,25 @@ impl Expr {
}
}
#[derive(Clone, Debug, PartialEq)]
pub struct ExprInvalid {
pub value: String,
pub range: TextRange,
}
impl From<ExprInvalid> for Expr {
fn from(payload: ExprInvalid) -> Self {
#[allow(deprecated)]
Expr::Invalid(payload)
}
}
impl Ranged for ExprInvalid {
fn range(&self) -> TextRange {
self.range
}
}
/// An AST node used to represent a IPython escape command at the expression level.
///
/// For example,
@@ -966,6 +990,18 @@ impl Deref for FStringLiteralElement {
}
}
#[derive(Clone, Debug, PartialEq)]
pub struct FStringInvalidElement {
pub value: String,
pub range: TextRange,
}
impl Ranged for FStringInvalidElement {
fn range(&self) -> TextRange {
self.range
}
}
/// Transforms a value prior to formatting it.
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, is_macro::Is)]
#[repr(i8)]
@@ -1194,6 +1230,9 @@ impl From<FString> for Expr {
pub enum FStringElement {
Literal(FStringLiteralElement),
Expression(FStringExpressionElement),
// TODO(dhruvmanila): Remove this variant
Invalid(FStringInvalidElement),
}
impl Ranged for FStringElement {
@@ -1201,6 +1240,8 @@ impl Ranged for FStringElement {
match self {
FStringElement::Literal(node) => node.range(),
FStringElement::Expression(node) => node.range(),
#[allow(deprecated)]
FStringElement::Invalid(node) => node.range(),
}
}
}
@@ -3253,10 +3294,17 @@ impl IpyEscapeKind {
}
}
/// An `Identifier` with an empty `id` is invalid.
///
/// For example, in the following code `id` will be empty.
/// ```python
/// def 1():
/// ...
/// ```
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct Identifier {
id: String,
range: TextRange,
pub id: String,
pub range: TextRange,
}
impl Identifier {
@@ -3267,6 +3315,10 @@ impl Identifier {
range,
}
}
pub fn is_valid(&self) -> bool {
!self.id.is_empty()
}
}
impl Identifier {
@@ -3694,6 +3746,8 @@ impl Ranged for crate::Expr {
Self::Tuple(node) => node.range(),
Self::Slice(node) => node.range(),
Self::IpyEscapeCommand(node) => node.range(),
#[allow(deprecated)]
Self::Invalid(node) => node.range(),
}
}
}

View File

@@ -113,6 +113,10 @@ impl Transformer for Relocator {
Expr::IpyEscapeCommand(nodes::ExprIpyEscapeCommand { range, .. }) => {
*range = self.range;
}
#[allow(deprecated)]
Expr::Invalid(nodes::ExprInvalid { range, .. }) => {
*range = self.range;
}
}
walk_expr(self, expr);
}

View File

@@ -564,6 +564,8 @@ pub fn walk_expr<'a, V: Visitor<'a> + ?Sized>(visitor: &mut V, expr: &'a Expr) {
}
}
Expr::IpyEscapeCommand(_) => {}
#[allow(deprecated)]
Expr::Invalid(_) => {}
}
}

View File

@@ -304,6 +304,8 @@ where
Expr::Tuple(expr) => expr.visit_preorder(visitor),
Expr::Slice(expr) => expr.visit_preorder(visitor),
Expr::IpyEscapeCommand(expr) => expr.visit_preorder(visitor),
#[allow(deprecated)]
Expr::Invalid(_) => {}
}
}
@@ -526,6 +528,8 @@ pub fn walk_f_string_element<'a, V: PreorderVisitor<'a> + ?Sized>(
match f_string_element {
FStringElement::Expression(element) => element.visit_preorder(visitor),
FStringElement::Literal(element) => element.visit_preorder(visitor),
#[allow(deprecated)]
FStringElement::Invalid(_) => {}
}
}
visitor.leave_node(node);

View File

@@ -552,7 +552,8 @@ pub fn walk_expr<V: Transformer + ?Sized>(visitor: &V, expr: &mut Expr) {
visitor.visit_expr(expr);
}
}
Expr::IpyEscapeCommand(_) => {}
#[allow(deprecated)]
Expr::IpyEscapeCommand(_) | Expr::Invalid(_) => {}
}
}

View File

@@ -1192,6 +1192,8 @@ impl<'a> Generator<'a> {
Expr::IpyEscapeCommand(ast::ExprIpyEscapeCommand { kind, value, .. }) => {
self.p(&format!("{kind}{value}"));
}
#[allow(deprecated)]
Expr::Invalid(ast::ExprInvalid { value, .. }) => self.p(value.as_str()),
}
}
@@ -1360,6 +1362,8 @@ impl<'a> Generator<'a> {
*conversion,
format_spec.as_deref(),
),
#[allow(deprecated)]
ast::FStringElement::Invalid(ast::FStringInvalidElement { value, .. }) => self.p(value),
}
}

View File

@@ -0,0 +1,25 @@
use ruff_formatter::write;
use ruff_python_ast::AnyNodeRef;
use ruff_python_ast::ExprInvalid;
use crate::expression::parentheses::{NeedsParentheses, OptionalParentheses};
use crate::prelude::*;
#[derive(Default)]
pub struct FormatExprInvalid;
impl FormatNodeRule<ExprInvalid> for FormatExprInvalid {
fn fmt_fields(&self, item: &ExprInvalid, f: &mut PyFormatter) -> FormatResult<()> {
write!(f, [source_text_slice(item.range)])
}
}
impl NeedsParentheses for ExprInvalid {
fn needs_parentheses(
&self,
_parent: AnyNodeRef,
_context: &PyFormatContext,
) -> OptionalParentheses {
OptionalParentheses::BestFit
}
}

View File

@@ -36,6 +36,7 @@ pub(crate) mod expr_ellipsis_literal;
pub(crate) mod expr_f_string;
pub(crate) mod expr_generator;
pub(crate) mod expr_if;
pub(crate) mod expr_invalid;
pub(crate) mod expr_ipy_escape_command;
pub(crate) mod expr_lambda;
pub(crate) mod expr_list;
@@ -108,6 +109,8 @@ impl FormatRule<Expr, PyFormatContext<'_>> for FormatExpr {
Expr::Tuple(expr) => expr.format().fmt(f),
Expr::Slice(expr) => expr.format().fmt(f),
Expr::IpyEscapeCommand(expr) => expr.format().fmt(f),
#[allow(deprecated)]
Expr::Invalid(expr) => expr.format().fmt(f),
});
let parenthesize = match parentheses {
Parentheses::Preserve => is_expression_parenthesized(
@@ -296,6 +299,8 @@ fn format_with_parentheses_comments(
Expr::Tuple(expr) => FormatNodeRule::fmt_fields(expr.format().rule(), expr, f),
Expr::Slice(expr) => FormatNodeRule::fmt_fields(expr.format().rule(), expr, f),
Expr::IpyEscapeCommand(expr) => FormatNodeRule::fmt_fields(expr.format().rule(), expr, f),
#[allow(deprecated)]
Expr::Invalid(expr) => FormatNodeRule::fmt_fields(expr.format().rule(), expr, f),
});
leading_comments(leading_outer).fmt(f)?;
@@ -492,6 +497,8 @@ impl NeedsParentheses for Expr {
Expr::Tuple(expr) => expr.needs_parentheses(parent, context),
Expr::Slice(expr) => expr.needs_parentheses(parent, context),
Expr::IpyEscapeCommand(expr) => expr.needs_parentheses(parent, context),
#[allow(deprecated)]
Expr::Invalid(expr) => expr.needs_parentheses(parent, context),
}
}
}
@@ -800,6 +807,10 @@ impl<'input> CanOmitOptionalParenthesesVisitor<'input> {
| Expr::IpyEscapeCommand(_) => {
return;
}
#[allow(deprecated)]
Expr::Invalid(_) => {
return;
}
};
walk_expr(self, expr);
@@ -1141,6 +1152,8 @@ pub(crate) fn is_expression_huggable(expr: &Expr, context: &PyFormatContext) ->
| Expr::BytesLiteral(_)
| Expr::FString(_)
| Expr::EllipsisLiteral(_) => false,
#[allow(deprecated)]
Expr::Invalid(_) => false,
}
}
@@ -1201,6 +1214,8 @@ pub(crate) fn is_splittable_expression(expr: &Expr, context: &PyFormatContext) -
| Expr::EllipsisLiteral(_)
| Expr::Slice(_)
| Expr::IpyEscapeCommand(_) => false,
#[allow(deprecated)]
Expr::Invalid(_) => false,
// Expressions that insert split points when parenthesized.
Expr::Compare(_)

View File

@@ -2066,6 +2066,42 @@ impl<'ast> IntoFormat<PyFormatContext<'ast>> for ast::ExprIpyEscapeCommand {
}
}
impl FormatRule<ast::ExprInvalid, PyFormatContext<'_>>
for crate::expression::expr_invalid::FormatExprInvalid
{
#[inline]
fn fmt(&self, node: &ast::ExprInvalid, f: &mut PyFormatter) -> FormatResult<()> {
FormatNodeRule::<ast::ExprInvalid>::fmt(self, node, f)
}
}
impl<'ast> AsFormat<PyFormatContext<'ast>> for ast::ExprInvalid {
type Format<'a> = FormatRefWithRule<
'a,
ast::ExprInvalid,
crate::expression::expr_invalid::FormatExprInvalid,
PyFormatContext<'ast>,
>;
fn format(&self) -> Self::Format<'_> {
FormatRefWithRule::new(
self,
crate::expression::expr_invalid::FormatExprInvalid::default(),
)
}
}
impl<'ast> IntoFormat<PyFormatContext<'ast>> for ast::ExprInvalid {
type Format = FormatOwnedWithRule<
ast::ExprInvalid,
crate::expression::expr_invalid::FormatExprInvalid,
PyFormatContext<'ast>,
>;
fn into_format(self) -> Self::Format {
FormatOwnedWithRule::new(
self,
crate::expression::expr_invalid::FormatExprInvalid::default(),
)
}
}
impl FormatRule<ast::ExceptHandlerExceptHandler, PyFormatContext<'_>>
for crate::other::except_handler_except_handler::FormatExceptHandlerExceptHandler
{

View File

@@ -135,7 +135,7 @@ pub fn format_module_source(
let source_type = options.source_type();
let (tokens, comment_ranges) =
tokens_and_ranges(source, source_type).map_err(|err| ParseError {
offset: err.location(),
location: err.location(),
error: ParseErrorType::Lexical(err.into_error()),
})?;
let module = parse_tokens(tokens, source, source_type.as_mode())?;

View File

@@ -37,6 +37,7 @@ impl Format<PyFormatContext<'_>> for FormatFStringElement<'_> {
FStringElement::Expression(expression) => {
FormatFStringExpressionElement::new(expression, self.context).fmt(f)
}
FStringElement::Invalid(_) => unreachable!(),
}
}
}

View File

@@ -73,7 +73,7 @@ pub fn format_range(
let (tokens, comment_ranges) =
tokens_and_ranges(source, options.source_type()).map_err(|err| ParseError {
offset: err.location(),
location: err.location(),
error: ParseErrorType::Lexical(err.into_error()),
})?;
@@ -646,6 +646,7 @@ impl Format<PyFormatContext<'_>> for FormatEnclosingNode<'_> {
AnyNodeRef::MatchCase(node) => node.format().fmt(f),
AnyNodeRef::Decorator(node) => node.format().fmt(f),
AnyNodeRef::ElifElseClause(node) => node.format().fmt(f),
AnyNodeRef::ExprInvalid(node) => node.format().fmt(f),
AnyNodeRef::ExprBoolOp(_)
| AnyNodeRef::ExprNamed(_)
@@ -706,7 +707,8 @@ impl Format<PyFormatContext<'_>> for FormatEnclosingNode<'_> {
| AnyNodeRef::TypeParamTypeVar(_)
| AnyNodeRef::TypeParamTypeVarTuple(_)
| AnyNodeRef::TypeParamParamSpec(_)
| AnyNodeRef::BytesLiteral(_) => {
| AnyNodeRef::BytesLiteral(_)
| AnyNodeRef::FStringInvalidElement(_) => {
panic!("Range formatting only supports formatting logical lines")
}
}

View File

@@ -11,7 +11,6 @@ input_file: crates/ruff_python_formatter/resources/test/fixtures/ruff/empty_mult
## Output
```python
```

View File

@@ -9,7 +9,6 @@ input_file: crates/ruff_python_formatter/resources/test/fixtures/ruff/empty_trai
## Output
```python
```

View File

@@ -19,6 +19,7 @@ ruff_text_size = { path = "../ruff_text_size" }
anyhow = { workspace = true }
bitflags = { workspace = true }
drop_bomb = { workspace = true }
bstr = { workspace = true }
is-macro = { workspace = true }
itertools = { workspace = true }
@@ -30,7 +31,10 @@ unicode-ident = { workspace = true }
unicode_names2 = { workspace = true }
[dev-dependencies]
insta = { workspace = true }
ruff_source_file = { path = "../ruff_source_file" }
annotate-snippets = { workspace = true }
insta = { workspace = true, features = ["glob"] }
[build-dependencies]
anyhow = { workspace = true }

View File

@@ -5,7 +5,7 @@ use std::path::{Path, PathBuf};
use tiny_keccak::{Hasher, Sha3};
fn main() {
const SOURCE: &str = "src/python.lalrpop";
const SOURCE: &str = "src/lalrpop/python.lalrpop";
println!("cargo:rerun-if-changed={SOURCE}");
let target;
@@ -14,12 +14,12 @@ fn main() {
#[cfg(feature = "lalrpop")]
{
let out_dir = PathBuf::from(std::env::var_os("OUT_DIR").unwrap());
target = out_dir.join("src/python.rs");
target = out_dir.join("src/lalrpop/python.rs");
}
#[cfg(not(feature = "lalrpop"))]
{
target = PathBuf::from("src/python.rs");
error = "python.lalrpop and src/python.rs doesn't match. This is a ruff_python_parser bug. Please report it unless you are editing ruff_python_parser. Run `lalrpop src/python.lalrpop` to build ruff_python_parser again.";
target = PathBuf::from("src/lalrpop/python.rs");
error = "python.lalrpop and src/lalrpop/python.rs doesn't match. This is a ruff_python_parser bug. Please report it unless you are editing ruff_python_parser. Run `lalrpop src/lalrpop/python.lalrpop` to build ruff_python_parser again.";
}
let Some(message) = requires_lalrpop(SOURCE, &target) else {

View File

@@ -0,0 +1,6 @@
# Check http://editorconfig.org for more information
# This is the main config file for this project:
root = true
[*.py]
insert_final_newline = false

View File

@@ -0,0 +1,6 @@
f(b=20, c)
f(**b, *c)
# Duplicate keyword argument
f(a=20, a=30)

View File

@@ -0,0 +1,7 @@
a = (🐶
# comment 🐶
)
a = (🐶 +
# comment
🐶)

View File

@@ -0,0 +1,2 @@
# TODO(micha): The offset of the generated error message is off by one.
lambda a, b=20, c: 1

View File

@@ -0,0 +1,9 @@
lambda a, a: 1
lambda a, *, a: 1
lambda a, a=20: 1
lambda a, *a: 1
lambda a, *, **a: 1

View File

@@ -0,0 +1,7 @@
a = pass = c
a + b
a = b = pass = c
a + b

View File

@@ -0,0 +1,7 @@
a = = c
a + b

View File

@@ -0,0 +1,2 @@
# TODO(micha): The range of the generated error message is off by one.
def f(a, b=20, c): pass

View File

@@ -0,0 +1,12 @@
def f(a, a): pass
def f2(a, *, a): pass
def f3(a, a=20): pass
def f4(a, *a): pass
def f5(a, *, **a): pass
# TODO(micha): This is inconsistent. All other examples only highlight the argument name.
def f6(a, a: str): pass

View File

@@ -0,0 +1,24 @@
# FIXME: The type param related error message and the parser recovery are looking pretty good **except**
# that the lexer never recovers from the unclosed `[`, resulting in it lexing `NonLogicalNewline` tokens instead of `Newline` tokens.
# That's because the parser has no way of feeding the error recovery back to the lexer,
# so they don't agree on the state of the world, which can lead to all kinds of errors further down in the file.
# This is not just a problem with parentheses but also with the transformation made by the
# `SoftKeywordTransformer` because the `Parser` and `Transformer` may not agree if they're
# currently in a position where the `type` keyword is allowed or not.
# That roughly means that any kind of recovery can lead to unrelated syntax errors
# on following lines.
def unclosed[A, *B(test: name):
pass
a + b
def keyword[A, await](): ...
def not_a_type_param[A, |, B](): ...
def multiple_commas[A,,B](): ...
def multiple_trailing_commas[A,,](): ...
def multiple_commas_and_recovery[A,,100](): ...

View File

@@ -0,0 +1,7 @@
if True:
pass
elif False:
pass
elf:
pass

View File

@@ -0,0 +1,3 @@
# FIXME(micha): This creates two syntax errors instead of just one (and overlapping ones)
if True)):
pass

View File

@@ -0,0 +1,8 @@
# Improving the recovery would require changing the lexer to emit an extra dedent token after `a + b`.
if True:
pass
a + b
pass
a = 10

View File

@@ -0,0 +1,6 @@
if True:
a + b
if False: # This if statement has neither an indent nor a newline.

View File

@@ -0,0 +1,4 @@
if True
pass
a = 10

View File

@@ -0,0 +1,11 @@
from abc import a, b,
a + b
from abc import ,,
from abc import
from abc import (a, b, c
a + b

View File

@@ -0,0 +1,42 @@
# Regression test: https://github.com/astral-sh/ruff/issues/6895
# First we test, broadly, that various kinds of assignments are now
# rejected by the parser. e.g., `5 = 3`, `5 += 3`, `(5): int = 3`.
5 = 3
5 += 3
(5): int = 3
# Now we exhaustively test all possible cases where assignment can fail.
x or y = 42
(x := 5) = 42
x + y = 42
-x = 42
(lambda _: 1) = 42
a if b else c = 42
{"a": 5} = 42
{a} = 42
[x for x in xs] = 42
{x for x in xs} = 42
{x: x * 2 for x in xs} = 42
(x for x in xs) = 42
await x = 42
(yield x) = 42
(yield from xs) = 42
a < b < c = 42
foo() = 42
f"{quux}" = 42
f"{foo} and {bar}" = 42
"foo" = 42
b"foo" = 42
123 = 42
True = 42
None = 42
... = 42
*foo() = 42
[x, foo(), y] = [42, 42, 42]
[[a, b], [[42]], d] = [[1, 2], [[3]], 4]
(x, foo(), y) = (42, 42, 42)

View File

@@ -0,0 +1,34 @@
# This is similar to `./invalid_assignment_targets.py`, but for augmented
# assignment targets.
x or y += 42
(x := 5) += 42
x + y += 42
-x += 42
(lambda _: 1) += 42
a if b else c += 42
{"a": 5} += 42
{a} += 42
[x for x in xs] += 42
{x for x in xs} += 42
{x: x * 2 for x in xs} += 42
(x for x in xs) += 42
await x += 42
(yield x) += 42
(yield from xs) += 42
a < b < c += 42
foo() += 42
f"{quux}" += 42
f"{foo} and {bar}" += 42
"foo" += 42
b"foo" += 42
123 += 42
True += 42
None += 42
... += 42
*foo() += 42
[x, foo(), y] += [42, 42, 42]
[[a, b], [[42]], d] += [[1, 2], [[3]], 4]
(x, foo(), y) += (42, 42, 42)

View File

@@ -0,0 +1,3 @@
# This test previously passed before the assignment operator checking
# above, but we include it here for good measure.
(5 := 3)

View File

@@ -0,0 +1 @@
x = {y for y in (1, 2, 3)}

View File

@@ -0,0 +1,11 @@
lambda: 1
lambda a, b, c: 1
lambda a, b=20, c=30: 1
lambda *, a, b, c: 1
lambda *, a, b=20, c=30: 1
lambda a, b, c, *, d, e: 0

View File

@@ -0,0 +1 @@
x = [y for y in (1, 2, 3)]

View File

@@ -0,0 +1,4 @@
if x := 1:
pass
(x := 5)

View File

@@ -0,0 +1 @@
x: int = 1

View File

@@ -0,0 +1,40 @@
x = (1, 2, 3)
(x, y) = (1, 2, 3)
[x, y] = (1, 2, 3)
x.y = (1, 2, 3)
x[y] = (1, 2, 3)
(x, *y) = (1, 2, 3)
# This last group of tests checks that assignments we expect to be parsed
# (including some interesting ones) continue to be parsed successfully.
*foo = 42
[x, y, z] = [1, 2, 3]
(x, y, z) = (1, 2, 3)
x[0] = 42
# This is actually a type error, not a syntax error. So check that it
# doesn't fail parsing.
5[0] = 42
x[1:2] = [42]
# This is actually a type error, not a syntax error. So check that it
# doesn't fail parsing.
5[1:2] = [42]
foo.bar = 42
# This is actually an attribute error, not a syntax error. So check that
# it doesn't fail parsing.
"foo".y = 42
foo = 42

View File

@@ -0,0 +1,18 @@
x += 1
x.y += (1, 2, 3)
x[y] += (1, 2, 3)
# All possible augmented assignment tokens
x += 1
x -= 1
x *= 1
x /= 1
x //= 1
x %= 1
x **= 1
x &= 1
x |= 1
x ^= 1
x <<= 1
x >>= 1
x @= 1

View File

@@ -0,0 +1,3 @@
del x
del x.y
del x[y]

View File

@@ -0,0 +1,2 @@
for x in (1, 2, 3):
pass

View File

@@ -0,0 +1,38 @@
def no_parameters():
pass
def positional_parameters(a, b, c):
pass
def positional_parameters_with_default_values(a, b=20, c=30):
pass
def keyword_only_parameters(*, a, b, c):
pass
def keyword_only_parameters_with_defaults(*, a, b=20, c=30):
pass
def positional_and_keyword_parameters(a, b, c, *, d, e, f):
pass
def positional_and_keyword_parameters_with_defaults(a, b, c, *, d, e=20, f=30):
pass
def positional_and_keyword_parameters_with_defaults_and_varargs(
a, b, c, *args, d, e=20, f=30
):
pass
def positional_and_keyword_parameters_with_defaults_and_varargs_and_kwargs(
a, b, c, *args, d, e=20, f=30, **kwargs
):
pass

View File

@@ -0,0 +1,2 @@
with 1 as x:
pass

View File

@@ -0,0 +1,221 @@
use std::fmt;
use ruff_text_size::TextRange;
use crate::{
lexer::{LexicalError, LexicalErrorType},
Tok, TokenKind,
};
/// Represents errors that occur during parsing and are
/// returned by the `parse_*` functions.
#[derive(Debug, PartialEq)]
pub struct ParseError {
pub error: ParseErrorType,
pub location: TextRange,
}
impl std::ops::Deref for ParseError {
type Target = ParseErrorType;
fn deref(&self) -> &Self::Target {
&self.error
}
}
impl std::error::Error for ParseError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
Some(&self.error)
}
}
impl fmt::Display for ParseError {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(f, "{} at byte range {:?}", &self.error, self.location)
}
}
impl From<LexicalError> for ParseError {
fn from(error: LexicalError) -> Self {
ParseError {
location: error.location(),
error: ParseErrorType::Lexical(error.into_error()),
}
}
}
impl ParseError {
pub fn error(self) -> ParseErrorType {
self.error
}
}
/// Represents the different types of errors that can occur during parsing of an f-string.
#[derive(Debug, Clone, PartialEq)]
pub enum FStringErrorType {
/// Expected a right brace after an opened left brace.
UnclosedLbrace,
/// An invalid conversion flag was encountered.
InvalidConversionFlag,
/// A single right brace was encountered.
SingleRbrace,
/// Unterminated string.
UnterminatedString,
/// Unterminated triple-quoted string.
UnterminatedTripleQuotedString,
/// A lambda expression without parentheses was encountered.
LambdaWithoutParentheses,
}
impl std::fmt::Display for FStringErrorType {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
use FStringErrorType::{
InvalidConversionFlag, LambdaWithoutParentheses, SingleRbrace, UnclosedLbrace,
UnterminatedString, UnterminatedTripleQuotedString,
};
match self {
UnclosedLbrace => write!(f, "expecting '}}'"),
InvalidConversionFlag => write!(f, "invalid conversion character"),
SingleRbrace => write!(f, "single '}}' is not allowed"),
UnterminatedString => write!(f, "unterminated string"),
UnterminatedTripleQuotedString => write!(f, "unterminated triple-quoted string"),
LambdaWithoutParentheses => {
write!(f, "lambda expressions are not allowed without parentheses")
}
}
}
}
/// Represents the different types of errors that can occur during parsing.
#[derive(Debug, PartialEq)]
pub enum ParseErrorType {
/// An unexpected error occurred.
OtherError(String),
/// An empty slice was found during parsing, e.g `l[]`.
EmptySlice,
/// An invalid expression was found in the assignment `target`.
InvalidAssignmentTarget,
/// An invalid expression was found in the named assignment `target`.
InvalidNamedAssignmentTarget,
/// An invalid expression was found in the augmented assignment `target`.
InvalidAugmentedAssignmentTarget,
/// An invalid expression was found in the delete `target`.
InvalidDeleteTarget,
/// Multiple simple statements were found in the same line without a `;` separating them.
SimpleStmtsInSameLine,
/// An unexpected indentation was found during parsing.
UnexpectedIndentation,
/// The statement being parsed cannot be `async`.
StmtIsNotAsync(TokenKind),
/// A parameter was found after a vararg
ParamFollowsVarKeywordParam,
/// A positional argument follows a keyword argument.
PositionalArgumentError,
/// An iterable argument unpacking `*args` follows keyword argument unpacking `**kwargs`.
UnpackedArgumentError,
/// A non-default argument follows a default argument.
DefaultArgumentError,
/// A simple statement and a compound statement was found in the same line.
SimpleStmtAndCompoundStmtInSameLine,
/// An invalid `match` case pattern was found.
InvalidMatchPatternLiteral { pattern: TokenKind },
/// The parser expected a specific token that was not found.
ExpectedToken {
expected: TokenKind,
found: TokenKind,
},
/// A duplicate argument was found in a function definition.
DuplicateArgumentError(String),
/// A keyword argument was repeated.
DuplicateKeywordArgumentError(String),
/// An f-string error containing the [`FStringErrorType`].
FStringError(FStringErrorType),
/// Parser encountered an error during lexing.
Lexical(LexicalErrorType),
// RustPython specific.
/// Parser encountered an extra token
ExtraToken(Tok),
/// Parser encountered an invalid token
InvalidToken,
/// Parser encountered an unexpected token
UnrecognizedToken(Tok, Option<String>),
/// Parser encountered an unexpected end of input
Eof,
}
impl std::error::Error for ParseErrorType {}
impl std::fmt::Display for ParseErrorType {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
match self {
ParseErrorType::OtherError(msg) => write!(f, "{msg}"),
ParseErrorType::ExpectedToken { found, expected } => {
write!(f, "expected {expected:?}, found {found:?}")
}
ParseErrorType::Lexical(ref lex_error) => write!(f, "{lex_error}"),
ParseErrorType::SimpleStmtsInSameLine => {
write!(f, "use `;` to separate simple statements")
}
ParseErrorType::SimpleStmtAndCompoundStmtInSameLine => write!(
f,
"compound statements not allowed in the same line as simple statements"
),
ParseErrorType::StmtIsNotAsync(kind) => {
write!(f, "`{kind:?}` statement cannot be async")
}
ParseErrorType::UnpackedArgumentError => {
write!(
f,
"iterable argument unpacking follows keyword argument unpacking"
)
}
ParseErrorType::PositionalArgumentError => {
write!(f, "positional argument follows keyword argument unpacking")
}
ParseErrorType::EmptySlice => write!(f, "slice cannot be empty"),
ParseErrorType::ParamFollowsVarKeywordParam => {
write!(f, "parameters cannot follow var-keyword parameter")
}
ParseErrorType::DefaultArgumentError => {
write!(f, "non-default argument follows default argument")
}
ParseErrorType::InvalidMatchPatternLiteral { pattern } => {
write!(f, "invalid pattern `{pattern:?}`")
}
ParseErrorType::UnexpectedIndentation => write!(f, "unexpected indentation"),
ParseErrorType::InvalidAssignmentTarget => write!(f, "invalid assignment target"),
ParseErrorType::InvalidNamedAssignmentTarget => {
write!(f, "invalid named assignment target")
}
ParseErrorType::InvalidAugmentedAssignmentTarget => {
write!(f, "invalid augmented assignment target")
}
ParseErrorType::InvalidDeleteTarget => {
write!(f, "invalid delete target")
}
ParseErrorType::DuplicateArgumentError(arg_name) => {
write!(f, "duplicate argument '{arg_name}' in function definition")
}
ParseErrorType::DuplicateKeywordArgumentError(arg_name) => {
write!(f, "keyword argument repeated: {arg_name}")
}
ParseErrorType::FStringError(ref fstring_error) => {
write!(f, "f-string: {fstring_error}")
}
// RustPython specific.
ParseErrorType::Eof => write!(f, "Got unexpected EOF"),
ParseErrorType::ExtraToken(ref tok) => write!(f, "Got extraneous token: {tok:?}"),
ParseErrorType::InvalidToken => write!(f, "Got invalid token"),
ParseErrorType::UnrecognizedToken(ref tok, ref expected) => {
if *tok == Tok::Indent {
write!(f, "Unexpected indent")
} else if expected.as_deref() == Some("Indent") {
write!(f, "Expected an indented block")
} else {
write!(f, "Unexpected token {tok}")
}
}
}
}
}
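
For reference, a minimal usage sketch of the `Display` impl above. It assumes `ParseErrorType` is re-exported from the crate root, as the `use crate::{.., ParseErrorType, ..}` imports later in this diff suggest; the sketch itself is not part of the change.

```rs
use ruff_python_parser::ParseErrorType;

fn main() {
    // `OtherError` carries a free-form message, so no token kinds are needed here.
    let error = ParseErrorType::OtherError("unexpected token".to_string());
    // For this variant, the `Display` impl renders the message as-is.
    assert_eq!(error.to_string(), "unexpected token");
}
```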

View File

@@ -1,236 +0,0 @@
use std::hash::BuildHasherDefault;
// Contains functions that perform validation and parsing of arguments and parameters.
// Checks apply both to functions and to lambdas.
use crate::lexer::{LexicalError, LexicalErrorType};
use ruff_python_ast::{self as ast};
use ruff_text_size::{Ranged, TextRange, TextSize};
use rustc_hash::FxHashSet;
pub(crate) struct ArgumentList {
pub(crate) args: Vec<ast::Expr>,
pub(crate) keywords: Vec<ast::Keyword>,
}
// Perform validation of function/lambda arguments in a function definition.
pub(crate) fn validate_arguments(arguments: &ast::Parameters) -> Result<(), LexicalError> {
let mut all_arg_names = FxHashSet::with_capacity_and_hasher(
arguments.posonlyargs.len()
+ arguments.args.len()
+ usize::from(arguments.vararg.is_some())
+ arguments.kwonlyargs.len()
+ usize::from(arguments.kwarg.is_some()),
BuildHasherDefault::default(),
);
let posonlyargs = arguments.posonlyargs.iter();
let args = arguments.args.iter();
let kwonlyargs = arguments.kwonlyargs.iter();
let vararg: Option<&ast::Parameter> = arguments.vararg.as_deref();
let kwarg: Option<&ast::Parameter> = arguments.kwarg.as_deref();
for arg in posonlyargs
.chain(args)
.chain(kwonlyargs)
.map(|arg| &arg.parameter)
.chain(vararg)
.chain(kwarg)
{
let range = arg.range;
let arg_name = arg.name.as_str();
if !all_arg_names.insert(arg_name) {
return Err(LexicalError::new(
LexicalErrorType::DuplicateArgumentError(arg_name.to_string().into_boxed_str()),
range.start(),
));
}
}
Ok(())
}
pub(crate) fn validate_pos_params(
args: &(
Vec<ast::ParameterWithDefault>,
Vec<ast::ParameterWithDefault>,
),
) -> Result<(), LexicalError> {
let (posonlyargs, args) = args;
#[allow(clippy::skip_while_next)]
let first_invalid = posonlyargs
.iter()
.chain(args.iter()) // for all args
.skip_while(|arg| arg.default.is_none()) // starting with args without default
.skip_while(|arg| arg.default.is_some()) // and then args with default
.next(); // there must not be any more args without default
if let Some(invalid) = first_invalid {
return Err(LexicalError::new(
LexicalErrorType::DefaultArgumentError,
invalid.parameter.start(),
));
}
Ok(())
}
type FunctionArgument = (
Option<(TextSize, TextSize, Option<ast::Identifier>)>,
ast::Expr,
);
// Parse arguments as supplied during a function/lambda *call*.
pub(crate) fn parse_arguments(
function_arguments: Vec<FunctionArgument>,
) -> Result<ArgumentList, LexicalError> {
// First, run through the arguments to determine the number of positional and keyword arguments.
let mut keyword_names = FxHashSet::with_capacity_and_hasher(
function_arguments.len(),
BuildHasherDefault::default(),
);
let mut double_starred = false;
let mut num_args = 0;
let mut num_keywords = 0;
for (name, value) in &function_arguments {
if let Some((start, _end, name)) = name {
// Check for duplicate keyword arguments in the call.
if let Some(keyword_name) = &name {
if !keyword_names.insert(keyword_name.to_string()) {
return Err(LexicalError::new(
LexicalErrorType::DuplicateKeywordArgumentError(
keyword_name.to_string().into_boxed_str(),
),
*start,
));
}
} else {
double_starred = true;
}
num_keywords += 1;
} else {
// Positional arguments mustn't follow keyword arguments.
if num_keywords > 0 && !is_starred(value) {
return Err(LexicalError::new(
LexicalErrorType::PositionalArgumentError,
value.start(),
));
// Allow starred arguments after keyword arguments but
// not after double-starred arguments.
} else if double_starred {
return Err(LexicalError::new(
LexicalErrorType::UnpackedArgumentError,
value.start(),
));
}
num_args += 1;
}
}
// Second, push the arguments into vectors of exact capacity. This avoids a vector resize later
// on when these vectors are boxed into slices.
let mut args = Vec::with_capacity(num_args);
let mut keywords = Vec::with_capacity(num_keywords);
for (name, value) in function_arguments {
if let Some((start, end, name)) = name {
keywords.push(ast::Keyword {
arg: name,
value,
range: TextRange::new(start, end),
});
} else {
args.push(value);
}
}
Ok(ArgumentList { args, keywords })
}
// Check if an expression is a starred expression.
const fn is_starred(exp: &ast::Expr) -> bool {
exp.is_starred_expr()
}
#[cfg(test)]
mod tests {
use super::*;
use crate::parser::parse_suite;
use crate::ParseErrorType;
macro_rules! function_and_lambda {
($($name:ident: $code:expr,)*) => {
$(
#[test]
fn $name() {
let parse_ast = crate::parser::parse_suite($code);
insta::assert_debug_snapshot!(parse_ast);
}
)*
}
}
function_and_lambda! {
test_function_no_args_with_ranges: "def f(): pass",
test_function_pos_args_with_ranges: "def f(a, b, c): pass",
}
function_and_lambda! {
test_function_no_args: "def f(): pass",
test_function_pos_args: "def f(a, b, c): pass",
test_function_pos_args_with_defaults: "def f(a, b=20, c=30): pass",
test_function_kw_only_args: "def f(*, a, b, c): pass",
test_function_kw_only_args_with_defaults: "def f(*, a, b=20, c=30): pass",
test_function_pos_and_kw_only_args: "def f(a, b, c, *, d, e, f): pass",
test_function_pos_and_kw_only_args_with_defaults: "def f(a, b, c, *, d, e=20, f=30): pass",
test_function_pos_and_kw_only_args_with_defaults_and_varargs: "def f(a, b, c, *args, d, e=20, f=30): pass",
test_function_pos_and_kw_only_args_with_defaults_and_varargs_and_kwargs: "def f(a, b, c, *args, d, e=20, f=30, **kwargs): pass",
test_lambda_no_args: "lambda: 1",
test_lambda_pos_args: "lambda a, b, c: 1",
test_lambda_pos_args_with_defaults: "lambda a, b=20, c=30: 1",
test_lambda_kw_only_args: "lambda *, a, b, c: 1",
test_lambda_kw_only_args_with_defaults: "lambda *, a, b=20, c=30: 1",
test_lambda_pos_and_kw_only_args: "lambda a, b, c, *, d, e: 0",
}
fn function_parse_error(src: &str) -> LexicalErrorType {
let parse_ast = parse_suite(src);
parse_ast
.map_err(|e| match e.error {
ParseErrorType::Lexical(e) => e,
_ => panic!("Expected LexicalError"),
})
.expect_err("Expected error")
}
macro_rules! function_and_lambda_error {
($($name:ident: $code:expr, $error:expr,)*) => {
$(
#[test]
fn $name() {
let error = function_parse_error($code);
assert_eq!(error, $error);
}
)*
}
}
function_and_lambda_error! {
// Check definitions
test_duplicates_f1: "def f(a, a): pass", LexicalErrorType::DuplicateArgumentError("a".to_string().into_boxed_str()),
test_duplicates_f2: "def f(a, *, a): pass", LexicalErrorType::DuplicateArgumentError("a".to_string().into_boxed_str()),
test_duplicates_f3: "def f(a, a=20): pass", LexicalErrorType::DuplicateArgumentError("a".to_string().into_boxed_str()),
test_duplicates_f4: "def f(a, *a): pass", LexicalErrorType::DuplicateArgumentError("a".to_string().into_boxed_str()),
test_duplicates_f5: "def f(a, *, **a): pass", LexicalErrorType::DuplicateArgumentError("a".to_string().into_boxed_str()),
test_duplicates_l1: "lambda a, a: 1", LexicalErrorType::DuplicateArgumentError("a".to_string().into_boxed_str()),
test_duplicates_l2: "lambda a, *, a: 1", LexicalErrorType::DuplicateArgumentError("a".to_string().into_boxed_str()),
test_duplicates_l3: "lambda a, a=20: 1", LexicalErrorType::DuplicateArgumentError("a".to_string().into_boxed_str()),
test_duplicates_l4: "lambda a, *a: 1", LexicalErrorType::DuplicateArgumentError("a".to_string().into_boxed_str()),
test_duplicates_l5: "lambda a, *, **a: 1", LexicalErrorType::DuplicateArgumentError("a".to_string().into_boxed_str()),
test_default_arg_error_f: "def f(a, b=20, c): pass", LexicalErrorType::DefaultArgumentError,
test_default_arg_error_l: "lambda a, b=20, c: 1", LexicalErrorType::DefaultArgumentError,
// Check some calls.
test_positional_arg_error_f: "f(b=20, c)", LexicalErrorType::PositionalArgumentError,
test_unpacked_arg_error_f: "f(**b, *c)", LexicalErrorType::UnpackedArgumentError,
test_duplicate_kw_f1: "f(a=20, a=30)", LexicalErrorType::DuplicateKeywordArgumentError("a".to_string().into_boxed_str()),
}
}

View File

@@ -5,7 +5,9 @@ These routines are named in a way that supports qualified use. For example,
`invalid::assignment_targets`.
*/
use {ruff_python_ast::Expr, ruff_text_size::TextSize};
use ruff_python_ast::Expr;
use ruff_text_size::TextRange;
use crate::lexer::{LexicalError, LexicalErrorType};
@@ -37,42 +39,44 @@ pub(crate) fn assignment_target(target: &Expr) -> Result<(), LexicalError> {
#[allow(clippy::enum_glob_use)]
use self::Expr::*;
let err = |location: TextSize| -> LexicalError {
let err = |location: TextRange| -> LexicalError {
let error = LexicalErrorType::AssignmentError;
LexicalError::new(error, location)
};
match *target {
BoolOp(ref e) => Err(err(e.range.start())),
Named(ref e) => Err(err(e.range.start())),
BinOp(ref e) => Err(err(e.range.start())),
UnaryOp(ref e) => Err(err(e.range.start())),
Lambda(ref e) => Err(err(e.range.start())),
If(ref e) => Err(err(e.range.start())),
Dict(ref e) => Err(err(e.range.start())),
Set(ref e) => Err(err(e.range.start())),
ListComp(ref e) => Err(err(e.range.start())),
SetComp(ref e) => Err(err(e.range.start())),
DictComp(ref e) => Err(err(e.range.start())),
Generator(ref e) => Err(err(e.range.start())),
Await(ref e) => Err(err(e.range.start())),
Yield(ref e) => Err(err(e.range.start())),
YieldFrom(ref e) => Err(err(e.range.start())),
Compare(ref e) => Err(err(e.range.start())),
Call(ref e) => Err(err(e.range.start())),
BoolOp(ref e) => Err(err(e.range)),
Named(ref e) => Err(err(e.range)),
BinOp(ref e) => Err(err(e.range)),
UnaryOp(ref e) => Err(err(e.range)),
Lambda(ref e) => Err(err(e.range)),
If(ref e) => Err(err(e.range)),
Dict(ref e) => Err(err(e.range)),
Set(ref e) => Err(err(e.range)),
ListComp(ref e) => Err(err(e.range)),
SetComp(ref e) => Err(err(e.range)),
DictComp(ref e) => Err(err(e.range)),
Generator(ref e) => Err(err(e.range)),
Await(ref e) => Err(err(e.range)),
Yield(ref e) => Err(err(e.range)),
YieldFrom(ref e) => Err(err(e.range)),
Compare(ref e) => Err(err(e.range)),
Call(ref e) => Err(err(e.range)),
// FString is recursive, but all its forms are invalid as an
// assignment target, so we can reject it without exploring it.
FString(ref e) => Err(err(e.range.start())),
StringLiteral(ref e) => Err(err(e.range.start())),
BytesLiteral(ref e) => Err(err(e.range.start())),
NumberLiteral(ref e) => Err(err(e.range.start())),
BooleanLiteral(ref e) => Err(err(e.range.start())),
NoneLiteral(ref e) => Err(err(e.range.start())),
EllipsisLiteral(ref e) => Err(err(e.range.start())),
FString(ref e) => Err(err(e.range)),
StringLiteral(ref e) => Err(err(e.range)),
BytesLiteral(ref e) => Err(err(e.range)),
NumberLiteral(ref e) => Err(err(e.range)),
BooleanLiteral(ref e) => Err(err(e.range)),
NoneLiteral(ref e) => Err(err(e.range)),
EllipsisLiteral(ref e) => Err(err(e.range)),
#[allow(deprecated)]
Invalid(ref e) => Err(err(e.range)),
// This isn't in the Python grammar but is Jupyter notebook specific.
// It seems like this should be an error. It does also seem like the
// parser prevents this from ever appearing as an assignment target
// anyway. ---AG
IpyEscapeCommand(ref e) => Err(err(e.range.start())),
IpyEscapeCommand(ref e) => Err(err(e.range)),
// The only nested expressions allowed as an assignment target
// are star exprs, lists and tuples.
Starred(ref e) => assignment_target(&e.value),
@@ -89,611 +93,3 @@ pub(crate) fn assignment_target(target: &Expr) -> Result<(), LexicalError> {
Name(_) => Ok(()),
}
}
#[cfg(test)]
mod tests {
use crate::parse_suite;
// First we test, broadly, that various kinds of assignments are now
// rejected by the parser. e.g., `5 = 3`, `5 += 3`, `(5): int = 3`.
// Regression test: https://github.com/astral-sh/ruff/issues/6895
#[test]
fn err_literal_assignment() {
let ast = parse_suite(r"5 = 3");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
// This test previously passed before the assignment operator checking
// above, but we include it here for good measure.
#[test]
fn err_assignment_expr() {
let ast = parse_suite(r"(5 := 3)");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: UnrecognizedToken(
ColonEqual,
None,
),
offset: 3,
},
)
"###);
}
#[test]
fn err_literal_augment_assignment() {
let ast = parse_suite(r"5 += 3");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_literal_annotation_assignment() {
let ast = parse_suite(r"(5): int = 3");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 1,
},
)
"###);
}
// Now we exhaustively test all possible cases where assignment can fail.
#[test]
fn err_bool_op() {
let ast = parse_suite(r"x or y = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_named_expr() {
let ast = parse_suite(r"(x := 5) = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 1,
},
)
"###);
}
#[test]
fn err_bin_op() {
let ast = parse_suite(r"x + y = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_unary_op() {
let ast = parse_suite(r"-x = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_lambda() {
let ast = parse_suite(r"(lambda _: 1) = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 1,
},
)
"###);
}
#[test]
fn err_if_exp() {
let ast = parse_suite(r"a if b else c = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_dict() {
let ast = parse_suite(r"{'a':5} = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_set() {
let ast = parse_suite(r"{a} = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_list_comp() {
let ast = parse_suite(r"[x for x in xs] = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_set_comp() {
let ast = parse_suite(r"{x for x in xs} = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_dict_comp() {
let ast = parse_suite(r"{x: x*2 for x in xs} = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_generator_exp() {
let ast = parse_suite(r"(x for x in xs) = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_await() {
let ast = parse_suite(r"await x = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_yield() {
let ast = parse_suite(r"(yield x) = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 1,
},
)
"###);
}
#[test]
fn err_yield_from() {
let ast = parse_suite(r"(yield from xs) = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 1,
},
)
"###);
}
#[test]
fn err_compare() {
let ast = parse_suite(r"a < b < c = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_call() {
let ast = parse_suite(r"foo() = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_formatted_value() {
// N.B. It looks like the parser can't generate a top-level
// FormattedValue, whereas the official Python AST permits
// representing a single f-string containing just a variable as a
// FormattedValue directly.
//
// Bottom line is that because of this, this test is (at present)
// duplicative with the `fstring` test. That is, in theory these tests
// could fail independently, but in practice their failure or success
// is coupled.
//
// See: https://docs.python.org/3/library/ast.html#ast.FormattedValue
let ast = parse_suite(r#"f"{quux}" = 42"#);
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_fstring() {
let ast = parse_suite(r#"f"{foo} and {bar}" = 42"#);
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_string_literal() {
let ast = parse_suite(r#""foo" = 42"#);
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_bytes_literal() {
let ast = parse_suite(r#"b"foo" = 42"#);
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_number_literal() {
let ast = parse_suite(r"123 = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_boolean_literal() {
let ast = parse_suite(r"True = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_none_literal() {
let ast = parse_suite(r"None = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_ellipsis_literal() {
let ast = parse_suite(r"... = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 0,
},
)
"###);
}
#[test]
fn err_starred() {
let ast = parse_suite(r"*foo() = 42");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 1,
},
)
"###);
}
#[test]
fn err_list() {
let ast = parse_suite(r"[x, foo(), y] = [42, 42, 42]");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 4,
},
)
"###);
}
#[test]
fn err_list_nested() {
let ast = parse_suite(r"[[a, b], [[42]], d] = [[1, 2], [[3]], 4]");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 11,
},
)
"###);
}
#[test]
fn err_tuple() {
let ast = parse_suite(r"(x, foo(), y) = (42, 42, 42)");
insta::assert_debug_snapshot!(ast, @r###"
Err(
ParseError {
error: Lexical(
AssignmentError,
),
offset: 4,
},
)
"###);
}
// This last group of tests checks that assignments we expect to be parsed
// (including some interesting ones) continue to be parsed successfully.
#[test]
fn ok_starred() {
let ast = parse_suite(r"*foo = 42");
insta::assert_debug_snapshot!(ast);
}
#[test]
fn ok_list() {
let ast = parse_suite(r"[x, y, z] = [1, 2, 3]");
insta::assert_debug_snapshot!(ast);
}
#[test]
fn ok_tuple() {
let ast = parse_suite(r"(x, y, z) = (1, 2, 3)");
insta::assert_debug_snapshot!(ast);
}
#[test]
fn ok_subscript_normal() {
let ast = parse_suite(r"x[0] = 42");
insta::assert_debug_snapshot!(ast);
}
// This is actually a type error, not a syntax error. So check that it
// doesn't fail parsing.
#[test]
fn ok_subscript_weird() {
let ast = parse_suite(r"5[0] = 42");
insta::assert_debug_snapshot!(ast);
}
#[test]
fn ok_slice_normal() {
let ast = parse_suite(r"x[1:2] = [42]");
insta::assert_debug_snapshot!(ast);
}
// This is actually a type error, not a syntax error. So check that it
// doesn't fail parsing.
#[test]
fn ok_slice_weird() {
let ast = parse_suite(r"5[1:2] = [42]");
insta::assert_debug_snapshot!(ast);
}
#[test]
fn ok_attribute_normal() {
let ast = parse_suite(r"foo.bar = 42");
insta::assert_debug_snapshot!(ast);
}
// This is actually an attribute error, not a syntax error. So check that
// it doesn't fail parsing.
#[test]
fn ok_attribute_weird() {
let ast = parse_suite(r#""foo".y = 42"#);
insta::assert_debug_snapshot!(ast);
}
#[test]
fn ok_name() {
let ast = parse_suite(r"foo = 42");
insta::assert_debug_snapshot!(ast);
}
// This is a sanity test for what looks like an ipython directive being
// assigned to. It doesn't actually parse as an assignment statement, but
// rather as a directive whose value is `foo = 42`.
#[test]
fn ok_ipy_escape_command() {
use crate::Mode;
let src = r"!foo = 42";
let tokens = crate::lexer::lex(src, Mode::Ipython);
let ast = crate::parse_tokens(tokens.collect(), src, Mode::Ipython);
insta::assert_debug_snapshot!(ast);
}
#[test]
fn ok_assignment_expr() {
let ast = parse_suite(r"(x := 5)");
insta::assert_debug_snapshot!(ast);
}
}
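
As a caller-level sketch of the behavior these tests pin down: parsing an invalid assignment target yields a `ParseError` whose `error` field is `Lexical(AssignmentError)`, matching the snapshots above. The sketch assumes `parse_suite` and `ParseErrorType` are re-exported from the crate root and that `ParseError` exposes a public `error` field, as the in-crate tests imply; it is illustrative only.

```rs
use ruff_python_parser::{parse_suite, ParseErrorType};

fn main() {
    // `5 = 3` has a literal as its assignment target, so parsing fails.
    match parse_suite("5 = 3") {
        Err(err) => match err.error {
            ParseErrorType::Lexical(lexical) => println!("lexical error: {lexical}"),
            other => println!("parse error: {other}"),
        },
        Ok(_) => println!("unexpectedly parsed"),
    }
}
```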

View File

@@ -0,0 +1,54 @@
use ruff_python_ast::{self as ast, Expr, ExprContext};
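/// Recursively sets the `ExprContext` of an expression. Only expressions that can be
/// assignment or deletion targets (names, attributes, subscripts, starred expressions,
/// lists, and tuples) are rewritten; any other expression is returned unchanged.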
pub(super) fn set_context(expr: Expr, ctx: ExprContext) -> Expr {
match expr {
Expr::Name(ast::ExprName { id, range, .. }) => ast::ExprName { range, id, ctx }.into(),
Expr::Tuple(ast::ExprTuple {
elts,
range,
parenthesized,
ctx: _,
}) => ast::ExprTuple {
elts: elts.into_iter().map(|elt| set_context(elt, ctx)).collect(),
range,
ctx,
parenthesized,
}
.into(),
Expr::List(ast::ExprList { elts, range, .. }) => ast::ExprList {
elts: elts.into_iter().map(|elt| set_context(elt, ctx)).collect(),
range,
ctx,
}
.into(),
Expr::Attribute(ast::ExprAttribute {
value, attr, range, ..
}) => ast::ExprAttribute {
range,
value,
attr,
ctx,
}
.into(),
Expr::Subscript(ast::ExprSubscript {
value,
slice,
range,
..
}) => ast::ExprSubscript {
range,
value,
slice,
ctx,
}
.into(),
Expr::Starred(ast::ExprStarred { value, range, .. }) => ast::ExprStarred {
value: Box::new(set_context(*value, ctx)),
range,
ctx,
}
.into(),
_ => expr,
}
}

View File

@@ -0,0 +1,138 @@
use std::hash::BuildHasherDefault;
// Contains functions that perform validation and parsing of arguments and parameters.
// Checks apply both to functions and to lambdas.
use crate::lexer::{LexicalError, LexicalErrorType};
use ruff_python_ast::{self as ast};
use ruff_text_size::{Ranged, TextRange, TextSize};
use rustc_hash::FxHashSet;
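// Parsed arguments of a call site, split into positional and keyword arguments.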
pub(crate) struct ArgumentList {
pub(crate) args: Vec<ast::Expr>,
pub(crate) keywords: Vec<ast::Keyword>,
}
// Perform validation of function/lambda arguments in a function definition.
pub(super) fn validate_arguments(arguments: &ast::Parameters) -> Result<(), LexicalError> {
let mut all_arg_names = FxHashSet::with_capacity_and_hasher(
arguments.posonlyargs.len()
+ arguments.args.len()
+ usize::from(arguments.vararg.is_some())
+ arguments.kwonlyargs.len()
+ usize::from(arguments.kwarg.is_some()),
BuildHasherDefault::default(),
);
let posonlyargs = arguments.posonlyargs.iter();
let args = arguments.args.iter();
let kwonlyargs = arguments.kwonlyargs.iter();
let vararg: Option<&ast::Parameter> = arguments.vararg.as_deref();
let kwarg: Option<&ast::Parameter> = arguments.kwarg.as_deref();
for arg in posonlyargs
.chain(args)
.chain(kwonlyargs)
.map(|arg| &arg.parameter)
.chain(vararg)
.chain(kwarg)
{
let range = arg.range;
let arg_name = arg.name.as_str();
if !all_arg_names.insert(arg_name) {
return Err(LexicalError::new(
LexicalErrorType::DuplicateArgumentError(arg_name.to_string().into_boxed_str()),
range,
));
}
}
Ok(())
}
pub(super) fn validate_pos_params(
args: &(
Vec<ast::ParameterWithDefault>,
Vec<ast::ParameterWithDefault>,
),
) -> Result<(), LexicalError> {
let (posonlyargs, args) = args;
#[allow(clippy::skip_while_next)]
let first_invalid = posonlyargs
.iter()
.chain(args.iter()) // for all args
.skip_while(|arg| arg.default.is_none()) // starting with args without default
.skip_while(|arg| arg.default.is_some()) // and then args with default
.next(); // there must not be any more args without default
if let Some(invalid) = first_invalid {
return Err(LexicalError::new(
LexicalErrorType::DefaultArgumentError,
invalid.parameter.range(),
));
}
Ok(())
}
type FunctionArgument = (
Option<(TextSize, TextSize, Option<ast::Identifier>)>,
ast::Expr,
);
// Parse arguments as supplied during a function/lambda *call*.
pub(super) fn parse_arguments(
function_arguments: Vec<FunctionArgument>,
) -> Result<ArgumentList, LexicalError> {
let mut args = vec![];
let mut keywords = vec![];
let mut keyword_names = FxHashSet::with_capacity_and_hasher(
function_arguments.len(),
BuildHasherDefault::default(),
);
let mut double_starred = false;
for (name, value) in function_arguments {
if let Some((start, end, name)) = name {
// Check for duplicate keyword arguments in the call.
if let Some(keyword_name) = &name {
if !keyword_names.insert(keyword_name.to_string()) {
return Err(LexicalError::new(
LexicalErrorType::DuplicateKeywordArgumentError(
keyword_name.to_string().into_boxed_str(),
),
TextRange::new(start, end),
));
}
} else {
double_starred = true;
}
keywords.push(ast::Keyword {
arg: name,
value,
range: TextRange::new(start, end),
});
} else {
// Positional arguments mustn't follow keyword arguments.
if !keywords.is_empty() && !is_starred(&value) {
return Err(LexicalError::new(
LexicalErrorType::PositionalArgumentError,
value.range(),
));
// Allow starred arguments after keyword arguments but
// not after double-starred arguments.
} else if double_starred {
return Err(LexicalError::new(
LexicalErrorType::UnpackedArgumentError,
value.range(),
));
}
args.push(value);
}
}
Ok(ArgumentList { args, keywords })
}
// Check if an expression is a starred expression.
const fn is_starred(exp: &ast::Expr) -> bool {
exp.is_starred_expr()
}
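
A standalone illustration of the duplicate-name detection that `validate_arguments` performs. It uses `std::collections::HashSet` instead of `FxHashSet`, and both the helper and the names are illustrative, not part of this crate.

```rs
use std::collections::HashSet;

// Returns the first duplicated parameter name, mirroring how `validate_arguments`
// reports a `DuplicateArgumentError` for the first repeated name it encounters.
fn first_duplicate<'a>(names: &[&'a str]) -> Option<&'a str> {
    let mut seen = HashSet::with_capacity(names.len());
    names.iter().copied().find(|name| !seen.insert(*name))
}

fn main() {
    // Mirrors `def f(a, b, a): pass`, which is rejected because `a` repeats.
    assert_eq!(first_duplicate(&["a", "b", "a"]), Some("a"));
    assert_eq!(first_duplicate(&["a", "b", "c"]), None);
}
```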

View File

@@ -0,0 +1,310 @@
//! The LALRPOP based parser implementation.
use itertools::Itertools;
use lalrpop_util::ParseError as LalrpopError;
use ruff_python_ast::{
Expr, ExprAttribute, ExprAwait, ExprBinOp, ExprBoolOp, ExprBooleanLiteral, ExprBytesLiteral,
ExprCall, ExprCompare, ExprDict, ExprDictComp, ExprEllipsisLiteral, ExprFString, ExprGenerator,
ExprIf, ExprIpyEscapeCommand, ExprLambda, ExprList, ExprListComp, ExprName, ExprNamed,
ExprNoneLiteral, ExprNumberLiteral, ExprSet, ExprSetComp, ExprSlice, ExprStarred,
ExprStringLiteral, ExprSubscript, ExprTuple, ExprUnaryOp, ExprYield, ExprYieldFrom, Mod,
};
use ruff_text_size::{Ranged, TextRange, TextSize};
use crate::lexer::{LexResult, LexicalError, LexicalErrorType};
use crate::{Mode, ParseError, ParseErrorType, Tok};
mod context;
mod function;
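// The LALRPOP-generated parser: regenerated from the grammar into `OUT_DIR` when the
// `lalrpop` feature is enabled, otherwise included from the pre-generated `python.rs`.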
#[rustfmt::skip]
#[allow(unreachable_pub)]
#[allow(clippy::type_complexity)]
#[allow(clippy::extra_unused_lifetimes)]
#[allow(clippy::needless_lifetimes)]
#[allow(clippy::unused_self)]
#[allow(clippy::cast_sign_loss)]
#[allow(clippy::default_trait_access)]
#[allow(clippy::let_unit_value)]
#[allow(clippy::just_underscores_and_digits)]
#[allow(clippy::no_effect_underscore_binding)]
#[allow(clippy::trivially_copy_pass_by_ref)]
#[allow(clippy::option_option)]
#[allow(clippy::unnecessary_wraps)]
#[allow(clippy::uninlined_format_args)]
#[allow(clippy::cloned_instead_of_copied)]
mod python {
#[cfg(feature = "lalrpop")]
include!(concat!(env!("OUT_DIR"), "/src/lalrpop/python.rs"));
#[cfg(not(feature = "lalrpop"))]
include!("python.rs");
}
pub(crate) fn parse_tokens(
tokens: Vec<LexResult>,
source: &str,
mode: Mode,
) -> Result<Mod, ParseError> {
let marker_token = (Tok::start_marker(mode), TextRange::default());
let lexer = std::iter::once(Ok(marker_token)).chain(
tokens
.into_iter()
.filter_ok(|token| !matches!(token, (Tok::Comment(..) | Tok::NonLogicalNewline, _))),
);
python::TopParser::new()
.parse(
source,
mode,
lexer.map_ok(|(t, range)| (range.start(), t, range.end())),
)
.map_err(parse_error_from_lalrpop)
}
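// Converts a LALRPOP error into this crate's `ParseError`, mapping each LALRPOP
// variant onto a `ParseErrorType` and a source `TextRange`.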
fn parse_error_from_lalrpop(err: LalrpopError<TextSize, Tok, LexicalError>) -> ParseError {
match err {
// TODO: Are there cases where this isn't an EOF?
LalrpopError::InvalidToken { location } => ParseError {
error: ParseErrorType::Eof,
location: TextRange::empty(location),
},
LalrpopError::ExtraToken { token } => ParseError {
error: ParseErrorType::ExtraToken(token.1),
location: TextRange::new(token.0, token.2),
},
LalrpopError::User { error } => ParseError {
location: error.location(),
error: ParseErrorType::Lexical(error.into_error()),
},
LalrpopError::UnrecognizedToken { token, expected } => {
// Hacky, but it's how CPython does it. See PyParser_AddToken,
// in particular "Only one possible expected token" comment.
let expected = (expected.len() == 1).then(|| expected[0].clone());
ParseError {
error: ParseErrorType::UnrecognizedToken(token.1, expected),
location: TextRange::new(token.0, token.2),
}
}
LalrpopError::UnrecognizedEof { location, expected } => {
// This could be an initial indentation error that we should ignore
let indent_error = expected == ["Indent"];
if indent_error {
ParseError {
error: ParseErrorType::Lexical(LexicalErrorType::IndentationError),
location: TextRange::empty(location),
}
} else {
ParseError {
error: ParseErrorType::Eof,
location: TextRange::empty(location),
}
}
}
}
}
/// An expression that may be parenthesized.
#[derive(Clone, Debug)]
struct ParenthesizedExpr {
/// The range of the expression, including any parentheses.
range: TextRange,
/// The underlying expression.
expr: Expr,
}
impl ParenthesizedExpr {
/// Returns `true` if the expression is parenthesized, i.e., if its stored `range`
/// (which includes any enclosing parentheses) starts before the inner expression's own range.
fn is_parenthesized(&self) -> bool {
self.range.start() != self.expr.range().start()
}
}
impl Ranged for ParenthesizedExpr {
fn range(&self) -> TextRange {
self.range
}
}
impl From<Expr> for ParenthesizedExpr {
fn from(expr: Expr) -> Self {
ParenthesizedExpr {
range: expr.range(),
expr,
}
}
}
impl From<ParenthesizedExpr> for Expr {
fn from(parenthesized_expr: ParenthesizedExpr) -> Self {
parenthesized_expr.expr
}
}
impl From<ExprIpyEscapeCommand> for ParenthesizedExpr {
fn from(payload: ExprIpyEscapeCommand) -> Self {
Expr::IpyEscapeCommand(payload).into()
}
}
impl From<ExprBoolOp> for ParenthesizedExpr {
fn from(payload: ExprBoolOp) -> Self {
Expr::BoolOp(payload).into()
}
}
impl From<ExprNamed> for ParenthesizedExpr {
fn from(payload: ExprNamed) -> Self {
Expr::Named(payload).into()
}
}
impl From<ExprBinOp> for ParenthesizedExpr {
fn from(payload: ExprBinOp) -> Self {
Expr::BinOp(payload).into()
}
}
impl From<ExprUnaryOp> for ParenthesizedExpr {
fn from(payload: ExprUnaryOp) -> Self {
Expr::UnaryOp(payload).into()
}
}
impl From<ExprLambda> for ParenthesizedExpr {
fn from(payload: ExprLambda) -> Self {
Expr::Lambda(payload).into()
}
}
impl From<ExprIf> for ParenthesizedExpr {
fn from(payload: ExprIf) -> Self {
Expr::If(payload).into()
}
}
impl From<ExprDict> for ParenthesizedExpr {
fn from(payload: ExprDict) -> Self {
Expr::Dict(payload).into()
}
}
impl From<ExprSet> for ParenthesizedExpr {
fn from(payload: ExprSet) -> Self {
Expr::Set(payload).into()
}
}
impl From<ExprListComp> for ParenthesizedExpr {
fn from(payload: ExprListComp) -> Self {
Expr::ListComp(payload).into()
}
}
impl From<ExprSetComp> for ParenthesizedExpr {
fn from(payload: ExprSetComp) -> Self {
Expr::SetComp(payload).into()
}
}
impl From<ExprDictComp> for ParenthesizedExpr {
fn from(payload: ExprDictComp) -> Self {
Expr::DictComp(payload).into()
}
}
impl From<ExprGenerator> for ParenthesizedExpr {
fn from(payload: ExprGenerator) -> Self {
Expr::Generator(payload).into()
}
}
impl From<ExprAwait> for ParenthesizedExpr {
fn from(payload: ExprAwait) -> Self {
Expr::Await(payload).into()
}
}
impl From<ExprYield> for ParenthesizedExpr {
fn from(payload: ExprYield) -> Self {
Expr::Yield(payload).into()
}
}
impl From<ExprYieldFrom> for ParenthesizedExpr {
fn from(payload: ExprYieldFrom) -> Self {
Expr::YieldFrom(payload).into()
}
}
impl From<ExprCompare> for ParenthesizedExpr {
fn from(payload: ExprCompare) -> Self {
Expr::Compare(payload).into()
}
}
impl From<ExprCall> for ParenthesizedExpr {
fn from(payload: ExprCall) -> Self {
Expr::Call(payload).into()
}
}
impl From<ExprFString> for ParenthesizedExpr {
fn from(payload: ExprFString) -> Self {
Expr::FString(payload).into()
}
}
impl From<ExprStringLiteral> for ParenthesizedExpr {
fn from(payload: ExprStringLiteral) -> Self {
Expr::StringLiteral(payload).into()
}
}
impl From<ExprBytesLiteral> for ParenthesizedExpr {
fn from(payload: ExprBytesLiteral) -> Self {
Expr::BytesLiteral(payload).into()
}
}
impl From<ExprNumberLiteral> for ParenthesizedExpr {
fn from(payload: ExprNumberLiteral) -> Self {
Expr::NumberLiteral(payload).into()
}
}
impl From<ExprBooleanLiteral> for ParenthesizedExpr {
fn from(payload: ExprBooleanLiteral) -> Self {
Expr::BooleanLiteral(payload).into()
}
}
impl From<ExprNoneLiteral> for ParenthesizedExpr {
fn from(payload: ExprNoneLiteral) -> Self {
Expr::NoneLiteral(payload).into()
}
}
impl From<ExprEllipsisLiteral> for ParenthesizedExpr {
fn from(payload: ExprEllipsisLiteral) -> Self {
Expr::EllipsisLiteral(payload).into()
}
}
impl From<ExprAttribute> for ParenthesizedExpr {
fn from(payload: ExprAttribute) -> Self {
Expr::Attribute(payload).into()
}
}
impl From<ExprSubscript> for ParenthesizedExpr {
fn from(payload: ExprSubscript) -> Self {
Expr::Subscript(payload).into()
}
}
impl From<ExprStarred> for ParenthesizedExpr {
fn from(payload: ExprStarred) -> Self {
Expr::Starred(payload).into()
}
}
impl From<ExprName> for ParenthesizedExpr {
fn from(payload: ExprName) -> Self {
Expr::Name(payload).into()
}
}
impl From<ExprList> for ParenthesizedExpr {
fn from(payload: ExprList) -> Self {
Expr::List(payload).into()
}
}
impl From<ExprTuple> for ParenthesizedExpr {
fn from(payload: ExprTuple) -> Self {
Expr::Tuple(payload).into()
}
}
impl From<ExprSlice> for ParenthesizedExpr {
fn from(payload: ExprSlice) -> Self {
Expr::Slice(payload).into()
}
}
#[cfg(target_pointer_width = "64")]
mod size_assertions {
use static_assertions::assert_eq_size;
use super::ParenthesizedExpr;
assert_eq_size!(ParenthesizedExpr, [u8; 72]);
}
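
A minimal standalone illustration of the `ParenthesizedExpr::is_parenthesized` check above. The real struct is crate-internal, so `Spanned` here is a hypothetical stand-in with the same two ranges.

```rs
use ruff_text_size::{TextRange, TextSize};

struct Spanned {
    /// Range including any enclosing parentheses.
    outer: TextRange,
    /// Range of the expression itself.
    inner: TextRange,
}

impl Spanned {
    fn is_parenthesized(&self) -> bool {
        // `(x)` starts one character before `x`, so the outer range starts earlier.
        self.outer.start() != self.inner.start()
    }
}

fn main() {
    // For the source `(x)`: the parenthesized span covers 0..3, the name `x` covers 1..2.
    let parenthesized = Spanned {
        outer: TextRange::new(TextSize::new(0), TextSize::new(3)),
        inner: TextRange::new(TextSize::new(1), TextSize::new(2)),
    };
    assert!(parenthesized.is_parenthesized());

    // For a bare `x`, both ranges coincide.
    let bare = Spanned {
        outer: TextRange::new(TextSize::new(0), TextSize::new(1)),
        inner: TextRange::new(TextSize::new(0), TextSize::new(1)),
    };
    assert!(!bare.is_parenthesized());
}
```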

View File

@@ -5,12 +5,15 @@
use ruff_text_size::{Ranged, TextLen, TextRange, TextSize};
use ruff_python_ast::{self as ast, Int, IpyEscapeKind};
use super::{
function::{ArgumentList, parse_arguments, validate_pos_params, validate_arguments},
context::set_context,
ParenthesizedExpr
};
use crate::{
FStringErrorType,
Mode,
lexer::{LexicalError, LexicalErrorType},
function::{ArgumentList, parse_arguments, validate_pos_params, validate_arguments},
context::set_context,
string::{StringType, concatenated_strings, parse_fstring_literal_element, parse_string_literal},
token::{self, StringKind},
invalid,
@@ -156,33 +159,33 @@ ExpressionStatement: ast::Stmt = {
},
};
AssignSuffix: crate::parser::ParenthesizedExpr = {
AssignSuffix: ParenthesizedExpr = {
"=" <e:TestListOrYieldExpr> => e,
"=" <e:IpyEscapeCommandExpr> => e
};
TestListOrYieldExpr: crate::parser::ParenthesizedExpr = {
TestListOrYieldExpr: ParenthesizedExpr = {
TestList,
YieldExpr
}
#[inline]
TestOrStarExprList: crate::parser::ParenthesizedExpr = {
TestOrStarExprList: ParenthesizedExpr = {
// as far as I can tell, these were the same
TestList
};
TestOrStarExpr: crate::parser::ParenthesizedExpr = {
TestOrStarExpr: ParenthesizedExpr = {
Test<"all">,
StarExpr,
};
NamedOrStarExpr: crate::parser::ParenthesizedExpr = {
NamedOrStarExpr: ParenthesizedExpr = {
NamedExpression,
StarExpr,
};
TestOrStarNamedExpr: crate::parser::ParenthesizedExpr = {
TestOrStarNamedExpr: ParenthesizedExpr = {
NamedExpressionTest,
StarExpr,
};
@@ -339,20 +342,20 @@ IpyEscapeCommandStatement: ast::Stmt = {
} else {
Err(LexicalError::new(
LexicalErrorType::OtherError("IPython escape commands are only allowed in `Mode::Ipython`".to_string().into_boxed_str()),
location,
(location..end_location).into(),
))?
}
}
}
IpyEscapeCommandExpr: crate::parser::ParenthesizedExpr = {
IpyEscapeCommandExpr: ParenthesizedExpr = {
<location:@L> <c:ipy_escape_command> <end_location:@R> =>? {
if mode == Mode::Ipython {
// This should never occur as the lexer won't allow it.
if !matches!(c.0, IpyEscapeKind::Magic | IpyEscapeKind::Shell) {
return Err(LexicalError::new(
LexicalErrorType::OtherError("IPython escape command expr is only allowed for % and !".to_string().into_boxed_str()),
location,
(location..end_location).into(),
))?;
}
Ok(ast::ExprIpyEscapeCommand {
@@ -363,7 +366,7 @@ IpyEscapeCommandExpr: crate::parser::ParenthesizedExpr = {
} else {
Err(LexicalError::new(
LexicalErrorType::OtherError("IPython escape commands are only allowed in `Mode::Ipython`".to_string().into_boxed_str()),
location,
(location..end_location).into(),
))?
}
}
@@ -383,7 +386,7 @@ IpyHelpEndEscapeCommandStatement: ast::Stmt = {
let ast::Expr::NumberLiteral(ast::ExprNumberLiteral { value: ast::Number::Int(integer), .. }) = slice.as_ref() else {
return Err(LexicalError::new(
LexicalErrorType::OtherError("only integer literals are allowed in Subscript expressions in help end escape command".to_string().into_boxed_str()),
range.start(),
*range,
));
};
unparse_expr(value, buffer)?;
@@ -399,7 +402,7 @@ IpyHelpEndEscapeCommandStatement: ast::Stmt = {
_ => {
return Err(LexicalError::new(
LexicalErrorType::OtherError("only Name, Subscript and Attribute expressions are allowed in help end escape command".to_string().into_boxed_str()),
expr.start(),
expr.range(),
));
}
}
@@ -410,7 +413,7 @@ IpyHelpEndEscapeCommandStatement: ast::Stmt = {
return Err(ParseError::User {
error: LexicalError::new(
LexicalErrorType::OtherError("IPython escape commands are only allowed in `Mode::Ipython`".to_string().into_boxed_str()),
location,
(location..end_location).into(),
),
});
}
@@ -422,7 +425,7 @@ IpyHelpEndEscapeCommandStatement: ast::Stmt = {
return Err(ParseError::User {
error: LexicalError::new(
LexicalErrorType::OtherError("maximum of 2 `?` tokens are allowed in help end escape command".to_string().into_boxed_str()),
location,
(location..end_location).into(),
),
});
}
@@ -565,7 +568,7 @@ AsPattern: ast::Pattern = {
if name.as_str() == "_" {
Err(LexicalError::new(
LexicalErrorType::OtherError("cannot use '_' as a target".to_string().into_boxed_str()),
location,
(location..end_location).into(),
))?
} else {
Ok(ast::Pattern::MatchAs(
@@ -632,13 +635,13 @@ StarPattern: ast::Pattern = {
}.into(),
}
NumberAtom: crate::parser::ParenthesizedExpr = {
NumberAtom: ParenthesizedExpr = {
<location:@L> <value:Number> <end_location:@R> => ast::Expr::NumberLiteral(
ast::ExprNumberLiteral { value, range: (location..end_location).into() }
).into(),
}
NumberExpr: crate::parser::ParenthesizedExpr = {
NumberExpr: ParenthesizedExpr = {
NumberAtom,
<location:@L> "-" <operand:NumberAtom> <end_location:@R> => ast::Expr::UnaryOp(
ast::ExprUnaryOp {
@@ -649,7 +652,7 @@ NumberExpr: crate::parser::ParenthesizedExpr = {
).into(),
}
AddOpExpr: crate::parser::ParenthesizedExpr = {
AddOpExpr: ParenthesizedExpr = {
<location:@L> <left:NumberExpr> <op:AddOp> <right:NumberAtom> <end_location:@R> => ast::ExprBinOp {
left: Box::new(left.into()),
op,
@@ -1251,7 +1254,7 @@ ParameterListStarArgs<ParameterType, StarParameterType, DoubleStarParameterType>
if va.is_none() && kwonlyargs.is_empty() && kwarg.is_none() {
return Err(LexicalError::new(
LexicalErrorType::OtherError("named arguments must follow bare *".to_string().into_boxed_str()),
location,
(location..location).into(),
))?;
}
@@ -1318,7 +1321,7 @@ Decorator: ast::Decorator = {
},
};
YieldExpr: crate::parser::ParenthesizedExpr = {
YieldExpr: ParenthesizedExpr = {
<location:@L> "yield" <value:TestList?> <end_location:@R> => ast::ExprYield {
value: value.map(ast::Expr::from).map(Box::new),
range: (location..end_location).into(),
@@ -1329,7 +1332,7 @@ YieldExpr: crate::parser::ParenthesizedExpr = {
}.into(),
};
Test<Goal>: crate::parser::ParenthesizedExpr = {
Test<Goal>: ParenthesizedExpr = {
<location:@L> <body:OrTest<"all">> "if" <test:OrTest<"all">> "else" <orelse:Test<"all">> <end_location:@R> => ast::ExprIf {
test: Box::new(test.into()),
body: Box::new(body.into()),
@@ -1340,12 +1343,12 @@ Test<Goal>: crate::parser::ParenthesizedExpr = {
LambdaDef,
};
NamedExpressionTest: crate::parser::ParenthesizedExpr = {
NamedExpressionTest: ParenthesizedExpr = {
NamedExpression,
Test<"all">,
}
NamedExpressionName: crate::parser::ParenthesizedExpr = {
NamedExpressionName: ParenthesizedExpr = {
<location:@L> <id:Identifier> <end_location:@R> => ast::ExprName {
id: id.into(),
ctx: ast::ExprContext::Store,
@@ -1353,7 +1356,7 @@ NamedExpressionName: crate::parser::ParenthesizedExpr = {
}.into(),
}
NamedExpression: crate::parser::ParenthesizedExpr = {
NamedExpression: ParenthesizedExpr = {
<location:@L> <target:NamedExpressionName> ":=" <value:Test<"all">> <end_location:@R> => {
ast::ExprNamed {
target: Box::new(target.into()),
@@ -1363,12 +1366,12 @@ NamedExpression: crate::parser::ParenthesizedExpr = {
},
};
LambdaDef: crate::parser::ParenthesizedExpr = {
LambdaDef: ParenthesizedExpr = {
<location:@L> "lambda" <location_args:@L> <parameters:ParameterList<UntypedParameter, StarUntypedParameter, StarUntypedParameter>?> <end_location_args:@R> ":" <fstring_middle:fstring_middle?> <body:Test<"all">> <end_location:@R> =>? {
if fstring_middle.is_some() {
return Err(LexicalError::new(
LexicalErrorType::FStringError(FStringErrorType::LambdaWithoutParentheses),
location,
(location..end_location).into(),
))?;
}
parameters.as_ref().map(validate_arguments).transpose()?;
@@ -1381,7 +1384,7 @@ LambdaDef: crate::parser::ParenthesizedExpr = {
}
}
OrTest<Goal>: crate::parser::ParenthesizedExpr = {
OrTest<Goal>: ParenthesizedExpr = {
<location:@L> <values:(<AndTest<"all">> "or")+> <last: AndTest<"all">> <end_location:@R> => {
let values = values.into_iter().chain(std::iter::once(last)).map(ast::Expr::from).collect();
ast::ExprBoolOp { op: ast::BoolOp::Or, values, range: (location..end_location).into() }.into()
@@ -1389,7 +1392,7 @@ OrTest<Goal>: crate::parser::ParenthesizedExpr = {
AndTest<Goal>,
};
AndTest<Goal>: crate::parser::ParenthesizedExpr = {
AndTest<Goal>: ParenthesizedExpr = {
<location:@L> <values:(<NotTest<"all">> "and")+> <last:NotTest<"all">> <end_location:@R> => {
let values = values.into_iter().chain(std::iter::once(last)).map(ast::Expr::from).collect();
ast::ExprBoolOp { op: ast::BoolOp::And, values, range: (location..end_location).into() }.into()
@@ -1397,7 +1400,7 @@ AndTest<Goal>: crate::parser::ParenthesizedExpr = {
NotTest<Goal>,
};
NotTest<Goal>: crate::parser::ParenthesizedExpr = {
NotTest<Goal>: ParenthesizedExpr = {
<location:@L> "not" <operand:NotTest<"all">> <end_location:@R> => ast::ExprUnaryOp {
operand: Box::new(operand.into()),
op: ast::UnaryOp::Not,
@@ -1406,7 +1409,7 @@ NotTest<Goal>: crate::parser::ParenthesizedExpr = {
Comparison<Goal>,
};
Comparison<Goal>: crate::parser::ParenthesizedExpr = {
Comparison<Goal>: ParenthesizedExpr = {
<location:@L> <left:Expression<"all">> <comparisons:(CompOp Expression<"all">)+> <end_location:@R> => {
let mut ops = Vec::with_capacity(comparisons.len());
let mut comparators = Vec::with_capacity(comparisons.len());
@@ -1437,7 +1440,7 @@ CompOp: ast::CmpOp = {
"is" "not" => ast::CmpOp::IsNot,
};
Expression<Goal>: crate::parser::ParenthesizedExpr = {
Expression<Goal>: ParenthesizedExpr = {
<location:@L> <left:Expression<"all">> "|" <right:XorExpression<"all">> <end_location:@R> => ast::ExprBinOp {
left: Box::new(left.into()),
op: ast::Operator::BitOr,
@@ -1447,7 +1450,7 @@ Expression<Goal>: crate::parser::ParenthesizedExpr = {
XorExpression<Goal>,
};
XorExpression<Goal>: crate::parser::ParenthesizedExpr = {
XorExpression<Goal>: ParenthesizedExpr = {
<location:@L> <left:XorExpression<"all">> "^" <right:AndExpression<"all">> <end_location:@R> => ast::ExprBinOp {
left: Box::new(left.into()),
op: ast::Operator::BitXor,
@@ -1457,7 +1460,7 @@ XorExpression<Goal>: crate::parser::ParenthesizedExpr = {
AndExpression<Goal>,
};
AndExpression<Goal>: crate::parser::ParenthesizedExpr = {
AndExpression<Goal>: ParenthesizedExpr = {
<location:@L> <left:AndExpression<"all">> "&" <right:ShiftExpression<"all">> <end_location:@R> => ast::ExprBinOp {
left: Box::new(left.into()),
op: ast::Operator::BitAnd,
@@ -1467,7 +1470,7 @@ AndExpression<Goal>: crate::parser::ParenthesizedExpr = {
ShiftExpression<Goal>,
};
ShiftExpression<Goal>: crate::parser::ParenthesizedExpr = {
ShiftExpression<Goal>: ParenthesizedExpr = {
<location:@L> <left:ShiftExpression<"all">> <op:ShiftOp> <right:ArithmeticExpression<"all">> <end_location:@R> => ast::ExprBinOp {
left: Box::new(left.into()),
op,
@@ -1482,7 +1485,7 @@ ShiftOp: ast::Operator = {
">>" => ast::Operator::RShift,
};
ArithmeticExpression<Goal>: crate::parser::ParenthesizedExpr = {
ArithmeticExpression<Goal>: ParenthesizedExpr = {
<location:@L> <left:ArithmeticExpression<"all">> <op:AddOp> <right:Term<"all">> <end_location:@R> => ast::ExprBinOp {
left: Box::new(left.into()),
op,
@@ -1497,7 +1500,7 @@ AddOp: ast::Operator = {
"-" => ast::Operator::Sub,
};
Term<Goal>: crate::parser::ParenthesizedExpr = {
Term<Goal>: ParenthesizedExpr = {
<location:@L> <left:Term<"all">> <op:MulOp> <right:Factor<"all">> <end_location:@R> => ast::ExprBinOp {
left: Box::new(left.into()),
op,
@@ -1515,7 +1518,7 @@ MulOp: ast::Operator = {
"@" => ast::Operator::MatMult,
};
Factor<Goal>: crate::parser::ParenthesizedExpr = {
Factor<Goal>: ParenthesizedExpr = {
<location:@L> <op:UnaryOp> <operand:Factor<"all">> <end_location:@R> => ast::ExprUnaryOp {
operand: Box::new(operand.into()),
op,
@@ -1530,7 +1533,7 @@ UnaryOp: ast::UnaryOp = {
"~" => ast::UnaryOp::Invert,
};
Power<Goal>: crate::parser::ParenthesizedExpr = {
Power<Goal>: ParenthesizedExpr = {
<location:@L> <left:AtomExpr<"all">> "**" <right:Factor<"all">> <end_location:@R> => ast::ExprBinOp {
left: Box::new(left.into()),
op: ast::Operator::Pow,
@@ -1540,14 +1543,14 @@ Power<Goal>: crate::parser::ParenthesizedExpr = {
AtomExpr<Goal>,
};
AtomExpr<Goal>: crate::parser::ParenthesizedExpr = {
AtomExpr<Goal>: ParenthesizedExpr = {
<location:@L> "await" <value:AtomExpr2<"all">> <end_location:@R> => {
ast::ExprAwait { value: Box::new(value.into()), range: (location..end_location).into() }.into()
},
AtomExpr2<Goal>,
}
AtomExpr2<Goal>: crate::parser::ParenthesizedExpr = {
AtomExpr2<Goal>: ParenthesizedExpr = {
Atom<Goal>,
<location:@L> <func:AtomExpr2<"all">> <arguments:Arguments> <end_location:@R> => ast::ExprCall {
func: Box::new(func.into()),
@@ -1568,14 +1571,14 @@ AtomExpr2<Goal>: crate::parser::ParenthesizedExpr = {
}.into(),
};
SubscriptList: crate::parser::ParenthesizedExpr = {
SubscriptList: ParenthesizedExpr = {
Subscript,
<location:@L> <s1:Subscript> "," <end_location:@R> => {
ast::ExprTuple {
elts: vec![s1.into()],
ctx: ast::ExprContext::Load,
range: (location..end_location).into(),
parenthesized: false
parenthesized: false,
}.into()
},
<location:@L> <elts:TwoOrMoreSep<Subscript, ",">> ","? <end_location:@R> => {
@@ -1584,12 +1587,12 @@ SubscriptList: crate::parser::ParenthesizedExpr = {
elts,
ctx: ast::ExprContext::Load,
range: (location..end_location).into(),
parenthesized: false
parenthesized: false,
}.into()
}
};
Subscript: crate::parser::ParenthesizedExpr = {
Subscript: ParenthesizedExpr = {
TestOrStarNamedExpr,
<location:@L> <lower:Test<"all">?> ":" <upper:Test<"all">?> <step:SliceOp?> <end_location:@R> => {
let lower = lower.map(ast::Expr::from).map(Box::new);
@@ -1601,7 +1604,7 @@ Subscript: crate::parser::ParenthesizedExpr = {
}
};
SliceOp: Option<crate::parser::ParenthesizedExpr> = {
SliceOp: Option<ParenthesizedExpr> = {
<location:@L> ":" <e:Test<"all">?> => e,
}
@@ -1646,7 +1649,7 @@ FStringReplacementField: ast::FStringElement = {
if value.expr.is_lambda_expr() && !value.is_parenthesized() {
return Err(LexicalError::new(
LexicalErrorType::FStringError(FStringErrorType::LambdaWithoutParentheses),
value.start(),
value.range(),
))?;
}
let debug_text = debug.map(|_| {
@@ -1697,14 +1700,14 @@ FStringConversion: (TextSize, ast::ConversionFlag) = {
"a" => ast::ConversionFlag::Ascii,
_ => Err(LexicalError::new(
LexicalErrorType::FStringError(FStringErrorType::InvalidConversionFlag),
name_location,
(location..name_location).into(),
))?
};
Ok((location, conversion))
}
};
Atom<Goal>: crate::parser::ParenthesizedExpr = {
Atom<Goal>: ParenthesizedExpr = {
<expr:String> => expr.into(),
<location:@L> <value:Number> <end_location:@R> => ast::ExprNumberLiteral {
value,
@@ -1724,7 +1727,7 @@ Atom<Goal>: crate::parser::ParenthesizedExpr = {
},
<location:@L> "(" <elts:OneOrMore<Test<"all">>> <trailing_comma:","?> ")" <end_location:@R> if Goal != "no-withitems" => {
if elts.len() == 1 && trailing_comma.is_none() {
crate::parser::ParenthesizedExpr {
ParenthesizedExpr {
expr: elts.into_iter().next().unwrap().into(),
range: (location..end_location).into(),
}
@@ -1743,10 +1746,10 @@ Atom<Goal>: crate::parser::ParenthesizedExpr = {
if mid.expr.is_starred_expr() {
return Err(LexicalError::new(
LexicalErrorType::OtherError("cannot use starred expression here".to_string().into_boxed_str()),
mid.start(),
mid.range(),
))?;
}
Ok(crate::parser::ParenthesizedExpr {
Ok(ParenthesizedExpr {
expr: mid.into(),
range: (location..end_location).into(),
})
@@ -1764,9 +1767,9 @@ Atom<Goal>: crate::parser::ParenthesizedExpr = {
elts: Vec::new(),
ctx: ast::ExprContext::Load,
range: (location..end_location).into(),
parenthesized: true
parenthesized: true,
}.into(),
<location:@L> "(" <e:YieldExpr> ")" <end_location:@R> => crate::parser::ParenthesizedExpr {
<location:@L> "(" <e:YieldExpr> ")" <end_location:@R> => ParenthesizedExpr {
expr: e.into(),
range: (location..end_location).into(),
},
@@ -1779,7 +1782,7 @@ Atom<Goal>: crate::parser::ParenthesizedExpr = {
"(" <location:@L> "**" <e:Expression<"all">> ")" <end_location:@R> =>? {
Err(LexicalError::new(
LexicalErrorType::OtherError("cannot use double starred expression here".to_string().into_boxed_str()),
location,
(location..end_location).into(),
).into())
},
<location:@L> "{" <e:DictLiteralValues?> "}" <end_location:@R> => {
@@ -1816,37 +1819,37 @@ Atom<Goal>: crate::parser::ParenthesizedExpr = {
<location:@L> "..." <end_location:@R> => ast::ExprEllipsisLiteral { range: (location..end_location).into() }.into(),
};
ListLiteralValues: Vec<crate::parser::ParenthesizedExpr> = {
ListLiteralValues: Vec<ParenthesizedExpr> = {
<e:OneOrMore<TestOrStarNamedExpr>> ","? => e,
};
DictLiteralValues: Vec<(Option<Box<crate::parser::ParenthesizedExpr>>, crate::parser::ParenthesizedExpr)> = {
DictLiteralValues: Vec<(Option<Box<ParenthesizedExpr>>, ParenthesizedExpr)> = {
<elements:OneOrMore<DictElement>> ","? => elements,
};
DictEntry: (crate::parser::ParenthesizedExpr, crate::parser::ParenthesizedExpr) = {
DictEntry: (ParenthesizedExpr, ParenthesizedExpr) = {
<e1: Test<"all">> ":" <e2: Test<"all">> => (e1, e2),
};
DictElement: (Option<Box<crate::parser::ParenthesizedExpr>>, crate::parser::ParenthesizedExpr) = {
DictElement: (Option<Box<ParenthesizedExpr>>, ParenthesizedExpr) = {
<e:DictEntry> => (Some(Box::new(e.0)), e.1),
"**" <e:Expression<"all">> => (None, e),
};
SetLiteralValues: Vec<crate::parser::ParenthesizedExpr> = {
SetLiteralValues: Vec<ParenthesizedExpr> = {
<e1:OneOrMore<TestOrStarNamedExpr>> ","? => e1
};
ExpressionOrStarExpression: crate::parser::ParenthesizedExpr = {
ExpressionOrStarExpression: ParenthesizedExpr = {
Expression<"all">,
StarExpr
};
ExpressionList: crate::parser::ParenthesizedExpr = {
ExpressionList: ParenthesizedExpr = {
GenericList<ExpressionOrStarExpression>
};
ExpressionList2: Vec<crate::parser::ParenthesizedExpr> = {
ExpressionList2: Vec<ParenthesizedExpr> = {
<elements:OneOrMore<ExpressionOrStarExpression>> ","? => elements,
};
@@ -1855,14 +1858,14 @@ ExpressionList2: Vec<crate::parser::ParenthesizedExpr> = {
// - a single expression
// - a single expression followed by a trailing comma
#[inline]
TestList: crate::parser::ParenthesizedExpr = {
TestList: ParenthesizedExpr = {
GenericList<TestOrStarExpr>
};
GenericList<Element>: crate::parser::ParenthesizedExpr = {
GenericList<Element>: ParenthesizedExpr = {
<location:@L> <elts:OneOrMore<Element>> <trailing_comma:","?> <end_location:@R> => {
if elts.len() == 1 && trailing_comma.is_none() {
crate::parser::ParenthesizedExpr {
ParenthesizedExpr {
expr: elts.into_iter().next().unwrap().into(),
range: (location..end_location).into(),
}
@@ -1879,7 +1882,7 @@ GenericList<Element>: crate::parser::ParenthesizedExpr = {
}
// Test
StarExpr: crate::parser::ParenthesizedExpr = {
StarExpr: ParenthesizedExpr = {
<location:@L> "*" <value:Expression<"all">> <end_location:@R> => ast::ExprStarred {
value: Box::new(value.into()),
ctx: ast::ExprContext::Load,
@@ -1904,8 +1907,8 @@ SingleForComprehension: ast::Comprehension = {
}
};
ExpressionNoCond: crate::parser::ParenthesizedExpr = OrTest<"all">;
ComprehensionIf: crate::parser::ParenthesizedExpr = "if" <c:ExpressionNoCond> => c;
ExpressionNoCond: ParenthesizedExpr = OrTest<"all">;
ComprehensionIf: ParenthesizedExpr = "if" <c:ExpressionNoCond> => c;
Arguments: ast::Arguments = {
<location:@L> "(" <e: Comma<FunctionArgument>> ")" <end_location:@R> =>? {

View File

@@ -36,12 +36,12 @@ use unicode_ident::{is_xid_continue, is_xid_start};
use ruff_python_ast::{Int, IpyEscapeKind};
use ruff_text_size::{TextLen, TextRange, TextSize};
use crate::error::FStringErrorType;
use crate::lexer::cursor::{Cursor, EOF_CHAR};
use crate::lexer::fstring::{FStringContext, FStringContextFlags, FStrings};
use crate::lexer::indentation::{Indentation, Indentations};
use crate::{
soft_keywords::SoftKeywordTransformer,
string::FStringErrorType,
token::{StringKind, Tok},
Mode,
};
@@ -286,7 +286,7 @@ impl<'source> Lexer<'source> {
Err(err) => {
return Err(LexicalError::new(
LexicalErrorType::OtherError(format!("{err:?}").into_boxed_str()),
self.token_range().start(),
self.token_range(),
));
}
};
@@ -311,7 +311,7 @@ impl<'source> Lexer<'source> {
if self.cursor.eat_char('_') {
return Err(LexicalError::new(
LexicalErrorType::OtherError("Invalid Syntax".to_string().into_boxed_str()),
self.offset() - TextSize::new(1),
TextRange::new(self.offset() - TextSize::new(1), self.offset()),
));
}
@@ -345,7 +345,7 @@ impl<'source> Lexer<'source> {
LexicalErrorType::OtherError(
"Invalid decimal literal".to_string().into_boxed_str(),
),
self.token_start(),
self.token_range(),
)
})?;
@@ -370,9 +370,11 @@ impl<'source> Lexer<'source> {
// Leading zeros in decimal integer literals are not permitted.
return Err(LexicalError::new(
LexicalErrorType::OtherError(
"Invalid Token".to_string().into_boxed_str(),
"Invalid decimal integer literal"
.to_string()
.into_boxed_str(),
),
self.token_range().start(),
self.token_range(),
));
}
value
@@ -380,7 +382,7 @@ impl<'source> Lexer<'source> {
Err(err) => {
return Err(LexicalError::new(
LexicalErrorType::OtherError(format!("{err:?}").into_boxed_str()),
self.token_range().start(),
self.token_range(),
))
}
};
@@ -595,7 +597,7 @@ impl<'source> Lexer<'source> {
};
return Err(LexicalError::new(
LexicalErrorType::FStringError(error),
self.offset(),
self.token_range(),
));
}
'\n' | '\r' if !fstring.is_triple_quoted() => {
@@ -608,7 +610,7 @@ impl<'source> Lexer<'source> {
}
return Err(LexicalError::new(
LexicalErrorType::FStringError(FStringErrorType::UnterminatedString),
self.offset(),
self.token_range(),
));
}
'\\' => {
@@ -716,13 +718,13 @@ impl<'source> Lexer<'source> {
{
return Err(LexicalError::new(
LexicalErrorType::FStringError(FStringErrorType::UnclosedLbrace),
self.cursor.text_len(),
self.token_range(),
));
}
}
return Err(LexicalError::new(
LexicalErrorType::Eof,
self.cursor.text_len(),
LexicalErrorType::UnclosedStringError,
self.token_range(),
));
};
@@ -765,13 +767,13 @@ impl<'source> Lexer<'source> {
{
return Err(LexicalError::new(
LexicalErrorType::FStringError(FStringErrorType::UnclosedLbrace),
self.offset(),
self.token_range(),
));
}
}
return Err(LexicalError::new(
LexicalErrorType::StringError,
self.offset(),
self.token_range(),
));
};
@@ -806,17 +808,13 @@ impl<'source> Lexer<'source> {
LexicalErrorType::FStringError(
FStringErrorType::UnclosedLbrace,
),
self.offset() - TextSize::new(1),
self.token_range(),
));
}
}
return Err(LexicalError::new(
LexicalErrorType::OtherError(
"EOL while scanning string literal"
.to_string()
.into_boxed_str(),
),
self.offset() - TextSize::new(1),
LexicalErrorType::UnclosedStringError,
self.token_range(),
));
}
Some(ch) if ch == quote => {
@@ -866,7 +864,7 @@ impl<'source> Lexer<'source> {
self.pending_indentation = Some(indentation);
let offset = self.offset();
self.indentations.dedent_one(indentation).map_err(|_| {
LexicalError::new(LexicalErrorType::IndentationError, offset)
LexicalError::new(LexicalErrorType::IndentationError, self.token_range())
})?;
return Ok((Tok::Dedent, TextRange::empty(offset)));
}
@@ -874,7 +872,7 @@ impl<'source> Lexer<'source> {
Err(_) => {
return Err(LexicalError::new(
LexicalErrorType::IndentationError,
self.offset(),
self.token_range(),
));
}
}
@@ -900,7 +898,7 @@ impl<'source> Lexer<'source> {
} else {
Err(LexicalError::new(
LexicalErrorType::UnrecognizedToken { tok: c },
self.token_start(),
self.token_range(),
))
}
} else {
@@ -924,11 +922,11 @@ impl<'source> Lexer<'source> {
if self.cursor.eat_char('\r') {
self.cursor.eat_char('\n');
} else if self.cursor.is_eof() {
return Err(LexicalError::new(LexicalErrorType::Eof, self.token_start()));
return Err(LexicalError::new(LexicalErrorType::Eof, self.token_range()));
} else if !self.cursor.eat_char('\n') {
return Err(LexicalError::new(
LexicalErrorType::LineContinuationError,
self.token_start(),
self.token_range(),
));
}
}
@@ -962,11 +960,11 @@ impl<'source> Lexer<'source> {
if self.cursor.eat_char('\r') {
self.cursor.eat_char('\n');
} else if self.cursor.is_eof() {
return Err(LexicalError::new(LexicalErrorType::Eof, self.token_start()));
return Err(LexicalError::new(LexicalErrorType::Eof, self.token_range()));
} else if !self.cursor.eat_char('\n') {
return Err(LexicalError::new(
LexicalErrorType::LineContinuationError,
self.token_start(),
self.token_range(),
));
}
indentation = Indentation::root();
@@ -1004,7 +1002,7 @@ impl<'source> Lexer<'source> {
self.pending_indentation = Some(indentation);
self.indentations.dedent_one(indentation).map_err(|_| {
LexicalError::new(LexicalErrorType::IndentationError, self.offset())
LexicalError::new(LexicalErrorType::IndentationError, self.token_range())
})?;
Some((Tok::Dedent, TextRange::empty(self.offset())))
@@ -1020,7 +1018,7 @@ impl<'source> Lexer<'source> {
Err(_) => {
return Err(LexicalError::new(
LexicalErrorType::IndentationError,
self.offset(),
self.token_range(),
));
}
};
@@ -1034,7 +1032,7 @@ impl<'source> Lexer<'source> {
if self.nesting > 0 {
// Reset the nesting to avoid going into infinite loop.
self.nesting = 0;
return Err(LexicalError::new(LexicalErrorType::Eof, self.offset()));
return Err(LexicalError::new(LexicalErrorType::Eof, self.token_range()));
}
// Next, insert a trailing newline, if required.
@@ -1201,7 +1199,7 @@ impl<'source> Lexer<'source> {
if fstring.nesting() == self.nesting {
return Err(LexicalError::new(
LexicalErrorType::FStringError(FStringErrorType::SingleRbrace),
self.token_start(),
self.token_range(),
));
}
fstring.try_end_format_spec(self.nesting);
@@ -1295,7 +1293,7 @@ impl<'source> Lexer<'source> {
return Err(LexicalError::new(
LexicalErrorType::UnrecognizedToken { tok: c },
self.token_start(),
self.token_range(),
));
}
};
@@ -1359,12 +1357,12 @@ pub struct LexicalError {
/// The type of error that occurred.
error: LexicalErrorType,
/// The location of the error.
location: TextSize,
location: TextRange,
}
impl LexicalError {
/// Creates a new `LexicalError` with the given error type and location.
pub fn new(error: LexicalErrorType, location: TextSize) -> Self {
pub fn new(error: LexicalErrorType, location: TextRange) -> Self {
Self { error, location }
}
@@ -1376,7 +1374,7 @@ impl LexicalError {
self.error
}
pub fn location(&self) -> TextSize {
pub fn location(&self) -> TextRange {
self.location
}
}
@@ -1401,7 +1399,7 @@ impl std::fmt::Display for LexicalError {
f,
"{} at byte offset {}",
self.error(),
u32::from(self.location())
u32::from(self.location().start())
)
}
}
@@ -1416,9 +1414,14 @@ pub enum LexicalErrorType {
// to use the `UnicodeError` variant instead.
#[doc(hidden)]
StringError,
// TODO: Should take a start/end position to report.
/// A string literal without the closing quote.
UnclosedStringError,
/// Decoding of a unicode escape sequence in a string literal failed.
UnicodeError,
/// Missing the `{` for unicode escape sequence.
MissingUnicodeLbrace,
/// Missing the `}` for unicode escape sequence.
MissingUnicodeRbrace,
/// The nesting of brackets/braces/parentheses is not balanced.
NestingError,
/// The indentation is not consistent.
@@ -1495,6 +1498,15 @@ impl std::fmt::Display for LexicalErrorType {
LexicalErrorType::Eof => write!(f, "unexpected EOF while parsing"),
LexicalErrorType::AssignmentError => write!(f, "invalid assignment target"),
LexicalErrorType::OtherError(msg) => write!(f, "{msg}"),
LexicalErrorType::UnclosedStringError => {
write!(f, "missing closing quote in string literal")
}
LexicalErrorType::MissingUnicodeLbrace => {
write!(f, "Missing `{{` in Unicode escape sequence")
}
LexicalErrorType::MissingUnicodeRbrace => {
write!(f, "Missing `}}` in Unicode escape sequence")
}
}
}
}
@@ -2301,7 +2313,6 @@ f"{(lambda x:{x})}"
assert_eq!(lex_fstring_error("f'{a:b}}'"), SingleRbrace);
assert_eq!(lex_fstring_error("f'{3:}}>10}'"), SingleRbrace);
assert_eq!(lex_fstring_error(r"f'\{foo}\}'"), SingleRbrace);
assert_eq!(lex_fstring_error("f'{'"), UnclosedLbrace);
assert_eq!(lex_fstring_error("f'{foo!r'"), UnclosedLbrace);
assert_eq!(lex_fstring_error("f'{foo='"), UnclosedLbrace);
@@ -2336,7 +2347,7 @@ f"{(lambda x:{x})}"
error: FStringError(
UnclosedLbrace,
),
location: 4,
location: 3..4,
}
"###);
@@ -2345,7 +2356,7 @@ f"{(lambda x:{x})}"
error: FStringError(
UnclosedLbrace,
),
location: 6,
location: 3..6,
}
"###);
}


@@ -109,28 +109,226 @@
//! [parsing]: https://en.wikipedia.org/wiki/Parsing
//! [lexer]: crate::lexer
pub use parser::{
parse, parse_expression, parse_expression_starts_at, parse_program, parse_starts_at,
parse_suite, parse_tokens, ParseError, ParseErrorType,
};
use ruff_python_ast::{Mod, PySourceType, Suite};
pub use string::FStringErrorType;
use std::cell::Cell;
pub use error::{FStringErrorType, ParseError, ParseErrorType};
use lexer::{lex, lex_starts_at};
pub use parser::Program;
use ruff_python_ast::{Expr, Mod, ModModule, PySourceType, Suite};
use ruff_text_size::TextSize;
pub use token::{StringKind, Tok, TokenKind};
use crate::lexer::LexResult;
mod context;
mod function;
mod error;
mod invalid;
// Skip flattening lexer to distinguish from full ruff_python_parser
mod lalrpop;
pub mod lexer;
mod parser;
mod soft_keywords;
mod string;
mod token;
mod token_set;
mod token_source;
pub mod typing;
thread_local! {
static NEW_PARSER: Cell<bool> = Cell::new(std::env::var("NEW_PARSER").is_ok());
}
/// Controls whether the current thread uses the new hand-written parser or the old LALRPOP-based parser.
///
/// Uses the new hand-written parser if `use_new_parser` is `true`.
///
/// Defaults to the new hand-written parser if the environment variable `NEW_PARSER` is set.
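///
/// # Example
///
/// A minimal sketch of opting the current thread into the new parser:
///
/// ```
/// use ruff_python_parser::{parse_program, set_new_parser};
///
/// // Use the new hand-written parser for everything parsed on this thread.
/// set_new_parser(true);
/// assert!(parse_program("x = 1 + 2").is_ok());
/// ```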
pub fn set_new_parser(use_new_parser: bool) {
NEW_PARSER.set(use_new_parser);
}
/// Parse a full Python program usually consisting of multiple lines.
///
/// This is a convenience function that can be used to parse a full Python program without having to
/// specify the [`Mode`] or the location. It is probably what you want to use most of the time.
///
/// # Example
///
/// For example, parsing a simple function definition and a call to that function:
///
/// ```
/// use ruff_python_parser as parser;
/// let source = r#"
/// def foo():
/// return 42
///
/// print(foo())
/// "#;
/// let program = parser::parse_program(source);
/// assert!(program.is_ok());
/// ```
pub fn parse_program(source: &str) -> Result<ModModule, ParseError> {
let lexer = lex(source, Mode::Module);
match parse_tokens(lexer.collect(), source, Mode::Module)? {
Mod::Module(m) => Ok(m),
Mod::Expression(_) => unreachable!("Mode::Module doesn't return other variant"),
}
}
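/// Parses a full Python program like [`parse_program`], returning only its statements as a
/// [`Suite`].
///
/// # Example
///
/// A minimal sketch:
///
/// ```
/// use ruff_python_parser as parser;
///
/// let suite = parser::parse_suite("x = 1\nprint(x)\n");
/// assert!(suite.is_ok());
/// ```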
pub fn parse_suite(source: &str) -> Result<Suite, ParseError> {
parse_program(source).map(|m| m.body)
}
/// Parses a single Python expression.
///
/// This convenience function can be used to parse a single expression without having to
/// specify the Mode or the location.
///
/// # Example
///
/// For example, parsing a single expression denoting the addition of two numbers:
///
/// ```
/// use ruff_python_parser as parser;
/// let expr = parser::parse_expression("1 + 2");
///
/// assert!(expr.is_ok());
///
/// ```
pub fn parse_expression(source: &str) -> Result<Expr, ParseError> {
let lexer = lex(source, Mode::Expression).collect();
match parse_tokens(lexer, source, Mode::Expression)? {
Mod::Expression(expression) => Ok(*expression.body),
Mod::Module(_m) => unreachable!("Mode::Expression doesn't return other variant"),
}
}
/// Parses a Python expression from a given location.
///
/// This function allows you to specify the location of the expression in the source code. Other
/// than that, it behaves exactly like [`parse_expression`].
///
/// # Example
///
/// Parsing a single expression denoting the addition of two numbers, but this time specifying a different,
/// somewhat silly, location:
///
/// ```
/// use ruff_python_parser::{parse_expression_starts_at};
/// # use ruff_text_size::TextSize;
///
/// let expr = parse_expression_starts_at("1 + 2", TextSize::from(400));
/// assert!(expr.is_ok());
/// ```
pub fn parse_expression_starts_at(source: &str, offset: TextSize) -> Result<Expr, ParseError> {
let lexer = lex_starts_at(source, Mode::Module, offset).collect();
match parse_tokens(lexer, source, Mode::Expression)? {
Mod::Expression(expression) => Ok(*expression.body),
Mod::Module(_m) => unreachable!("Mode::Expression doesn't return other variant"),
}
}
/// Parse the given Python source code using the specified [`Mode`].
///
/// This function is the most general function to parse Python code. Based on the [`Mode`] supplied,
/// it can be used to parse a single expression, a full Python program, an interactive expression
/// or a Python program containing IPython escape commands.
///
/// # Example
///
/// If we want to parse a simple expression, we can use the [`Mode::Expression`] mode during
/// parsing:
///
/// ```
/// use ruff_python_parser::{Mode, parse};
///
/// let expr = parse("1 + 2", Mode::Expression);
/// assert!(expr.is_ok());
/// ```
///
/// Alternatively, we can parse a full Python program consisting of multiple lines:
///
/// ```
/// use ruff_python_parser::{Mode, parse};
///
/// let source = r#"
/// class Greeter:
///
/// def greet(self):
/// print("Hello, world!")
/// "#;
/// let program = parse(source, Mode::Module);
/// assert!(program.is_ok());
/// ```
///
/// Additionally, we can parse a Python program containing IPython escapes:
///
/// ```
/// use ruff_python_parser::{Mode, parse};
///
/// let source = r#"
/// %timeit 1 + 2
/// ?str.replace
/// !ls
/// "#;
/// let program = parse(source, Mode::Ipython);
/// assert!(program.is_ok());
/// ```
pub fn parse(source: &str, mode: Mode) -> Result<Mod, ParseError> {
let lxr = lexer::lex(source, mode);
parse_tokens(lxr.collect(), source, mode)
}
/// Parse the given Python source code using the specified [`Mode`] and [`TextSize`].
///
/// This function allows you to specify the location of the source code. Other than that, it
/// behaves exactly like [`parse`].
///
/// # Example
///
/// ```
/// # use ruff_text_size::TextSize;
/// use ruff_python_parser::{Mode, parse_starts_at};
///
/// let source = r#"
/// def fib(i):
/// a, b = 0, 1
/// for _ in range(i):
/// a, b = b, a + b
/// return a
///
/// print(fib(42))
/// "#;
/// let program = parse_starts_at(source, Mode::Module, TextSize::from(0));
/// assert!(program.is_ok());
/// ```
pub fn parse_starts_at(source: &str, mode: Mode, offset: TextSize) -> Result<Mod, ParseError> {
let lxr = lexer::lex_starts_at(source, mode, offset);
parse_tokens(lxr.collect(), source, mode)
}
/// Parse an iterator of [`LexResult`]s using the specified [`Mode`].
///
/// This allows you to perform preprocessing on the tokens before parsing them.
///
/// # Example
///
/// As an example, instead of parsing a string, we can parse a list of tokens after we generate
/// them using the [`lexer::lex`] function:
///
/// ```
/// use ruff_python_parser::{lexer::lex, Mode, parse_tokens};
///
/// let source = "1 + 2";
/// let expr = parse_tokens(lex(source, Mode::Expression).collect(), source, Mode::Expression);
/// assert!(expr.is_ok());
/// ```
pub fn parse_tokens(tokens: Vec<LexResult>, source: &str, mode: Mode) -> Result<Mod, ParseError> {
if NEW_PARSER.get() {
crate::parser::parse_tokens(tokens, source, mode)
} else {
crate::lalrpop::parse_tokens(tokens, source, mode)
}
}
/// Collect tokens up to and including the first error.
pub fn tokenize(contents: &str, mode: Mode) -> Vec<LexResult> {
let mut tokens: Vec<LexResult> = allocate_tokens_vec(contents);
@@ -248,28 +446,3 @@ impl std::fmt::Display for ModeParseError {
write!(f, r#"mode must be "exec", "eval", "ipython", or "single""#)
}
}
#[rustfmt::skip]
#[allow(unreachable_pub)]
#[allow(clippy::type_complexity)]
#[allow(clippy::extra_unused_lifetimes)]
#[allow(clippy::needless_lifetimes)]
#[allow(clippy::unused_self)]
#[allow(clippy::cast_sign_loss)]
#[allow(clippy::default_trait_access)]
#[allow(clippy::let_unit_value)]
#[allow(clippy::just_underscores_and_digits)]
#[allow(clippy::no_effect_underscore_binding)]
#[allow(clippy::trivially_copy_pass_by_ref)]
#[allow(clippy::option_option)]
#[allow(clippy::unnecessary_wraps)]
#[allow(clippy::uninlined_format_args)]
#[allow(clippy::cloned_instead_of_copied)]
mod python {
#[cfg(feature = "lalrpop")]
include!(concat!(env!("OUT_DIR"), "/src/python.rs"));
#[cfg(not(feature = "lalrpop"))]
include!("python.rs");
}

File diff suppressed because it is too large

@@ -0,0 +1,148 @@
use std::hash::BuildHasherDefault;
use ast::CmpOp;
use ruff_python_ast::{self as ast, Expr, ExprContext};
use rustc_hash::FxHashSet;
use crate::{ParseError, ParseErrorType, TokenKind};
/// Sets the `ctx` for `Expr::Name`, `Expr::Attribute`, `Expr::Subscript`, `Expr::Starred`,
/// `Expr::Tuple` and `Expr::List`. If `expr` is either `Expr::Tuple` or `Expr::List`,
/// recursively sets the `ctx` for their elements.
pub(super) fn set_expr_ctx(expr: &mut Expr, new_ctx: ExprContext) {
match expr {
Expr::Name(ast::ExprName { ctx, .. })
| Expr::Attribute(ast::ExprAttribute { ctx, .. })
| Expr::Subscript(ast::ExprSubscript { ctx, .. }) => *ctx = new_ctx,
Expr::Starred(ast::ExprStarred { value, ctx, .. }) => {
*ctx = new_ctx;
set_expr_ctx(value, new_ctx);
}
Expr::UnaryOp(ast::ExprUnaryOp { operand, .. }) => {
set_expr_ctx(operand, new_ctx);
}
Expr::List(ast::ExprList { elts, ctx, .. })
| Expr::Tuple(ast::ExprTuple { elts, ctx, .. }) => {
*ctx = new_ctx;
elts.iter_mut()
.for_each(|element| set_expr_ctx(element, new_ctx));
}
_ => {}
}
}
/// Checks whether the given expression is a valid assignment target: an identifier, attribute
/// expression, or subscript expression; a starred expression whose value is itself a valid
/// target; or a list/tuple whose elements are all valid assignment targets.
pub(super) fn is_valid_assignment_target(expr: &Expr) -> bool {
match expr {
Expr::Starred(ast::ExprStarred { value, .. }) => is_valid_assignment_target(value),
Expr::List(ast::ExprList { elts, .. }) | Expr::Tuple(ast::ExprTuple { elts, .. }) => {
elts.iter().all(is_valid_assignment_target)
}
Expr::Name(_) | Expr::Attribute(_) | Expr::Subscript(_) => true,
_ => false,
}
}
/// Checks whether the given expression is a valid target of an augmented assignment.
/// Only identifiers, attribute expressions, and subscript expressions qualify.
pub(super) fn is_valid_aug_assignment_target(expr: &Expr) -> bool {
matches!(
expr,
Expr::Name(_) | Expr::Attribute(_) | Expr::Subscript(_)
)
}
/// Checks whether the given expression is a valid target of a `del` statement: an identifier,
/// attribute, or subscript expression, or a list/tuple whose elements are all valid `del` targets.
pub(super) fn is_valid_del_target(expr: &Expr) -> bool {
// https://github.com/python/cpython/blob/d864b0094f9875c5613cbb0b7f7f3ca8f1c6b606/Parser/action_helpers.c#L1150-L1180
match expr {
Expr::List(ast::ExprList { elts, .. }) | Expr::Tuple(ast::ExprTuple { elts, .. }) => {
elts.iter().all(is_valid_del_target)
}
Expr::Name(_) | Expr::Attribute(_) | Expr::Subscript(_) => true,
_ => false,
}
}
/// Converts an array of two [`TokenKind`]s to the corresponding [`CmpOp`].
pub(super) fn token_kind_to_cmp_op(kind: [TokenKind; 2]) -> Result<CmpOp, ()> {
Ok(match kind {
[TokenKind::Is, TokenKind::Not] => CmpOp::IsNot,
[TokenKind::Is, _] => CmpOp::Is,
[TokenKind::In, _] => CmpOp::In,
[TokenKind::EqEqual, _] => CmpOp::Eq,
[TokenKind::Less, _] => CmpOp::Lt,
[TokenKind::Greater, _] => CmpOp::Gt,
[TokenKind::NotEqual, _] => CmpOp::NotEq,
[TokenKind::LessEqual, _] => CmpOp::LtE,
[TokenKind::GreaterEqual, _] => CmpOp::GtE,
[TokenKind::Not, TokenKind::In] => CmpOp::NotIn,
_ => return Err(()),
})
}
// Performs validation of function/lambda parameters in a function or lambda definition.
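// For example, `def f(a, b, a): ...` or `lambda x, *, x: x` would be rejected here with a
// `DuplicateArgumentError` naming the repeated parameter (illustrative).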
pub(super) fn validate_parameters(parameters: &ast::Parameters) -> Result<(), ParseError> {
let mut all_arg_names = FxHashSet::with_capacity_and_hasher(
parameters.posonlyargs.len()
+ parameters.args.len()
+ usize::from(parameters.vararg.is_some())
+ parameters.kwonlyargs.len()
+ usize::from(parameters.kwarg.is_some()),
BuildHasherDefault::default(),
);
let posonlyargs = parameters.posonlyargs.iter();
let args = parameters.args.iter();
let kwonlyargs = parameters.kwonlyargs.iter();
let vararg: Option<&ast::Parameter> = parameters.vararg.as_deref();
let kwarg: Option<&ast::Parameter> = parameters.kwarg.as_deref();
for arg in posonlyargs
.chain(args)
.chain(kwonlyargs)
.map(|arg| &arg.parameter)
.chain(vararg)
.chain(kwarg)
{
let range = arg.range;
let arg_name = arg.name.as_str();
if !all_arg_names.insert(arg_name) {
return Err(ParseError {
error: ParseErrorType::DuplicateArgumentError(arg_name.to_string()),
location: range,
});
}
}
Ok(())
}
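// Performs validation of the keyword arguments in a call, rejecting duplicate keyword names
// such as `f(x=1, x=2)` (reported as a `DuplicateKeywordArgumentError`).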
pub(super) fn validate_arguments(arguments: &ast::Arguments) -> Result<(), ParseError> {
let mut all_arg_names = FxHashSet::with_capacity_and_hasher(
arguments.keywords.len(),
BuildHasherDefault::default(),
);
for (name, range) in arguments
.keywords
.iter()
.filter_map(|argument| argument.arg.as_ref().map(|arg| (arg, argument.range)))
{
let arg_name = name.as_str();
if !all_arg_names.insert(arg_name) {
return Err(ParseError {
error: ParseErrorType::DuplicateKeywordArgumentError(arg_name.to_string()),
location: range,
});
}
}
Ok(())
}

File diff suppressed because it is too large

@@ -0,0 +1,672 @@
use ruff_python_ast::{
self as ast, Expr, ExprContext, Number, Operator, Pattern, Singleton, UnaryOp,
};
use ruff_text_size::{Ranged, TextSize};
use crate::parser::progress::ParserProgress;
use crate::parser::{Parser, SequenceMatchPatternParentheses};
use crate::token_set::TokenSet;
use crate::{ParseErrorType, Tok, TokenKind};
use super::RecoveryContextKind;
/// The set of tokens that can start a literal pattern.
const LITERAL_PATTERN_START_SET: TokenSet = TokenSet::new([
TokenKind::None,
TokenKind::True,
TokenKind::False,
TokenKind::String,
TokenKind::Int,
TokenKind::Float,
TokenKind::Complex,
]);
/// The set of tokens that can start a pattern.
const PATTERN_START_SET: TokenSet = TokenSet::new([
// Star pattern
TokenKind::Star,
// Capture pattern
// Wildcard pattern ('_' is a name token)
// Value pattern (name or attribute)
// Class pattern
TokenKind::Name,
// Group pattern
TokenKind::Lpar,
// Sequence pattern
TokenKind::Lsqb,
// Mapping pattern
TokenKind::Lbrace,
])
.union(LITERAL_PATTERN_START_SET);
/// The set of tokens that can start a mapping pattern.
const MAPPING_PATTERN_START_SET: TokenSet = TokenSet::new([
// Double star pattern
TokenKind::DoubleStar,
// Value pattern
TokenKind::Name,
])
.union(LITERAL_PATTERN_START_SET);
impl<'src> Parser<'src> {
/// Returns `true` if the current token is a valid start of a pattern.
pub(super) fn at_pattern_start(&self) -> bool {
self.at_ts(PATTERN_START_SET)
}
/// Returns `true` if the current token is a valid start of a mapping pattern.
pub(super) fn at_mapping_pattern_start(&self) -> bool {
self.at_ts(MAPPING_PATTERN_START_SET)
}
pub(super) fn parse_match_patterns(&mut self) -> Pattern {
let start = self.node_start();
let pattern = self.parse_match_pattern();
if self.at(TokenKind::Comma) {
Pattern::MatchSequence(self.parse_sequence_match_pattern(pattern, start, None))
} else {
pattern
}
}
fn parse_match_pattern(&mut self) -> Pattern {
let start = self.node_start();
let mut lhs = self.parse_match_pattern_lhs();
// Or pattern
if self.at(TokenKind::Vbar) {
let mut patterns = vec![lhs];
let mut progress = ParserProgress::default();
while self.eat(TokenKind::Vbar) {
progress.assert_progressing(self);
let pattern = self.parse_match_pattern_lhs();
patterns.push(pattern);
}
lhs = Pattern::MatchOr(ast::PatternMatchOr {
range: self.node_range(start),
patterns,
});
}
// As pattern
if self.eat(TokenKind::As) {
let ident = self.parse_identifier();
lhs = Pattern::MatchAs(ast::PatternMatchAs {
range: self.node_range(start),
name: Some(ident),
pattern: Some(Box::new(lhs)),
});
}
lhs
}
fn parse_match_pattern_lhs(&mut self) -> Pattern {
let start = self.node_start();
let mut lhs = match self.current_token_kind() {
TokenKind::Lbrace => Pattern::MatchMapping(self.parse_match_pattern_mapping()),
TokenKind::Star => Pattern::MatchStar(self.parse_match_pattern_star()),
TokenKind::Lpar | TokenKind::Lsqb => self.parse_delimited_match_pattern(),
_ => self.parse_match_pattern_literal(),
};
if self.at(TokenKind::Lpar) {
lhs = Pattern::MatchClass(self.parse_match_pattern_class(lhs, start));
}
// TODO(dhruvmanila): This error isn't being reported (`1 + 2` can't be used as a pattern)
// literal_pattern:
// | signed_number !('+' | '-')
if self.at(TokenKind::Plus) || self.at(TokenKind::Minus) {
let (operator_token, _) = self.next_token();
let operator = if matches!(operator_token, Tok::Plus) {
Operator::Add
} else {
Operator::Sub
};
let lhs_value = if let Pattern::MatchValue(lhs) = lhs {
if !lhs.value.is_literal_expr() && !matches!(lhs.value.as_ref(), Expr::UnaryOp(_)) {
self.add_error(
ParseErrorType::OtherError(format!(
"invalid `{}` expression for match pattern",
self.src_text(lhs.range)
)),
lhs.range,
);
}
lhs.value
} else {
self.add_error(
ParseErrorType::OtherError("invalid lhs pattern".to_string()),
&lhs,
);
#[allow(deprecated)]
Box::new(Expr::Invalid(ast::ExprInvalid {
value: self.src_text(lhs.range()).into(),
range: lhs.range(),
}))
};
let rhs_pattern = self.parse_match_pattern_lhs();
let rhs_value = if let Pattern::MatchValue(rhs) = rhs_pattern {
if !rhs.value.is_literal_expr() {
self.add_error(
ParseErrorType::OtherError(
"invalid expression for match pattern".to_string(),
),
&rhs,
);
}
rhs.value
} else {
self.add_error(
ParseErrorType::OtherError("invalid rhs pattern".to_string()),
rhs_pattern.range(),
);
#[allow(deprecated)]
Box::new(Expr::Invalid(ast::ExprInvalid {
value: self.src_text(rhs_pattern.range()).into(),
range: rhs_pattern.range(),
}))
};
if matches!(
rhs_value.as_ref(),
Expr::UnaryOp(ast::ExprUnaryOp {
op: UnaryOp::USub,
..
})
) {
self.add_error(
ParseErrorType::OtherError(
"`-` not allowed in rhs of match pattern".to_string(),
),
rhs_value.range(),
);
}
let range = self.node_range(start);
return Pattern::MatchValue(ast::PatternMatchValue {
value: Box::new(Expr::BinOp(ast::ExprBinOp {
left: lhs_value,
op: operator,
right: rhs_value,
range,
})),
range,
});
}
lhs
}
/// Parses a mapping pattern.
///
/// # Panics
///
/// If the parser isn't positioned at a `{` token.
///
/// See: <https://docs.python.org/3/reference/compound_stmts.html#mapping-patterns>
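///
/// For example, `case {"x": 1, **rest}: ...` produces one key/pattern pair and captures the
/// remaining mapping in `rest` (illustrative).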
fn parse_match_pattern_mapping(&mut self) -> ast::PatternMatchMapping {
let start = self.node_start();
self.bump(TokenKind::Lbrace);
let mut keys = vec![];
let mut patterns = vec![];
let mut rest = None;
self.parse_comma_separated_list(
RecoveryContextKind::MatchPatternMapping,
|parser| {
if parser.eat(TokenKind::DoubleStar) {
rest = Some(parser.parse_identifier());
} else {
let key = match parser.parse_match_pattern_lhs() {
Pattern::MatchValue(ast::PatternMatchValue { value, .. }) => *value,
Pattern::MatchSingleton(ast::PatternMatchSingleton { value, range }) => {
match value {
Singleton::None => {
Expr::NoneLiteral(ast::ExprNoneLiteral { range })
}
Singleton::True => Expr::BooleanLiteral(ast::ExprBooleanLiteral {
value: true,
range,
}),
Singleton::False => Expr::BooleanLiteral(ast::ExprBooleanLiteral {
value: false,
range,
}),
}
}
pattern => {
parser.add_error(
ParseErrorType::OtherError(format!(
"invalid mapping pattern key `{}`",
parser.src_text(&pattern)
)),
&pattern,
);
#[allow(deprecated)]
Expr::Invalid(ast::ExprInvalid {
value: parser.src_text(&pattern).into(),
range: pattern.range(),
})
}
};
keys.push(key);
parser.expect(TokenKind::Colon);
patterns.push(parser.parse_match_pattern());
}
},
true,
);
// TODO(dhruvmanila): There can't be any other pattern after a `**` pattern.
// TODO(dhruvmanila): Duplicate literal keys should raise a SyntaxError.
self.expect(TokenKind::Rbrace);
ast::PatternMatchMapping {
range: self.node_range(start),
keys,
patterns,
rest,
}
}
fn parse_match_pattern_star(&mut self) -> ast::PatternMatchStar {
let start = self.node_start();
self.bump(TokenKind::Star);
let ident = self.parse_identifier();
ast::PatternMatchStar {
range: self.node_range(start),
name: if ident.is_valid() && ident.id == "_" {
None
} else {
Some(ident)
},
}
}
fn parse_delimited_match_pattern(&mut self) -> Pattern {
let start = self.node_start();
let parentheses = if self.eat(TokenKind::Lpar) {
SequenceMatchPatternParentheses::Tuple
} else {
self.bump(TokenKind::Lsqb);
SequenceMatchPatternParentheses::List
};
if matches!(
self.current_token_kind(),
TokenKind::Newline | TokenKind::Colon
) {
self.add_error(
ParseErrorType::OtherError(format!(
"missing `{closing}`",
closing = if parentheses.is_list() { "]" } else { ")" }
)),
self.current_token_range(),
);
}
if self.eat(parentheses.closing_kind()) {
return Pattern::MatchSequence(ast::PatternMatchSequence {
patterns: vec![],
range: self.node_range(start),
});
}
let mut pattern = self.parse_match_pattern();
if parentheses.is_list() || self.at(TokenKind::Comma) {
pattern = Pattern::MatchSequence(self.parse_sequence_match_pattern(
pattern,
start,
Some(parentheses),
));
} else {
self.expect(parentheses.closing_kind());
}
pattern
}
/// Parses a sequence pattern.
///
/// If `parentheses` is `None`, it is an [open sequence pattern].
///
/// See: <https://docs.python.org/3/reference/compound_stmts.html#sequence-patterns>
///
/// [open sequence pattern]: https://docs.python.org/3/reference/compound_stmts.html#grammar-token-python-grammar-open_sequence_pattern
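///
/// For example, `case 1, 2, *rest:` is an open sequence pattern, while `case [1, 2, *rest]:`
/// and `case (1, 2, *rest):` are the delimited variants (illustrative).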
fn parse_sequence_match_pattern(
&mut self,
first_element: Pattern,
start: TextSize,
parentheses: Option<SequenceMatchPatternParentheses>,
) -> ast::PatternMatchSequence {
if parentheses.is_some_and(|parentheses| {
self.at(parentheses.closing_kind()) || self.peek_nth(1) == parentheses.closing_kind()
}) {
// The comma is optional if it is a single-element sequence
self.eat(TokenKind::Comma);
} else {
self.expect(TokenKind::Comma);
}
let mut patterns = vec![first_element];
self.parse_comma_separated_list(
RecoveryContextKind::SequenceMatchPattern(parentheses),
|parser| patterns.push(parser.parse_match_pattern()),
true,
);
if let Some(parentheses) = parentheses {
self.expect(parentheses.closing_kind());
}
ast::PatternMatchSequence {
range: self.node_range(start),
patterns,
}
}
fn parse_match_pattern_literal(&mut self) -> Pattern {
let start = self.node_start();
match self.current_token_kind() {
TokenKind::None => {
self.bump(TokenKind::None);
Pattern::MatchSingleton(ast::PatternMatchSingleton {
value: Singleton::None,
range: self.node_range(start),
})
}
TokenKind::True => {
self.bump(TokenKind::True);
Pattern::MatchSingleton(ast::PatternMatchSingleton {
value: Singleton::True,
range: self.node_range(start),
})
}
TokenKind::False => {
self.bump(TokenKind::False);
Pattern::MatchSingleton(ast::PatternMatchSingleton {
value: Singleton::False,
range: self.node_range(start),
})
}
TokenKind::String => {
let str = self.parse_string_expression();
Pattern::MatchValue(ast::PatternMatchValue {
value: Box::new(str),
range: self.node_range(start),
})
}
TokenKind::Complex => {
let (Tok::Complex { real, imag }, _) = self.bump(TokenKind::Complex) else {
unreachable!()
};
let range = self.node_range(start);
Pattern::MatchValue(ast::PatternMatchValue {
value: Box::new(Expr::NumberLiteral(ast::ExprNumberLiteral {
value: Number::Complex { real, imag },
range,
})),
range,
})
}
TokenKind::Int => {
let (Tok::Int { value }, _) = self.bump(TokenKind::Int) else {
unreachable!()
};
let range = self.node_range(start);
Pattern::MatchValue(ast::PatternMatchValue {
value: Box::new(Expr::NumberLiteral(ast::ExprNumberLiteral {
value: Number::Int(value),
range,
})),
range,
})
}
TokenKind::Float => {
let (Tok::Float { value }, _) = self.bump(TokenKind::Float) else {
unreachable!()
};
let range = self.node_range(start);
Pattern::MatchValue(ast::PatternMatchValue {
value: Box::new(Expr::NumberLiteral(ast::ExprNumberLiteral {
value: Number::Float(value),
range,
})),
range,
})
}
TokenKind::Name if self.peek_nth(1) == TokenKind::Dot => {
let (Tok::Name { name }, _) = self.bump(TokenKind::Name) else {
unreachable!()
};
let id = Expr::Name(ast::ExprName {
id: name.to_string(),
ctx: ExprContext::Load,
range: self.node_range(start),
});
let attribute = self.parse_attr_expr_for_match_pattern(id, start);
Pattern::MatchValue(ast::PatternMatchValue {
value: Box::new(attribute),
range: self.node_range(start),
})
}
TokenKind::Name => {
let (Tok::Name { name }, _) = self.bump(TokenKind::Name) else {
unreachable!()
};
let range = self.node_range(start);
Pattern::MatchAs(ast::PatternMatchAs {
range,
pattern: None,
// A lone `_` is the wildcard pattern and doesn't capture a name.
name: if &*name == "_" {
None
} else {
Some(ast::Identifier {
id: name.to_string(),
range,
})
},
})
}
TokenKind::Minus
if matches!(
self.peek_nth(1),
TokenKind::Int | TokenKind::Float | TokenKind::Complex
) =>
{
let parsed_expr = self.parse_lhs_expression();
let range = self.node_range(start);
Pattern::MatchValue(ast::PatternMatchValue {
value: Box::new(parsed_expr.expr),
range,
})
}
kind => {
// Upon encountering an unexpected token, return a `Pattern::MatchValue` containing
// an empty `Expr::Name`.
let invalid_node = if kind.is_keyword() {
Expr::Name(self.parse_name())
} else {
self.add_error(
ParseErrorType::OtherError("Expected a pattern".to_string()),
self.current_token_range(),
);
Expr::Name(ast::ExprName {
range: self.missing_node_range(),
id: String::new(),
ctx: ExprContext::Load,
})
};
Pattern::MatchValue(ast::PatternMatchValue {
value: Box::new(invalid_node),
range: self.missing_node_range(),
})
}
}
}
fn parse_attr_expr_for_match_pattern(&mut self, mut lhs: Expr, start: TextSize) -> Expr {
while self.current_token_kind() == TokenKind::Dot {
lhs = Expr::Attribute(self.parse_attribute_expression(lhs, start));
}
lhs
}
/// Parses the [pattern arguments] in a class pattern.
///
/// # Panics
///
/// If the parser isn't positioned at a `(` token.
///
/// See: <https://docs.python.org/3/reference/compound_stmts.html#class-patterns>
///
/// [pattern arguments]: https://docs.python.org/3/reference/compound_stmts.html#grammar-token-python-grammar-pattern_arguments
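///
/// For example, in `case Point(0, y=1): ...` this parses `(0, y=1)` into one positional
/// pattern and one keyword pattern (illustrative).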
fn parse_match_pattern_class(
&mut self,
cls: Pattern,
start: TextSize,
) -> ast::PatternMatchClass {
self.bump(TokenKind::Lpar);
let mut patterns = vec![];
let mut keywords = vec![];
let mut has_seen_pattern = false;
let mut has_seen_keyword_pattern = false;
let arguments_start = self.node_start();
self.parse_comma_separated_list(
RecoveryContextKind::MatchPatternClassArguments,
|parser| {
let pattern_start = parser.node_start();
let pattern = parser.parse_match_pattern();
if parser.eat(TokenKind::Equal) {
has_seen_pattern = false;
has_seen_keyword_pattern = true;
let value_pattern = parser.parse_match_pattern();
// Key can only be an identifier
if let Pattern::MatchAs(ast::PatternMatchAs {
name: Some(attr), ..
}) = pattern
{
keywords.push(ast::PatternKeyword {
attr,
pattern: value_pattern,
range: parser.node_range(pattern_start),
});
} else {
// In case it's not a valid keyword pattern, we'll add an empty identifier
// to indicate that. This is to avoid dropping the parsed value pattern.
keywords.push(ast::PatternKeyword {
attr: ast::Identifier {
id: String::new(),
range: parser.missing_node_range(),
},
pattern: value_pattern,
range: parser.node_range(pattern_start),
});
parser.add_error(
ParseErrorType::OtherError("Invalid keyword pattern".to_string()),
parser.node_range(pattern_start),
);
}
} else {
has_seen_pattern = true;
patterns.push(pattern);
}
if has_seen_keyword_pattern && has_seen_pattern {
parser.add_error(
ParseErrorType::OtherError(
"pattern not allowed after keyword pattern".to_string(),
),
parser.node_range(pattern_start),
);
}
},
true,
);
self.expect(TokenKind::Rpar);
let arguments_range = self.node_range(arguments_start);
let cls = match cls {
Pattern::MatchAs(ast::PatternMatchAs {
name: Some(ident), ..
}) => {
if ident.is_valid() {
Box::new(Expr::Name(ast::ExprName {
range: ident.range(),
id: ident.id,
ctx: ExprContext::Load,
}))
} else {
#[allow(deprecated)]
Box::new(Expr::Invalid(ast::ExprInvalid {
value: self.src_text(&ident).into(),
range: ident.range(),
}))
}
}
Pattern::MatchValue(ast::PatternMatchValue { value, range: _ })
if matches!(value.as_ref(), Expr::Attribute(_)) =>
{
value
}
pattern => {
self.add_error(
ParseErrorType::OtherError("invalid pattern match class".to_string()),
&pattern,
);
// FIXME(micha): Including the entire range is not ideal because it also includes trivia.
#[allow(deprecated)]
Box::new(Expr::Invalid(ast::ExprInvalid {
value: self.src_text(pattern.range()).into(),
range: pattern.range(),
}))
}
};
ast::PatternMatchClass {
cls,
arguments: ast::PatternArguments {
patterns,
keywords,
range: arguments_range,
},
range: self.node_range(start),
}
}
}


@@ -0,0 +1,36 @@
use crate::parser::Parser;
use crate::TokenKind;
use ruff_text_size::TextSize;
/// Captures the progress of the parser and allows testing whether the parser is still making progress.
#[derive(Debug, Copy, Clone, Default)]
pub(super) struct ParserProgress(Option<(TokenKind, TextSize)>);
impl ParserProgress {
/// Returns `true` if the parser has moved past the last recorded position (or if none was recorded).
#[inline]
fn has_progressed(self, p: &Parser) -> bool {
match self.0 {
None => true,
Some(snapshot) => snapshot != (p.current_token_kind(), p.current_token_range().start()),
}
}
/// Asserts that the parsing is still making progress.
///
/// # Panics
///
/// Panics if the parser hasn't progressed since the last call.
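///
/// A sketch of the intended use inside a parsing loop (not a runnable doctest, since this
/// type is crate-private):
///
/// ```ignore
/// let mut progress = ParserProgress::default();
/// while parser.at_pattern_start() {
///     progress.assert_progressing(&parser);
///     // ... each iteration must consume at least one token ...
/// }
/// ```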
#[inline]
pub(super) fn assert_progressing(&mut self, p: &Parser) {
assert!(
self.has_progressed(p),
"The parser is no longer progressing. Stuck at '{}' {:?}:{:?}",
p.src_text(p.current_token_range()),
p.current_token_kind(),
p.current_token_range(),
);
self.0 = Some((p.current_token_kind(), p.current_token_range().start()));
}
}


@@ -1,5 +1,5 @@
---
source: crates/ruff_python_parser/src/invalid.rs
source: crates/ruff_python_parser/src/parser/tests.rs
expression: ast
---
Ok(

File diff suppressed because it is too large

@@ -0,0 +1,44 @@
use crate::parser::Parser;
use crate::token_source::TokenSource;
use crate::Mode;
mod parser;
mod suite;
// This is a sanity test for what looks like an ipython directive being
// assigned to. It doesn't actually parse as an assignment statement, but
// rather as a directive whose value is `foo = 42`.
#[test]
fn ok_ipy_escape_command() {
use crate::Mode;
let src = r"!foo = 42";
let tokens = crate::lexer::lex(src, Mode::Ipython).collect();
let ast = crate::parse_tokens(tokens, src, Mode::Ipython);
insta::assert_debug_snapshot!(ast);
}
// Test that is intentionally ignored by default.
// Use it for quickly debugging a parser issue.
#[test]
#[ignore]
fn parser_quick_test() {
let src = r"if True:
1
...
if x < 1:
...
else:
pass
if a:
pass
elif b:
...
";
let tokens = crate::lexer::lex(src, Mode::Module).collect();
let ast = Parser::new(src, Mode::Module, TokenSource::new(tokens)).parse_program();
assert_eq!(&ast.parse_errors, &[]);
}

File diff suppressed because it is too large

@@ -0,0 +1,136 @@
---
source: crates/ruff_python_parser/src/parser/tests/parser.rs
expression: "parse(\"\nx: int\n(y): 1 + 2\nvar: tuple[int] | int = 1,\n\")"
---
Program {
ast: Module(
ModModule {
range: 0..46,
body: [
AnnAssign(
StmtAnnAssign {
range: 1..7,
target: Name(
ExprName {
range: 1..2,
id: "x",
ctx: Store,
},
),
annotation: Name(
ExprName {
range: 4..7,
id: "int",
ctx: Load,
},
),
value: None,
simple: true,
},
),
AnnAssign(
StmtAnnAssign {
range: 8..18,
target: Name(
ExprName {
range: 9..10,
id: "y",
ctx: Store,
},
),
annotation: BinOp(
ExprBinOp {
range: 13..18,
left: NumberLiteral(
ExprNumberLiteral {
range: 13..14,
value: Int(
1,
),
},
),
op: Add,
right: NumberLiteral(
ExprNumberLiteral {
range: 17..18,
value: Int(
2,
),
},
),
},
),
value: None,
simple: false,
},
),
AnnAssign(
StmtAnnAssign {
range: 19..45,
target: Name(
ExprName {
range: 19..22,
id: "var",
ctx: Store,
},
),
annotation: BinOp(
ExprBinOp {
range: 24..40,
left: Subscript(
ExprSubscript {
range: 24..34,
value: Name(
ExprName {
range: 24..29,
id: "tuple",
ctx: Load,
},
),
slice: Name(
ExprName {
range: 30..33,
id: "int",
ctx: Load,
},
),
ctx: Load,
},
),
op: BitOr,
right: Name(
ExprName {
range: 37..40,
id: "int",
ctx: Load,
},
),
},
),
value: Some(
Tuple(
ExprTuple {
range: 43..45,
elts: [
NumberLiteral(
ExprNumberLiteral {
range: 43..44,
value: Int(
1,
),
},
),
],
ctx: Load,
parenthesized: false,
},
),
),
simple: true,
},
),
],
},
),
parse_errors: [],
}


@@ -0,0 +1,264 @@
---
source: crates/ruff_python_parser/src/parser/tests/parser.rs
expression: "parse(\"\nx = 1\n[] = *l\n() = *t\na, b = ab\n*a = 1 + 2\na = b = c\nfoo.bar = False\nbaz[0] = 42\n\")"
---
Program {
ast: Module(
ModModule {
range: 0..82,
body: [
Assign(
StmtAssign {
range: 1..6,
targets: [
Name(
ExprName {
range: 1..2,
id: "x",
ctx: Store,
},
),
],
value: NumberLiteral(
ExprNumberLiteral {
range: 5..6,
value: Int(
1,
),
},
),
},
),
Assign(
StmtAssign {
range: 7..14,
targets: [
List(
ExprList {
range: 7..9,
elts: [],
ctx: Store,
},
),
],
value: Starred(
ExprStarred {
range: 12..14,
value: Name(
ExprName {
range: 13..14,
id: "l",
ctx: Load,
},
),
ctx: Load,
},
),
},
),
Assign(
StmtAssign {
range: 15..22,
targets: [
Tuple(
ExprTuple {
range: 15..17,
elts: [],
ctx: Store,
parenthesized: true,
},
),
],
value: Starred(
ExprStarred {
range: 20..22,
value: Name(
ExprName {
range: 21..22,
id: "t",
ctx: Load,
},
),
ctx: Load,
},
),
},
),
Assign(
StmtAssign {
range: 23..32,
targets: [
Tuple(
ExprTuple {
range: 23..27,
elts: [
Name(
ExprName {
range: 23..24,
id: "a",
ctx: Store,
},
),
Name(
ExprName {
range: 26..27,
id: "b",
ctx: Store,
},
),
],
ctx: Store,
parenthesized: false,
},
),
],
value: Name(
ExprName {
range: 30..32,
id: "ab",
ctx: Load,
},
),
},
),
Assign(
StmtAssign {
range: 33..43,
targets: [
Starred(
ExprStarred {
range: 33..35,
value: Name(
ExprName {
range: 34..35,
id: "a",
ctx: Store,
},
),
ctx: Store,
},
),
],
value: BinOp(
ExprBinOp {
range: 38..43,
left: NumberLiteral(
ExprNumberLiteral {
range: 38..39,
value: Int(
1,
),
},
),
op: Add,
right: NumberLiteral(
ExprNumberLiteral {
range: 42..43,
value: Int(
2,
),
},
),
},
),
},
),
Assign(
StmtAssign {
range: 44..53,
targets: [
Name(
ExprName {
range: 44..45,
id: "a",
ctx: Store,
},
),
Name(
ExprName {
range: 48..49,
id: "b",
ctx: Store,
},
),
],
value: Name(
ExprName {
range: 52..53,
id: "c",
ctx: Load,
},
),
},
),
Assign(
StmtAssign {
range: 54..69,
targets: [
Attribute(
ExprAttribute {
range: 54..61,
value: Name(
ExprName {
range: 54..57,
id: "foo",
ctx: Load,
},
),
attr: Identifier {
id: "bar",
range: 58..61,
},
ctx: Store,
},
),
],
value: BooleanLiteral(
ExprBooleanLiteral {
range: 64..69,
value: false,
},
),
},
),
Assign(
StmtAssign {
range: 70..81,
targets: [
Subscript(
ExprSubscript {
range: 70..76,
value: Name(
ExprName {
range: 70..73,
id: "baz",
ctx: Load,
},
),
slice: NumberLiteral(
ExprNumberLiteral {
range: 74..75,
value: Int(
0,
),
},
),
ctx: Store,
},
),
],
value: NumberLiteral(
ExprNumberLiteral {
range: 79..81,
value: Int(
42,
),
},
),
},
),
],
},
),
parse_errors: [],
}


@@ -0,0 +1,155 @@
---
source: crates/ruff_python_parser/src/parser/tests/parser.rs
expression: "parse(\"\nasync def f():\n ...\n\nasync for i in iter:\n ...\n\nasync with x:\n ...\n\n@a\nasync def x():\n ...\n\")"
---
Program {
ast: Module(
ModModule {
range: 0..104,
body: [
FunctionDef(
StmtFunctionDef {
range: 1..23,
is_async: true,
decorator_list: [],
name: Identifier {
id: "f",
range: 11..12,
},
type_params: None,
parameters: Parameters {
range: 12..14,
posonlyargs: [],
args: [],
vararg: None,
kwonlyargs: [],
kwarg: None,
},
returns: None,
body: [
Expr(
StmtExpr {
range: 20..23,
value: EllipsisLiteral(
ExprEllipsisLiteral {
range: 20..23,
},
),
},
),
],
},
),
For(
StmtFor {
range: 25..53,
is_async: true,
target: Name(
ExprName {
range: 35..36,
id: "i",
ctx: Store,
},
),
iter: Name(
ExprName {
range: 40..44,
id: "iter",
ctx: Load,
},
),
body: [
Expr(
StmtExpr {
range: 50..53,
value: EllipsisLiteral(
ExprEllipsisLiteral {
range: 50..53,
},
),
},
),
],
orelse: [],
},
),
With(
StmtWith {
range: 55..76,
is_async: true,
items: [
WithItem {
range: 66..67,
context_expr: Name(
ExprName {
range: 66..67,
id: "x",
ctx: Load,
},
),
optional_vars: None,
},
],
body: [
Expr(
StmtExpr {
range: 73..76,
value: EllipsisLiteral(
ExprEllipsisLiteral {
range: 73..76,
},
),
},
),
],
},
),
FunctionDef(
StmtFunctionDef {
range: 78..103,
is_async: true,
decorator_list: [
Decorator {
range: 78..80,
expression: Name(
ExprName {
range: 79..80,
id: "a",
ctx: Load,
},
),
},
],
name: Identifier {
id: "x",
range: 91..92,
},
type_params: None,
parameters: Parameters {
range: 92..94,
posonlyargs: [],
args: [],
vararg: None,
kwonlyargs: [],
kwarg: None,
},
returns: None,
body: [
Expr(
StmtExpr {
range: 100..103,
value: EllipsisLiteral(
ExprEllipsisLiteral {
range: 100..103,
},
),
},
),
],
},
),
],
},
),
parse_errors: [],
}


@@ -0,0 +1,184 @@
---
source: crates/ruff_python_parser/src/parser/tests/parser.rs
expression: "parse(\"\nvalue.attr\nvalue.attr()\nvalue().attr\nvalue().attr().foo\nvalue.attr.foo\n\")"
---
Program {
ast: Module(
ModModule {
range: 0..72,
body: [
Expr(
StmtExpr {
range: 1..11,
value: Attribute(
ExprAttribute {
range: 1..11,
value: Name(
ExprName {
range: 1..6,
id: "value",
ctx: Load,
},
),
attr: Identifier {
id: "attr",
range: 7..11,
},
ctx: Load,
},
),
},
),
Expr(
StmtExpr {
range: 12..24,
value: Call(
ExprCall {
range: 12..24,
func: Attribute(
ExprAttribute {
range: 12..22,
value: Name(
ExprName {
range: 12..17,
id: "value",
ctx: Load,
},
),
attr: Identifier {
id: "attr",
range: 18..22,
},
ctx: Load,
},
),
arguments: Arguments {
range: 22..24,
args: [],
keywords: [],
},
},
),
},
),
Expr(
StmtExpr {
range: 25..37,
value: Attribute(
ExprAttribute {
range: 25..37,
value: Call(
ExprCall {
range: 25..32,
func: Name(
ExprName {
range: 25..30,
id: "value",
ctx: Load,
},
),
arguments: Arguments {
range: 30..32,
args: [],
keywords: [],
},
},
),
attr: Identifier {
id: "attr",
range: 33..37,
},
ctx: Load,
},
),
},
),
Expr(
StmtExpr {
range: 38..56,
value: Attribute(
ExprAttribute {
range: 38..56,
value: Call(
ExprCall {
range: 38..52,
func: Attribute(
ExprAttribute {
range: 38..50,
value: Call(
ExprCall {
range: 38..45,
func: Name(
ExprName {
range: 38..43,
id: "value",
ctx: Load,
},
),
arguments: Arguments {
range: 43..45,
args: [],
keywords: [],
},
},
),
attr: Identifier {
id: "attr",
range: 46..50,
},
ctx: Load,
},
),
arguments: Arguments {
range: 50..52,
args: [],
keywords: [],
},
},
),
attr: Identifier {
id: "foo",
range: 53..56,
},
ctx: Load,
},
),
},
),
Expr(
StmtExpr {
range: 57..71,
value: Attribute(
ExprAttribute {
range: 57..71,
value: Attribute(
ExprAttribute {
range: 57..67,
value: Name(
ExprName {
range: 57..62,
id: "value",
ctx: Load,
},
),
attr: Identifier {
id: "attr",
range: 63..67,
},
ctx: Load,
},
),
attr: Identifier {
id: "foo",
range: 68..71,
},
ctx: Load,
},
),
},
),
],
},
),
parse_errors: [],
}


@@ -0,0 +1,329 @@
---
source: crates/ruff_python_parser/src/parser/tests/parser.rs
expression: "parse(\"\na += 1\na *= b\na -= 1\na /= a + 1\na //= (a + b) - c ** 2\na @= [1,2]\na %= x\na |= 1\na <<= 2\na >>= 2\na ^= ...\na **= 42\n\")"
---
Program {
ast: Module(
ModModule {
range: 0..115,
body: [
AugAssign(
StmtAugAssign {
range: 1..7,
target: Name(
ExprName {
range: 1..2,
id: "a",
ctx: Store,
},
),
op: Add,
value: NumberLiteral(
ExprNumberLiteral {
range: 6..7,
value: Int(
1,
),
},
),
},
),
AugAssign(
StmtAugAssign {
range: 8..14,
target: Name(
ExprName {
range: 8..9,
id: "a",
ctx: Store,
},
),
op: Mult,
value: Name(
ExprName {
range: 13..14,
id: "b",
ctx: Load,
},
),
},
),
AugAssign(
StmtAugAssign {
range: 15..21,
target: Name(
ExprName {
range: 15..16,
id: "a",
ctx: Store,
},
),
op: Sub,
value: NumberLiteral(
ExprNumberLiteral {
range: 20..21,
value: Int(
1,
),
},
),
},
),
AugAssign(
StmtAugAssign {
range: 22..32,
target: Name(
ExprName {
range: 22..23,
id: "a",
ctx: Store,
},
),
op: Div,
value: BinOp(
ExprBinOp {
range: 27..32,
left: Name(
ExprName {
range: 27..28,
id: "a",
ctx: Load,
},
),
op: Add,
right: NumberLiteral(
ExprNumberLiteral {
range: 31..32,
value: Int(
1,
),
},
),
},
),
},
),
AugAssign(
StmtAugAssign {
range: 33..55,
target: Name(
ExprName {
range: 33..34,
id: "a",
ctx: Store,
},
),
op: FloorDiv,
value: BinOp(
ExprBinOp {
range: 39..55,
left: BinOp(
ExprBinOp {
range: 40..45,
left: Name(
ExprName {
range: 40..41,
id: "a",
ctx: Load,
},
),
op: Add,
right: Name(
ExprName {
range: 44..45,
id: "b",
ctx: Load,
},
),
},
),
op: Sub,
right: BinOp(
ExprBinOp {
range: 49..55,
left: Name(
ExprName {
range: 49..50,
id: "c",
ctx: Load,
},
),
op: Pow,
right: NumberLiteral(
ExprNumberLiteral {
range: 54..55,
value: Int(
2,
),
},
),
},
),
},
),
},
),
AugAssign(
StmtAugAssign {
range: 56..66,
target: Name(
ExprName {
range: 56..57,
id: "a",
ctx: Store,
},
),
op: MatMult,
value: List(
ExprList {
range: 61..66,
elts: [
NumberLiteral(
ExprNumberLiteral {
range: 62..63,
value: Int(
1,
),
},
),
NumberLiteral(
ExprNumberLiteral {
range: 64..65,
value: Int(
2,
),
},
),
],
ctx: Load,
},
),
},
),
AugAssign(
StmtAugAssign {
range: 67..73,
target: Name(
ExprName {
range: 67..68,
id: "a",
ctx: Store,
},
),
op: Mod,
value: Name(
ExprName {
range: 72..73,
id: "x",
ctx: Load,
},
),
},
),
AugAssign(
StmtAugAssign {
range: 74..80,
target: Name(
ExprName {
range: 74..75,
id: "a",
ctx: Store,
},
),
op: BitOr,
value: NumberLiteral(
ExprNumberLiteral {
range: 79..80,
value: Int(
1,
),
},
),
},
),
AugAssign(
StmtAugAssign {
range: 81..88,
target: Name(
ExprName {
range: 81..82,
id: "a",
ctx: Store,
},
),
op: LShift,
value: NumberLiteral(
ExprNumberLiteral {
range: 87..88,
value: Int(
2,
),
},
),
},
),
AugAssign(
StmtAugAssign {
range: 89..96,
target: Name(
ExprName {
range: 89..90,
id: "a",
ctx: Store,
},
),
op: RShift,
value: NumberLiteral(
ExprNumberLiteral {
range: 95..96,
value: Int(
2,
),
},
),
},
),
AugAssign(
StmtAugAssign {
range: 97..105,
target: Name(
ExprName {
range: 97..98,
id: "a",
ctx: Store,
},
),
op: BitXor,
value: EllipsisLiteral(
ExprEllipsisLiteral {
range: 102..105,
},
),
},
),
AugAssign(
StmtAugAssign {
range: 106..114,
target: Name(
ExprName {
range: 106..107,
id: "a",
ctx: Store,
},
),
op: Pow,
value: NumberLiteral(
ExprNumberLiteral {
range: 112..114,
value: Int(
42,
),
},
),
},
),
],
},
),
parse_errors: [],
}


@@ -0,0 +1,363 @@
---
source: crates/ruff_python_parser/src/parser/tests/parser.rs
expression: "parse(\"\nawait x\nawait x + 1\nawait a and b\nawait f()\nawait [1, 2]\nawait {3, 4}\nawait {i: 5}\nawait 7, 8\nawait (9, 10)\nawait 1 == 1\nawait x if True else None\n\")"
---
Program {
ast: Module(
ModModule {
range: 0..148,
body: [
Expr(
StmtExpr {
range: 1..8,
value: Await(
ExprAwait {
range: 1..8,
value: Name(
ExprName {
range: 7..8,
id: "x",
ctx: Load,
},
),
},
),
},
),
Expr(
StmtExpr {
range: 9..20,
value: BinOp(
ExprBinOp {
range: 9..20,
left: Await(
ExprAwait {
range: 9..16,
value: Name(
ExprName {
range: 15..16,
id: "x",
ctx: Load,
},
),
},
),
op: Add,
right: NumberLiteral(
ExprNumberLiteral {
range: 19..20,
value: Int(
1,
),
},
),
},
),
},
),
Expr(
StmtExpr {
range: 21..34,
value: BoolOp(
ExprBoolOp {
range: 21..34,
op: And,
values: [
Await(
ExprAwait {
range: 21..28,
value: Name(
ExprName {
range: 27..28,
id: "a",
ctx: Load,
},
),
},
),
Name(
ExprName {
range: 33..34,
id: "b",
ctx: Load,
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 35..44,
value: Await(
ExprAwait {
range: 35..44,
value: Call(
ExprCall {
range: 41..44,
func: Name(
ExprName {
range: 41..42,
id: "f",
ctx: Load,
},
),
arguments: Arguments {
range: 42..44,
args: [],
keywords: [],
},
},
),
},
),
},
),
Expr(
StmtExpr {
range: 45..57,
value: Await(
ExprAwait {
range: 45..57,
value: List(
ExprList {
range: 51..57,
elts: [
NumberLiteral(
ExprNumberLiteral {
range: 52..53,
value: Int(
1,
),
},
),
NumberLiteral(
ExprNumberLiteral {
range: 55..56,
value: Int(
2,
),
},
),
],
ctx: Load,
},
),
},
),
},
),
Expr(
StmtExpr {
range: 58..70,
value: Await(
ExprAwait {
range: 58..70,
value: Set(
ExprSet {
range: 64..70,
elts: [
NumberLiteral(
ExprNumberLiteral {
range: 65..66,
value: Int(
3,
),
},
),
NumberLiteral(
ExprNumberLiteral {
range: 68..69,
value: Int(
4,
),
},
),
],
},
),
},
),
},
),
Expr(
StmtExpr {
range: 71..83,
value: Await(
ExprAwait {
range: 71..83,
value: Dict(
ExprDict {
range: 77..83,
keys: [
Some(
Name(
ExprName {
range: 78..79,
id: "i",
ctx: Load,
},
),
),
],
values: [
NumberLiteral(
ExprNumberLiteral {
range: 81..82,
value: Int(
5,
),
},
),
],
},
),
},
),
},
),
Expr(
StmtExpr {
range: 84..94,
value: Tuple(
ExprTuple {
range: 84..94,
elts: [
Await(
ExprAwait {
range: 84..91,
value: NumberLiteral(
ExprNumberLiteral {
range: 90..91,
value: Int(
7,
),
},
),
},
),
NumberLiteral(
ExprNumberLiteral {
range: 93..94,
value: Int(
8,
),
},
),
],
ctx: Load,
parenthesized: false,
},
),
},
),
Expr(
StmtExpr {
range: 95..108,
value: Await(
ExprAwait {
range: 95..108,
value: Tuple(
ExprTuple {
range: 101..108,
elts: [
NumberLiteral(
ExprNumberLiteral {
range: 102..103,
value: Int(
9,
),
},
),
NumberLiteral(
ExprNumberLiteral {
range: 105..107,
value: Int(
10,
),
},
),
],
ctx: Load,
parenthesized: true,
},
),
},
),
},
),
Expr(
StmtExpr {
range: 109..121,
value: Compare(
ExprCompare {
range: 109..121,
left: Await(
ExprAwait {
range: 109..116,
value: NumberLiteral(
ExprNumberLiteral {
range: 115..116,
value: Int(
1,
),
},
),
},
),
ops: [
Eq,
],
comparators: [
NumberLiteral(
ExprNumberLiteral {
range: 120..121,
value: Int(
1,
),
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 122..147,
value: If(
ExprIf {
range: 122..147,
test: BooleanLiteral(
ExprBooleanLiteral {
range: 133..137,
value: true,
},
),
body: Await(
ExprAwait {
range: 122..129,
value: Name(
ExprName {
range: 128..129,
id: "x",
ctx: Load,
},
),
},
),
orelse: NoneLiteral(
ExprNoneLiteral {
range: 143..147,
},
),
},
),
},
),
],
},
),
parse_errors: [],
}


@@ -0,0 +1,422 @@
---
source: crates/ruff_python_parser/src/parser/tests/parser.rs
expression: "parse(\"\n1 + 2\n1 + 2 - 3\n1 + 2 - 3 + 4\n2 * 2\n1 + 2 * 2\n3 ** 2\n3 ** 2 * 5\n1 + (2 + 3)\n1 << 2\n1 >> 2\n1 | 2\n1 ^ 2\n\")"
---
Program {
ast: Module(
ModModule {
range: 0..103,
body: [
Expr(
StmtExpr {
range: 1..6,
value: BinOp(
ExprBinOp {
range: 1..6,
left: NumberLiteral(
ExprNumberLiteral {
range: 1..2,
value: Int(
1,
),
},
),
op: Add,
right: NumberLiteral(
ExprNumberLiteral {
range: 5..6,
value: Int(
2,
),
},
),
},
),
},
),
Expr(
StmtExpr {
range: 7..16,
value: BinOp(
ExprBinOp {
range: 7..16,
left: BinOp(
ExprBinOp {
range: 7..12,
left: NumberLiteral(
ExprNumberLiteral {
range: 7..8,
value: Int(
1,
),
},
),
op: Add,
right: NumberLiteral(
ExprNumberLiteral {
range: 11..12,
value: Int(
2,
),
},
),
},
),
op: Sub,
right: NumberLiteral(
ExprNumberLiteral {
range: 15..16,
value: Int(
3,
),
},
),
},
),
},
),
Expr(
StmtExpr {
range: 17..30,
value: BinOp(
ExprBinOp {
range: 17..30,
left: BinOp(
ExprBinOp {
range: 17..26,
left: BinOp(
ExprBinOp {
range: 17..22,
left: NumberLiteral(
ExprNumberLiteral {
range: 17..18,
value: Int(
1,
),
},
),
op: Add,
right: NumberLiteral(
ExprNumberLiteral {
range: 21..22,
value: Int(
2,
),
},
),
},
),
op: Sub,
right: NumberLiteral(
ExprNumberLiteral {
range: 25..26,
value: Int(
3,
),
},
),
},
),
op: Add,
right: NumberLiteral(
ExprNumberLiteral {
range: 29..30,
value: Int(
4,
),
},
),
},
),
},
),
Expr(
StmtExpr {
range: 31..36,
value: BinOp(
ExprBinOp {
range: 31..36,
left: NumberLiteral(
ExprNumberLiteral {
range: 31..32,
value: Int(
2,
),
},
),
op: Mult,
right: NumberLiteral(
ExprNumberLiteral {
range: 35..36,
value: Int(
2,
),
},
),
},
),
},
),
Expr(
StmtExpr {
range: 37..46,
value: BinOp(
ExprBinOp {
range: 37..46,
left: NumberLiteral(
ExprNumberLiteral {
range: 37..38,
value: Int(
1,
),
},
),
op: Add,
right: BinOp(
ExprBinOp {
range: 41..46,
left: NumberLiteral(
ExprNumberLiteral {
range: 41..42,
value: Int(
2,
),
},
),
op: Mult,
right: NumberLiteral(
ExprNumberLiteral {
range: 45..46,
value: Int(
2,
),
},
),
},
),
},
),
},
),
Expr(
StmtExpr {
range: 47..53,
value: BinOp(
ExprBinOp {
range: 47..53,
left: NumberLiteral(
ExprNumberLiteral {
range: 47..48,
value: Int(
3,
),
},
),
op: Pow,
right: NumberLiteral(
ExprNumberLiteral {
range: 52..53,
value: Int(
2,
),
},
),
},
),
},
),
Expr(
StmtExpr {
range: 54..64,
value: BinOp(
ExprBinOp {
range: 54..64,
left: BinOp(
ExprBinOp {
range: 54..60,
left: NumberLiteral(
ExprNumberLiteral {
range: 54..55,
value: Int(
3,
),
},
),
op: Pow,
right: NumberLiteral(
ExprNumberLiteral {
range: 59..60,
value: Int(
2,
),
},
),
},
),
op: Mult,
right: NumberLiteral(
ExprNumberLiteral {
range: 63..64,
value: Int(
5,
),
},
),
},
),
},
),
Expr(
StmtExpr {
range: 65..76,
value: BinOp(
ExprBinOp {
range: 65..76,
left: NumberLiteral(
ExprNumberLiteral {
range: 65..66,
value: Int(
1,
),
},
),
op: Add,
right: BinOp(
ExprBinOp {
range: 70..75,
left: NumberLiteral(
ExprNumberLiteral {
range: 70..71,
value: Int(
2,
),
},
),
op: Add,
right: NumberLiteral(
ExprNumberLiteral {
range: 74..75,
value: Int(
3,
),
},
),
},
),
},
),
},
),
Expr(
StmtExpr {
range: 77..83,
value: BinOp(
ExprBinOp {
range: 77..83,
left: NumberLiteral(
ExprNumberLiteral {
range: 77..78,
value: Int(
1,
),
},
),
op: LShift,
right: NumberLiteral(
ExprNumberLiteral {
range: 82..83,
value: Int(
2,
),
},
),
},
),
},
),
Expr(
StmtExpr {
range: 84..90,
value: BinOp(
ExprBinOp {
range: 84..90,
left: NumberLiteral(
ExprNumberLiteral {
range: 84..85,
value: Int(
1,
),
},
),
op: RShift,
right: NumberLiteral(
ExprNumberLiteral {
range: 89..90,
value: Int(
2,
),
},
),
},
),
},
),
Expr(
StmtExpr {
range: 91..96,
value: BinOp(
ExprBinOp {
range: 91..96,
left: NumberLiteral(
ExprNumberLiteral {
range: 91..92,
value: Int(
1,
),
},
),
op: BitOr,
right: NumberLiteral(
ExprNumberLiteral {
range: 95..96,
value: Int(
2,
),
},
),
},
),
},
),
Expr(
StmtExpr {
range: 97..102,
value: BinOp(
ExprBinOp {
range: 97..102,
left: NumberLiteral(
ExprNumberLiteral {
range: 97..98,
value: Int(
1,
),
},
),
op: BitXor,
right: NumberLiteral(
ExprNumberLiteral {
range: 101..102,
value: Int(
2,
),
},
),
},
),
},
),
],
},
),
parse_errors: [],
}


@@ -0,0 +1,178 @@
---
source: crates/ruff_python_parser/src/parser/tests/parser.rs
expression: "parse(\"\na and b\na and b and c\na or b\na or b or c\na and b or c\n\")"
---
Program {
ast: Module(
ModModule {
range: 0..55,
body: [
Expr(
StmtExpr {
range: 1..8,
value: BoolOp(
ExprBoolOp {
range: 1..8,
op: And,
values: [
Name(
ExprName {
range: 1..2,
id: "a",
ctx: Load,
},
),
Name(
ExprName {
range: 7..8,
id: "b",
ctx: Load,
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 9..22,
value: BoolOp(
ExprBoolOp {
range: 9..22,
op: And,
values: [
Name(
ExprName {
range: 9..10,
id: "a",
ctx: Load,
},
),
Name(
ExprName {
range: 15..16,
id: "b",
ctx: Load,
},
),
Name(
ExprName {
range: 21..22,
id: "c",
ctx: Load,
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 23..29,
value: BoolOp(
ExprBoolOp {
range: 23..29,
op: Or,
values: [
Name(
ExprName {
range: 23..24,
id: "a",
ctx: Load,
},
),
Name(
ExprName {
range: 28..29,
id: "b",
ctx: Load,
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 30..41,
value: BoolOp(
ExprBoolOp {
range: 30..41,
op: Or,
values: [
Name(
ExprName {
range: 30..31,
id: "a",
ctx: Load,
},
),
Name(
ExprName {
range: 35..36,
id: "b",
ctx: Load,
},
),
Name(
ExprName {
range: 40..41,
id: "c",
ctx: Load,
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 42..54,
value: BoolOp(
ExprBoolOp {
range: 42..54,
op: Or,
values: [
BoolOp(
ExprBoolOp {
range: 42..49,
op: And,
values: [
Name(
ExprName {
range: 42..43,
id: "a",
ctx: Load,
},
),
Name(
ExprName {
range: 48..49,
id: "b",
ctx: Load,
},
),
],
},
),
Name(
ExprName {
range: 53..54,
id: "c",
ctx: Load,
},
),
],
},
),
},
),
],
},
),
parse_errors: [],
}
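
For reference, here is the test input decoded from the `expression` header of this snapshot — the boolean-operator cases. Note that `a and b or c` binds as `(a and b) or c`, exactly as the nested `ExprBoolOp` in the snapshot shows:

```python
a and b
a and b and c
a or b
a or b or c
a and b or c
```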


@@ -0,0 +1,595 @@
---
source: crates/ruff_python_parser/src/parser/tests/parser.rs
expression: "parse(\"\nl()\nx(1, 2)\nx(1, 2, x=3, y=4)\nf(*l)\nf(**a)\nf(*a, b, **l)\nf(*a, *b)\nf(\n [\n [a]\n for d in f\n ],\n)\nf(\n {\n [a]\n for d in f\n },\n)\nf(\n {\n A: [a]\n for d in f\n },\n)\ncall(\n a=1 if True else None,\n x=0,\n)\n\")"
---
Program {
ast: Module(
ModModule {
range: 0..262,
body: [
Expr(
StmtExpr {
range: 1..4,
value: Call(
ExprCall {
range: 1..4,
func: Name(
ExprName {
range: 1..2,
id: "l",
ctx: Load,
},
),
arguments: Arguments {
range: 2..4,
args: [],
keywords: [],
},
},
),
},
),
Expr(
StmtExpr {
range: 5..12,
value: Call(
ExprCall {
range: 5..12,
func: Name(
ExprName {
range: 5..6,
id: "x",
ctx: Load,
},
),
arguments: Arguments {
range: 6..12,
args: [
NumberLiteral(
ExprNumberLiteral {
range: 7..8,
value: Int(
1,
),
},
),
NumberLiteral(
ExprNumberLiteral {
range: 10..11,
value: Int(
2,
),
},
),
],
keywords: [],
},
},
),
},
),
Expr(
StmtExpr {
range: 13..30,
value: Call(
ExprCall {
range: 13..30,
func: Name(
ExprName {
range: 13..14,
id: "x",
ctx: Load,
},
),
arguments: Arguments {
range: 14..30,
args: [
NumberLiteral(
ExprNumberLiteral {
range: 15..16,
value: Int(
1,
),
},
),
NumberLiteral(
ExprNumberLiteral {
range: 18..19,
value: Int(
2,
),
},
),
],
keywords: [
Keyword {
range: 21..24,
arg: Some(
Identifier {
id: "x",
range: 21..22,
},
),
value: NumberLiteral(
ExprNumberLiteral {
range: 23..24,
value: Int(
3,
),
},
),
},
Keyword {
range: 26..29,
arg: Some(
Identifier {
id: "y",
range: 26..27,
},
),
value: NumberLiteral(
ExprNumberLiteral {
range: 28..29,
value: Int(
4,
),
},
),
},
],
},
},
),
},
),
Expr(
StmtExpr {
range: 31..36,
value: Call(
ExprCall {
range: 31..36,
func: Name(
ExprName {
range: 31..32,
id: "f",
ctx: Load,
},
),
arguments: Arguments {
range: 32..36,
args: [
Starred(
ExprStarred {
range: 33..35,
value: Name(
ExprName {
range: 34..35,
id: "l",
ctx: Load,
},
),
ctx: Load,
},
),
],
keywords: [],
},
},
),
},
),
Expr(
StmtExpr {
range: 37..43,
value: Call(
ExprCall {
range: 37..43,
func: Name(
ExprName {
range: 37..38,
id: "f",
ctx: Load,
},
),
arguments: Arguments {
range: 38..43,
args: [],
keywords: [
Keyword {
range: 39..42,
arg: None,
value: Name(
ExprName {
range: 41..42,
id: "a",
ctx: Load,
},
),
},
],
},
},
),
},
),
Expr(
StmtExpr {
range: 44..57,
value: Call(
ExprCall {
range: 44..57,
func: Name(
ExprName {
range: 44..45,
id: "f",
ctx: Load,
},
),
arguments: Arguments {
range: 45..57,
args: [
Starred(
ExprStarred {
range: 46..48,
value: Name(
ExprName {
range: 47..48,
id: "a",
ctx: Load,
},
),
ctx: Load,
},
),
Name(
ExprName {
range: 50..51,
id: "b",
ctx: Load,
},
),
],
keywords: [
Keyword {
range: 53..56,
arg: None,
value: Name(
ExprName {
range: 55..56,
id: "l",
ctx: Load,
},
),
},
],
},
},
),
},
),
Expr(
StmtExpr {
range: 58..67,
value: Call(
ExprCall {
range: 58..67,
func: Name(
ExprName {
range: 58..59,
id: "f",
ctx: Load,
},
),
arguments: Arguments {
range: 59..67,
args: [
Starred(
ExprStarred {
range: 60..62,
value: Name(
ExprName {
range: 61..62,
id: "a",
ctx: Load,
},
),
ctx: Load,
},
),
Starred(
ExprStarred {
range: 64..66,
value: Name(
ExprName {
range: 65..66,
id: "b",
ctx: Load,
},
),
ctx: Load,
},
),
],
keywords: [],
},
},
),
},
),
Expr(
StmtExpr {
range: 68..116,
value: Call(
ExprCall {
range: 68..116,
func: Name(
ExprName {
range: 68..69,
id: "f",
ctx: Load,
},
),
arguments: Arguments {
range: 69..116,
args: [
ListComp(
ExprListComp {
range: 75..113,
elt: List(
ExprList {
range: 85..88,
elts: [
Name(
ExprName {
range: 86..87,
id: "a",
ctx: Load,
},
),
],
ctx: Load,
},
),
generators: [
Comprehension {
range: 97..107,
target: Name(
ExprName {
range: 101..102,
id: "d",
ctx: Store,
},
),
iter: Name(
ExprName {
range: 106..107,
id: "f",
ctx: Load,
},
),
ifs: [],
is_async: false,
},
],
},
),
],
keywords: [],
},
},
),
},
),
Expr(
StmtExpr {
range: 117..165,
value: Call(
ExprCall {
range: 117..165,
func: Name(
ExprName {
range: 117..118,
id: "f",
ctx: Load,
},
),
arguments: Arguments {
range: 118..165,
args: [
SetComp(
ExprSetComp {
range: 124..162,
elt: List(
ExprList {
range: 134..137,
elts: [
Name(
ExprName {
range: 135..136,
id: "a",
ctx: Load,
},
),
],
ctx: Load,
},
),
generators: [
Comprehension {
range: 146..156,
target: Name(
ExprName {
range: 150..151,
id: "d",
ctx: Store,
},
),
iter: Name(
ExprName {
range: 155..156,
id: "f",
ctx: Load,
},
),
ifs: [],
is_async: false,
},
],
},
),
],
keywords: [],
},
},
),
},
),
Expr(
StmtExpr {
range: 166..217,
value: Call(
ExprCall {
range: 166..217,
func: Name(
ExprName {
range: 166..167,
id: "f",
ctx: Load,
},
),
arguments: Arguments {
range: 167..217,
args: [
DictComp(
ExprDictComp {
range: 173..214,
key: Name(
ExprName {
range: 183..184,
id: "A",
ctx: Load,
},
),
value: List(
ExprList {
range: 186..189,
elts: [
Name(
ExprName {
range: 187..188,
id: "a",
ctx: Load,
},
),
],
ctx: Load,
},
),
generators: [
Comprehension {
range: 198..208,
target: Name(
ExprName {
range: 202..203,
id: "d",
ctx: Store,
},
),
iter: Name(
ExprName {
range: 207..208,
id: "f",
ctx: Load,
},
),
ifs: [],
is_async: false,
},
],
},
),
],
keywords: [],
},
},
),
},
),
Expr(
StmtExpr {
range: 218..261,
value: Call(
ExprCall {
range: 218..261,
func: Name(
ExprName {
range: 218..222,
id: "call",
ctx: Load,
},
),
arguments: Arguments {
range: 222..261,
args: [],
keywords: [
Keyword {
range: 228..249,
arg: Some(
Identifier {
id: "a",
range: 228..229,
},
),
value: If(
ExprIf {
range: 230..249,
test: BooleanLiteral(
ExprBooleanLiteral {
range: 235..239,
value: true,
},
),
body: NumberLiteral(
ExprNumberLiteral {
range: 230..231,
value: Int(
1,
),
},
),
orelse: NoneLiteral(
ExprNoneLiteral {
range: 245..249,
},
),
},
),
},
Keyword {
range: 255..258,
arg: Some(
Identifier {
id: "x",
range: 255..256,
},
),
value: NumberLiteral(
ExprNumberLiteral {
range: 257..258,
value: Int(
0,
),
},
),
},
],
},
},
),
},
),
],
},
),
parse_errors: [],
}
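
Decoded from the `expression` header, the call-expression input is roughly the following. Indentation inside the multi-line calls is approximated, since the page rendering collapses runs of spaces; the exact offsets are fixed by the ranges in the snapshot:

```python
l()
x(1, 2)
x(1, 2, x=3, y=4)
f(*l)
f(**a)
f(*a, b, **l)
f(*a, *b)
f(
    [
        [a]
        for d in f
    ],
)
f(
    {
        [a]
        for d in f
    },
)
f(
    {
        A: [a]
        for d in f
    },
)
call(
    a=1 if True else None,
    x=0,
)
```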


@@ -0,0 +1,246 @@
---
source: crates/ruff_python_parser/src/parser/tests/parser.rs
expression: "parse(\"\nclass T:\n ...\nclass Test():\n def __init__(self):\n pass\nclass T(a=1, *A, **k):\n ...\nclass T:\n def f():\n a, b = l\n\")"
---
Program {
ast: Module(
ModModule {
range: 0..147,
body: [
ClassDef(
StmtClassDef {
range: 1..17,
decorator_list: [],
name: Identifier {
id: "T",
range: 7..8,
},
type_params: None,
arguments: None,
body: [
Expr(
StmtExpr {
range: 14..17,
value: EllipsisLiteral(
ExprEllipsisLiteral {
range: 14..17,
},
),
},
),
],
},
),
ClassDef(
StmtClassDef {
range: 18..76,
decorator_list: [],
name: Identifier {
id: "Test",
range: 24..28,
},
type_params: None,
arguments: Some(
Arguments {
range: 28..30,
args: [],
keywords: [],
},
),
body: [
FunctionDef(
StmtFunctionDef {
range: 40..76,
is_async: false,
decorator_list: [],
name: Identifier {
id: "__init__",
range: 44..52,
},
type_params: None,
parameters: Parameters {
range: 52..58,
posonlyargs: [],
args: [
ParameterWithDefault {
range: 53..57,
parameter: Parameter {
range: 53..57,
name: Identifier {
id: "self",
range: 53..57,
},
annotation: None,
},
default: None,
},
],
vararg: None,
kwonlyargs: [],
kwarg: None,
},
returns: None,
body: [
Pass(
StmtPass {
range: 72..76,
},
),
],
},
),
],
},
),
ClassDef(
StmtClassDef {
range: 77..107,
decorator_list: [],
name: Identifier {
id: "T",
range: 83..84,
},
type_params: None,
arguments: Some(
Arguments {
range: 84..98,
args: [
Starred(
ExprStarred {
range: 90..92,
value: Name(
ExprName {
range: 91..92,
id: "A",
ctx: Load,
},
),
ctx: Load,
},
),
],
keywords: [
Keyword {
range: 85..88,
arg: Some(
Identifier {
id: "a",
range: 85..86,
},
),
value: NumberLiteral(
ExprNumberLiteral {
range: 87..88,
value: Int(
1,
),
},
),
},
Keyword {
range: 94..97,
arg: None,
value: Name(
ExprName {
range: 96..97,
id: "k",
ctx: Load,
},
),
},
],
},
),
body: [
Expr(
StmtExpr {
range: 104..107,
value: EllipsisLiteral(
ExprEllipsisLiteral {
range: 104..107,
},
),
},
),
],
},
),
ClassDef(
StmtClassDef {
range: 108..146,
decorator_list: [],
name: Identifier {
id: "T",
range: 114..115,
},
type_params: None,
arguments: None,
body: [
FunctionDef(
StmtFunctionDef {
range: 121..146,
is_async: false,
decorator_list: [],
name: Identifier {
id: "f",
range: 125..126,
},
type_params: None,
parameters: Parameters {
range: 126..128,
posonlyargs: [],
args: [],
vararg: None,
kwonlyargs: [],
kwarg: None,
},
returns: None,
body: [
Assign(
StmtAssign {
range: 138..146,
targets: [
Tuple(
ExprTuple {
range: 138..142,
elts: [
Name(
ExprName {
range: 138..139,
id: "a",
ctx: Store,
},
),
Name(
ExprName {
range: 141..142,
id: "b",
ctx: Store,
},
),
],
ctx: Store,
parenthesized: false,
},
),
],
value: Name(
ExprName {
range: 145..146,
id: "l",
ctx: Load,
},
),
},
),
],
},
),
],
},
),
],
},
),
parse_errors: [],
}


@@ -0,0 +1,397 @@
---
source: crates/ruff_python_parser/src/parser/tests/parser.rs
expression: "parse(\"\na == b\nb < a\nb > a\na >= b\na <= b\na != b\na is c\na in b\na not in c\na is not b\na < b == c > d is e not in f is not g <= h >= i != j\n\")"
---
Program {
ast: Module(
ModModule {
range: 0..130,
body: [
Expr(
StmtExpr {
range: 1..7,
value: Compare(
ExprCompare {
range: 1..7,
left: Name(
ExprName {
range: 1..2,
id: "a",
ctx: Load,
},
),
ops: [
Eq,
],
comparators: [
Name(
ExprName {
range: 6..7,
id: "b",
ctx: Load,
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 8..13,
value: Compare(
ExprCompare {
range: 8..13,
left: Name(
ExprName {
range: 8..9,
id: "b",
ctx: Load,
},
),
ops: [
Lt,
],
comparators: [
Name(
ExprName {
range: 12..13,
id: "a",
ctx: Load,
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 14..19,
value: Compare(
ExprCompare {
range: 14..19,
left: Name(
ExprName {
range: 14..15,
id: "b",
ctx: Load,
},
),
ops: [
Gt,
],
comparators: [
Name(
ExprName {
range: 18..19,
id: "a",
ctx: Load,
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 20..26,
value: Compare(
ExprCompare {
range: 20..26,
left: Name(
ExprName {
range: 20..21,
id: "a",
ctx: Load,
},
),
ops: [
GtE,
],
comparators: [
Name(
ExprName {
range: 25..26,
id: "b",
ctx: Load,
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 27..33,
value: Compare(
ExprCompare {
range: 27..33,
left: Name(
ExprName {
range: 27..28,
id: "a",
ctx: Load,
},
),
ops: [
LtE,
],
comparators: [
Name(
ExprName {
range: 32..33,
id: "b",
ctx: Load,
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 34..40,
value: Compare(
ExprCompare {
range: 34..40,
left: Name(
ExprName {
range: 34..35,
id: "a",
ctx: Load,
},
),
ops: [
NotEq,
],
comparators: [
Name(
ExprName {
range: 39..40,
id: "b",
ctx: Load,
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 41..47,
value: Compare(
ExprCompare {
range: 41..47,
left: Name(
ExprName {
range: 41..42,
id: "a",
ctx: Load,
},
),
ops: [
Is,
],
comparators: [
Name(
ExprName {
range: 46..47,
id: "c",
ctx: Load,
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 48..54,
value: Compare(
ExprCompare {
range: 48..54,
left: Name(
ExprName {
range: 48..49,
id: "a",
ctx: Load,
},
),
ops: [
In,
],
comparators: [
Name(
ExprName {
range: 53..54,
id: "b",
ctx: Load,
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 55..65,
value: Compare(
ExprCompare {
range: 55..65,
left: Name(
ExprName {
range: 55..56,
id: "a",
ctx: Load,
},
),
ops: [
NotIn,
],
comparators: [
Name(
ExprName {
range: 64..65,
id: "c",
ctx: Load,
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 66..76,
value: Compare(
ExprCompare {
range: 66..76,
left: Name(
ExprName {
range: 66..67,
id: "a",
ctx: Load,
},
),
ops: [
IsNot,
],
comparators: [
Name(
ExprName {
range: 75..76,
id: "b",
ctx: Load,
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 77..129,
value: Compare(
ExprCompare {
range: 77..129,
left: Name(
ExprName {
range: 77..78,
id: "a",
ctx: Load,
},
),
ops: [
Lt,
Eq,
Gt,
Is,
NotIn,
IsNot,
LtE,
GtE,
NotEq,
],
comparators: [
Name(
ExprName {
range: 81..82,
id: "b",
ctx: Load,
},
),
Name(
ExprName {
range: 86..87,
id: "c",
ctx: Load,
},
),
Name(
ExprName {
range: 90..91,
id: "d",
ctx: Load,
},
),
Name(
ExprName {
range: 95..96,
id: "e",
ctx: Load,
},
),
Name(
ExprName {
range: 104..105,
id: "f",
ctx: Load,
},
),
Name(
ExprName {
range: 113..114,
id: "g",
ctx: Load,
},
),
Name(
ExprName {
range: 118..119,
id: "h",
ctx: Load,
},
),
Name(
ExprName {
range: 123..124,
id: "i",
ctx: Load,
},
),
Name(
ExprName {
range: 128..129,
id: "j",
ctx: Load,
},
),
],
},
),
},
),
],
},
),
parse_errors: [],
}
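
The comparison cases decoded from the `expression` header. The last line is a single chained comparison, which the parser represents as one `ExprCompare` with nine operators and nine comparators rather than a tree of binary nodes:

```python
a == b
b < a
b > a
a >= b
a <= b
a != b
a is c
a in b
a not in c
a is not b
a < b == c > d is e not in f is not g <= h >= i != j
```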


@@ -0,0 +1,342 @@
---
source: crates/ruff_python_parser/src/parser/tests/parser.rs
expression: "parse(\"\n@a\ndef f(): ...\n\n@a.b.c\ndef f(): ...\n\n@a\n@a.b.c\ndef f(): ...\n\n@a\n@1 | 2\n@a.b.c\nclass T: ...\n\n@named_expr := abc\ndef f():\n ...\n\")"
---
Program {
ast: Module(
ModModule {
range: 0..130,
body: [
FunctionDef(
StmtFunctionDef {
range: 1..16,
is_async: false,
decorator_list: [
Decorator {
range: 1..3,
expression: Name(
ExprName {
range: 2..3,
id: "a",
ctx: Load,
},
),
},
],
name: Identifier {
id: "f",
range: 8..9,
},
type_params: None,
parameters: Parameters {
range: 9..11,
posonlyargs: [],
args: [],
vararg: None,
kwonlyargs: [],
kwarg: None,
},
returns: None,
body: [
Expr(
StmtExpr {
range: 13..16,
value: EllipsisLiteral(
ExprEllipsisLiteral {
range: 13..16,
},
),
},
),
],
},
),
FunctionDef(
StmtFunctionDef {
range: 18..37,
is_async: false,
decorator_list: [
Decorator {
range: 18..24,
expression: Attribute(
ExprAttribute {
range: 19..24,
value: Attribute(
ExprAttribute {
range: 19..22,
value: Name(
ExprName {
range: 19..20,
id: "a",
ctx: Load,
},
),
attr: Identifier {
id: "b",
range: 21..22,
},
ctx: Load,
},
),
attr: Identifier {
id: "c",
range: 23..24,
},
ctx: Load,
},
),
},
],
name: Identifier {
id: "f",
range: 29..30,
},
type_params: None,
parameters: Parameters {
range: 30..32,
posonlyargs: [],
args: [],
vararg: None,
kwonlyargs: [],
kwarg: None,
},
returns: None,
body: [
Expr(
StmtExpr {
range: 34..37,
value: EllipsisLiteral(
ExprEllipsisLiteral {
range: 34..37,
},
),
},
),
],
},
),
FunctionDef(
StmtFunctionDef {
range: 39..61,
is_async: false,
decorator_list: [
Decorator {
range: 39..41,
expression: Name(
ExprName {
range: 40..41,
id: "a",
ctx: Load,
},
),
},
Decorator {
range: 42..48,
expression: Attribute(
ExprAttribute {
range: 43..48,
value: Attribute(
ExprAttribute {
range: 43..46,
value: Name(
ExprName {
range: 43..44,
id: "a",
ctx: Load,
},
),
attr: Identifier {
id: "b",
range: 45..46,
},
ctx: Load,
},
),
attr: Identifier {
id: "c",
range: 47..48,
},
ctx: Load,
},
),
},
],
name: Identifier {
id: "f",
range: 53..54,
},
type_params: None,
parameters: Parameters {
range: 54..56,
posonlyargs: [],
args: [],
vararg: None,
kwonlyargs: [],
kwarg: None,
},
returns: None,
body: [
Expr(
StmtExpr {
range: 58..61,
value: EllipsisLiteral(
ExprEllipsisLiteral {
range: 58..61,
},
),
},
),
],
},
),
ClassDef(
StmtClassDef {
range: 63..92,
decorator_list: [
Decorator {
range: 63..65,
expression: Name(
ExprName {
range: 64..65,
id: "a",
ctx: Load,
},
),
},
Decorator {
range: 66..72,
expression: BinOp(
ExprBinOp {
range: 67..72,
left: NumberLiteral(
ExprNumberLiteral {
range: 67..68,
value: Int(
1,
),
},
),
op: BitOr,
right: NumberLiteral(
ExprNumberLiteral {
range: 71..72,
value: Int(
2,
),
},
),
},
),
},
Decorator {
range: 73..79,
expression: Attribute(
ExprAttribute {
range: 74..79,
value: Attribute(
ExprAttribute {
range: 74..77,
value: Name(
ExprName {
range: 74..75,
id: "a",
ctx: Load,
},
),
attr: Identifier {
id: "b",
range: 76..77,
},
ctx: Load,
},
),
attr: Identifier {
id: "c",
range: 78..79,
},
ctx: Load,
},
),
},
],
name: Identifier {
id: "T",
range: 86..87,
},
type_params: None,
arguments: None,
body: [
Expr(
StmtExpr {
range: 89..92,
value: EllipsisLiteral(
ExprEllipsisLiteral {
range: 89..92,
},
),
},
),
],
},
),
FunctionDef(
StmtFunctionDef {
range: 94..129,
is_async: false,
decorator_list: [
Decorator {
range: 94..112,
expression: Named(
ExprNamed {
range: 95..112,
target: Name(
ExprName {
range: 95..105,
id: "named_expr",
ctx: Store,
},
),
value: Name(
ExprName {
range: 109..112,
id: "abc",
ctx: Load,
},
),
},
),
},
],
name: Identifier {
id: "f",
range: 117..118,
},
type_params: None,
parameters: Parameters {
range: 118..120,
posonlyargs: [],
args: [],
vararg: None,
kwonlyargs: [],
kwarg: None,
},
returns: None,
body: [
Expr(
StmtExpr {
range: 126..129,
value: EllipsisLiteral(
ExprEllipsisLiteral {
range: 126..129,
},
),
},
),
],
},
),
],
},
),
parse_errors: [],
}
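
The decorator cases decoded from the `expression` header (the indentation of the final function body is restored from the ranges, since the rendering collapsed it). The `@1 | 2` and `@named_expr := abc` decorators rely on PEP 614's relaxed decorator grammar, which is why they appear in the snapshot as plain `BinOp` and `Named` expressions:

```python
@a
def f(): ...

@a.b.c
def f(): ...

@a
@a.b.c
def f(): ...

@a
@1 | 2
@a.b.c
class T: ...

@named_expr := abc
def f():
    ...
```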


@@ -0,0 +1,558 @@
---
source: crates/ruff_python_parser/src/parser/tests/parser.rs
expression: "parse(\"\n{1: 2 for i in a}\n{x + 1: 'x' for i in range(5)}\n{b: c * 2 for c in d if x in w if y and yy if z}\n{a: a ** 2 for b in c if d and e for f in j if k > h}\n{a: b for b in c if d and e async for f in j if k > h}\n{a: a for b, c in d}\n\")"
---
Program {
ast: Module(
ModModule {
range: 0..229,
body: [
Expr(
StmtExpr {
range: 1..18,
value: DictComp(
ExprDictComp {
range: 1..18,
key: NumberLiteral(
ExprNumberLiteral {
range: 2..3,
value: Int(
1,
),
},
),
value: NumberLiteral(
ExprNumberLiteral {
range: 5..6,
value: Int(
2,
),
},
),
generators: [
Comprehension {
range: 7..17,
target: Name(
ExprName {
range: 11..12,
id: "i",
ctx: Store,
},
),
iter: Name(
ExprName {
range: 16..17,
id: "a",
ctx: Load,
},
),
ifs: [],
is_async: false,
},
],
},
),
},
),
Expr(
StmtExpr {
range: 19..49,
value: DictComp(
ExprDictComp {
range: 19..49,
key: BinOp(
ExprBinOp {
range: 20..25,
left: Name(
ExprName {
range: 20..21,
id: "x",
ctx: Load,
},
),
op: Add,
right: NumberLiteral(
ExprNumberLiteral {
range: 24..25,
value: Int(
1,
),
},
),
},
),
value: StringLiteral(
ExprStringLiteral {
range: 27..30,
value: StringLiteralValue {
inner: Single(
StringLiteral {
range: 27..30,
value: "x",
unicode: false,
},
),
},
},
),
generators: [
Comprehension {
range: 31..48,
target: Name(
ExprName {
range: 35..36,
id: "i",
ctx: Store,
},
),
iter: Call(
ExprCall {
range: 40..48,
func: Name(
ExprName {
range: 40..45,
id: "range",
ctx: Load,
},
),
arguments: Arguments {
range: 45..48,
args: [
NumberLiteral(
ExprNumberLiteral {
range: 46..47,
value: Int(
5,
),
},
),
],
keywords: [],
},
},
),
ifs: [],
is_async: false,
},
],
},
),
},
),
Expr(
StmtExpr {
range: 50..98,
value: DictComp(
ExprDictComp {
range: 50..98,
key: Name(
ExprName {
range: 51..52,
id: "b",
ctx: Load,
},
),
value: BinOp(
ExprBinOp {
range: 54..59,
left: Name(
ExprName {
range: 54..55,
id: "c",
ctx: Load,
},
),
op: Mult,
right: NumberLiteral(
ExprNumberLiteral {
range: 58..59,
value: Int(
2,
),
},
),
},
),
generators: [
Comprehension {
range: 60..97,
target: Name(
ExprName {
range: 64..65,
id: "c",
ctx: Store,
},
),
iter: Name(
ExprName {
range: 69..70,
id: "d",
ctx: Load,
},
),
ifs: [
Compare(
ExprCompare {
range: 74..80,
left: Name(
ExprName {
range: 74..75,
id: "x",
ctx: Load,
},
),
ops: [
In,
],
comparators: [
Name(
ExprName {
range: 79..80,
id: "w",
ctx: Load,
},
),
],
},
),
BoolOp(
ExprBoolOp {
range: 84..92,
op: And,
values: [
Name(
ExprName {
range: 84..85,
id: "y",
ctx: Load,
},
),
Name(
ExprName {
range: 90..92,
id: "yy",
ctx: Load,
},
),
],
},
),
Name(
ExprName {
range: 96..97,
id: "z",
ctx: Load,
},
),
],
is_async: false,
},
],
},
),
},
),
Expr(
StmtExpr {
range: 99..152,
value: DictComp(
ExprDictComp {
range: 99..152,
key: Name(
ExprName {
range: 100..101,
id: "a",
ctx: Load,
},
),
value: BinOp(
ExprBinOp {
range: 103..109,
left: Name(
ExprName {
range: 103..104,
id: "a",
ctx: Load,
},
),
op: Pow,
right: NumberLiteral(
ExprNumberLiteral {
range: 108..109,
value: Int(
2,
),
},
),
},
),
generators: [
Comprehension {
range: 110..131,
target: Name(
ExprName {
range: 114..115,
id: "b",
ctx: Store,
},
),
iter: Name(
ExprName {
range: 119..120,
id: "c",
ctx: Load,
},
),
ifs: [
BoolOp(
ExprBoolOp {
range: 124..131,
op: And,
values: [
Name(
ExprName {
range: 124..125,
id: "d",
ctx: Load,
},
),
Name(
ExprName {
range: 130..131,
id: "e",
ctx: Load,
},
),
],
},
),
],
is_async: false,
},
Comprehension {
range: 132..151,
target: Name(
ExprName {
range: 136..137,
id: "f",
ctx: Store,
},
),
iter: Name(
ExprName {
range: 141..142,
id: "j",
ctx: Load,
},
),
ifs: [
Compare(
ExprCompare {
range: 146..151,
left: Name(
ExprName {
range: 146..147,
id: "k",
ctx: Load,
},
),
ops: [
Gt,
],
comparators: [
Name(
ExprName {
range: 150..151,
id: "h",
ctx: Load,
},
),
],
},
),
],
is_async: false,
},
],
},
),
},
),
Expr(
StmtExpr {
range: 153..207,
value: DictComp(
ExprDictComp {
range: 153..207,
key: Name(
ExprName {
range: 154..155,
id: "a",
ctx: Load,
},
),
value: Name(
ExprName {
range: 157..158,
id: "b",
ctx: Load,
},
),
generators: [
Comprehension {
range: 159..180,
target: Name(
ExprName {
range: 163..164,
id: "b",
ctx: Store,
},
),
iter: Name(
ExprName {
range: 168..169,
id: "c",
ctx: Load,
},
),
ifs: [
BoolOp(
ExprBoolOp {
range: 173..180,
op: And,
values: [
Name(
ExprName {
range: 173..174,
id: "d",
ctx: Load,
},
),
Name(
ExprName {
range: 179..180,
id: "e",
ctx: Load,
},
),
],
},
),
],
is_async: false,
},
Comprehension {
range: 181..206,
target: Name(
ExprName {
range: 191..192,
id: "f",
ctx: Store,
},
),
iter: Name(
ExprName {
range: 196..197,
id: "j",
ctx: Load,
},
),
ifs: [
Compare(
ExprCompare {
range: 201..206,
left: Name(
ExprName {
range: 201..202,
id: "k",
ctx: Load,
},
),
ops: [
Gt,
],
comparators: [
Name(
ExprName {
range: 205..206,
id: "h",
ctx: Load,
},
),
],
},
),
],
is_async: true,
},
],
},
),
},
),
Expr(
StmtExpr {
range: 208..228,
value: DictComp(
ExprDictComp {
range: 208..228,
key: Name(
ExprName {
range: 209..210,
id: "a",
ctx: Load,
},
),
value: Name(
ExprName {
range: 212..213,
id: "a",
ctx: Load,
},
),
generators: [
Comprehension {
range: 214..227,
target: Tuple(
ExprTuple {
range: 218..222,
elts: [
Name(
ExprName {
range: 218..219,
id: "b",
ctx: Store,
},
),
Name(
ExprName {
range: 221..222,
id: "c",
ctx: Store,
},
),
],
ctx: Store,
parenthesized: false,
},
),
iter: Name(
ExprName {
range: 226..227,
id: "d",
ctx: Load,
},
),
ifs: [],
is_async: false,
},
],
},
),
},
),
],
},
),
parse_errors: [],
}
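
Decoded from the `expression` header, the dict-comprehension cases are:

```python
{1: 2 for i in a}
{x + 1: 'x' for i in range(5)}
{b: c * 2 for c in d if x in w if y and yy if z}
{a: a ** 2 for b in c if d and e for f in j if k > h}
{a: b for b in c if d and e async for f in j if k > h}
{a: a for b, c in d}
```

The fifth line is the one that sets `is_async: true` on its second generator in the snapshot above.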


@@ -0,0 +1,853 @@
---
source: crates/ruff_python_parser/src/parser/tests/parser.rs
expression: "parse(\"\n{}\n{1:2, a:1, b:'hello'}\n{a:b, **d}\n{'foo': 'bar', **{'nested': 'dict'}}\n{x + 1: y * 2, **call()}\n{l: [1, 2, 3], t: (1,2,3), d: {1:2, 3:4}, s: {1, 2}}\n{**d}\n{1: 2, **{'nested': 'dict'}}\n{a: c}\n{i: tuple(j for j in t if i != j)\n for t in L\n for i in t}\n{\n 'A': lambda p: None,\n 'B': C,\n}\n{**a, **b}\n\")"
---
Program {
ast: Module(
ModModule {
range: 0..325,
body: [
Expr(
StmtExpr {
range: 1..3,
value: Dict(
ExprDict {
range: 1..3,
keys: [],
values: [],
},
),
},
),
Expr(
StmtExpr {
range: 4..25,
value: Dict(
ExprDict {
range: 4..25,
keys: [
Some(
NumberLiteral(
ExprNumberLiteral {
range: 5..6,
value: Int(
1,
),
},
),
),
Some(
Name(
ExprName {
range: 10..11,
id: "a",
ctx: Load,
},
),
),
Some(
Name(
ExprName {
range: 15..16,
id: "b",
ctx: Load,
},
),
),
],
values: [
NumberLiteral(
ExprNumberLiteral {
range: 7..8,
value: Int(
2,
),
},
),
NumberLiteral(
ExprNumberLiteral {
range: 12..13,
value: Int(
1,
),
},
),
StringLiteral(
ExprStringLiteral {
range: 17..24,
value: StringLiteralValue {
inner: Single(
StringLiteral {
range: 17..24,
value: "hello",
unicode: false,
},
),
},
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 26..36,
value: Dict(
ExprDict {
range: 26..36,
keys: [
Some(
Name(
ExprName {
range: 27..28,
id: "a",
ctx: Load,
},
),
),
None,
],
values: [
Name(
ExprName {
range: 29..30,
id: "b",
ctx: Load,
},
),
Name(
ExprName {
range: 34..35,
id: "d",
ctx: Load,
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 37..73,
value: Dict(
ExprDict {
range: 37..73,
keys: [
Some(
StringLiteral(
ExprStringLiteral {
range: 38..43,
value: StringLiteralValue {
inner: Single(
StringLiteral {
range: 38..43,
value: "foo",
unicode: false,
},
),
},
},
),
),
None,
],
values: [
StringLiteral(
ExprStringLiteral {
range: 45..50,
value: StringLiteralValue {
inner: Single(
StringLiteral {
range: 45..50,
value: "bar",
unicode: false,
},
),
},
},
),
Dict(
ExprDict {
range: 54..72,
keys: [
Some(
StringLiteral(
ExprStringLiteral {
range: 55..63,
value: StringLiteralValue {
inner: Single(
StringLiteral {
range: 55..63,
value: "nested",
unicode: false,
},
),
},
},
),
),
],
values: [
StringLiteral(
ExprStringLiteral {
range: 65..71,
value: StringLiteralValue {
inner: Single(
StringLiteral {
range: 65..71,
value: "dict",
unicode: false,
},
),
},
},
),
],
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 74..98,
value: Dict(
ExprDict {
range: 74..98,
keys: [
Some(
BinOp(
ExprBinOp {
range: 75..80,
left: Name(
ExprName {
range: 75..76,
id: "x",
ctx: Load,
},
),
op: Add,
right: NumberLiteral(
ExprNumberLiteral {
range: 79..80,
value: Int(
1,
),
},
),
},
),
),
None,
],
values: [
BinOp(
ExprBinOp {
range: 82..87,
left: Name(
ExprName {
range: 82..83,
id: "y",
ctx: Load,
},
),
op: Mult,
right: NumberLiteral(
ExprNumberLiteral {
range: 86..87,
value: Int(
2,
),
},
),
},
),
Call(
ExprCall {
range: 91..97,
func: Name(
ExprName {
range: 91..95,
id: "call",
ctx: Load,
},
),
arguments: Arguments {
range: 95..97,
args: [],
keywords: [],
},
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 99..151,
value: Dict(
ExprDict {
range: 99..151,
keys: [
Some(
Name(
ExprName {
range: 100..101,
id: "l",
ctx: Load,
},
),
),
Some(
Name(
ExprName {
range: 114..115,
id: "t",
ctx: Load,
},
),
),
Some(
Name(
ExprName {
range: 126..127,
id: "d",
ctx: Load,
},
),
),
Some(
Name(
ExprName {
range: 141..142,
id: "s",
ctx: Load,
},
),
),
],
values: [
List(
ExprList {
range: 103..112,
elts: [
NumberLiteral(
ExprNumberLiteral {
range: 104..105,
value: Int(
1,
),
},
),
NumberLiteral(
ExprNumberLiteral {
range: 107..108,
value: Int(
2,
),
},
),
NumberLiteral(
ExprNumberLiteral {
range: 110..111,
value: Int(
3,
),
},
),
],
ctx: Load,
},
),
Tuple(
ExprTuple {
range: 117..124,
elts: [
NumberLiteral(
ExprNumberLiteral {
range: 118..119,
value: Int(
1,
),
},
),
NumberLiteral(
ExprNumberLiteral {
range: 120..121,
value: Int(
2,
),
},
),
NumberLiteral(
ExprNumberLiteral {
range: 122..123,
value: Int(
3,
),
},
),
],
ctx: Load,
parenthesized: true,
},
),
Dict(
ExprDict {
range: 129..139,
keys: [
Some(
NumberLiteral(
ExprNumberLiteral {
range: 130..131,
value: Int(
1,
),
},
),
),
Some(
NumberLiteral(
ExprNumberLiteral {
range: 135..136,
value: Int(
3,
),
},
),
),
],
values: [
NumberLiteral(
ExprNumberLiteral {
range: 132..133,
value: Int(
2,
),
},
),
NumberLiteral(
ExprNumberLiteral {
range: 137..138,
value: Int(
4,
),
},
),
],
},
),
Set(
ExprSet {
range: 144..150,
elts: [
NumberLiteral(
ExprNumberLiteral {
range: 145..146,
value: Int(
1,
),
},
),
NumberLiteral(
ExprNumberLiteral {
range: 148..149,
value: Int(
2,
),
},
),
],
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 152..157,
value: Dict(
ExprDict {
range: 152..157,
keys: [
None,
],
values: [
Name(
ExprName {
range: 155..156,
id: "d",
ctx: Load,
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 158..186,
value: Dict(
ExprDict {
range: 158..186,
keys: [
Some(
NumberLiteral(
ExprNumberLiteral {
range: 159..160,
value: Int(
1,
),
},
),
),
None,
],
values: [
NumberLiteral(
ExprNumberLiteral {
range: 162..163,
value: Int(
2,
),
},
),
Dict(
ExprDict {
range: 167..185,
keys: [
Some(
StringLiteral(
ExprStringLiteral {
range: 168..176,
value: StringLiteralValue {
inner: Single(
StringLiteral {
range: 168..176,
value: "nested",
unicode: false,
},
),
},
},
),
),
],
values: [
StringLiteral(
ExprStringLiteral {
range: 178..184,
value: StringLiteralValue {
inner: Single(
StringLiteral {
range: 178..184,
value: "dict",
unicode: false,
},
),
},
},
),
],
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 187..193,
value: Dict(
ExprDict {
range: 187..193,
keys: [
Some(
Name(
ExprName {
range: 188..189,
id: "a",
ctx: Load,
},
),
),
],
values: [
Name(
ExprName {
range: 191..192,
id: "c",
ctx: Load,
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 194..272,
value: DictComp(
ExprDictComp {
range: 194..272,
key: Name(
ExprName {
range: 195..196,
id: "i",
ctx: Load,
},
),
value: Call(
ExprCall {
range: 198..227,
func: Name(
ExprName {
range: 198..203,
id: "tuple",
ctx: Load,
},
),
arguments: Arguments {
range: 203..227,
args: [
Generator(
ExprGenerator {
range: 204..226,
elt: Name(
ExprName {
range: 204..205,
id: "j",
ctx: Load,
},
),
generators: [
Comprehension {
range: 206..226,
target: Name(
ExprName {
range: 210..211,
id: "j",
ctx: Store,
},
),
iter: Name(
ExprName {
range: 215..216,
id: "t",
ctx: Load,
},
),
ifs: [
Compare(
ExprCompare {
range: 220..226,
left: Name(
ExprName {
range: 220..221,
id: "i",
ctx: Load,
},
),
ops: [
NotEq,
],
comparators: [
Name(
ExprName {
range: 225..226,
id: "j",
ctx: Load,
},
),
],
},
),
],
is_async: false,
},
],
parenthesized: false,
},
),
],
keywords: [],
},
},
),
generators: [
Comprehension {
range: 239..249,
target: Name(
ExprName {
range: 243..244,
id: "t",
ctx: Store,
},
),
iter: Name(
ExprName {
range: 248..249,
id: "L",
ctx: Load,
},
),
ifs: [],
is_async: false,
},
Comprehension {
range: 261..271,
target: Name(
ExprName {
range: 265..266,
id: "i",
ctx: Store,
},
),
iter: Name(
ExprName {
range: 270..271,
id: "t",
ctx: Load,
},
),
ifs: [],
is_async: false,
},
],
},
),
},
),
Expr(
StmtExpr {
range: 273..313,
value: Dict(
ExprDict {
range: 273..313,
keys: [
Some(
StringLiteral(
ExprStringLiteral {
range: 279..282,
value: StringLiteralValue {
inner: Single(
StringLiteral {
range: 279..282,
value: "A",
unicode: false,
},
),
},
},
),
),
Some(
StringLiteral(
ExprStringLiteral {
range: 304..307,
value: StringLiteralValue {
inner: Single(
StringLiteral {
range: 304..307,
value: "B",
unicode: false,
},
),
},
},
),
),
],
values: [
Lambda(
ExprLambda {
range: 284..298,
parameters: Some(
Parameters {
range: 291..292,
posonlyargs: [],
args: [
ParameterWithDefault {
range: 291..292,
parameter: Parameter {
range: 291..292,
name: Identifier {
id: "p",
range: 291..292,
},
annotation: None,
},
default: None,
},
],
vararg: None,
kwonlyargs: [],
kwarg: None,
},
),
body: NoneLiteral(
ExprNoneLiteral {
range: 294..298,
},
),
},
),
Name(
ExprName {
range: 309..310,
id: "C",
ctx: Load,
},
),
],
},
),
},
),
Expr(
StmtExpr {
range: 314..324,
value: Dict(
ExprDict {
range: 314..324,
keys: [
None,
None,
],
values: [
Name(
ExprName {
range: 317..318,
id: "a",
ctx: Load,
},
),
Name(
ExprName {
range: 322..323,
id: "b",
ctx: Load,
},
),
],
},
),
},
),
],
},
),
parse_errors: [],
}
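
The dict-literal cases decoded from the `expression` header (continuation-line indentation in the two multi-line literals is approximated):

```python
{}
{1:2, a:1, b:'hello'}
{a:b, **d}
{'foo': 'bar', **{'nested': 'dict'}}
{x + 1: y * 2, **call()}
{l: [1, 2, 3], t: (1,2,3), d: {1:2, 3:4}, s: {1, 2}}
{**d}
{1: 2, **{'nested': 'dict'}}
{a: c}
{i: tuple(j for j in t if i != j)
 for t in L
 for i in t}
{
    'A': lambda p: None,
    'B': C,
}
{**a, **b}
```

Each `**` spread shows up as a `None` key in the `ExprDict` node, which is why `{**a, **b}` has `keys: [None, None]` in the snapshot.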


@@ -0,0 +1,114 @@
---
source: crates/ruff_python_parser/src/parser/tests/parser.rs
expression: "parse(r#\"\nf\"\"\nF\"\"\nf''\nf\"\"\"\"\"\"\nf''''''\n\"#)"
---
Program {
ast: Module(
ModModule {
range: 0..29,
body: [
Expr(
StmtExpr {
range: 1..4,
value: FString(
ExprFString {
range: 1..4,
value: FStringValue {
inner: Single(
FString(
FString {
range: 1..4,
elements: [],
},
),
),
},
},
),
},
),
Expr(
StmtExpr {
range: 5..8,
value: FString(
ExprFString {
range: 5..8,
value: FStringValue {
inner: Single(
FString(
FString {
range: 5..8,
elements: [],
},
),
),
},
},
),
},
),
Expr(
StmtExpr {
range: 9..12,
value: FString(
ExprFString {
range: 9..12,
value: FStringValue {
inner: Single(
FString(
FString {
range: 9..12,
elements: [],
},
),
),
},
},
),
},
),
Expr(
StmtExpr {
range: 13..20,
value: FString(
ExprFString {
range: 13..20,
value: FStringValue {
inner: Single(
FString(
FString {
range: 13..20,
elements: [],
},
),
),
},
},
),
},
),
Expr(
StmtExpr {
range: 21..28,
value: FString(
ExprFString {
range: 21..28,
value: FStringValue {
inner: Single(
FString(
FString {
range: 21..28,
elements: [],
},
),
),
},
},
),
},
),
],
},
),
parse_errors: [],
}
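
Decoded from the raw-string `expression` header, these are the empty f-string cases (upper- and lower-case prefixes; single, double, and triple quotes), all of which parse to an `ExprFString` with no elements:

```python
f""
F""
f''
f""""""
f''''''
```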

Some files were not shown because too many files have changed in this diff.