* Whitespace token modernization - ambient lexer
* Whitespace token modernization - ampl lexer
* Whitespace token modernization - apdlexer lexer
* Whitespace token modernization - apl lexer
* Whitespace token modernization - adl lexer
* Whitespace token modernization - arrow lexer
* Whitespace token modernization - asm lexer
* Modernize Whitespace token: basic lexer
* Modernize Whitespace token: bibtex lexer
* Modernize Whitespace token: boa lexer
* Modernize Whitespace token: capnproto lexer + new example
* Modernize Whitespace token: cddl lexer
* Modernize Whitespace token: chapel lexer
* Modernize Whitespace token: c_like lexer
* Modernize Whitespace token: configs lexer
* Modernize Whitespace token: console lexer
* Modernize Whitespace token: crystal lexer
* Modernize Whitespace token: csound lexer
* Modernize Whitespace token: css lexer
* Revert a change in basic lexer
Fixes #1918
* Added Dracula theme
* Added highlight and line colours
* Fixed module name underline
* Diff lexer - match whitespace
* Make lexer - whitespace token set
* Actionscript lexer - whitespace token
* Bare lexer - whitespace token
* Business (COBOL) lexers - whitespace token
As pointed out in issue #1860, the keyword `unexport` was not matched
correctly; only the `export` part was matched.
This commit adds an optional group to the regex for the `export`
keyword so that `unexport` is matched as well.
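The idea can be sketched with a plain regex (an illustration only, not the actual Makefile lexer rule): an optional `(un)?` group in front of `export` lets a single pattern cover both keywords.

```python
import re

# Illustrative sketch, not the real Makefile lexer rule:
# an optional "un" group makes one pattern match both keywords.
KEYWORD = re.compile(r'\b(un)?export\b')

print(KEYWORD.search("unexport CFLAGS").group(0))  # unexport
print(KEYWORD.search("export CFLAGS").group(0))    # export
```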
* SQL whitespace - regarding #1905
* SQLite prompt ungrouped from trailing space
* SQLite prompt with explicit trailing whitespace token
* Fix insertion of SQLite trailing-space token
* Fix the "do concurrent" and "go to" keywords in the Fortran lexer.
* The "go to" statement was only highlighted when there was no space between "go" and "to".
* The "concurrent" keyword in a "do concurrent" statement was never highlighted because of a typo, now fixed. In addition, "concurrent" is now highlighted only when it comes right after the "do" keyword.
* The "do concurrent" rule had to be placed before the existing keyword list; otherwise "concurrent" is never highlighted, because the lexer first matches "do" in the other list and stops searching for further keywords.
* Fix a bug when parsing Fortran files with "go to" and "do concurrent" statements that caused wrong highlighting. For example, in the variable name "gotoErr", "goto" was highlighted when it should not have been.
* Update the Fortran tests for the "go to" changes.
* Use Text.Whitespace to distinguish Fortran multiword keywords.
Co-authored-by: ecasglez <ecasglez@protonmail.com>
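The ordering and word-boundary points above can be sketched with a plain regex (illustrative only, not the actual Fortran lexer rules):

```python
import re

# Illustrative sketch, not the real Fortran lexer: multiword keywords
# are matched with \b guards, so identifiers such as "gotoErr" no
# longer match, while "go to" may contain spaces between the words.
MULTIWORD = re.compile(r'\b(do)(\s+)(concurrent)\b|\b(go)(\s*)(to)\b',
                       re.IGNORECASE)

assert MULTIWORD.search("DO CONCURRENT (i = 1:n)")
assert MULTIWORD.search("go to 100")
assert MULTIWORD.search("goto 100")
assert not MULTIWORD.search("gotoErr = 1")
```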
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
* Fix #1237: expand whitespace token usage in the C++ lexer
* Adapt tests to change 3eff56f5
* Allow printing lists as JSON.
* Fix failing tests.
* Address review feedback.
* Don't pretty-print the JSON output.
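The compact (non-pretty-printed) output style above can be sketched as follows; the dictionary is illustrative data, not Pygments' real list format:

```python
import json

# Illustrative data only; the real JSON list output shape may differ.
lexers = {"lexers": {"maxima": {"aliases": ["maxima"],
                                "filenames": ["*.mac"]}}}

# separators=(',', ':') gives single-line, compact output;
# indent=2 would pretty-print it across multiple lines.
compact = json.dumps(lexers, separators=(',', ':'))
pretty = json.dumps(lexers, indent=2)

assert '\n' not in compact
assert '\n' in pretty
assert json.loads(compact) == lexers
```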
* New lexer for Maxima computer algebra system
* New lexer class MaximaLexer
* Update _mapping.py to include Maxima lexer
* New test input file maxima/foo.mac
I find that the commands
$ python3 -m pygments -O full -f html -o /tmp/foo.html tests/examplefiles/maxima/foo.mac
$ python3 -m pygments -x -l pygments/lexers/maxima.py:MaximaLexer tests/examplefiles/maxima/foo.mac
both produce expected output.
* Commit output from pytest --update-goldens for Maxima example file
Commit output from pytest tests/examplefiles/maxima --update-goldens
as obtained by Cameron Smith.
* Rename output file for test of Maxima lexer.
* In Maxima lexer, capture content of comment
all at once, instead of capturing each character separately.
Update expected output for the example input file, as produced by:
$ pytest tests/examplefiles/maxima --update-goldens
* In lexer for Maxima language, identify whitespace characters as such
instead of just calling them Text.
* In lexer for Maxima language, identify comma, semicolon, and dollar sign
as Punctuation instead of Text.
* In lexer for Maxima language, cut encoding comment, and put in license statement.
* In lexer for Maxima language, identify keywords and other fixed strings such as operators
via the words function, instead of a long regex with alternation.
Incidentally update the example output, for which one symbol
(namely "done") has changed classification.
* In lexer for Maxima language, include additional test input and update output accordingly.
* In lexer for Maxima language, relax pattern for integers,
so integers are more accurately identified.
Update test example output accordingly.
* In lexer for Maxima language, adjust pattern for float numbers.
Include additional test input for floats and update expected output.
* In lexer for Maxima language, define analyse_text function.
* In lexer for Maxima language, correct errors identified by make check
(1) adjust package name underline
(2) put in copyright notice
Co-authored-by: Robert Dodier <robert_dodier@users.sourceforge.net>
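The `analyse_text` step mentioned above can be sketched like this (a hypothetical heuristic, not the actual MaximaLexer code): the function returns a confidence between 0.0 and 1.0 that the input is Maxima source.

```python
# Hypothetical heuristic, not the actual MaximaLexer.analyse_text:
# score the text on Maxima-specific cues such as ":=" function
# definitions and the "$" / ";" statement terminators.
def analyse_text(text):
    score = 0.0
    if ':=' in text:
        score += 0.25
    if text.rstrip().endswith(('$', ';')):
        score += 0.25
    return score

print(analyse_text("f(x) := x^2$"))  # 0.5
```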
Improve checks.
* Fix lots of small errors.
* Remove the line length check.
* Add an option to skip lexers with no alias.
* Run checks in make check
* Add a new CI target.
* Added GSQL lexer
* Encased keywords in the `words` function; added a link to the language reference
* Added additional `words` functions
* Added copyright annotation; removed a commented-out string
* Re-built the test output file
* Updated words to Keywords
- Renamed nodecon to nodejsrepl
- Removed bad mimetypes
* Added Smithy Lexer (#1878)
* Added Smithy Lexer
* Added Smithy Lexer author
* Documented Smithy as a supported language
* Added Smithy test file and output
* Updated Smithy Lexer
* Added Standard file heading with copyright and license
* Used `words` method for optimization, instead of bare regex
* Specified whitespace punctuation in root
* Updated aliases to only contain lowercase names to pass `test_basic_api` tests
* Updated regexes lightly to fit regexlint rules (removing duplicate characters in group `-`)
* Fixed regexes with errors in regexlint rules (Escaping brackets, gaps in capture groups)
* Ran mapping script to fix build check
* Ran mapping to update after changing aliases in previous commit
Fixes #1851
* Only allow tables at line start to fix arrays.
Allow $ sign in C# language for interpolated strings.
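A hedged sketch of the idea (illustrative pattern only, not the actual C# lexer rule): the string-prefix pattern must accept `$`, alone or combined with the verbatim `@` prefix.

```python
import re

# Illustrative only, not the real C# lexer rule: accept $", @",
# and the combined $@" / @$" prefixes before a double quote.
STRING_PREFIX = re.compile(r'(\$@|@\$|\$|@)?"')

assert STRING_PREFIX.match('$"hello {name}"')
assert STRING_PREFIX.match('$@"C:\\temp {name}"')
assert STRING_PREFIX.match('"plain"')
```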
This reverts commit 710cac79c34412e551a4a92bcd7dd07d5d770922.
|
| | | |
| | | |
| | | |
| | | | |
Use Comment.Single/Multiline in Mako template lexer.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
* Add support for JSLT
JSLT is an open-source JSON query and transformation language, inspired
by jq, XPath, and XQuery: https://github.com/schibsted/jslt.
* fixup! Add support for JSLT
* fixup! Add support for JSLT
* Added GSQL lexer
* Encased keywords in the `words` function; added a link to the language reference
* Added additional `words` functions
* Make the TypeScript lexer extend the JavaScript lexer
* Add the regex `d` flag for the JS lexers
  cf. https://v8.dev/features/regexp-match-indices
* Update JS builtins, operators, exceptions
* fixup! Update JS builtins, operators, exceptions
* Add the TypeScript `override` keyword
* Update _mapping.py
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
* use ini lexer for systemd service files
* add more unit file names
* make mapfiles
Co-authored-by: Aku Viljanen <aku.viljanen@puheet.com>
Treat true/false/nil as constants.
Also separate out declarations from other special forms and macros.
This operator is defined by json4s, which is one of Scala's most popular JSON libraries.
regex_opt() groups characters into sets when possible. The warning
was raised when the "[" character ended up at the beginning of a
set: r"[[...]". This has emitted a FutureWarning since Python 3.7,
due to possible future changes in semantics
(https://bugs.python.org/issue30349). Just add "[" to the list of
characters that should be escaped in sets, and add unit tests
for words().
[Closes #1853.]
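The warning and the fix can be illustrated with a plain `re` example (a sketch, independent of `regex_opt()` itself):

```python
import re
import warnings

# r"[[...]": an unescaped "[" at the start of a character set
# triggers a "Possible nested set" FutureWarning on Python 3.7+.
with warnings.catch_warnings():
    warnings.simplefilter('ignore', FutureWarning)
    unescaped = re.compile(r'[[\]()]')

# Escaping "[" inside the set, as the fix does, matches the same
# characters without the warning:
safe = re.compile(r'[\[\]()]')
assert safe.match('[')
assert safe.match(')')
assert not safe.match('a')
```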
Support both single carets for syntax errors (Python 2 and 3)
and fine-grained error locations with several carets (Python 3.11+).
Previously, the carets were highlighted as operators. This uses
a new token, Token.Punctuation.Marker. For now, no style supports
it specifically. In the future, styles might start differentiating
it from Token.Punctuation.
[Closes #1850.]
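A hedged sketch of what such a marker line looks like (illustrative pattern, not the actual Python traceback lexer rule):

```python
import re

# Illustrative only: a traceback marker line is whitespace followed
# by a single caret (Python 2/3) or a run of tildes and carets for
# a fine-grained error location (Python 3.11+).
MARKER_LINE = re.compile(r'^\s*[~^]+\s*$')

assert MARKER_LINE.match("    ^")
assert MARKER_LINE.match("    ~~~~^^^~~")
assert not MARKER_LINE.match("    x = 1")
```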
* Update for Csound 6.16.0
* Preserve removed Csound built-ins