\chapter{Lexer and parser generators (ocamllex, ocamlyacc)}
\label{c:ocamlyacc}
%HEVEA\cutname{lexyacc.html}

This chapter describes two program generators: "ocamllex", that
produces a lexical analyzer from a set of regular expressions with
associated semantic actions, and "ocamlyacc", that produces a parser
from a grammar with associated semantic actions.

These program generators are very close to the well-known "lex" and
"yacc" commands that can be found in most C programming environments.
This chapter assumes a working knowledge of "lex" and "yacc": while
it describes the input syntax for "ocamllex" and "ocamlyacc" and the
main differences with "lex" and "yacc", it does not explain the basics
of writing a lexer or parser description in "lex" and "yacc". Readers
unfamiliar with "lex" and "yacc" are referred to  ``Compilers:
principles, techniques, and tools'' by Aho, Lam, Sethi and Ullman
(Pearson, 2006), or ``Lex $\&$ Yacc'', by Levine, Mason and
Brown (O'Reilly, 1992).

\section{s:ocamllex-overview}{Overview of \texttt{ocamllex}}

The "ocamllex" command produces a lexical analyzer from a set of regular
expressions with attached semantic actions, in the style of
"lex". Assuming the input file is \var{lexer}".mll", executing
\begin{alltt}
        ocamllex \var{lexer}.mll
\end{alltt}
produces OCaml code for a lexical analyzer in file \var{lexer}".ml".
This file defines one lexing function per entry point in the lexer
definition. These functions have the same names as the entry
points. Lexing functions take as argument a lexer buffer, and return
the semantic attribute of the corresponding entry point.

Lexer buffers are an abstract data type implemented in the standard
library module "Lexing". The functions "Lexing.from_channel",
"Lexing.from_string" and "Lexing.from_function" create
lexer buffers that read from an input channel, a character string, or
any reading function, respectively. (See the description of module
"Lexing" in chapter~\ref{c:stdlib}.)

When used in conjunction with a parser generated by "ocamlyacc", the
semantic actions compute a value belonging to the type "token" defined
by the generated parsing module. (See the description of "ocamlyacc"
below.)

\subsection{ss:ocamllex-options}{Options}
The following command-line options are recognized by "ocamllex".

\begin{options}

\item["-ml"]
Output code that does not use OCaml's built-in automata
interpreter. Instead, the automaton is encoded by OCaml functions.
This option improves performance when using the native compiler, but
decreases it when using the bytecode compiler.

\item["-o" \var{output-file}]
Specify the name of the output file produced by "ocamllex".
The default is the input file name with its extension replaced by ".ml".

\item["-q"]
Quiet mode.  "ocamllex" normally outputs informational messages
to standard output.  They are suppressed if option "-q" is used.

\item["-v" or "-version"]
Print version string and exit.

\item["-vnum"]
Print short version number and exit.

\item["-help" or "--help"]
Display a short usage summary and exit.
%
\end{options}

\section{s:ocamllex-syntax}{Syntax of lexer definitions}

The format of lexer definitions is as follows:
\begin{alltt}
\{ \var{header} \}
let \var{ident} = \var{regexp} \ldots
[refill \{ \var{refill-handler} \}]
rule \var{entrypoint} [\nth{arg}{1}\ldots{} \nth{arg}{n}] =
  parse \var{regexp} \{ \var{action} \}
      | \ldots
      | \var{regexp} \{ \var{action} \}
and \var{entrypoint} [\nth{arg}{1}\ldots{} \nth{arg}{n}] =
  parse \ldots
and \ldots
\{ \var{trailer} \}
\end{alltt}
Comments are delimited by "(*" and "*)", as in OCaml.
The "parse" keyword, can be replaced by the "shortest" keyword, with
the semantic consequences explained below.

Refill handlers are an optional feature introduced in OCaml 4.02,
documented below in subsection~\ref{ss:refill-handlers}.

\subsection{ss:ocamllex-header-trailer}{Header and trailer}
The {\it header} and {\it trailer} sections are arbitrary OCaml
text enclosed in curly braces. Either or both can be omitted. If
present, the header text is copied as is at the beginning of the
output file and the trailer text at the end. Typically, the
header section contains the "open" directives required
by the actions, and possibly some auxiliary functions used in the
actions.

\subsection{ss:ocamllex-named-regexp}{Naming regular expressions}

Between the header and the entry points, one can give names to
frequently-occurring regular expressions.  This is written
@"let" ident "=" regexp@.
In regular expressions that follow this declaration, the identifier
\var{ident} can be used as shorthand for \var{regexp}.
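
For instance, the following fragment (a sketch; the token constructors
"INT" and "IDENT" are assumed to be provided by the parser) names two
frequently used regular expressions and reuses them in a rule:
\begin{verbatim}
let digit = ['0'-'9']
let letter = ['a'-'z' 'A'-'Z' '_']

rule token = parse
  | digit+ as num                  { INT (int_of_string num) }
  | letter (letter | digit)* as id { IDENT id }
\end{verbatim}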

\subsection{ss:ocamllex-entry-points}{Entry points}

The names of the entry points must be valid identifiers for OCaml
values (starting with a lowercase letter).
Similarly, the arguments \texttt{\var{arg$_1$}\ldots{}
\var{arg$_n$}} must be valid identifiers for OCaml.
Each entry point becomes an
OCaml function that takes $n+1$ arguments,
the extra implicit last argument being of type "Lexing.lexbuf".
Characters are read from the "Lexing.lexbuf" argument and matched
against the regular expressions provided in the rule, until a prefix
of the input matches one of the rules.  The corresponding action is
then evaluated, and its value is returned as the result of the function.


If several regular expressions match a prefix of the input, the
``longest match'' rule applies: the regular expression that matches
the longest prefix of the input is selected.  In case of tie, the
regular expression that occurs earlier in the rule is selected.

However, if lexer rules are introduced with the "shortest" keyword in
place of the "parse" keyword, then the ``shortest match'' rule applies:
the shortest prefix of the input is selected. In case of tie, the
regular expression that occurs earlier in the rule is still selected.
This feature is not intended for use in ordinary lexical analyzers;
rather, it may facilitate the use of "ocamllex" as a simple text
processing tool.
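
For instance, the following rule (a hypothetical sketch) returns all
characters up to, but excluding, the first semicolon of its input;
with "parse" instead of "shortest", the same pattern would match up to
the last semicolon of the available input:
\begin{verbatim}
rule upto_semicolon = shortest
  | (_* as text) ';'    { text }
\end{verbatim}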



\subsection{ss:ocamllex-regexp}{Regular expressions}

The regular expressions are in the style of "lex", with a more
OCaml-like syntax.
\begin{syntax}
regexp:
  \ldots
\end{syntax}
\begin{options}

\item[@"'" regular-char || escape-sequence "'"@]
A character constant, with the same syntax as OCaml character
constants. Match the denoted character.

\item["_"]
(underscore) Match any character.

\item[@"eof"@]
Match the end of the lexer input.\\
{\bf Note:} On some systems, with interactive input, an end-of-file
may be followed by more characters.  However, "ocamllex" will not
correctly handle regular expressions that contain "eof" followed by
something else.

\item[@'"' { string-character } '"'@]
A string constant, with the same syntax as OCaml string
constants. Match the corresponding sequence of characters.

\item[@'[' character-set ']'@]
Match any single character belonging to the given
character set. Valid character sets are: single
character constants @"'" @c@ "'"@; ranges of characters
@"'" @c@_1 "'" "-" "'" @c@_2 "'"@ (all characters between $c_1$ and $c_2$,
inclusive); and the union of two or more character sets, denoted by
concatenation.

\item[@'[' '^' character-set ']'@]
Match any single character not belonging to the given character set.


\item[@regexp_1 '#' regexp_2@]
(difference of character sets)
Regular expressions @regexp_1@ and @regexp_2@ must be character sets
defined with @'['\ldots ']'@ (or a single character expression or
underscore "_").
Match the difference of the two specified character sets.


\item[@regexp '*'@]
(repetition) Match the concatenation of zero or more
strings that match @regexp@.

\item[@regexp '+'@]
(strict repetition) Match the concatenation of one or more
strings that match @regexp@.

\item[@regexp '?'@]
(option) Match the empty string, or a string matching @regexp@.

\item[@regexp_1 '|' regexp_2@]
(alternative) Match any string that matches @regexp_1@ or @regexp_2@.
If both @regexp_1@ and @regexp_2@ are character sets, this construction
produces another character set, obtained by taking the union of @regexp_1@
and @regexp_2@.

\item[@regexp_1 regexp_2@]
(concatenation) Match the concatenation of two strings, the first
matching @regexp_1@, the second matching @regexp_2@.

\item[@'(' regexp ')'@]
Match the same strings as @regexp@.

\item[@ident@]
Reference the regular expression bound to @ident@ by an earlier
@"let" ident "=" regexp@ definition.

\item[@regexp 'as' ident@]
Bind the substring matched by @regexp@ to identifier @ident@.
\end{options}

Concerning the precedences of operators, "#" has the highest precedence,
followed by "*", "+"  and "?",
then concatenation, then "|" (alternation), then "as".

\subsection{ss:ocamllex-actions}{Actions}

The actions are arbitrary OCaml expressions. They are evaluated in
a context where the identifiers defined by using the "as" construct
are bound to subparts of the matched string.
Additionally, "lexbuf" is bound to the current lexer
buffer. Some typical uses for "lexbuf", in conjunction with the
operations on lexer buffers provided by the "Lexing" standard library
module, are listed below.

\begin{options}
\item["Lexing.lexeme lexbuf"]
Return the matched string.

\item["Lexing.lexeme_char lexbuf "$n$]
Return the $n\th$
character in the matched string. The first character corresponds to $n = 0$.

\item["Lexing.lexeme_start lexbuf"]
Return the absolute position in the input text of the beginning of the
matched string (i.e. the offset of the first character of the matched
string). The first character read from the input text has offset 0.

\item["Lexing.lexeme_end lexbuf"]
Return the absolute position in the input text of the end of the
matched string (i.e. the offset of the first character after the
matched string). The first character read from the input text has
offset 0.

\newcommand{\sub}[1]{$_{#1}$}%
\item[\var{entrypoint} {[\var{exp\sub{1}}\ldots{} \var{exp\sub{n}}]} "lexbuf"]
(Where \var{entrypoint} is the name of another entry point in the same
lexer definition.) Recursively call the lexer on the given entry point.
Notice that "lexbuf" is the last argument.
Useful for lexing nested comments, for example, as illustrated
after this list.

\end{options}
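
As an illustration of this last point, here is a sketch of a lexer that
skips possibly-nested OCaml-style comments: the entry point "token"
calls a second entry point "comment", which takes the current nesting
depth as an extra argument:
\begin{verbatim}
{ exception Unterminated_comment }

rule token = parse
  | "(*"   { comment 1 lexbuf; token lexbuf }  (* enter comment mode *)
  | eof    { () }
  | _      { token lexbuf }

and comment depth = parse
  | "(*"   { comment (depth + 1) lexbuf }      (* nested comment *)
  | "*)"   { if depth > 1 then comment (depth - 1) lexbuf }
  | eof    { raise Unterminated_comment }
  | _      { comment depth lexbuf }
\end{verbatim}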

\subsection{ss:ocamllex-variables}{Variables in regular expressions}
The "as" construct is similar to ``\emph{groups}'' as provided by
numerous regular expression packages.
The type of these variables can be "string", "char", "string option"
or "char option".

We first consider the case of linear patterns, that is, the case where
all "as"-bound variables are distinct.
In @regexp 'as' ident@, the type of @ident@ normally is "string" (or
"string option") except
when @regexp@ is a character constant, an underscore, a string
constant of length one, a character set specification, or an
alternation of those. Then, the type of @ident@ is "char" (or "char
option").
Option types are introduced when overall rule matching does not
imply matching of the bound sub-pattern. This is in particular the
case of @'(' regexp 'as' ident ')' '?'@ and of
@regexp_1 '|' '(' regexp_2 'as' ident ')'@.

There is no linearity restriction over "as" bound variables.
When a variable is bound more than once, the previous rules are to be
extended as follows:
\begin{itemize}
\item A variable is a "char" variable when all its occurrences bind
"char" occurrences in the previous sense.
\item A variable is an "option" variable when the overall expression
can be matched without binding this variable.
\end{itemize}
For instance, in
"('a' as x) | ( 'a' (_ as x) )" the variable "x"  is of type
"char", whereas in
"(\"ab\" as x) | ( 'a' (_ as x) ? )" the variable "x"  is of type
"string option".


In some cases, a successful match may not yield a unique set of bindings.
For instance the matching of \verb+aba+ by the regular expression
"(('a'|\"ab\") as x) ((\"ba\"|'a') as y)" may result in binding
either
\verb+x+ to \verb+"ab"+ and \verb+y+ to \verb+"a"+, or
\verb+x+ to \verb+"a"+ and \verb+y+ to \verb+"ba"+.
The automaton produced by "ocamllex" for such an ambiguous regular
expression will select one of the possible resulting sets of
bindings.
The selected set of bindings is purposely left unspecified.

\subsection{ss:refill-handlers}{Refill handlers}

By default, when "ocamllex" reaches the end of its lexing buffer, it
silently calls the "refill_buff" function of the "lexbuf" structure
and continues lexing. It is sometimes useful to take control of the
refilling action; typically, if you use a library for asynchronous
computation, you may want to wrap the refilling action in a delaying
function to avoid blocking synchronous operations.

Since OCaml 4.02, it is possible to specify a \var{refill-handler},
a function that is called whenever a refill happens. It is passed the
continuation of the lexing process, over which it has total control. The
OCaml expression used as the refill action should have a type that is an
instance of
\begin{verbatim}
   (Lexing.lexbuf -> 'a) -> Lexing.lexbuf -> 'a
\end{verbatim}
where the first argument is the continuation, which captures the
processing "ocamllex" would normally perform (refilling the buffer, then
calling the lexing function again), and the result type that
instantiates "'a" should unify with the result type of all lexing
rules.

As an example, consider the following lexer that is parametrized over
an arbitrary monad:
\begin{verbatim}
{
type token = EOL | INT of int | PLUS

module Make (M : sig
               type 'a t
               val return: 'a -> 'a t
               val bind: 'a t -> ('a -> 'b t) -> 'b t
               val fail : string -> 'a t

               (* Set up lexbuf *)
               val on_refill : Lexing.lexbuf -> unit t
             end)
= struct

let refill_handler k lexbuf =
    M.bind (M.on_refill lexbuf) (fun () -> k lexbuf)

}

refill {refill_handler}

rule token = parse
| [' ' '\t']
    { token lexbuf }
| '\n'
    { M.return EOL }
| ['0'-'9']+ as i
    { M.return (INT (int_of_string i)) }
| '+'
    { M.return PLUS }
| _
    { M.fail "unexpected character" }
{
end
}
\end{verbatim}

\subsection{ss:ocamllex-reserved-ident}{Reserved identifiers}

All identifiers starting with "__ocaml_lex" are reserved for use by
"ocamllex"; do not use any such identifier in your programs.


\section{s:ocamlyacc-overview}{Overview of \texttt{ocamlyacc}}

The "ocamlyacc" command produces a parser from a context-free grammar
specification with attached semantic actions, in the style of "yacc".
Assuming the input file is \var{grammar}".mly", executing
\begin{alltt}
        ocamlyacc \var{options} \var{grammar}.mly
\end{alltt}
produces OCaml code for a parser in the file \var{grammar}".ml",
and its interface in file \var{grammar}".mli".

The generated module defines one parsing function per entry point in
the grammar. These functions have the same names as the entry points.
Parsing functions take as arguments a lexical analyzer (a function
from lexer buffers to tokens) and a lexer buffer, and return the
semantic attribute of the corresponding entry point. Lexical analyzer
functions are usually generated from a lexer specification by the
"ocamllex" program. Lexer buffers are an abstract data type
implemented in the standard library module "Lexing". Tokens are values from
the concrete type "token", defined in the interface file
\var{grammar}".mli" produced by "ocamlyacc".

\section{s:ocamlyacc-syntax}{Syntax of grammar definitions}

Grammar definitions have the following format:
\begin{alltt}
\%\{
  \var{header}
\%\}
  \var{declarations}
\%\%
  \var{rules}
\%\%
  \var{trailer}
\end{alltt}

Comments are delimited by \verb|(*| and \verb|*)|, as in OCaml.
Additionally, comments can be delimited by \verb|/*| and \verb|*/|,
as in C, in the ``declarations'' and ``rules'' sections.  C-style
comments do not nest, but OCaml-style comments do.

\subsection{ss:ocamlyacc-header-trailer}{Header and trailer}

The header and the trailer sections are OCaml code that is copied
as is into file \var{grammar}".ml". Both sections are optional. The header
goes at the beginning of the output file; it usually contains
"open" directives and auxiliary functions required by the semantic
actions of the rules. The trailer goes at the end of the output file.

\subsection{ss:ocamlyacc-declarations}{Declarations}

Declarations are given one per line. They all start with a \verb"%" sign.

\begin{options}

\item[@"%token" constr \ldots constr@]
Declare the given symbols @constr \ldots constr@
as tokens (terminal symbols).  These symbols
are added as constant constructors for the "token" concrete type.

\item[@"%token" "<" typexpr ">" constr \ldots constr@]
Declare the given symbols @constr \ldots constr@ as tokens with an
attached attribute of the
given type. These symbols are added as constructors with arguments of
the given type for the "token" concrete type. The @typexpr@ part is
an arbitrary OCaml type expression, except that all type
constructor names must be fully qualified (e.g. "Modname.typename")
for all types except standard built-in types, even if the proper
\verb|open| directives (e.g. \verb|open Modname|) were given in the
header section. That's because the header is copied only to the ".ml"
output file, but not to the ".mli" output file, while the @typexpr@ part
of a \verb"%token" declaration is copied to both.
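
For instance, assuming a hypothetical user module "Ast" defining a type
"expr", one must write:
\begin{verbatim}
%token <int> INT             /* built-in type: no qualification needed */
%token <Ast.expr> ANTIQUOT   /* user-defined type: must be qualified */
\end{verbatim}
Writing "<expr>" instead of "<Ast.expr>" would make the generated ".mli"
file fail to compile, even if the header contains "open Ast".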

\item[@"%start" symbol \ldots symbol@]
Declare the given symbols as entry points for the grammar. For each
entry point, a parsing function with the same name is defined in the
output module. Non-terminals that are not declared as entry points
have no such parsing function. Start symbols must be given a type with
the \verb|%type| directive below.

\item[@"%type" "<" typexpr ">" symbol \ldots symbol@]
Specify the type of the semantic attributes for the given symbols.
This is mandatory for start symbols only. Other nonterminal symbols
need not be given types by hand: these types will be inferred when
running the output files through the OCaml compiler (unless the
\verb"-s" option is in effect). The @typexpr@ part is an arbitrary OCaml
type expression, except that all type constructor names must be
fully qualified, as explained above for "%token".

\item[@"%left" symbol \ldots symbol@]
\item[@"%right" symbol \ldots symbol@]
\item[@"%nonassoc" symbol \ldots symbol@]

Associate precedences and associativities to the given symbols. All
symbols on the same line are given the same precedence. They have
higher precedence than symbols declared before in a \verb"%left",
\verb"%right" or \verb"%nonassoc" line. They have lower precedence
than symbols declared after in a \verb"%left", \verb"%right" or
\verb"%nonassoc" line. The symbols are declared to associate to the
left (\verb"%left"), to the right (\verb"%right"), or to be
non-associative (\verb"%nonassoc"). The symbols are usually tokens.
They can also be dummy nonterminals, for use with the \verb"%prec"
directive inside the rules.

The precedence declarations are used in the following way to
resolve reduce/reduce and shift/reduce conflicts:
\begin{itemize}
\item Tokens and rules have precedences.  By default, the precedence
  of a rule is the precedence of its rightmost terminal.  You
  can override this default by using the @"%prec"@ directive in the rule.
\item A reduce/reduce conflict
  is resolved in favor of the first rule (in the order given by the
  source file), and "ocamlyacc" outputs a warning.
\item A shift/reduce conflict
  is resolved by comparing the precedence of the rule to be
  reduced with the precedence of the token to be shifted.  If the
  precedence of the rule is higher, then the rule will be reduced;
  if the precedence of the token is higher, then the token will
  be shifted.
\item A shift/reduce conflict between a rule and a token with the
  same precedence will be resolved using the associativity: if the
  token is left-associative, then the parser will reduce; if the
  token is right-associative, then the parser will shift.  If the
  token is non-associative, then the parser will declare a syntax
  error.
\item When a shift/reduce conflict cannot be resolved using the above
  method, then "ocamlyacc" will output a warning and the parser will
  always shift.
\end{itemize}

\end{options}
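
For instance, with the declarations below (an excerpt from the complete
example at the end of this chapter), "1 - 2 - 3" parses as
"(1 - 2) - 3" because "MINUS" is left-associative, and "1 + 2 * 3"
parses as "1 + (2 * 3)" because "TIMES" is declared after "PLUS" and
therefore binds tighter:
\begin{verbatim}
%left PLUS MINUS     /* lower precedence, left-associative */
%left TIMES DIV      /* higher precedence, left-associative */
\end{verbatim}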

\subsection{ss:ocamlyacc-rules}{Rules}

The syntax for rules is as usual:
\begin{alltt}
\var{nonterminal} :
    \var{symbol} \ldots \var{symbol} \{ \var{semantic-action} \}
  | \ldots
  | \var{symbol} \ldots \var{symbol} \{ \var{semantic-action} \}
;
\end{alltt}
%
Rules can also contain the \verb"%prec "{\it symbol} directive in the
right-hand side part, to override the default precedence and
associativity of the rule with the precedence and associativity of the
given symbol.

Semantic actions are arbitrary OCaml expressions, that
are evaluated to produce the semantic attribute attached to
the defined nonterminal. The semantic actions can access the
semantic attributes of the symbols in the right-hand side of
the rule with the \verb"$" notation: \verb"$1" is the attribute for the
first (leftmost) symbol, \verb"$2" is the attribute for the second
symbol, etc.

The rules may contain the special symbol "error" to indicate
resynchronization points, as in "yacc".

Actions occurring in the middle of rules are not supported.

Nonterminal symbols are like regular OCaml symbols, except that they
cannot end with "'" (single quote).

\subsection{ss:ocamlyacc-error-handling}{Error handling}

Error recovery is supported as follows: when the parser reaches an
error state (no grammar rules can apply), it calls a function named
"parse_error" with the string "\"syntax error\"" as argument. The default
"parse_error" function does nothing and returns, thus initiating error
recovery (see below). The user can define a customized "parse_error"
function in the header section of the grammar file.
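
For instance, the following header fragment (a minimal sketch) reports
the message on standard error and then returns, letting error recovery
proceed:
\begin{verbatim}
%{
(* Replaces the default parse_error; must have type string -> unit *)
let parse_error msg =
  prerr_endline msg
%}
\end{verbatim}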

The parser also enters error recovery mode if one of the grammar
actions raises the "Parsing.Parse_error" exception.

In error recovery mode, the parser discards states from the
stack until it reaches a place where the error token can be shifted.
It then discards tokens from the input until it finds three successive
tokens that can be accepted, and starts processing with the first of
these.  If no state can be uncovered where the error token can be
shifted, then the parser aborts by raising the "Parsing.Parse_error"
exception.

Refer to the documentation on "yacc" for more details and guidance on
how to use error recovery.
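
For instance, an interactive grammar can resynchronize at the end of
the current input line (a hypothetical sketch, using the tokens of the
calculator example below):
\begin{verbatim}
line:
    expr EOL    { Some $1 }
  | error EOL   { None }    /* discard input up to the end of line */
;
\end{verbatim}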

\section{s:ocamlyacc-options}{Options}

The "ocamlyacc" command recognizes the following options:

\begin{options}

\item["-b"{\it prefix}]
Name the output files {\it prefix}".ml", {\it prefix}".mli",
{\it prefix}".output", instead of the default naming convention.

\item["-q"]
This option has no effect.

\item["-v"]
Generate a description of the parsing tables and a report on conflicts
resulting from ambiguities in the grammar. The description is put in
file \var{grammar}".output".

\item["-version"]
Print version string and exit.

\item["-vnum"]
Print short version number and exit.

\item["-"]
Read the grammar specification from standard input.  The default
output file names are "stdin.ml" and "stdin.mli".

\item["--" \var{file}]
Process \var{file} as the grammar specification, even if its name
starts with a dash (-) character.  This option must be the last on the
command line.

\end{options}

At run-time, the "ocamlyacc"-generated parser can be debugged by
setting the "p" option in the "OCAMLRUNPARAM" environment variable
(see section~\ref{s:ocamlrun-options}).  This causes the pushdown
automaton executing the parser to print a trace of its actions (tokens
shifted, rules reduced, etc.).  The trace mentions rule numbers and
state numbers that can be interpreted by looking at the file
\var{grammar}".output" generated by "ocamlyacc -v".

\section{s:lexyacc-example}{A complete example}

The all-time favorite: a desk calculator. This program reads
arithmetic expressions on standard input, one per line, and prints
their values. Here is the grammar definition:
\begin{verbatim}
        /* File parser.mly */
        %token <int> INT
        %token PLUS MINUS TIMES DIV
        %token LPAREN RPAREN
        %token EOL
        %left PLUS MINUS        /* lowest precedence */
        %left TIMES DIV         /* medium precedence */
        %nonassoc UMINUS        /* highest precedence */
        %start main             /* the entry point */
        %type <int> main
        %%
        main:
            expr EOL                { $1 }
        ;
        expr:
            INT                     { $1 }
          | LPAREN expr RPAREN      { $2 }
          | expr PLUS expr          { $1 + $3 }
          | expr MINUS expr         { $1 - $3 }
          | expr TIMES expr         { $1 * $3 }
          | expr DIV expr           { $1 / $3 }
          | MINUS expr %prec UMINUS { - $2 }
        ;
\end{verbatim}
Here is the definition for the corresponding lexer:
\begin{verbatim}
        (* File lexer.mll *)
        {
        open Parser        (* The type token is defined in parser.mli *)
        exception Eof
        }
        rule token = parse
            [' ' '\t']     { token lexbuf }     (* skip blanks *)
          | ['\n' ]        { EOL }
          | ['0'-'9']+ as lxm { INT(int_of_string lxm) }
          | '+'            { PLUS }
          | '-'            { MINUS }
          | '*'            { TIMES }
          | '/'            { DIV }
          | '('            { LPAREN }
          | ')'            { RPAREN }
          | eof            { raise Eof }
\end{verbatim}
Here is the main program, which combines the parser with the lexer:
\begin{verbatim}
        (* File calc.ml *)
        let _ =
          try
            let lexbuf = Lexing.from_channel stdin in
            while true do
              let result = Parser.main Lexer.token lexbuf in
                print_int result; print_newline(); flush stdout
            done
          with Lexer.Eof ->
            exit 0
\end{verbatim}
To compile everything, execute:
\begin{verbatim}
        ocamllex lexer.mll       # generates lexer.ml
        ocamlyacc parser.mly     # generates parser.ml and parser.mli
        ocamlc -c parser.mli
        ocamlc -c lexer.ml
        ocamlc -c parser.ml
        ocamlc -c calc.ml
        ocamlc -o calc lexer.cmo parser.cmo calc.cmo
\end{verbatim}
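
A session with the resulting calculator might look like this, with the
arithmetic expression typed by the user on the second line:
\begin{verbatim}
        $ ./calc
        1 + 2 * 3
        7
\end{verbatim}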

\section{s:lexyacc-common-errors}{Common errors}

\begin{options}

\item[ocamllex: transition table overflow, automaton is too big]

The deterministic automata generated by "ocamllex" are limited to at
most 32767 transitions.  The message above indicates that your lexer
definition is too complex and overflows this limit.  This is commonly
caused by lexer definitions that have separate rules for each of the
alphabetic keywords of the language, as in the following example.
\begin{verbatim}
rule token = parse
  "keyword1"   { KWD1 }
| "keyword2"   { KWD2 }
| ...
| "keyword100" { KWD100 }
| ['A'-'Z' 'a'-'z'] ['A'-'Z' 'a'-'z' '0'-'9' '_'] * as id
               { IDENT id }
\end{verbatim}
To keep the generated automata small, rewrite those definitions with
only one general ``identifier'' rule, followed by a hashtable lookup
to separate keywords from identifiers:
\begin{verbatim}
{ let keyword_table = Hashtbl.create 53
  let _ =
    List.iter (fun (kwd, tok) -> Hashtbl.add keyword_table kwd tok)
              [ "keyword1", KWD1;
                "keyword2", KWD2; ...
                "keyword100", KWD100 ]
}
rule token = parse
  ['A'-'Z' 'a'-'z'] ['A'-'Z' 'a'-'z' '0'-'9' '_'] * as id
               { try
                   Hashtbl.find keyword_table id
                 with Not_found ->
                   IDENT id }
\end{verbatim}

\item[ocamllex: Position memory overflow, too many bindings]
The deterministic automata generated by "ocamllex" maintain a table of
positions inside the scanned lexer buffer. The size of this table is
limited to at most 255 cells. This error should not show up in normal
situations.

\item[ocamlyacc: concurrency safety]

Parsers generated by "ocamlyacc" are not thread-safe: they rely on
internal state that is shared by all "ocamlyacc"-generated parsers.
The \href{https://cambium.inria.fr/~fpottier/menhir/}{menhir} parser
generator is a better option if you want thread-safe parsers.

\end{options}