  Copyright (C) 1992, 1997-2002, 2004-2012 Free Software Foundation, Inc.

  Copying and distribution of this file, with or without modification,
  are permitted in any medium without royalty provided the copyright
  notice and this notice are preserved.

===============
Short term work
===============

See where we are with UTF-8 performance.

Merge Debian patches 55-bigfile.patch, 69-mbtowc.patch and
70-man_apostrophe.patch.  Go through patches in Savannah.

Clean up grep(), grepdir(), and the recursion (the "main loop") to use fts.
Fix --directories=read.
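
A minimal sketch of what an fts-based walk could look like (the
dispatch into grep's existing per-file code is omitted, and the
function name below is purely illustrative):

#include <fts.h>
#include <stdio.h>

/* Walk ROOT with fts and print the regular files that would be
   grepped.  Real code must also honor --directories, -r, devices,
   and so on.  */
static void
walk_with_fts (char *root)
{
  char *argv[] = { root, NULL };
  FTS *fts = fts_open (argv, FTS_PHYSICAL | FTS_COMFOLLOW, NULL);
  if (!fts)
    {
      perror ("fts_open");
      return;
    }
  FTSENT *ent;
  while ((ent = fts_read (fts)))
    switch (ent->fts_info)
      {
      case FTS_F:                      /* regular file: grep it */
        printf ("%s\n", ent->fts_path);
        break;
      case FTS_DNR:                    /* unreadable directory */
      case FTS_ERR:
      case FTS_NS:
        fprintf (stderr, "%s: cannot read\n", ent->fts_path);
        break;
      default:                         /* directories, symlinks, ... */
        break;
      }
  fts_close (fts);
}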

Write better Texinfo documentation for grep.  The manual page would be a
good place to start, but Info documents are also supposed to contain a
tutorial and examples.

Some tests in tests/spencer2.tests should have failed!  We need to track
down and fix some bugs in dfa.[ch]/regex.[ch].

Multithreading?

GNU grep does 32-bit arithmetic; it needs to move to 64-bit types
(i.e. size_t/ptrdiff_t).

Lazy dynamic linking of libpcre.
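
A rough sketch of the lazy-loading idea, resolving pcre_compile() with
dlopen()/dlsym() only when -P is actually used.  The soname below is
an assumption, error reporting is omitted, and the program must be
linked with -ldl:

#include <dlfcn.h>

/* Same prototype as pcre_compile() in <pcre.h>; void * stands in for
   pcre * so the header need not be included.  */
typedef void *(*pcre_compile_fn) (const char *, int, const char **,
                                  int *, const unsigned char *);

static pcre_compile_fn compile_fn;

/* Resolve libpcre on first use; return 0 on success, -1 on failure.  */
static int
load_pcre (void)
{
  if (compile_fn)
    return 0;
  void *handle = dlopen ("libpcre.so.3", RTLD_NOW);  /* assumed soname */
  if (!handle)
    return -1;
  compile_fn = (pcre_compile_fn) dlsym (handle, "pcre_compile");
  return compile_fn ? 0 : -1;
}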

Check FreeBSD's integration of zgrep (-Z) and bzgrep (-J) in one
binary.  Could we do even better by automatically checking the magic
numbers of compressed files ourselves (0x1F 0x8B for gzip, 0x1F 0x9D
for compress, and 0x42 0x5A 0x68 for bzip2)?  Once we decide what to
do with libpcre, do the same for libz and libbz2.
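
A minimal sketch of such a magic-number check; it assumes a seekable
stream (a pipe would need the sniffed bytes pushed back or buffered),
and the dispatch to the matching decompressor is left out:

#include <stdio.h>
#include <string.h>

enum compression { COMP_NONE, COMP_GZIP, COMP_COMPRESS, COMP_BZIP2 };

/* Classify FP by its leading magic bytes, then rewind it so normal
   processing can continue.  */
static enum compression
sniff_compression (FILE *fp)
{
  unsigned char magic[3];
  size_t n = fread (magic, 1, sizeof magic, fp);
  rewind (fp);
  if (n >= 2 && magic[0] == 0x1F && magic[1] == 0x8B)
    return COMP_GZIP;
  if (n >= 2 && magic[0] == 0x1F && magic[1] == 0x9D)
    return COMP_COMPRESS;
  if (n >= 3 && memcmp (magic, "BZh", 3) == 0)     /* 0x42 0x5A 0x68 */
    return COMP_BZIP2;
  return COMP_NONE;
}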


===================
Matching algorithms
===================

Check <http://tony.abou-assaleh.net/greps.html>.  Take a look at these
and consider opportunities for merging or cloning:

   -- ja-grep's mlb2 patch (Japanese grep)
      <ftp://ftp.freebsd.org/pub/FreeBSD/ports/distfiles/grep-2.4.2-mlb2.patch.gz>
   -- lgrep (from lv, a Powerful Multilingual File Viewer / Grep)
      <http://www.ff.iij4u.or.jp/~nrt/lv/>;
   -- cgrep (Context grep) <http://plg.uwaterloo.ca/~ftp/mt/cgrep/>
      seems like nice work;
   -- sgrep (Struct grep) <http://www.cs.helsinki.fi/u/jjaakkol/sgrep.html>;
   -- agrep (Approximate grep) <http://www.tgries.de/agrep/>,
      from glimpse;
   -- nr-grep (Nondeterministic reverse grep)
      <http://www.dcc.uchile.cl/~gnavarro/software/>;
   -- ggrep (Grouse grep) <http://www.grouse.com.au/ggrep/>;
   -- grep.py (Python grep) <http://www.vdesmedt.com/~vds2212/grep.html>;
   -- freegrep <http://www.vocito.com/downloads/software/grep/>;

Check some new algorithms for matching; talk to Karl Berry and Nelson.
Sunday's "Quick Search" algorithm (CACM 33(8), August 1990, pp. 132-142)
claims to be faster than Boyer-Moore.  Worth checking.
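
For reference, a compact sketch of Quick Search: the bad-character
shift is taken from the text character just past the current window,
so shifts can be as large as m + 1.

#include <limits.h>
#include <stdio.h>
#include <string.h>

/* Report every offset at which PAT occurs in TEXT.  Both strings are
   NUL-terminated; the shift lookup may read TEXT's terminating NUL,
   which is harmless here.  */
static void
quick_search (const char *pat, const char *text)
{
  size_t m = strlen (pat), n = strlen (text);
  if (m == 0 || m > n)
    return;

  /* shift[c] is how far to slide the window when C is the character
     just after it; m + 1 when C does not occur in PAT at all.  */
  size_t shift[UCHAR_MAX + 1];
  for (size_t c = 0; c <= UCHAR_MAX; c++)
    shift[c] = m + 1;
  for (size_t i = 0; i < m; i++)
    shift[(unsigned char) pat[i]] = m - i;

  for (size_t pos = 0; pos <= n - m;
       pos += shift[(unsigned char) text[pos + m]])
    if (memcmp (text + pos, pat, m) == 0)
      printf ("match at offset %zu\n", pos);
}

int
main (void)
{
  quick_search ("GCAGAGAG", "GCATCGCAGAGAGTATACAGTACG");  /* offset 5 */
  return 0;
}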

Fix the DFA matcher to never use exponential space.  (Fortunately, these
cases are rare.)


============================
Standards: POSIX and Unicode
============================

For POSIX compliance, see P1003.x.  Current support for the POSIX [= =]
and [. .] constructs is limited. This is difficult because it requires
locale-dependent details of the character set and collating sequence,
but POSIX does not standardize any method for accessing this information!

For Unicode, interesting things to check include the Unicode Standard
<http://www.unicode.org/standard/standard.html> and the Unicode Technical
Standard #18 (<http://www.unicode.org/reports/tr18/> “Unicode Regular
Expressions”).  Talk to Bruno Haible who's maintaining GNU libunistring.
See also Unicode Standard Annex #15 (<http://www.unicode.org/reports/tr15/>
“Unicode Normalization Forms”), already implemented by GNU libunistring.

In particular, --ignore-case needs to be evaluated against the standards.
We may want to deviate from POSIX if Unicode provides better or clearer
semantics.

POSIX and --ignore-case
-----------------------

For this issue, interesting things to check in POSIX include the
Volume “Base Definitions (XBD)”, Chapter “Regular Expressions” and in
particular Section “Regular Expression General Requirements” and its
paragraph about caseless matching (note that this may not have been
fully thought through and that this text may be self-contradictory
[specifically: “of either data or patterns” versus all the rest]).

In particular, consider the following with POSIX's approach to case
folding in mind. Assume a non-Turkic locale with a character
repertoire reduced to the following various forms of “LATIN LETTER I”:

0049;LATIN CAPITAL LETTER I;Lu;0;L;;;;;N;;;;0069;
0069;LATIN SMALL LETTER I;Ll;0;L;;;;;N;;;0049;;0049
0130;LATIN CAPITAL LETTER I WITH DOT ABOVE;Lu;0;L;0049 0307;;;;N;LATIN CAPITAL LETTER I DOT;;;0069;
0131;LATIN SMALL LETTER DOTLESS I;Ll;0;L;;;;;N;;;0049;;0049

First note the differing UTF-8 octet lengths of U+0049 (0x49) and
U+0069 (0x69) versus U+0130 (0xC4 0xB0) and U+0131 (0xC4 0xB1).  This
implies that whole UTF-8 strings cannot be case-converted in place,
using the same memory buffer, and that the needed octet size of the
new buffer cannot merely be guessed (although there is a simple upper
bound of four times the size of the input, since the longest UTF-8
encoding of any character is four octets under RFC 3629).
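
A sketch of the conversion loop this implies, writing into a separate
buffer sized with the crude one-character-in, MB_CUR_MAX-octets-out
bound.  It assumes a UTF-8 locale and a towupper() that follows the
Unicode simple mappings:

#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <wchar.h>
#include <wctype.h>

/* Upper-case the UTF-8 string SRC into a freshly allocated buffer.
   Each input character occupies at least one octet and converts to
   one character of at most MB_CUR_MAX octets, hence the allocation.
   Returns NULL on invalid input or allocation failure.  */
static char *
utf8_toupper (const char *src)
{
  size_t n = strlen (src);
  char *dst = malloc (n * MB_CUR_MAX + 1);
  if (!dst)
    return NULL;
  mbstate_t mbs;
  memset (&mbs, 0, sizeof mbs);
  size_t di = 0;
  for (size_t si = 0; si < n; )
    {
      wchar_t wc;
      size_t in = mbrtowc (&wc, src + si, n - si, &mbs);
      if (in == 0 || in == (size_t) -1 || in == (size_t) -2)
        {
          free (dst);
          return NULL;
        }
      si += in;
      size_t out = wcrtomb (dst + di, towupper (wc), NULL);
      if (out == (size_t) -1)
        {
          free (dst);
          return NULL;
        }
      di += out;
    }
  dst[di] = '\0';
  return dst;
}

int
main (void)
{
  setlocale (LC_ALL, "");        /* expects a UTF-8 locale */
  char *up = utf8_toupper ("ı"); /* two octets in, one octet ("I") out */
  if (up)
    {
      printf ("%s (%zu octet(s))\n", up, strlen (up));
      free (up);
    }
  return 0;
}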

We have

lc(I) = i, uc(I) = I
lc(i) = i, uc(i) = I
lc(İ) = i, uc(İ) = İ
lc(ı) = ı, uc(ı) = I

where lc() and uc() denote lower-case and upper-case conversions.

There are several candidate --ignore-case logics (including the one
mandated by POSIX):

Using the

if (lc(input_wchar) == lc(pattern_wchar))

logic leads to the following matches:

  \in  I  i  İ  ı
pat\   ----------
"I" |  Y  Y  Y  n
"i" |  Y  Y  Y  n
"İ" |  Y  Y  Y  n
"ı" |  n  n  n  Y

There is a lack of symmetry between CAPITAL and SMALL LETTERs with
this.

Using the

if (uc(input_wchar) == uc(pattern_wchar))

logic leads to the following matches:

  \in  I  i  İ  ı
pat\   ----------
"I" |  Y  Y  n  Y
"i" |  Y  Y  n  Y
"İ" |  n  n  Y  n
"ı" |  Y  Y  n  Y

There is a lack of symmetry between CAPITAL and SMALL LETTERs with
this.

Using the

if (   lc(input_wchar) == lc(pattern_wchar)
    || uc(input_wchar) == uc(pattern_wchar))

logic leads to the following matches:

  \in  I  i  İ  ı
pat\   ----------
"I" |  Y  Y  Y  Y
"i" |  Y  Y  Y  Y
"İ" |  Y  Y  Y  n
"ı" |  Y  Y  n  Y

There is some elegance and symmetry with this. But there are
potentially two conversions to be made per input character. If the
pattern is pre-converted, two copies of it need to be kept and used in
a mutually coherent fashion.

Using the

if (      input_wchar  == pattern_wchar
    || lc(input_wchar) == pattern_wchar
    || uc(input_wchar) == pattern_wchar)

logic (as mandated by POSIX) leads to the following matches:

  \in  I  i  İ  ı
pat\   ----------
"I" |  Y  Y  n  Y
"i" |  Y  Y  Y  n
"İ" |  n  n  Y  n
"ı" |  n  n  n  Y

There is a different CAPITAL/SMALL symmetry with this. But there's
also a loss of pattern/input symmetry that's unique to it. Also there
are potentially two conversions to be made per input character.

Using the

if (lc(uc(input_wchar)) == lc(uc(pattern_wchar)))

logic leads to the following matches:

  \in  I  i  İ  ı
pat\   ----------
"I" |  Y  Y  Y  Y
"i" |  Y  Y  Y  Y
"İ" |  Y  Y  Y  Y
"ı" |  Y  Y  Y  Y

This shows total symmetry and transitivity (at least in this example
analysis).  There are two conversions to be made per input character,
but a single precomputed mapping that composes the two conversions
could be used instead.

Any optimization in the implementation of each logic must not change
its basic semantics.
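
The tables above can be reproduced mechanically.  Here is a small
sketch with the lc()/uc() mappings of this reduced repertoire
hard-coded (so it does not depend on any locale data); it prints, for
each candidate logic, which pattern/input pairs match:

#include <stdio.h>
#include <wchar.h>

/* U+0049 I, U+0069 i, U+0130 İ, U+0131 ı and their mappings above.  */
static const wchar_t repertoire[4] = { 0x0049, 0x0069, 0x0130, 0x0131 };

static wchar_t
lc (wchar_t c)
{
  return c == 0x0049 || c == 0x0130 ? 0x0069 : c;
}

static wchar_t
uc (wchar_t c)
{
  return c == 0x0069 || c == 0x0131 ? 0x0049 : c;
}

/* The candidate --ignore-case logics, in the order discussed above.  */
static int
match (int logic, wchar_t in, wchar_t pat)
{
  switch (logic)
    {
    case 1:  return lc (in) == lc (pat);
    case 2:  return uc (in) == uc (pat);
    case 3:  return lc (in) == lc (pat) || uc (in) == uc (pat);
    case 4:  return in == pat || lc (in) == pat || uc (in) == pat; /* POSIX */
    default: return lc (uc (in)) == lc (uc (pat));
    }
}

int
main (void)
{
  for (int logic = 1; logic <= 5; logic++)
    {
      printf ("logic %d:\n", logic);
      for (int p = 0; p < 4; p++)
        {
          printf ("  pattern U+%04X:", (unsigned int) repertoire[p]);
          for (int i = 0; i < 4; i++)
            printf (" %c", match (logic, repertoire[i], repertoire[p])
                           ? 'Y' : 'n');
          putchar ('\n');
        }
    }
  return 0;
}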


Unicode and --ignore-case
-------------------------

For this issue, interesting things to check in Unicode include:

  -- The Unicode Standard, Chapter 3
     (<http://www.unicode.org/versions/Unicode5.2.0/ch03.pdf>
     “Conformance”), Section 3.13 (“Default Case Operations”) and the
     toCasefold() case conversion operation.

  -- The Unicode Standard, Chapter 4
     (<http://www.unicode.org/versions/Unicode5.2.0/ch04.pdf>
     “Character Properties”), Section 4.2 (“Case—Normative”) and
     the <http://www.unicode.org/Public/UNIDATA/SpecialCasing.txt>
     SpecialCasing.txt and
     <http://www.unicode.org/Public/UNIDATA/CaseFolding.txt>
     CaseFolding.txt files from the
     <http://www.unicode.org/Public/UNIDATA/UCD.html> Unicode
     Character Database.

  -- The <http://www.unicode.org/standard/standard.html> Unicode Standard,
     Chapter 5 (<http://www.unicode.org/versions/Unicode5.2.0/ch05.pdf>
     “Implementation Guidelines”), Section 5.18 (“Case Mappings”),
     Subsection “Caseless Matching”.

  -- The Unicode <http://www.unicode.org/charts/case/> case charts.

Unicode uses the

if (toCasefold(input_wchar_string) == toCasefold(pattern_wchar_string))

logic for caseless matching. Let's consider the “LATIN LETTER I”
example mentioned above. In a non-Turkic locale, simple case folding
yields

toCasefold_simple(U+0049) = U+0069
toCasefold_simple(U+0069) = U+0069
toCasefold_simple(U+0130) = U+0130
toCasefold_simple(U+0131) = U+0131

which leads to the following matches:

  \in  I  i  İ  ı
pat\   ----------
"I" |  Y  Y  n  n
"i" |  Y  Y  n  n
"İ" |  n  n  Y  n
"ı" |  n  n  n  Y

This is different from anything so far!

In a non-Turkic locale, full case folding yields

toCasefold_full(U+0049) = U+0069
toCasefold_full(U+0069) = U+0069
toCasefold_full(U+0130) = <U+0069, U+0307>
toCasefold_full(U+0131) = U+0131

with

0307;COMBINING DOT ABOVE;Mn;230;NSM;;;;;N;NON-SPACING DOT ABOVE;;;;

which leads to the following matches:

  \in  I  i  İ  ı
pat\   ----------
"I" |  Y  Y  *  n
"i" |  Y  Y  *  n
"İ" |  n  n  Y  n
"ı" |  n  n  n  Y

(Here “*” means the folded pattern matches only the U+0069 part of
İ's two-character folding, i.e. only part of a character.)  This is
just sad!

Note that having toCasefold(U+0131), simple or full, map to itself
instead of U+0069 is in contradiction with the rules of Section 5.18
of the Unicode Standard since toUpperCase(U+0131) is U+0049. Same
thing for toCasefold_simple(U+0130), since toLowerCase(U+0130) is
U+0069. The justification for the weird toCasefold_full(U+0130)
mapping is unknown; it doesn't even make sense to add a dot (U+0307)
to a letter that already has one (U+0069). It would have been so
simple to put them all in the same equivalence class!

Also consider the following problem, with Unicode's approach to case
folding in mind.  Assume that we want to perform

echo 'AßBC' | grep -i 'Sb'

which corresponds to

input:    U+0041 U+00DF U+0042 U+0043 U+000A
pattern:  U+0053 U+0062

Following “CaseFolding-4.1.0.txt”, applying the toCasefold()
transformation to these yields

input:    U+0061 U+0073 U+0073 U+0062 U+0063 U+000A
pattern:                U+0073 U+0062

so, according to this approach, the input should match the pattern. As
long as the original input line is to be reported to the user as a
whole, there is no problem (from the user's point-of-view;
implementation is complicated by this).

However, consider both these GNU extensions:

echo 'AßBC' | grep -i --only-matching 'Sb'
echo 'AßBC' | grep -i --color=always  'Sb'

What is to be reported in these cases, since the match begins in the
*middle* of the original input character 'ß'?
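
A sketch of the bookkeeping involved, with toCasefold_full() hard-coded
for just the characters of this example (a real implementation would
use the CaseFolding.txt data, e.g. through GNU libunistring): the match
is found in folded space and must then be mapped back to original
characters, and here it starts at the second folded character produced
by 'ß':

#include <stdio.h>
#include <wchar.h>

/* toCasefold_full() for the characters used in this example only.  */
static const wchar_t *
fold_full (wchar_t c)
{
  switch (c)
    {
    case L'A': case L'a': return L"a";
    case L'B': case L'b': return L"b";
    case L'C': case L'c': return L"c";
    case L'S': case L's': return L"s";
    case 0x00DF:          return L"ss";   /* ß folds to "ss" */
    default:              return L"";     /* not needed here */
    }
}

int
main (void)
{
  const wchar_t *input = L"A\x00DF" L"BC";    /* AßBC */
  const wchar_t *pattern = L"Sb";

  /* Fold the input, remembering which original character each folded
     position comes from.  */
  wchar_t folded[32];
  int origin[32];
  size_t fn = 0;
  for (size_t i = 0; input[i]; i++)
    for (const wchar_t *f = fold_full (input[i]); *f; f++)
      {
        folded[fn] = *f;
        origin[fn++] = (int) i;
      }
  folded[fn] = L'\0';

  /* Fold the pattern the same way.  */
  wchar_t fpat[8];
  size_t pn = 0;
  for (size_t i = 0; pattern[i]; i++)
    for (const wchar_t *f = fold_full (pattern[i]); *f; f++)
      fpat[pn++] = *f;
  fpat[pn] = L'\0';

  /* folded is "assbc", fpat is "sb": the match starts at the second
     's', i.e. inside the folding of the single input character ß.  */
  const wchar_t *hit = wcsstr (folded, fpat);
  if (hit)
    {
      size_t start = (size_t) (hit - folded);
      printf ("folded positions %zu..%zu, original characters %d..%d\n",
              start, start + pn - 1, origin[start], origin[start + pn - 1]);
    }
  return 0;
}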

Note that Unicode's toCasefold() cannot be implemented in terms of
POSIX' towctrans() since that can only return a single wint_t value
per input wint_t value.