====================
Benchmarks and Speed
====================

:Author:
  Stefan Behnel

.. meta::
  :description: Performance evaluation of lxml and ElementTree:
                fast operations, common pitfalls and optimisation hints.
  :keywords: Python XML parser performance, XML processing, performance comparison,
             lxml performance, lxml.etree, lxml.objectify, benchmarks, ElementTree


lxml.etree is a very fast XML library.  Most of this is due to the
speed of libxml2, e.g. its parser, serialiser and XPath engine.
Other areas of lxml were specifically written for high performance in
high-level operations, such as the tree iterators.

On the other hand, the simplicity of lxml sometimes hides internal
operations that are more costly than the API suggests.  If you are not
aware of these cases, lxml may not always perform as you expect.  A
common example in the Python world is the Python list type.  New users
often expect it to be a linked list, while it actually is implemented
as an array, which results in a completely different complexity for
common operations.

Similarly, the tree model of libxml2 is more complex than what lxml's
ElementTree API projects into Python space, so some operations may
show unexpected performance.  Rest assured that most lxml users will
not notice this in real life, as lxml is very fast in absolute
numbers.  It is definitely fast enough for most applications, so lxml
is probably somewhere between 'fast enough' and 'the best choice' for
yours.  Read some messages_ from happy_ users_ to see what we mean.

.. _messages: http://permalink.gmane.org/gmane.comp.python.lxml.devel/3250
.. _happy: http://article.gmane.org/gmane.comp.python.lxml.devel/3246
.. _users: http://thread.gmane.org/gmane.comp.python.lxml.devel/3244/focus=3244

This text describes where lxml.etree (abbreviated to 'lxe') excels, gives
hints on some performance traps and compares the overall performance to the
original ElementTree_ (ET) and cElementTree_ (cET) libraries by Fredrik Lundh.
The cElementTree library is a fast C-implementation of the original
ElementTree.

.. _ElementTree:  http://effbot.org/zone/element-index.htm
.. _cElementTree: http://effbot.org/zone/celementtree.htm

.. contents::
.. 
   1  How to read the timings
   2  Bad things first
   3  Parsing and Serialising
   4  The ElementTree API
   5  Tree traversal
   6  XPath
   7  lxml.objectify


General notes
=============

First thing to say: there *is* an overhead involved in having a DOM-like C
library mimic the ElementTree API.  As opposed to ElementTree, lxml has to
generate Python representations of tree nodes on the fly when asked for them,
and the internal tree structure of libxml2 results in a higher maintenance
overhead than the simpler top-down structure of ElementTree.  What this means
is: the more of your code runs in Python, the less you can benefit from the
speed of lxml and libxml2.  Note, however, that this is true for most
performance-critical Python applications.  No one would implement Fourier
transforms in pure Python when NumPy is available.

The upside, then, is that lxml provides powerful tools like tree iterators,
XPath and XSLT, that can handle complex operations at the speed of C.  Their
pythonic API in lxml makes them so flexible that most applications can easily
benefit from them.


How to read the timings
=======================

The statements made here are backed by the (micro-)benchmark scripts
`bench_etree.py`_, `bench_xpath.py`_ and `bench_objectify.py`_ that come with
the lxml source distribution.  They are distributed under the same BSD license
as lxml itself, and the lxml project would like to promote them as a general
benchmarking suite for all ElementTree implementations.  New benchmarks are
very easy to add as tiny test methods, so if you write a performance test for
a specific part of the API yourself, please consider sending it to the lxml
mailing list.

The timings presented below compare lxml 3.1.1 (with libxml2 2.9.0) to the
latest released versions of ElementTree (with cElementTree as accelerator
module) in the standard library of CPython 3.3.0.  They were run
single-threaded on a 2.9GHz 64-bit dual-core Intel i7 machine under
Ubuntu Linux 12.10 (Quantal).  The C libraries were compiled with the
same platform specific optimisation flags.  The Python interpreter was
also manually compiled for the platform.  Note that many of the following
ElementTree timings are therefore better than what a normal Python
installation with the standard library (c)ElementTree modules would yield.
Note also that CPython 2.7 and 3.2+ ship a newer ElementTree version,
so older Python installations will not perform as well for (c)ElementTree,
and sometimes substantially worse.

.. _`bench_etree.py`:     https://github.com/lxml/lxml/blob/master/benchmark/bench_etree.py
.. _`bench_xpath.py`:     https://github.com/lxml/lxml/blob/master/benchmark/bench_xpath.py
.. _`bench_objectify.py`: https://github.com/lxml/lxml/blob/master/benchmark/bench_objectify.py

The scripts run a number of simple tests on the different libraries, using
different XML tree configurations: different tree sizes (T1-4), with or
without attributes (-/A), with no text, plain ASCII string text or unicode
text (-/S/U), and either against a tree or its serialised XML form (T/X).  In the
result extracts cited below, T1 refers to a 3-level tree with many children at
the third level, T2 is swapped around to have many children below the root
element, T3 is a deep tree with few children at each level and T4 is a small
tree, slightly broader than deep.  If repetition is involved, this usually
means running the benchmark in a loop over all children of the tree root,
otherwise, the operation is run on the root node (C/R).

As an example, the character code ``(SATR T1)`` states that the benchmark was
running for tree T1, with plain string text (S) and attributes (A).  It was
run against the root element (R) in the tree structure of the data (T).

Note that very small operations are repeated in integer loops to make them
measurable.  It is therefore not always possible to compare the absolute
timings of, say, a single access benchmark (which usually loops) and a 'get
all in one step' benchmark, which already takes enough time to be measurable
and is therefore measured as is.  An example is the index access to a single
child, which cannot be compared to the timings for ``getchildren()``.  Take a
look at the concrete benchmarks in the scripts to understand how the numbers
compare.


Parsing and Serialising
=======================

Serialisation is an area where lxml excels.  The reason is that it
executes entirely at the C level, without any interaction with Python
code.  The results are rather impressive, especially for UTF-8, which
is native to libxml2.  While 20 to 40 times faster than (c)ElementTree
1.2 (which was part of the standard library before Python 2.7/3.2),
lxml is still more than 10 times as fast as the much improved
ElementTree 1.3 in recent Python versions::

  lxe: tostring_utf16  (S-TR T1)    7.9958 msec/pass
  cET: tostring_utf16  (S-TR T1)   83.1358 msec/pass

  lxe: tostring_utf16  (UATR T1)    8.3222 msec/pass
  cET: tostring_utf16  (UATR T1)   84.4688 msec/pass

  lxe: tostring_utf16  (S-TR T2)    8.2297 msec/pass
  cET: tostring_utf16  (S-TR T2)   87.3415 msec/pass

  lxe: tostring_utf8   (S-TR T2)    6.5677 msec/pass
  cET: tostring_utf8   (S-TR T2)   76.2064 msec/pass

  lxe: tostring_utf8   (U-TR T3)    1.1952 msec/pass
  cET: tostring_utf8   (U-TR T3)   22.0058 msec/pass

The difference is somewhat smaller for plain text serialisation::

  lxe: tostring_text_ascii     (S-TR T1)    2.7738 msec/pass
  cET: tostring_text_ascii     (S-TR T1)    4.7629 msec/pass

  lxe: tostring_text_ascii     (S-TR T3)    0.8273 msec/pass
  cET: tostring_text_ascii     (S-TR T3)    1.5273 msec/pass

  lxe: tostring_text_utf16     (S-TR T1)    2.7659 msec/pass
  cET: tostring_text_utf16     (S-TR T1)   10.5038 msec/pass

  lxe: tostring_text_utf16     (U-TR T1)    2.8017 msec/pass
  cET: tostring_text_utf16     (U-TR T1)   10.5207 msec/pass

The ``tostring()`` function also supports serialisation to a Python
unicode string object, which is currently faster in ElementTree
under CPython 3.3::

  lxe: tostring_text_unicode   (S-TR T1)    2.6896 msec/pass
  cET: tostring_text_unicode   (S-TR T1)    1.0056 msec/pass

  lxe: tostring_text_unicode   (U-TR T1)    2.7366 msec/pass
  cET: tostring_text_unicode   (U-TR T1)    1.0154 msec/pass

  lxe: tostring_text_unicode   (S-TR T3)    0.7997 msec/pass
  cET: tostring_text_unicode   (S-TR T3)    0.3154 msec/pass

  lxe: tostring_text_unicode   (U-TR T4)    0.0048 msec/pass
  cET: tostring_text_unicode   (U-TR T4)    0.0160 msec/pass

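As a point of reference, both the byte and text serialisations above go
through the same ``tostring()`` call; a minimal sketch (the element names
are invented for illustration):

.. sourcecode:: python

  from lxml import etree

  root = etree.XML("<root><child>text</child></root>")

  # Serialise to a byte string; the encoding argument selects
  # the byte encoding used for the output.
  data = etree.tostring(root, encoding="UTF-8", xml_declaration=True)

  # encoding='unicode' returns a Python text string instead of
  # bytes, skipping the byte encoding step entirely.
  text = etree.tostring(root, encoding="unicode")
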
For parsing, lxml.etree and cElementTree compete for the medal.
Depending on the input, either of the two can be faster.  The (c)ET
libraries use a very thin layer on top of the expat parser, which is
known to be very fast.  Here are some timings from the benchmarking
suite::

  lxe: parse_bytesIO   (SAXR T1)   13.0246 msec/pass
  cET: parse_bytesIO   (SAXR T1)    8.2929 msec/pass

  lxe: parse_bytesIO   (S-XR T3)    1.3542 msec/pass
  cET: parse_bytesIO   (S-XR T3)    2.4023 msec/pass

  lxe: parse_bytesIO   (UAXR T3)    7.5610 msec/pass
  cET: parse_bytesIO   (UAXR T3)   11.2455 msec/pass

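The ``parse_bytesIO`` benchmarks read serialised XML from an in-memory
file object; the equivalent user code looks roughly like this (the
document content is invented for illustration):

.. sourcecode:: python

  from io import BytesIO
  from lxml import etree

  xml = b"<root><a/><b/></root>"

  # Parse from a file-like object, as the benchmarks do ...
  tree = etree.parse(BytesIO(xml))

  # ... or parse a byte string directly.
  root = etree.fromstring(xml)
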
And another couple of timings `from a benchmark`_ that Fredrik Lundh
`used to promote cElementTree`_, comparing a number of different
parsers.  First, parsing a 274KB XML file containing Shakespeare's
Hamlet::

  xml.etree.ElementTree.parse done in 0.017 seconds
  xml.etree.cElementTree.parse done in 0.007 seconds
  xml.etree.cElementTree.XMLParser.feed(): 6636 nodes read in 0.007 seconds
  lxml.etree.parse done in 0.003 seconds
  drop_whitespace.parse done in 0.003 seconds
  lxml.etree.XMLParser.feed(): 6636 nodes read in 0.004 seconds
  minidom tree read in 0.080 seconds

And a 3.4MB XML file containing the Old Testament::

  xml.etree.ElementTree.parse done in 0.038 seconds
  xml.etree.cElementTree.parse done in 0.030 seconds
  xml.etree.cElementTree.XMLParser.feed(): 25317 nodes read in 0.030 seconds
  lxml.etree.parse done in 0.016 seconds
  drop_whitespace.parse done in 0.015 seconds
  lxml.etree.XMLParser.feed(): 25317 nodes read in 0.022 seconds
  minidom tree read in 0.288 seconds

.. _`from a benchmark`: http://svn.effbot.org/public/elementtree-1.3/benchmark.py
.. _`used to promote cElementTree`: http://effbot.org/zone/celementtree.htm#benchmarks

Here are the same benchmarks again, but including the memory usage
of the process in KB before and after parsing (using os.fork() to
make sure we start from a clean state each time).  For the 274KB
hamlet.xml file::

  Memory usage: 7284
  xml.etree.ElementTree.parse done in 0.017 seconds
  Memory usage: 9432 (+2148)
  xml.etree.cElementTree.parse done in 0.007 seconds
  Memory usage: 9432 (+2152)
  xml.etree.cElementTree.XMLParser.feed(): 6636 nodes read in 0.007 seconds
  Memory usage: 9448 (+2164)
  lxml.etree.parse done in 0.003 seconds
  Memory usage: 11032 (+3748)
  drop_whitespace.parse done in 0.003 seconds
  Memory usage: 10224 (+2940)
  lxml.etree.XMLParser.feed(): 6636 nodes read in 0.004 seconds
  Memory usage: 11804 (+4520)
  minidom tree read in 0.080 seconds
  Memory usage: 12324 (+5040)

And for the 3.4MB Old Testament XML file::

  Memory usage: 10420
  xml.etree.ElementTree.parse done in 0.038 seconds
  Memory usage: 20660 (+10240)
  xml.etree.cElementTree.parse done in 0.030 seconds
  Memory usage: 20660 (+10240)
  xml.etree.cElementTree.XMLParser.feed(): 25317 nodes read in 0.030 seconds
  Memory usage: 20844 (+10424)
  lxml.etree.parse done in 0.016 seconds
  Memory usage: 27624 (+17204)
  drop_whitespace.parse done in 0.015 seconds
  Memory usage: 24468 (+14052)
  lxml.etree.XMLParser.feed(): 25317 nodes read in 0.022 seconds
  Memory usage: 29844 (+19424)
  minidom tree read in 0.288 seconds
  Memory usage: 28788 (+18368)

As can be seen from the sizes, both lxml.etree and cElementTree are
rather memory friendly compared to the pure Python libraries
ElementTree and (especially) minidom.  Compared to older CPython
versions, the memory footprint of the minidom library was considerably
reduced in CPython 3.3, by about a factor of 4 in this case.

For plain parser performance, lxml.etree and cElementTree tend to stay
rather close to each other, usually within a factor of two, with
winners well distributed over both sides.  Similar timings can be
observed for the ``iterparse()`` function::

  lxe: iterparse_bytesIO   (SAXR T1)   17.9198 msec/pass
  cET: iterparse_bytesIO   (SAXR T1)   14.4982 msec/pass

  lxe: iterparse_bytesIO   (UAXR T3)    8.8522 msec/pass
  cET: iterparse_bytesIO   (UAXR T3)   12.9857 msec/pass

However, if you benchmark the complete round-trip of a serialise-parse
cycle, the numbers will look similar to these::

  lxe: write_utf8_parse_bytesIO   (S-TR T1)   19.8867 msec/pass
  cET: write_utf8_parse_bytesIO   (S-TR T1)   80.7259 msec/pass

  lxe: write_utf8_parse_bytesIO   (UATR T2)   23.7896 msec/pass
  cET: write_utf8_parse_bytesIO   (UATR T2)   98.0766 msec/pass

  lxe: write_utf8_parse_bytesIO   (S-TR T3)    3.0684 msec/pass
  cET: write_utf8_parse_bytesIO   (S-TR T3)   24.6122 msec/pass

  lxe: write_utf8_parse_bytesIO   (SATR T4)    0.3495 msec/pass
  cET: write_utf8_parse_bytesIO   (SATR T4)    1.9610 msec/pass

For applications that require a high parser throughput of large files,
and that do little to no serialization, both cET and lxml.etree are a
good choice.  The cET library is particularly fast for iterparse
applications that extract small amounts of data or aggregate
information from large XML data sets that do not fit into memory.  When
it comes to round-trip performance, however, lxml is multiple times
faster in total.  So, whenever the input documents are not
considerably larger than the output, lxml is the clear winner.

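For the iterparse use case mentioned above, a commonly recommended
pattern keeps memory consumption flat by clearing elements once they
have been processed; a sketch with invented data:

.. sourcecode:: python

  from io import BytesIO
  from lxml import etree

  xml = b"<root>" + b"<item>data</item>" * 1000 + b"</root>"

  count = 0
  # lxml's iterparse() accepts a tag filter, so only matching
  # elements are handed to Python code.
  for event, elem in etree.iterparse(BytesIO(xml), tag="item"):
      count += 1
      elem.clear()  # release the element's own content
      # drop cleared siblings that the root still references
      while elem.getprevious() is not None:
          del elem.getparent()[0]
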
Regarding HTML parsing, Ian Bicking has done some `benchmarking on
lxml's HTML parser`_, comparing it to a number of other famous HTML
parser tools for Python.  lxml wins this contest by quite a length.
To give an idea, the numbers suggest that lxml.html can run a couple
of parse-serialise cycles in the time that other tools need for
parsing alone.  The comparison even shows some very favourable results
regarding memory consumption.

.. _`benchmarking on lxml's HTML parser`: http://blog.ianbicking.org/2008/03/30/python-html-parser-performance/

Liza Daly has written an article that presents a couple of tweaks to
get the most out of lxml's parser for very large XML documents.  She
quite favourably positions ``lxml.etree`` as a tool for
`high-performance XML parsing`_.

.. _`high-performance XML parsing`: http://www.ibm.com/developerworks/xml/library/x-hiperfparse/

Finally, `xml.com`_ has a couple of publications about XML parser
performance.  Farwick and Hafner have written two interesting articles
that compare the parser of libxml2 to some major Java based XML
parsers.  One deals with `event-driven parser performance`_, the other
one presents `benchmark results comparing DOM parsers`_.  Both
comparisons suggest that libxml2's parser performance is clearly
superior to all commonly used Java parsers in almost all cases.  Note
that the C parser benchmark results are based on xmlbench_, which uses
a simpler setup for libxml2 than lxml does.

.. _`xml.com`: http://www.xml.com/
.. _`event-driven parser performance`: http://www.xml.com/lpt/a/1702
.. _`benchmark results comparing DOM parsers`: http://www.xml.com/lpt/a/1703
.. _xmlbench: http://xmlbench.sourceforge.net/


The ElementTree API
===================

Since all three libraries implement the same API, their performance is
easy to compare in this area.  A major disadvantage for lxml's
performance is the different tree model that underlies libxml2.  It
allows lxml to provide parent pointers for elements and full XPath
support, but also increases the overhead of tree building and
restructuring.  This can be seen from the tree setup times of the
benchmark (given in seconds)::

  lxe:       --     S-     U-     -A     SA     UA
       T1: 0.0299 0.0343 0.0344 0.0293 0.0345 0.0342
       T2: 0.0368 0.0423 0.0418 0.0427 0.0474 0.0459
       T3: 0.0088 0.0084 0.0086 0.0251 0.0258 0.0261
       T4: 0.0002 0.0002 0.0002 0.0005 0.0006 0.0006
  cET:       --     S-     U-     -A     SA     UA
       T1: 0.0050 0.0045 0.0093 0.0044 0.0043 0.0043
       T2: 0.0073 0.0075 0.0074 0.0201 0.0075 0.0074
       T3: 0.0033 0.0213 0.0032 0.0034 0.0033 0.0035
       T4: 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000

The timings are somewhat close to each other, although cET can be
several times faster than lxml for larger trees.  One of the
reasons is that lxml must encode incoming string data and tag names
into UTF-8, and additionally discard the created Python elements
after their use, when they are no longer referenced.  ElementTree
represents the tree itself through these objects, which reduces
the overhead in creating them.


Child access
------------

The same tree overhead makes operations like collecting children as in
``list(element)`` more costly in lxml.  Where cET can quickly create
a shallow copy of its list of children, lxml has to create a Python
object for each child and collect them in a list::

  lxe: root_list_children        (--TR T1)    0.0038 msec/pass
  cET: root_list_children        (--TR T1)    0.0010 msec/pass

  lxe: root_list_children        (--TR T2)    0.0455 msec/pass
  cET: root_list_children        (--TR T2)    0.0050 msec/pass

This handicap is also visible when accessing single children::

  lxe: first_child               (--TR T2)    0.0424 msec/pass
  cET: first_child               (--TR T2)    0.0384 msec/pass

  lxe: last_child                (--TR T1)    0.0477 msec/pass
  cET: last_child                (--TR T1)    0.0467 msec/pass

... unless you also add the time it takes to reach a child at a given
index in a longer list.  ET and cET use Python lists here, which are
backed by arrays.
The data structure used by libxml2 is a linked tree, and thus, a
linked list of children::

  lxe: middle_child              (--TR T1)    0.0710 msec/pass
  cET: middle_child              (--TR T1)    0.0420 msec/pass

  lxe: middle_child              (--TR T2)    1.7393 msec/pass
  cET: middle_child              (--TR T2)    0.0396 msec/pass

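Because of the linked-list storage, indexed access to the middle of a
long child list is a linear walk in lxml, so repeated indexing is best
avoided; a sketch:

.. sourcecode:: python

  from lxml import etree

  root = etree.Element("root")
  for i in range(10000):
      etree.SubElement(root, "child").text = str(i)

  # Iteration visits each child exactly once.
  texts = [child.text for child in root]

  # By contrast, root[i] has to walk the linked child list, so a
  # loop over root[i] for all i would be quadratic in lxml.
  middle = root[5000].text
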

Element creation
----------------

As opposed to ET, libxml2 has a notion of documents that each element must be
in.  This results in a major performance difference for creating independent
Elements that end up in independently created documents::

  lxe: create_elements           (--TC T2)    1.0045 msec/pass
  cET: create_elements           (--TC T2)    0.0753 msec/pass

Therefore, it is always preferable to create Elements for the document they
are supposed to end up in, either as SubElements of an Element or using the
explicit ``Element.makeelement()`` call::

  lxe: makeelement               (--TC T2)    1.0586 msec/pass
  cET: makeelement               (--TC T2)    0.1483 msec/pass

  lxe: create_subelements        (--TC T2)    0.8826 msec/pass
  cET: create_subelements        (--TC T2)    0.0827 msec/pass

So, if the main performance bottleneck of an application is creating large XML
trees in memory through calls to Element and SubElement, cET is the best
choice.  Note, however, that the serialisation performance may even out this
advantage, especially for smaller trees and trees with many attributes.

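In code, the document-friendly creation variants look like this; a
minimal sketch with invented tag names:

.. sourcecode:: python

  from lxml import etree

  root = etree.Element("root")

  # SubElement() creates the child directly inside root's
  # document and appends it in one step.
  child = etree.SubElement(root, "child")

  # makeelement() also creates the element in the same document,
  # but leaves appending it to the caller.
  other = root.makeelement("other", {})
  root.append(other)
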

Merging different sources
-------------------------

A critical action for lxml is moving elements between document contexts.  It
requires lxml to do recursive adaptations throughout the moved tree structure.

The following benchmark appends all root children of the second tree to the
root of the first tree::

  lxe: append_from_document      (--TR T1,T2)    1.0812 msec/pass
  cET: append_from_document      (--TR T1,T2)    0.1104 msec/pass

  lxe: append_from_document      (--TR T3,T4)    0.0155 msec/pass
  cET: append_from_document      (--TR T3,T4)    0.0060 msec/pass

Although these are fairly small numbers compared to parsing, this easily shows
the different performance classes for lxml and (c)ET.  Where the latter do not
have to care about parent pointers and tree structures, lxml has to do a
deep traversal of the appended tree.  The performance difference therefore
increases with the size of the tree that is moved.

This difference is not always as visible, but applies to most parts of the
API, like inserting newly created elements::

  lxe: insert_from_document         (--TR T1,T2)    3.9763 msec/pass
  cET: insert_from_document         (--TR T1,T2)    0.1459 msec/pass

or replacing the child slice by a newly created element::

  lxe: replace_children_element   (--TC T1)    0.0749 msec/pass
  cET: replace_children_element   (--TC T1)    0.0081 msec/pass

as opposed to replacing the slice with an existing element from the
same document::

  lxe: replace_children           (--TC T1)    0.0052 msec/pass
  cET: replace_children           (--TC T1)    0.0036 msec/pass

While these numbers are too small to provide a major performance
impact in practice, you should keep this difference in mind when you
merge very large trees.  Note that Elements have a ``makeelement()``
method that allows you to create an Element within the same document,
thus avoiding the merge overhead when inserting it into that tree.


deepcopy
--------

Deep copying a tree is fast in lxml::

  lxe: deepcopy_all              (--TR T1)    3.1650 msec/pass
  cET: deepcopy_all              (--TR T1)   53.9973 msec/pass

  lxe: deepcopy_all              (-ATR T2)    3.7365 msec/pass
  cET: deepcopy_all              (-ATR T2)   61.6267 msec/pass

  lxe: deepcopy_all              (S-TR T3)    0.7913 msec/pass
  cET: deepcopy_all              (S-TR T3)   13.6220 msec/pass

So, for example, if you have a database-like scenario where you parse in a
large tree and then search and copy independent subtrees from it for further
processing, lxml is by far the best choice here.

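Such a database-like scenario might look roughly like this (the document
structure is invented for illustration):

.. sourcecode:: python

  from copy import deepcopy
  from lxml import etree

  doc = etree.XML(
      "<db><rec id='1'><v>a</v></rec><rec id='2'><v>b</v></rec></db>")

  # Copy an independent subtree out of the larger document; the
  # copy lives in its own document and has no parent.
  rec = deepcopy(doc.find("rec[@id='2']"))
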

Tree traversal
--------------

Another important area in XML processing is iteration for tree
traversal.  If your algorithms can benefit from step-by-step
traversal of the XML tree and especially if few elements are of
interest or the target element tag name is known, the ``.iter()``
method is a good choice::

  lxe: iter_all             (--TR T1)    1.2021 msec/pass
  cET: iter_all             (--TR T1)    0.2649 msec/pass

  lxe: iter_islice          (--TR T2)    0.0119 msec/pass
  cET: iter_islice          (--TR T2)    0.0050 msec/pass

  lxe: iter_tag             (--TR T2)    0.0112 msec/pass
  cET: iter_tag             (--TR T2)    0.0112 msec/pass

  lxe: iter_tag_all         (--TR T2)    0.1838 msec/pass
  cET: iter_tag_all         (--TR T2)    0.5472 msec/pass

This translates directly into similar timings for ``Element.findall()``::

  lxe: findall              (--TR T2)    2.6150 msec/pass
  cET: findall              (--TR T2)    0.9973 msec/pass

  lxe: findall              (--TR T3)    0.5975 msec/pass
  cET: findall              (--TR T3)    0.2525 msec/pass

  lxe: findall_tag          (--TR T2)    0.2692 msec/pass
  cET: findall_tag          (--TR T2)    0.5770 msec/pass

  lxe: findall_tag          (--TR T3)    0.1111 msec/pass
  cET: findall_tag          (--TR T3)    0.1919 msec/pass

Note that all three libraries currently share the same Python
implementation of ``.findall()``; the only difference is the native
tree iterator (``element.iter()``) that it is built on.

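The tag-filtered and sliced iterations measured above correspond to code
along these lines (tag names invented):

.. sourcecode:: python

  from itertools import islice
  from lxml import etree

  root = etree.XML("<root><a/><b/><a/><b/><a/></root>")

  # iter(tag) yields only the matching elements ...
  tags = [el.tag for el in root.iter("a")]

  # ... and, being an iterator, it can stop early without
  # visiting the rest of the tree.
  first_two = list(islice(root.iter("a"), 2))
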

XPath
=====

The following timings are based on the benchmark script `bench_xpath.py`_.

This part of lxml does not have an equivalent in ElementTree.  However, lxml
provides more than one way of accessing it and you should take care which part
of the lxml API you use.  The most straightforward way is to call the
``xpath()`` method on an Element or ElementTree::

  lxe: xpath_method         (--TC T1)    0.5553 msec/pass
  lxe: xpath_method         (--TC T2)    8.1232 msec/pass
  lxe: xpath_method         (--TC T3)    0.0479 msec/pass
  lxe: xpath_method         (--TC T4)    0.3920 msec/pass

This is well suited for testing and when the XPath expressions are as diverse
as the trees they are called on.  However, if you have a single XPath
expression that you want to apply to a larger number of different elements,
the ``XPath`` class is the most efficient way to do it::

  lxe: xpath_class          (--TC T1)    0.2112 msec/pass
  lxe: xpath_class          (--TC T2)    1.0867 msec/pass
  lxe: xpath_class          (--TC T3)    0.0203 msec/pass
  lxe: xpath_class          (--TC T4)    0.0594 msec/pass

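A compiled ``XPath`` object is parsed once and can then be evaluated
against many elements; a minimal sketch with invented content:

.. sourcecode:: python

  from lxml import etree

  root = etree.XML("<root><v>a</v><v>b</v><v>c</v></root>")

  # Parse the expression once ...
  find = etree.XPath("//v[text() = $t]")

  # ... then evaluate it repeatedly, binding the XPath variable
  # $t as a keyword argument at call time.
  matches = find(root, t="b")
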
Note that this still allows you to use variables in the expression, so you can
parse it once and then adapt it through variables at call time.  In other
cases, where you have a fixed Element or ElementTree and want to run different
expressions on it, you should consider the ``XPathEvaluator``::

  lxe: xpath_element        (--TR T1)    0.1118 msec/pass
  lxe: xpath_element        (--TR T2)    5.1293 msec/pass
  lxe: xpath_element        (--TR T3)    0.0262 msec/pass
  lxe: xpath_element        (--TR T4)    0.1111 msec/pass

While it looks slightly slower, creating an XPath object for each of the
expressions generates a much higher overhead here::

  lxe: xpath_class_repeat           (--TC T1   )    0.5620 msec/pass
  lxe: xpath_class_repeat           (--TC T2   )    7.5161 msec/pass
  lxe: xpath_class_repeat           (--TC T3   )    0.0451 msec/pass
  lxe: xpath_class_repeat           (--TC T4   )    0.3688 msec/pass

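For comparison, an ``XPathEvaluator`` is bound to one context node and
then accepts arbitrary expressions; a sketch:

.. sourcecode:: python

  from lxml import etree

  root = etree.XML("<root><a><b/></a></root>")

  # One evaluator, fixed to root as its context node.
  ev = etree.XPathEvaluator(root)

  # Different expressions can now be run against it cheaply.
  count = ev("count(//b)")
  tags = [el.tag for el in ev("//a | //b")]
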
Also note that tree iteration can well be faster than XPath, and
sometimes substantially so.  Above all, it can efficiently
short-circuit after the first few elements have been found.  The
XPath engine will always return the complete result set, regardless
of how much of it will actually be used.

Here is an example where only the first matching element is being
searched, a case for which XPath has syntax support as well::

  lxe: find_single                (--TR T2)    0.0184 msec/pass
  cET: find_single                (--TR T2)    0.0052 msec/pass

  lxe: iter_single                (--TR T2)    0.0029 msec/pass
  cET: iter_single                (--TR T2)    0.0007 msec/pass

  lxe: xpath_single               (--TR T2)    0.0226 msec/pass

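In user code, the short-circuiting variants look like this (tag names
invented):

.. sourcecode:: python

  from lxml import etree

  root = etree.XML("<root><v>one</v><v>two</v></root>")

  # find() returns the first match and stops searching ...
  first = root.find("v")

  # ... as does next() on a tree iterator, here with a default
  # for the no-match case.
  first_iter = next(root.iter("v"), None)
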

A longer example
================

... based on lxml 1.3.

A while ago, Uche Ogbuji posted a `benchmark proposal`_ that would
read in a 3MB XML version of the `Old Testament`_ of the Bible and
look for the word *begat* in all verses.  Apparently, it is contained
in 120 out of almost 24000 verses.  This is easy to implement in
ElementTree using ``findall()``.  However, the fastest and most memory
friendly way to do this is obviously ``iterparse()``, as most of the
data is not of any interest.

.. _`benchmark proposal`: http://www.onlamp.com/pub/wlg/6291
.. _`Old Testament`: http://www.ibiblio.org/bosak/xml/eg/religion.2.00.xml.zip

Now, Uche's original proposal was more or less the following:

.. sourcecode:: python

  def bench_ET():
      tree = ElementTree.parse("ot.xml")
      result = []
      for v in tree.findall("//v"):
          text = v.text
          if 'begat' in text:
              result.append(text)
      return len(result)

which takes about one second on my machine today.  The faster ``iterparse()``
variant looks like this:

.. sourcecode:: python

  def bench_ET_iterparse():
      result = []
      for event, v in ElementTree.iterparse("ot.xml"):
          if v.tag == 'v':
              text = v.text
              if 'begat' in text:
                  result.append(text)
          v.clear()
      return len(result)

The improvement is about 10%.  At the time I first tried (early 2006), lxml
didn't have ``iterparse()`` support, but the ``findall()`` variant was already
faster than ElementTree.  This changes immediately when you switch to
cElementTree.  The latter needs only 0.17 seconds to do the trick today, and
an impressive 0.10 seconds when running the iterparse version.  And
even back then, it was quite a bit faster than what lxml could achieve.

Since then, lxml has matured a lot and has gotten much faster.  The iterparse
variant now runs in 0.14 seconds, and if you remove the ``v.clear()``, it is
even a little faster (which isn't the case for cElementTree).

One of the many great tools in lxml is XPath, a Swiss army knife for finding
things in XML documents.  It is possible to move the whole thing to a pure
XPath implementation, which looks like this:

.. sourcecode:: python

  def bench_lxml_xpath_all():
      tree = etree.parse("ot.xml")
      result = tree.xpath("//v[contains(., 'begat')]/text()")
      return len(result)

This runs in about 0.13 seconds and is about the shortest possible
implementation (in lines of Python code) that I could come up with.  Now, this
is already a rather complex XPath expression compared to the simple "//v"
ElementPath expression we started with.  Since this is also valid XPath, let's
try this instead:

.. sourcecode:: python

  def bench_lxml_xpath():
      tree = etree.parse("ot.xml")
      result = []
      for v in tree.xpath("//v"):
          text = v.text
          if 'begat' in text:
              result.append(text)
      return len(result)

This gets us down to 0.12 seconds, thus showing that a generic XPath
evaluation engine cannot always compete with a simpler, tailored solution.
However, since this is not much different from the original findall variant,
we can remove the complexity of the XPath call completely and just go with
what we had in the beginning.  Under lxml, this runs in the same 0.12 seconds.

But there is one thing left to try.  We can replace the simple ElementPath
expression with a native tree iterator:

.. sourcecode:: python

  def bench_lxml_getiterator():
      tree = etree.parse("ot.xml")
      result = []
      for v in tree.getiterator("v"):
          text = v.text
          if 'begat' in text:
              result.append(text)
      return len(result)

This implements the same thing, just without the overhead of parsing and
evaluating a path expression.  And this makes it another bit faster, down to
0.11 seconds.  For comparison, cElementTree runs this version in 0.17 seconds.

So, what have we learned?

* Python code is not slow.  The pure XPath solution was not even as fast as
  the first straightforward Python implementation.  In general, a few more
  lines of Python make things more readable, which is much more important
  than the last 5% of performance.

* It's important to know the available options - and it's worth starting with
  the simplest one.  In this case, a programmer would then probably have
  started with ``getiterator("v")`` or ``iterparse()``.  Either of them would
  already have been the most efficient, depending on which library is used.

* It's important to know your tool.  lxml and cElementTree are both very fast
  libraries, but they do not have the same performance characteristics.  The
  fastest solution in one library can be comparatively slow in the other.  If
  you optimise, optimise for the specific target platform.

* It's not always worth optimising.  After all that hassle, we got from 0.12
  seconds for the initial implementation down to 0.11 seconds.  Switching to
  cElementTree and writing an ``iterparse()`` based version would have given
  us 0.10 seconds - not a big difference for 3MB of XML.

* Be aware of which operation really dominates in your use case.  If we split
  up the operations, we can see that lxml is slightly slower than cElementTree
  on ``parse()`` (both about 0.06 seconds), but more visibly slower on
  ``iterparse()``: 0.07 versus 0.10 seconds.  However, tree iteration in lxml
  is incredibly fast, so it can be better to parse the whole tree and then
  iterate over it rather than using ``iterparse()`` to do both in one step.
  Or, you can just wait for the lxml developers to optimise iterparse in one
  of the next releases...


lxml.objectify
==============

The following timings are based on the benchmark script `bench_objectify.py`_.

Objectify is a data-binding API for XML based on lxml.etree that was added in
version 1.1.  It uses standard Python attribute access to traverse the XML
tree.  It also features ObjectPath, a fast path language based on the same
idea.

Just like lxml.etree, lxml.objectify creates Python representations of
elements on the fly.  To save memory, the normal Python garbage collection
mechanisms will discard them when their last reference is gone.  In cases
where deeply nested elements are frequently accessed through the objectify
API, the create-discard cycles can become a bottleneck, as elements have to be
instantiated over and over again.
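
To illustrate the attribute access style, here is a minimal sketch (the
document structure and element names are made up for this example):

.. sourcecode:: python

  from lxml import objectify

  # Parse a small document; objectify maps child elements to attributes.
  root = objectify.fromstring(
      "<doc><book><title>Genesis</title><verses>1533</verses></book></doc>")

  # Each attribute access instantiates a Python proxy object on the fly.
  print(root.book.title)        # Genesis
  print(int(root.book.verses))  # 1533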


ObjectPath
----------

ObjectPath can be used to speed up the access to elements that are deep in the
tree.  It avoids step-by-step Python element instantiations along the path,
which can substantially improve the access time::

  lxe: attribute                  (--TR T1)    4.5464 msec/pass
  lxe: attribute                  (--TR T2)   18.6505 msec/pass
  lxe: attribute                  (--TR T4)    4.4115 msec/pass

  lxe: objectpath                 (--TR T1)    0.9568 msec/pass
  lxe: objectpath                 (--TR T2)   13.3109 msec/pass
  lxe: objectpath                 (--TR T4)    1.0018 msec/pass

  lxe: attributes_deep            (--TR T1)    6.4061 msec/pass
  lxe: attributes_deep            (--TR T2)   20.8254 msec/pass
  lxe: attributes_deep            (--TR T4)    6.2237 msec/pass

  lxe: objectpath_deep            (--TR T1)    1.5173 msec/pass
  lxe: objectpath_deep            (--TR T2)   14.9171 msec/pass
  lxe: objectpath_deep            (--TR T4)    1.5221 msec/pass

Note, however, that parsing ObjectPath expressions is not free either, so
this is most effective for frequently accessing the same elements.
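
As a brief sketch of the API (the tag names are invented for this example),
an ObjectPath expression is compiled once and can then be applied repeatedly:

.. sourcecode:: python

  from lxml import objectify

  root = objectify.fromstring("<a><b><c><d>deep value</d></c></b></a>")

  # Compile the path once; evaluating it then skips the intermediate
  # Python proxy instantiations of step-by-step attribute access.
  path = objectify.ObjectPath("a.b.c.d")
  print(path(root))   # deep value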


Caching Elements
----------------

A way to improve the normal attribute access time is static instantiation of
the Python objects, thus trading memory for speed.  Just create a cache
dictionary and run:

.. sourcecode:: python

    cache[root] = list(root.iter())

after parsing and:

.. sourcecode:: python

    del cache[root]

when you are done with the tree.  This will keep the Python element
representations of all elements alive and thus avoid the overhead of repeated
Python object creation.  You can also consider using filters or generator
expressions to be more selective.  By choosing the right trees (or even
subtrees and elements) to cache, you can trade memory usage against access
speed::

  lxe: attribute_cached           (--TR T1)    3.8185 msec/pass
  lxe: attribute_cached           (--TR T2)   17.1666 msec/pass
  lxe: attribute_cached           (--TR T4)    3.6592 msec/pass

  lxe: attributes_deep_cached     (--TR T1)    4.3907 msec/pass
  lxe: attributes_deep_cached     (--TR T2)   18.0719 msec/pass
  lxe: attributes_deep_cached     (--TR T4)    4.3812 msec/pass

  lxe: objectpath_deep_cached     (--TR T1)    0.7939 msec/pass
  lxe: objectpath_deep_cached     (--TR T2)   13.5620 msec/pass
  lxe: objectpath_deep_cached     (--TR T4)    0.8042 msec/pass

Things to note: you cannot currently use ``weakref.WeakKeyDictionary`` objects
for this as lxml's element objects do not support weak references (which are
costly in terms of memory).  Also note that new element objects that you add
to these trees will not turn up in the cache automatically and will therefore
still be garbage collected when all their Python references are gone, so this
is most effective for largely immutable trees.  You should consider using a
set instead of a list in this case and add new elements by hand.
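
Putting these pieces together, a set based cache for a largely immutable
tree might look like this (a sketch; the element names are arbitrary):

.. sourcecode:: python

  from lxml import objectify

  root = objectify.fromstring("<root><a><b>1</b></a></root>")

  # Keep the Python proxies of all elements alive.
  cache = {}
  cache[root] = set(root.iter())

  # New elements are not cached automatically; add them by hand.
  new_element = objectify.SubElement(root, "c")
  cache[root].add(new_element)

  # Release everything when done with the tree.
  del cache[root]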


Further optimisations
---------------------

Here are some more things to try if optimisation is required:

* A lot of time is usually spent in tree traversal to find the addressed
  elements in the tree.  If you often work in subtrees, do what you would also
  do with deep Python objects: assign the parent of the subtree to a variable
  or pass it into functions instead of starting at the root.  This allows
  accessing its descendants more directly.

* Try assigning data values directly to attributes instead of passing them
  through DataElement.

* If you use custom data types that are costly to parse, try running
  ``objectify.annotate()`` over read-only trees to speed up the attribute type
  inference on read access.
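
The first two of these tips can be sketched as follows (a hypothetical
document; the element names are made up):

.. sourcecode:: python

  from lxml import objectify

  root = objectify.fromstring(
      "<db><persons><person><name>ann</name></person></persons></db>")

  # Tip 1: anchor traversals at the subtree instead of the root.
  persons = root.persons
  print(persons.person.name)      # ann

  # Tip 2: assign plain Python values directly to attributes instead
  # of building DataElement instances explicitly.
  persons.person.age = 42
  print(int(persons.person.age))  # 42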

Note that none of these measures is guaranteed to speed up your application.
As usual, you should prefer readable code over premature optimisations and
profile your expected use cases before bothering to apply optimisations at
random.