summaryrefslogtreecommitdiff
path: root/src/test/regress/expected/join.out
Commit message (Collapse)AuthorAgeFilesLines
* Fix some issues with improper placement of outer join clauses.Tom Lane2023-05-171-0/+143
| | | | | | | | | | | | | | | | | | | | | | | | | | After applying outer-join identity 3 in the forward direction, it was possible for the planner to mistakenly apply a qual clause from above the two outer joins at the now-lower join level. This can give the wrong answer, since a value that would get nulled by the now-upper join might not yet be null. To fix, when we perform such a transformation, consider that the now-lower join hasn't really completed the outer join it's nominally responsible for and thus its relid set should not include that OJ's relid (nor should its output Vars have that nullingrel bit set). Instead we add those bits when the now-upper join is performed. The existing rules for qual placement then suffice to prevent higher qual clauses from dropping below the now-upper join. There are a few complications from needing to consider transitive closures in case multiple pushdowns have happened, but all in all it's not a very complex patch. This is all new logic (from 2489d76c4) so no need to back-patch. The added test cases all have the same results as in v15. Tom Lane and Richard Guo Discussion: https://postgr.es/m/0b819232-4b50-f245-1c7d-c8c61bf41827@postgrespro.ru
* Undo faulty attempt at not relying on RINFO_IS_PUSHED_DOWN.Tom Lane2023-05-111-0/+16
| | | | | | | | | | | | | | I've had a bee in my bonnet for some time about getting rid of RestrictInfo.is_pushed_down, because it's squishily defined and requires not-inexpensive extra tests to use (cf RINFO_IS_PUSHED_DOWN). In commit 2489d76c4, I tried to make remove_rel_from_query() not depend on that macro; but the replacement test is buggy, as exposed by a report from Rushabh Lathia and Robert Haas. That change was pretty incidental to the main goal of 2489d76c4, so let's just revert it for now. (Getting rid of is_pushed_down is still far away, anyway.) Discussion: https://postgr.es/m/CA+TgmoYco=hmg+iX1CW9Y1_CzNoSL81J03wUG-d2_3=rue+L2A@mail.gmail.com
* Work around implementation restriction in adjust_appendrel_attrs.Tom Lane2023-03-121-0/+22
| | | | | | | | | | | | | | | | | | | | | | | adjust_appendrel_attrs can't transfer nullingrel labeling to a non-Var translation expression (mainly because it's too late to wrap such an expression in a PlaceHolderVar). I'd supposed in commit 2489d76c4 that that restriction was unreachable because we'd not attempt to push problematic clauses down to an appendrel child relation. I forgot that set_append_rel_size blindly converts all the parent rel's joininfo clauses to child clauses, and that list could well contain clauses from above a nulling outer join. We might eventually have to devise a direct fix for this implementation restriction, but for now it seems enough to filter out troublesome clauses while constructing the child's joininfo list. Such clauses are certainly not useful while constructing paths for the child rel; they'll have to be applied later when we join the completed appendrel to something else. So we don't need them here, and omitting them from the list should save a few cycles while processing the child rel. Per bug #17832 from Marko Tiikkaja. Discussion: https://postgr.es/m/17832-d0a8106cdf1b722e@postgresql.org
* Fix mis-handling of outer join quals generated by EquivalenceClasses.Tom Lane2023-02-231-0/+32
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | It's possible, in admittedly-rather-contrived cases, for an eclass to generate a derived "join" qual that constrains the post-outer-join value(s) of some RHS variable(s) without mentioning the LHS at all. While the mechanisms were set up to work for this, we fell foul of the "get_common_eclass_indexes" filter installed by commit 3373c7155: it could decide that such an eclass wasn't relevant to the join, so that the required qual clause wouldn't get emitted there or anywhere else. To fix, apply get_common_eclass_indexes only at inner joins, where its rule is still valid. At an outer join, fall back to examining all eclasses that mention either input (or the OJ relid, though it should be impossible for an eclass to mention that without mentioning either input). Perhaps we can improve on that later, but the cost/benefit of adding more complexity to skip some irrelevant eclasses is dubious. To allow cheaply distinguishing outer from inner joins, pass the ojrelid to generate_join_implied_equalities as a separate argument. This also allows cleaning up some sloppiness that had crept into the definition of its join_relids argument, and it allows accurate calculation of nominal_join_relids for a child outer join. (The latter oversight seems not to have been a live bug, but it certainly could have caused problems in future.) Also fix what might be a live bug in check_index_predicates: it was being sloppy about what it passed to generate_join_implied_equalities. Per report from Richard Guo. Discussion: https://postgr.es/m/CAMbWs4-DsTBfOvXuw64GdFss2=M5cwtEhY=0DCS7t2gT7P6hSA@mail.gmail.com
* Fix some issues with wrong placement of pseudo-constant quals.Tom Lane2023-02-221-0/+61
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | initsplan.c figured that it could push Var-free qual clauses to the top of the current JoinDomain, which is okay in the abstract. But if the current domain is inside some outer join, and we later commute an inside-the-domain outer join with one outside it, we end up placing the pushed-up qual clause incorrectly. In distribute_qual_to_rels, avoid this by using the syntactic scope of the qual clause; with the exception that if we're in the top-level join domain we can still use the full query relid set, ensuring the resulting gating Result node goes to the top of the plan. (This is approximately as smart as the pre-v16 code was. Perhaps we can do better later, but it's not clear that such cases are worth a lot of sweat.) In process_implied_equality, we don't have a clear notion of syntactic scope, but we do have the results of SpecialJoinInfo construction. Thumb through those and remove any lower outer joins that might get commuted to above the join domain. Again, we can make an exception for the top-level join domain. It'd be possible to work harder here (for example, by keeping outer joins that aren't shown as potentially commutable), but I'm going to stop here for the moment. This issue has convinced me that the current representation of join domains probably needs further refinement, so I'm disinclined to write inessential dependent logic just yet. In passing, tighten the qualscope passed to process_implied_equality by generate_base_implied_equalities_no_const; there's no need for it to be larger than the rel we are currently considering. Tom Lane and Richard Guo, per report from Tender Wang. Discussion: https://postgr.es/m/CAHewXNk9eJ35ru5xATWioTV4+xZPHptjy9etdcNPjUfY9RQ+uQ@mail.gmail.com
* Fix buggy recursion in flatten_rtes_walker().Tom Lane2023-02-131-0/+12
| | | | | | | | | | Must save-and-restore the context we are modifying. Oversight in commit a61b1f748. Tender Wang Discussion: https://postgr.es/m/CAHewXNnnNySD_YcKNuFpQDV2gxWA7_YLWqHmYVcyoOYxn8kY2A@mail.gmail.com Discussion: https://postgr.es/m/20230212233711.GA1316@telsasoft.com
* Fix thinkos in have_unsafe_outer_join_ref; reduce to Assert check.Tom Lane2023-02-131-0/+26
| | | | | | | | | | | | | | | | | Late in the development of commit 2489d76c4, I (tgl) incorrectly concluded that the new function have_unsafe_outer_join_ref couldn't ever reach its inner loop. That should be the case if the inner rel's parameterization is based on just one Var, but it could be based on Vars from several relations, and then not only is the inner loop reachable but it's wrongly coded. Despite those errors, it still appears that the whole thing is redundant given previous join_is_legal checks, so let's arrange to only run it in assert-enabled builds. Diagnosis and patch by Richard Guo, per fuzz testing by Justin Pryzby. Discussion: https://postgr.es/m/20230212235823.GW1653@telsasoft.com
* Fix join removal logic to clean up sub-RestrictInfos of OR clauses.Tom Lane2023-02-101-0/+20
| | | | | | | | | | | | | | | | | analyzejoins.c took care to clean out removed relids from the clause_relids and required_relids of RestrictInfos associated with the doomed rel ... but it paid no attention to the fact that if such a RestrictInfo contains an OR clause, there will be sub-RestrictInfos containing similar fields. I'm more than a bit surprised that this oversight hasn't caused visible problems before. In any case, it's certainly broken now, so add logic to clean out the sub-RestrictInfos recursively. We might need to back-patch this someday. Per bug #17786 from Robins Tharakan. Discussion: https://postgr.es/m/17786-f1ea7fbdab97daec@postgresql.org
* Further fixes in qual nullingrel adjustment for outer join commutation.Tom Lane2023-02-101-0/+25
| | | | | | | | | | | | | | | | | | | | | | One of the add_nulling_relids calls in deconstruct_distribute_oj_quals added an OJ relid to too few Vars, while the other added it to too many. We should consider the syntactic structure not min_left/righthand while deciding which Vars to decorate, and when considering pushing up a lower outer join pursuant to transforming the second form of OJ identity 3 to the first form, we only want to decorate Vars coming from its LHS. In a related bug, I realized that make_outerjoininfo was failing to check a very basic property that's needed to apply OJ identity 3: the syntactically-upper outer join clause can't refer to the lower join's LHS. This didn't break the join order restriction logic, but it led to setting bogus commute_xxx bits, possibly resulting in bogus nullingrel markings in modified quals. Richard Guo and Tom Lane Discussion: https://postgr.es/m/CAMbWs497CmBruMx1SOjepWEz+T5NWa4scqbdE9v7ZzSXqH_gQw@mail.gmail.com Discussion: https://postgr.es/m/CAEP4nAx9C5gXNBfEA0JBfz7B+5f1Bawt-RWQWyhev-wdps8BZA@mail.gmail.com
* remove_rel_from_query() must clean up PlaceHolderVar.phrels fields.Tom Lane2023-02-081-0/+16
| | | | | | | | | | While we got away with this sloppiness before, it's not okay now that fee7b77b9 caused build_joinrel_tlist() to make use of phrels. Per report from Robins Tharakan. Richard Guo (some cosmetic tweaks by me) Discussion: https://postgr.es/m/CAMbWs4_ngw9sKxpTE8hqk=-ooVX_CQP3DarA4HzkRMz_JKpTrA@mail.gmail.com
* Rethink nullingrel marking rules in build_joinrel_tlist().Tom Lane2023-02-071-0/+44
| | | | | | | | | | | | | | | | | | The logic for when to add the current outer join's own relid to the nullingrels sets of output Vars and PHVs was overly complicated and underly correct. Not sure why I didn't think of this before, but since what we want is marking per the syntactic structure, we can just consult our records about the syntactic structure, ie syn_righthand/syn_lefthand. Also, tighten the rule about when to add the commute_above_r bits, in hopes of eliminating some squishy reasoning. I do not know of a reason to think that that's broken as-is, but this way seems better. Per bug #17781 from Robins Tharakan. Discussion: https://postgr.es/m/17781-c0405c8b3cd5e072@postgresql.org
* Remove leftover code in deconstruct_distribute_oj_quals().Tom Lane2023-02-071-0/+26
| | | | | | | | | | The initial "put back OJ relids" adjustment of ojscope was incorrect and unnecessary; it seems to be a leftover from when I (tgl) was trying to get this function to work at all. Richard Guo Discussion: https://postgr.es/m/CAMbWs4-L2C47ZGZPabBAi5oDZsKmsbvhYcGCy5o=gCjsaG_ZQA@mail.gmail.com
* Fix up join removal's interaction with PlaceHolderVars.Tom Lane2023-02-061-0/+20
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The portion of join_is_removable() that checks PlaceHolderVars can be made a little more accurate and intelligible than it was. The key point is that we can allow join removal even if a PHV mentions the target rel in ph_eval_at, if that mention was only added as a consequence of forcing the PHV up to a join level that's at/above the outer join we're trying to get rid of. We can check that by testing for the OJ's relid appearing in ph_eval_at, indicating that it's supposed to be evaluated after the outer join, plus the existing test that the contained expression doesn't actually mention the target rel. While here, add an explicit check that there'll be something left in ph_eval_at after we remove the target rel and OJ relid. There is an Assert later on about that, and I'm not too sure that the case could happen for a PHV satisfying the other constraints, but let's just check. (There was previously a bms_is_subset test that meant to cover this risk, but it's broken now because it doesn't account for the fact that we'll also remove the OJ relid.) The real reason for revisiting this code though is that the Assert I left behind in 8538519db turns out to be easily reachable, because if a PHV of this sort appears in an upper-level qual clause then that clause's clause_relids will include the PHV's ph_eval_at relids. This is a mirage though: we have or soon will remove these relids from the PHV's ph_eval_at, and therefore they no longer belong in qual clauses' clause_relids either. Remove that Assert in join_is_removable, and replace the similar one in remove_rel_from_query with code to remove the deleted relids from clause_relids. Per bug #17773 from Robins Tharakan. Discussion: https://postgr.es/m/17773-a592e6cedbc7bac5@postgresql.org
* Fix over-optimistic updating of info about commutable outer joins.Tom Lane2023-02-051-0/+30
| | | | | | | | | | | | | | | | | | | | | | | | | | make_outerjoininfo was set up to update SpecialJoinInfo's commute_below, commute_above_l, commute_above_r fields as soon as it found a pair of outer joins that look like they can commute. However, this decision could be negated later in the same loop due to finding an intermediate outer join that prevents commutation. That left us with commute_xxx fields that were contradictory to the join order restrictions expressed in min_lefthand/min_righthand. The latter fields would keep us from actually choosing a bad join order; but the inconsistent commute_xxx fields could bollix details such as the varnullingrels values created for intermediate join relation targetlists, ending in an assertion failure in setrefs.c. To fix, wait till the end of make_outerjoininfo where we have accurate values for min_lefthand/min_righthand, and then insert only relids not present in those sets into the commute_xxx fields. Per SQLSmith testing by Robins Tharakan. Note that while Robins bisected the failure to commit b448f1c8d, it's really the fault of 2489d76c4. The outerjoin_delayed logic removed in the later commit was keeping us from deciding that troublesome join pairs commute, at least in the specific example seen here. Discussion: https://postgr.es/m/CAEP4nAyAORgE8K_RHSmvWbE9UaChhjbEL1RrDU3neePwwRUB=A@mail.gmail.com
* Fix thinko in qual distribution.Tom Lane2023-02-041-0/+18
| | | | | | | | | | | | | | | | deconstruct_distribute tweaks the outer join scope (ojscope) it passes to distribute_qual_to_rels when considering an outer join qual that's above potentially-commutable outer joins. However, if the current join is *not* potentially commutable, we shouldn't do that. The argument that distribute_qual_to_rels will not do something wrong with the bogus ojscope falls flat if we don't pass it non-null postponed_oj_qual_list. Moreover, there's no need to play games in this case since we aren't going to commute anything. Per SQLSmith testing by Robins Tharakan. Discussion: https://postgr.es/m/CAEP4nAw74k4b-=93gmfCNX3MOY3y4uPxqbk_MnCVEpdsqHJVsg@mail.gmail.com
* Fix thinko in outer-join removal.Tom Lane2023-02-041-0/+15
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | If we have a RestrictInfo that mentions both the removal-candidate relation and the outer join's relid, then that is a pushed-down condition not a join condition, so it should be grounds for deciding that we can't remove the outer join. In commit 2489d76c4, I'd blindly included the OJ's relid into "joinrelids" as per the new standard convention, but the checks of attr_needed and ph_needed should only allow the join's input rels to be mentioned. Having done that, the check for references in pushed-down quals a few lines further down should be redundant. I left it in place as an Assert, though. While researching this I happened across a couple of comments that worried about the effects of update_placeholder_eval_levels. That's gone as of b448f1c8d, so we can remove some worry. Per bug #17769 from Robins Tharakan. The submitted test case triggers this more or less accidentally because we flatten out a LATERAL sub-select after we've done join strength reduction; if we did that in the other order, this problem would be masked because the outer join would get simplified to an inner join. To ensure that the committed test case will continue to test what it means to even if we make that happen someday, use a test clause involving COALESCE(), which will prevent us from using it to do join strength reduction. Patch by me, but thanks to Richard Guo for initial investigation. Discussion: https://postgr.es/m/17769-e4f7a5c9d84a80a7@postgresql.org
* Rethink treatment of "postponed" quals in deconstruct_jointree().Tom Lane2023-02-041-0/+22
| | | | | | | | | | | | | | | | | | | | | | | | | | | After pulling up LATERAL subqueries, we may have qual clauses that refer to relations outside their syntactic scope. Before doing any such pullup, prepjointree.c checks to make sure that it wouldn't create a semantically-invalid situation; but we leave it to deconstruct_jointree() to actually move these quals up the join tree to a place where they can be evaluated. In commit 2489d76c4, I (tgl) refactored deconstruct_jointree() in a way that caused assertion failures while moving such quals, because the new logic failed to distinguish "this jointree node is a parent of the source one" from "this jointree node is processed after the source one in depth-first order". Fix this, and at the same time reduce the overhead a bit, by getting rid of the common PostponedQual list and instead making each JoinTreeItem contain a list of quals that needed to be postponed to its level. We can help distribute_qual_to_rels find the appropriate JoinTreeItem efficiently by adding parent-item links to the JoinTreeItem data structure. This ends up being the same number of relid subset checks as the original (pre-bug) logic, but less list manipulation is required during multi-level postponements. Richard Guo and Tom Lane, per bug #17768 from Robins Tharakan. Discussion: https://postgr.es/m/17768-5ac8730ece54478f@postgresql.org
* Remove over-optimistic Assert.Tom Lane2023-01-311-0/+14
| | | | | | | | | | | | | | | | | | | In commit 2489d76c4, I'd thought it'd be safe to assert that a PlaceHolderVar appearing in a scan-level expression has empty nullingrels. However this is not so, as when we determine that a join relation is certainly empty we'll put its targetlist into a Result-with-constant-false-qual node, and nothing is done to adjust the nullingrels of the Vars or PHVs therein. (Arguably, a Result used in this way isn't really a scan-level node, but it certainly isn't an upper node either ...) It's not clear this is worth any close analysis, so let's just take out the faulty Assert. Per report from Robins Tharakan. I added a test case based on his example, just in case somebody tries to tighten this up. Discussion: https://postgr.es/m/CAEP4nAz7Enq3+DEthGG7j27DpuwSRZnW0Nh6jtNh75yErQ_nbA@mail.gmail.com
* Do assorted mop-up in the planner.Tom Lane2023-01-301-19/+15
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Remove RestrictInfo.nullable_relids, along with a good deal of infrastructure that calculated it. One use-case for it was in join_clause_is_movable_to, but we can now replace that usage with a check to see if the clause's relids include any outer join that can null the target relation. The other use-case was in join_clause_is_movable_into, but that test can just be dropped entirely now that the clause's relids include outer joins. Furthermore, join_clause_is_movable_into should now be accurate enough that it will accept anything returned by generate_join_implied_equalities, so we can restore the Assert that was diked out in commit 95f4e59c3. Remove the outerjoin_delayed mechanism. We needed this before to prevent quals from getting evaluated below outer joins that should null some of their vars. Now that we consider varnullingrels while placing quals, that's taken care of automatically, so throw the whole thing away. Teach remove_useless_result_rtes to also remove useless FromExprs. Having done that, the delay_upper_joins flag serves no purpose any more and we can remove it, largely reverting 11086f2f2. Use constant TRUE for "dummy" clauses when throwing back outer joins. This improves on a hack I introduced in commit 6a6522529. If we have a left-join clause l.x = r.y, and a WHERE clause l.x = constant, we generate r.y = constant and then don't really have a need for the join clause. But we must throw the join clause back anyway after marking it redundant, so that the join search heuristics won't think this is a clauseless join and avoid it. That was a kluge introduced under time pressure, and after looking at it I thought of a better way: let's just introduce constant-TRUE "join clauses" instead, and get rid of them at the end. This improves the generated plans for such cases by not having to test a redundant join clause. We can also get rid of the ugly hack used to mark such clauses as redundant for selectivity estimation. Patch by me; thanks to Richard Guo for review. Discussion: https://postgr.es/m/830269.1656693747@sss.pgh.pa.us
* Make Vars be outer-join-aware.Tom Lane2023-01-301-6/+157
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Traditionally we used the same Var struct to represent the value of a table column everywhere in parse and plan trees. This choice predates our support for SQL outer joins, and it's really a pretty bad idea with outer joins, because the Var's value can depend on where it is in the tree: it might go to NULL above an outer join. So expression nodes that are equal() per equalfuncs.c might not represent the same value, which is a huge correctness hazard for the planner. To improve this, decorate Var nodes with a bitmapset showing which outer joins (identified by RTE indexes) may have nulled them at the point in the parse tree where the Var appears. This allows us to trust that equal() Vars represent the same value. A certain amount of klugery is still needed to cope with cases where we re-order two outer joins, but it's possible to make it work without sacrificing that core principle. PlaceHolderVars receive similar decoration for the same reason. In the planner, we include these outer join bitmapsets into the relids that an expression is considered to depend on, and in consequence also add outer-join relids to the relids of join RelOptInfos. This allows us to correctly perceive whether an expression can be calculated above or below a particular outer join. This change affects FDWs that want to plan foreign joins. They *must* follow suit when labeling foreign joins in order to match with the core planner, but for many purposes (if postgres_fdw is any guide) they'd prefer to consider only base relations within the join. To support both requirements, redefine ForeignScan.fs_relids as base+OJ relids, and add a new field fs_base_relids that's set up by the core planner. Large though it is, this commit just does the minimum necessary to install the new mechanisms and get check-world passing again. Follow-up patches will perform some cleanup. (The README additions and comments mention some stuff that will appear in the follow-up.) Patch by me; thanks to Richard Guo for review. Discussion: https://postgr.es/m/830269.1656693747@sss.pgh.pa.us
* Allow left join removals and unique joins on partitioned tablesDavid Rowley2023-01-091-0/+10
| | | | | | | | | | | | This allows left join removals and unique joins to work with partitioned tables. The planner just lacked sufficient proofs that a given join would not cause any row duplication. Unique indexes currently serve as that proof, so have get_relation_info() populate the indexlist for partitioned tables too. Author: Arne Roland Reviewed-by: Alvaro Herrera, Zhihong Yu, Amit Langote, David Rowley Discussion: https://postgr.es/m/c3b2408b7a39433b8230bbcd02e9f302@index.de
* Fix bit-rotted planner test case.Tom Lane2022-12-171-3/+32
| | | | | | | | | | | | | | | | | | | | | | | | | | | While fooling with my pet outer-join-variables patch, I discovered that the test case I added in commit 11086f2f2 no longer demonstrates what it's supposed to. The idea is to tempt the planner to reverse the order of the two outer joins, which would leave noplace to correctly evaluate the WHERE clause that's inserted between them. Before the addition of the delay_upper_joins mechanism, it would have taken the bait. However, subsequent improvements broke the test in two different ways. First, we now recognize the IS NULL coding pattern as an antijoin, and we won't re-order antijoins; even if we did, the IS NULL test clauses get removed so there would be no opportunity for them to misbehave. Second, the planner now discovers that nested parameterized indexscans are a lot cheaper than the double hash join it used back in the day, and that approach doesn't want to re-order the joins anyway. Thus, in HEAD the test passes even if one dikes out delay_upper_joins. To fix, change the IS NULL tests to COALESCE clauses, which produce the same results but the planner isn't smart enough to convert them to antijoins. It'll still go for parameterized indexscans though, so drop the index enabling that (don't know why I added that in the first place), and disable nestloop joining just to be sure. This time around, add an EXPLAIN to make the choice of plan visible.
* Remove bogus Assert and dead code in remove_useless_results_recurse().Tom Lane2022-11-291-0/+20
| | | | | | | | | | | | | | | | | | | | The JOIN_SEMI case Assert'ed that there are no PlaceHolderVars that need to be evaluated at the semijoin's RHS, which is wrong because there could be some in the semijoin's qual condition. However, there could not be any references further up than that, and within the qual there is not any way that such a PHV could have gone to null yet, so we don't really need the PHV and there is no need to avoid making the RHS-removal optimization. The upshot is that there's no actual bug in production code, and we ought to just remove this misguided Assert. While we're here, also drop the JOIN_RIGHT case, which is dead code because reduce_outer_joins() already got rid of JOIN_RIGHT. Per bug #17700 from Xin Wen. Uselessness of the JOIN_RIGHT case pointed out by Richard Guo. Back-patch to v12 where this code was added. Discussion: https://postgr.es/m/17700-2b5c10d917c30687@postgresql.org
* Give better hints for ambiguous or unreferenceable columns.Tom Lane2022-11-221-10/+21
| | | | | | | | | | | | | | | | | | | | | | | | | Examine ParseNamespaceItem flags to detect whether a column name is unreferenceable for lack of LATERAL, or could be referenced if a qualified name were used, and give better hints for such cases. Also, don't phrase the message to imply that there's only one matching column when there is really more than one. Many of the regression test output changes are not very interesting, but just reflect reclassifying the "There is a column ... but it cannot be referenced from this part of the query" messages as DETAIL rather than HINT. They are details per our style guide, in the sense of being factual rather than offering advice; and this change provides room to offer actual HINTs about what to do. While here, adjust the fuzzy-name-matching code to be a shade less impenetrable. It was overloading the meanings of FuzzyAttrMatchState fields way too much IMO, so splitting them into multiple fields seems to make it clearer. It's not like we need to shave bytes in that struct. Per discussion of bug #17233 from Alexander Korolev. Discussion: https://postgr.es/m/17233-afb9d806aaa64b17@postgresql.org
* Improve ruleutils' printout of LATERAL references within subplans.Tom Lane2022-11-161-4/+4
| | | | | | | | | | | | | | | | | | | Commit 1cc29fe7c, which taught EXPLAIN to print PARAM_EXEC Params as the referenced expressions, included some checks to prevent matching Params found in SubPlans or InitPlans to NestLoopParams of upper query levels. At the time, this seemed possibly necessary to avoid false matches because of the planner's habit of re-using the same PARAM_EXEC slot in multiple places in a plan. Furthermore, in the absence of LATERAL no such reference could be valid anyway. But it's possible now that we have LATERAL, and in the wake of 46c508fbc and 1db5667ba I believe the false-match hazard is gone. Hence, remove the in_same_plan_level checks. As shown in the regression test changes, this provides a useful improvement in readability for EXPLAIN of LATERAL-using subplans. Richard Guo, reviewed by Greg Stark and myself Discussion: https://postgr.es/m/CAMbWs4-YSOcQXAagJetP95cAeZPqzOy5kM5yijG0PVW5ztRb4w@mail.gmail.com
* Handle SubPlan cases in find_nonnullable_rels/vars.Tom Lane2022-11-051-0/+28
| | | | | | | | | We can use some variants of SubPlan to deduce that Vars appearing in the testexpr must be non-null. Richard Guo Discussion: https://postgr.es/m/CAMbWs4-jV=199A2Y_6==99dYnpnmaO_Wz_RGkRTTaCB=Pihw2w@mail.gmail.com
* Add basic regression tests for semi/antijoin recognition.Tom Lane2022-10-311-0/+63
| | | | | | | | | | | Add some simple tests that the planner recognizes all the standard idioms for SEMI and ANTI joins. Failure to optimize in this way won't necessarily cause any visible change in query results, so check the plans. We had no similar coverage before, at least for some variants of antijoin, as noted by Richard Guo. Discussion: https://postgr.es/m/CAMbWs4-mvPPCJ1W6iK6dD5HiNwoJdi6mZp=-7mE8N9Sh+cd0tQ@mail.gmail.com
* Avoid making commutatively-duplicate clauses in EquivalenceClasses.Tom Lane2022-10-271-5/+5
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | When we decide we need to make a derived clause equating a.x and b.y, we already will re-use a previously-made clause "a.x = b.y". But we might instead have "b.y = a.x", which is perfectly usable because equivclass.c has never promised anything about the operand order in clauses it builds. Saving construction of a new RestrictInfo doesn't matter all that much in itself --- but because we cache selectivity estimates and so on per-RestrictInfo, there's a possibility of saving a fair amount of duplicative effort downstream. Hence, check for commutative matches as well as direct ones when seeing if we have a pre-existing clause. This changes the visible clause order in several regression test cases, but they're all clearly-insignificant changes. Checking for the reverse operand order is simple enough, but if we wanted to check for operator OID match we'd need to call get_commutator here, which is not so cheap. I concluded that we don't really need the operator check anyway, so I just removed it. It's unlikely that an opfamily contains more than one applicable operator for a given pair of operand datatypes; and if it does they had better give the same answers, so there seems little need to insist that we use exactly the one select_equality_operator chose. Using the current core regression suite as a test case, I see this change reducing the number of new join clauses built by create_join_clause from 9673 to 5142 (out of 26652 calls). So not quite 50% savings, but pretty close to it. Discussion: https://postgr.es/m/78062.1666735746@sss.pgh.pa.us
* Revert "Optimize order of GROUP BY keys".Tom Lane2022-10-031-24/+27
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This reverts commit db0d67db2401eb6238ccc04c6407a4fd4f985832 and several follow-on fixes. The idea of making a cost-based choice of the order of the sorting columns is not fundamentally unsound, but it requires cost information and data statistics that we don't really have. For example, relying on procost to distinguish the relative costs of different sort comparators is pretty pointless so long as most such comparator functions are labeled with cost 1.0. Moreover, estimating the number of comparisons done by Quicksort requires more than just an estimate of the number of distinct values in the input: you also need some idea of the sizes of the larger groups, if you want an estimate that's good to better than a factor of three or so. That's data that's often unknown or not very reliable. Worse, to arrive at estimates of the number of calls made to the lower-order-column comparison functions, the code needs to make estimates of the numbers of distinct values of multiple columns, which are necessarily even less trustworthy than per-column stats. Even if all the inputs are perfectly reliable, the cost algorithm as-implemented cannot offer useful information about how to order sorting columns beyond the point at which the average group size is estimated to drop to 1. Close inspection of the code added by db0d67db2 shows that there are also multiple small bugs. These could have been fixed, but there's not much point if we don't trust the estimates to be accurate in-principle. Finally, the changes in cost_sort's behavior made for very large changes (often a factor of 2 or so) in the cost estimates for all sorting operations, not only those for multi-column GROUP BY. That naturally changes plan choices in many situations, and there's precious little evidence to show that the changes are for the better. Given the above doubts about whether the new estimates are really trustworthy, it's hard to summon much confidence that these changes are better on the average. Since we're hard up against the release deadline for v15, let's revert these changes for now. We can always try again later. Note: in v15, I left T_PathKeyInfo in place in nodes.h even though it's unreferenced. Removing it would be an ABI break, and it seems a bit late in the release cycle for that. Discussion: https://postgr.es/m/TYAPR01MB586665EB5FB2C3807E893941F5579@TYAPR01MB5866.jpnprd01.prod.outlook.com
* Refactor addition of PlaceHolderVars to joinrel targetlists.Tom Lane2022-08-171-2/+2
| | | | | | | | | | | | | | | | | | | Make build_joinrel_tlist() responsible for adding PHVs that were already computed in one or the other input relation, and therefore change add_placeholders_to_joinrel() to only add PHVs that will be newly computed in this joinrel's output. This makes the handling of PHVs in build_joinrel_tlist() more like its handling of plain Vars, which seems like a good thing on intelligibility grounds and will simplify planned future changes. There is a purely cosmetic side-effect that the order of entries in the joinrel's tlist may change; but since it becomes more like the order of entries in the input tlists, that's not bad. The reason it wasn't done like this originally was the potential cost of looking up PlaceHolderInfo entries to consult ph_needed. Now that that's O(1) it shouldn't hurt. Discussion: https://postgr.es/m/1405792.1660677844@sss.pgh.pa.us
* Relax overly strict rules in select_outer_pathkeys_for_merge()David Rowley2022-08-021-0/+31
| | | | | | | | | | | | | | | | | | The select_outer_pathkeys_for_merge function made an attempt to build the merge join pathkeys in the same order as query_pathkeys. This was done as it may have led to no sort being required for an ORDER BY or GROUP BY clause in the upper planner. However, this restriction seems overly strict as it required that we match the query_pathkeys entirely or we don't bother putting the merge join pathkeys in that order. Here we relax this rule so that we use a prefix of the query_pathkeys providing that prefix matches all of the join quals. This may provide the upper planner with partially sorted input which will allow the use of incremental sorts instead of full sorts. Author: David Rowley Reviewed-by: Richard Guo Discussion: https://postgr.es/m/CAApHDvrtZu0PHVfDPFM4Yx3jNR2Wuwosv+T2zqa7LrhhBr2rRg@mail.gmail.com
* Estimate cost of elided SubqueryScan, Append, MergeAppend nodes better.Tom Lane2022-07-191-4/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | setrefs.c contains logic to discard no-op SubqueryScan nodes, that is, ones that have no qual to check and copy the input targetlist unchanged. (Formally it's not very nice to be applying such optimizations so late in the planner, but there are practical reasons for it; mostly that we can't unify relids between the subquery and the parent query until we flatten the rangetable during setrefs.c.) This behavior falsifies our previous cost estimates, since we would've charged cpu_tuple_cost per row just to pass data through the node. Most of the time that's little enough to not matter, but there are cases where this effect visibly changes the plan compared to what you would've gotten with no sub-select. To improve the situation, make the callers of cost_subqueryscan tell it whether they think the targetlist is trivial. cost_subqueryscan already has the qual list, so it can check the other half of the condition easily. It could make its own determination of tlist triviality too, but doing so would be repetitive (for callers that may call it several times) or unnecessarily expensive (for callers that can determine this more cheaply than a general test would do). This isn't a 100% solution, because createplan.c also does things that can falsify any earlier estimate of whether the tlist is trivial. However, it fixes nearly all cases in practice, if results for the regression tests are anything to go by. setrefs.c also contains logic to discard no-op Append and MergeAppend nodes. We did have knowledge of that behavior at costing time, but somebody failed to update it when a check on parallel-awareness was added to the setrefs.c logic. Fix that while we're here. These changes result in two minor changes in query plans shown in our regression tests. Neither is relevant to the purposes of its test case AFAICT. Patch by me; thanks to Richard Guo for review. Discussion: https://postgr.es/m/2581077.1651703520@sss.pgh.pa.us
* Fix alias matching in transformLockingClause().Dean Rasheed2022-07-071-0/+33
| | | | | | | | | | | | | | | | | | | | | | | | | | | When locking a specific named relation for a FOR [KEY] UPDATE/SHARE clause, transformLockingClause() finds the relation to lock by scanning the rangetable for an RTE with a matching eref->aliasname. However, it failed to account for the visibility rules of a join RTE. If a join RTE doesn't have a user-supplied alias, it will have a generated eref->aliasname of "unnamed_join" that is not visible as a relation name in the parse namespace. Such an RTE needs to be skipped, otherwise it might be found in preference to a regular base relation with a user-supplied alias of "unnamed_join", preventing it from being locked. In addition, if a join RTE doesn't have a user-supplied alias, but does have a join_using_alias, then the RTE needs to be matched using that alias rather than the generated eref->aliasname, otherwise a misleading "relation not found" error will be reported rather than a "join cannot be locked" error. Backpatch all the way, except for the second part which only goes back to 14, where JOIN USING aliases were added. Dean Rasheed, reviewed by Tom Lane. Discussion: https://postgr.es/m/CAEZATCUY_KOBnqxbTSPf=7fz9HWPnZ5Xgb9SwYzZ8rFXe7nb=w@mail.gmail.com
* Check column list length in XMLTABLE/JSON_TABLE aliasAlvaro Herrera2022-05-181-0/+3
| | | | | | | | | | | | | | | | | We weren't checking the length of the column list in the alias clause of an XMLTABLE or JSON_TABLE function (a "tablefunc" RTE), and it was possible to make the server crash by passing an overly long one. Fix it by throwing an error in that case, like the other places that deal with alias lists. In passing, modify the equivalent test used for join RTEs to look like the other ones, which was different for no apparent reason. This bug came in when XMLTABLE was born in version 10; backpatch to all stable versions. Reported-by: Wang Ke <krking@zju.edu.cn> Discussion: https://postgr.es/m/17480-1c9d73565bb28e90@postgresql.org
* Fix incorrect row estimates used for Memoize costingDavid Rowley2022-05-161-16/+10
| | | | | | | | | | | | | | | | | | | | | | | | | | In order to estimate the cache hit ratio of a Memoize node, one of the inputs we require is the estimated number of times the Memoize node will be rescanned. The higher this number, the large the cache hit ratio is likely to become. Unfortunately, the value being passed as the number of "calls" to the Memoize was incorrectly using the Nested Loop's outer_path->parent->rows instead of outer_path->rows. This failed to account for the fact that the outer_path might be parameterized by some upper-level Nested Loop. This problem could lead to Memoize plans appearing more favorable than they might actually be. It could also lead to extended executor startup times when work_mem values were large due to the planner setting overly large MemoizePath->est_entries resulting in the Memoize hash table being initially made much larger than might be required. Fix this simply by passing outer_path->rows rather than outer_path->parent->rows. Also, adjust the expected regression test output for a plan change. Reported-by: Pavel Stehule Author: David Rowley Discussion: https://postgr.es/m/CAFj8pRAMp%3DQsMi6sPQJ4W3hczoFJRvyXHJV3AZAZaMyTVM312Q%40mail.gmail.com Backpatch-through: 14, where Memoize was introduced
* Optimize order of GROUP BY keysTomas Vondra2022-03-311-27/+24
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | When evaluating a query with a multi-column GROUP BY clause using sort, the cost may be heavily dependent on the order in which the keys are compared when building the groups. Grouping does not imply any ordering, so we're allowed to compare the keys in arbitrary order, and a Hash Agg leverages this. But for Group Agg, we simply compared keys in the order as specified in the query. This commit explores alternative ordering of the keys, trying to find a cheaper one. In principle, we might generate grouping paths for all permutations of the keys, and leave the rest to the optimizer. But that might get very expensive, so we try to pick only a couple interesting orderings based on both local and global information. When planning the grouping path, we explore statistics (number of distinct values, cost of the comparison function) for the keys and reorder them to minimize comparison costs. Intuitively, it may be better to perform more expensive comparisons (for complex data types etc.) last, because maybe the cheaper comparisons will be enough. Similarly, the higher the cardinality of a key, the lower the probability we’ll need to compare more keys. The patch generates and costs various orderings, picking the cheapest ones. The ordering of group keys may interact with other parts of the query, some of which may not be known while planning the grouping. E.g. there may be an explicit ORDER BY clause, or some other ordering-dependent operation, higher up in the query, and using the same ordering may allow using either incremental sort or even eliminate the sort entirely. The patch generates orderings and picks those minimizing the comparison cost (for various pathkeys), and then adds orderings that might be useful for operations higher up in the plan (ORDER BY, etc.). Finally, it always keeps the ordering specified in the query, on the assumption the user might have additional insights. This introduces a new GUC enable_group_by_reordering, so that the optimization may be disabled if needed. The original patch was proposed by Teodor Sigaev, and later improved and reworked by Dmitry Dolgov. Reviews by a number of people, including me, Andrey Lepikhov, Claudio Freire, Ibrar Ahmed and Zhihong Yu. Author: Dmitry Dolgov, Teodor Sigaev, Tomas Vondra Reviewed-by: Tomas Vondra, Andrey Lepikhov, Claudio Freire, Ibrar Ahmed, Zhihong Yu Discussion: https://postgr.es/m/7c79e6a5-8597-74e8-0671-1c39d124c9d6%40sigaev.ru Discussion: https://postgr.es/m/CA%2Bq6zcW_4o2NC0zutLkOJPsFt80megSpX_dVRo6GK9PC-Jx_Ag%40mail.gmail.com
* Rearrange core regression tests to reduce cross-script dependencies.Tom Lane2022-02-081-2/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | The idea behind this patch is to make it possible to run individual test scripts without running the entire core test suite. Making all the scripts completely independent would involve a massive rewrite, and would probably be worse for coverage of things like concurrent DDL. So this patch just does what seems practical with limited changes. The net effect is that any test script can be run after running limited earlier dependencies: * all scripts depend on test_setup * many scripts depend on create_index * other dependencies are few in number, and are documented in the parallel_schedule file. To accomplish this, I chose a small number of commonly-used tables and moved their creation and filling into test_setup. Later scripts are expected not to modify these tables' data contents, for fear of affecting other scripts' results. Also, our former habit of declaring all C functions in one place is now gone in favor of declaring them where they're used, if that's just one script, or in test_setup if necessary. There's more that could be done to remove some of the remaining inter-script dependencies, but significantly more-invasive changes would be needed, and at least for now it doesn't seem worth it. Discussion: https://postgr.es/m/1114748.1640383217@sss.pgh.pa.us
* Allow Memoize to operate in binary comparison modeDavid Rowley2021-11-241-9/+19
| | | | | | | | | | | | | | | | | | | | | | Memoize would always use the hash equality operator for the cache key types to determine if the current set of parameters were the same as some previously cached set. Certain types such as floating points where -0.0 and +0.0 differ in their binary representation but are classed as equal by the hash equality operator may cause problems as unless the join uses the same operator it's possible that whichever join operator is being used would be able to distinguish the two values. In which case we may accidentally return in the incorrect rows out of the cache. To fix this here we add a binary mode to Memoize to allow it to the current set of parameters to previously cached values by comparing bit-by-bit rather than logically using the hash equality operator. This binary mode is always used for LATERAL joins and it's used for normal joins when any of the join operators are not hashable. Reported-by: Tom Lane Author: David Rowley Discussion: https://postgr.es/m/3004308.1632952496@sss.pgh.pa.us Backpatch-through: 14, where Memoize was added
* Fix pull_varnos to cope with translated PlaceHolderVars.Tom Lane2021-09-171-0/+28
| | | | | | | | | | | | | | | | | | Commit 55dc86eca changed pull_varnos to use (if possible) the associated ph_eval_at for a PlaceHolderVar. I missed a fine point though: we might be looking at a PHV in the quals or tlist of a child appendrel, in which case we need to compute a ph_eval_at value that's been translated in the same way that the PHV itself has been (cf. adjust_appendrel_attrs). Fortunately, enough info is available in the PlaceHolderInfo to make such translation possible without additional outside data, so we don't need another round of uglification of planner APIs. This is a little bit complicated, but since it's a hard-to-hit corner case, I'm not much worried about adding cycles here. Per report from Jaime Casanova. Back-patch to v12, like the previous commit. Discussion: https://postgr.es/m/20210915230959.GB17635@ahch-to
* Change the name of the Result Cache node to MemoizeDavid Rowley2021-07-141-12/+12
| | | | | | | | | | | "Result Cache" was never a great name for this node, but nobody managed to come up with another name that anyone liked enough. That was until David Johnston mentioned "Node Memoization", which Tom Lane revised to just "Memoize". People seem to like "Memoize", so let's do the rename. Reviewed-by: Justin Pryzby Discussion: https://postgr.es/m/20210708165145.GG1176@momjian.us Backpatch-through: 14, where Result Cache was introduced
* Avoid creating a RESULT RTE that's marked LATERAL.Tom Lane2021-07-091-0/+8
| | | | | | | | | | | | | | | Commit 7266d0997 added code to pull up simple constant function results, converting the RTE_FUNCTION RTE to a dummy RTE_RESULT RTE since it no longer need be scanned. But I forgot to clear the LATERAL flag if the RTE has it set. If the function reduced to a constant, it surely contains no lateral references so this simplification is logically OK. It's needed because various other places will Assert that RESULT RTEs aren't LATERAL. Per bug #17097 from Yaoguang Chen. Back-patch to v13 where the faulty code came in. Discussion: https://postgr.es/m/17097-3372ef9f798fc94f@postgresql.org
* Fix setrefs.c code for Result Cache nodesDavid Rowley2021-05-251-13/+13
| | | | | | | | | | | | | | Result Cache, added in 9eacee2e6 neglected to properly adjust the plan references in setrefs.c. This could lead to the following error during EXPLAIN: ERROR: cannot decompile join alias var in plan tree Fix that. Bug: 17030 Reported-by: Hans Buschmann Discussion: https://postgr.es/m/17030-5844aecae42fe223@postgresql.org
* Add Result Cache executor node (take 2)David Rowley2021-04-021-53/+78
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Here we add a new executor node type named "Result Cache". The planner can include this node type in the plan to have the executor cache the results from the inner side of parameterized nested loop joins. This allows caching of tuples for sets of parameters so that in the event that the node sees the same parameter values again, it can just return the cached tuples instead of rescanning the inner side of the join all over again. Internally, result cache uses a hash table in order to quickly find tuples that have been previously cached. For certain data sets, this can significantly improve the performance of joins. The best cases for using this new node type are for join problems where a large portion of the tuples from the inner side of the join have no join partner on the outer side of the join. In such cases, hash join would have to hash values that are never looked up, thus bloating the hash table and possibly causing it to multi-batch. Merge joins would have to skip over all of the unmatched rows. If we use a nested loop join with a result cache, then we only cache tuples that have at least one join partner on the outer side of the join. The benefits of using a parameterized nested loop with a result cache increase when there are fewer distinct values being looked up and the number of lookups of each value is large. Also, hash probes to lookup the cache can be much faster than the hash probe in a hash join as it's common that the result cache's hash table is much smaller than the hash join's due to result cache only caching useful tuples rather than all tuples from the inner side of the join. This variation in hash probe performance is more significant when the hash join's hash table no longer fits into the CPU's L3 cache, but the result cache's hash table does. The apparent "random" access of hash buckets with each hash probe can cause a poor L3 cache hit ratio for large hash tables. Smaller hash tables generally perform better. The hash table used for the cache limits itself to not exceeding work_mem * hash_mem_multiplier in size. We maintain a dlist of keys for this cache and when we're adding new tuples and realize we've exceeded the memory budget, we evict cache entries starting with the least recently used ones until we have enough memory to add the new tuples to the cache. For parameterized nested loop joins, we now consider using one of these result cache nodes in between the nested loop node and its inner node. We determine when this might be useful based on cost, which is primarily driven off of what the expected cache hit ratio will be. Estimating the cache hit ratio relies on having good distinct estimates on the nested loop's parameters. For now, the planner will only consider using a result cache for parameterized nested loop joins. This works for both normal joins and also for LATERAL type joins to subqueries. It is possible to use this new node for other uses in the future. For example, to cache results from correlated subqueries. However, that's not done here due to some difficulties obtaining a distinct estimation on the outer plan to calculate the estimated cache hit ratio. 
Currently we plan the inner plan before planning the outer plan so there is no good way to know if a result cache would be useful or not since we can't estimate the number of times the subplan will be called until the outer plan is generated. The functionality being added here is newly introducing a dependency on the return value of estimate_num_groups() during the join search. Previously, during the join search, we only ever needed to perform selectivity estimations. With this commit, we need to use estimate_num_groups() in order to estimate what the hit ratio on the result cache will be. In simple terms, if we expect 10 distinct values and we expect 1000 outer rows, then we'll estimate the hit ratio to be 99%. Since cache hits are very cheap compared to scanning the underlying nodes on the inner side of the nested loop join, then this will significantly reduce the planner's cost for the join. However, it's fairly easy to see here that things will go bad when estimate_num_groups() incorrectly returns a value that's significantly lower than the actual number of distinct values. If this happens then that may cause us to make use of a nested loop join with a result cache instead of some other join type, such as a merge or hash join. Our distinct estimations have been known to be a source of trouble in the past, so the extra reliance on them here could cause the planner to choose slower plans than it did previous to having this feature. Distinct estimations are also fairly hard to estimate accurately when several tables have been joined already or when a WHERE clause filters out a set of values that are correlated to the expressions we're estimating the number of distinct value for. For now, the costing we perform during query planning for result caches does put quite a bit of faith in the distinct estimations being accurate. When these are accurate then we should generally see faster execution times for plans containing a result cache. However, in the real world, we may find that we need to either change the costings to put less trust in the distinct estimations being accurate or perhaps even disable this feature by default. There's always an element of risk when we teach the query planner to do new tricks that it decides to use that new trick at the wrong time and causes a regression. Users may opt to get the old behavior by turning the feature off using the enable_resultcache GUC. Currently, this is enabled by default. It remains to be seen if we'll maintain that setting for the release. Additionally, the name "Result Cache" is the best name I could think of for this new node at the time I started writing the patch. Nobody seems to strongly dislike the name. A few people did suggest other names but no other name seemed to dominate in the brief discussion that there was about names. Let's allow the beta period to see if the current name pleases enough people. If there's some consensus on a better name, then we can change it before the release. Please see the 2nd discussion link below for the discussion on the "Result Cache" name. Author: David Rowley Reviewed-by: Andy Fan, Justin Pryzby, Zhihong Yu, Hou Zhijie Tested-By: Konstantin Knizhnik Discussion: https://postgr.es/m/CAApHDvrPcQyQdWERGYWx8J%2B2DLUNgXu%2BfOSbQ1UscxrunyXyrQ%40mail.gmail.com Discussion: https://postgr.es/m/CAApHDvq=yQXr5kqhRviT2RhNKwToaWr9JAN5t+5_PzhuRJ3wvg@mail.gmail.com
* Revert b6002a796David Rowley2021-04-011-78/+53
| | | | | | | | | | | | | This removes "Add Result Cache executor node". It seems that something weird is going on with the tracking of cache hits and misses, as highlighted by many buildfarm animals. It's not yet clear what the problem is, as other parts of the plan indicate that the cache did work correctly; it's just the hits and misses that were being reported as 0. This is an especially bad time to have the buildfarm so broken, so reverting before too many more animals go red. Discussion: https://postgr.es/m/CAApHDvq_hydhfovm4=izgWs+C5HqEeRScjMbOgbpC-jRAeK3Yw@mail.gmail.com
* Add Result Cache executor nodeDavid Rowley2021-04-011-53/+78
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Here we add a new executor node type named "Result Cache". The planner can include this node type in the plan to have the executor cache the results from the inner side of parameterized nested loop joins. This allows caching of tuples for sets of parameters so that in the event that the node sees the same parameter values again, it can just return the cached tuples instead of rescanning the inner side of the join all over again. Internally, result cache uses a hash table in order to quickly find tuples that have been previously cached. For certain data sets, this can significantly improve the performance of joins. The best cases for using this new node type are for join problems where a large portion of the tuples from the inner side of the join have no join partner on the outer side of the join. In such cases, hash join would have to hash values that are never looked up, thus bloating the hash table and possibly causing it to multi-batch. Merge joins would have to skip over all of the unmatched rows. If we use a nested loop join with a result cache, then we only cache tuples that have at least one join partner on the outer side of the join. The benefits of using a parameterized nested loop with a result cache increase when there are fewer distinct values being looked up and the number of lookups of each value is large. Also, hash probes to lookup the cache can be much faster than the hash probe in a hash join as it's common that the result cache's hash table is much smaller than the hash join's due to result cache only caching useful tuples rather than all tuples from the inner side of the join. This variation in hash probe performance is more significant when the hash join's hash table no longer fits into the CPU's L3 cache, but the result cache's hash table does. The apparent "random" access of hash buckets with each hash probe can cause a poor L3 cache hit ratio for large hash tables. Smaller hash tables generally perform better. The hash table used for the cache limits itself to not exceeding work_mem * hash_mem_multiplier in size. We maintain a dlist of keys for this cache and when we're adding new tuples and realize we've exceeded the memory budget, we evict cache entries starting with the least recently used ones until we have enough memory to add the new tuples to the cache. For parameterized nested loop joins, we now consider using one of these result cache nodes in between the nested loop node and its inner node. We determine when this might be useful based on cost, which is primarily driven off of what the expected cache hit ratio will be. Estimating the cache hit ratio relies on having good distinct estimates on the nested loop's parameters. For now, the planner will only consider using a result cache for parameterized nested loop joins. This works for both normal joins and also for LATERAL type joins to subqueries. It is possible to use this new node for other uses in the future. For example, to cache results from correlated subqueries. However, that's not done here due to some difficulties obtaining a distinct estimation on the outer plan to calculate the estimated cache hit ratio. 
Currently we plan the inner plan before planning the outer plan, so there is no good way to know whether a result cache would be useful, since we can't estimate the number of times the subplan will be called until the outer plan is generated. The functionality added here newly introduces a dependency on the return value of estimate_num_groups() during the join search. Previously, during the join search, we only ever needed to perform selectivity estimations. With this commit, we need to use estimate_num_groups() in order to estimate what the hit ratio on the result cache will be. In simple terms, if we expect 10 distinct values and 1000 outer rows, then we'll estimate the hit ratio to be 99%. Since cache hits are very cheap compared to scanning the underlying nodes on the inner side of the nested loop join, this will significantly reduce the planner's cost for the join. However, it's fairly easy to see that things will go bad when estimate_num_groups() incorrectly returns a value that's significantly lower than the actual number of distinct values. If that happens, we may choose a nested loop join with a result cache instead of some other join type, such as a merge or hash join. Our distinct estimations have been known to be a source of trouble in the past, so the extra reliance on them here could cause the planner to choose slower plans than it did before this feature existed. Distinct values are also fairly hard to estimate accurately when several tables have already been joined or when a WHERE clause filters out a set of values that are correlated to the expressions we're estimating the number of distinct values for. For now, the costing we perform during query planning for result caches puts quite a bit of faith in the distinct estimations being accurate. When they are accurate, we should generally see faster execution times for plans containing a result cache. However, in the real world, we may find that we need to either adjust the costing to put less trust in the distinct estimations or perhaps even disable this feature by default. There's always a risk, when we teach the query planner a new trick, that it will decide to use that trick at the wrong time and cause a regression. Users may opt to get the old behavior by turning the feature off using the enable_resultcache GUC. Currently, this is enabled by default. It remains to be seen whether we'll keep that default for the release. Additionally, the name "Result Cache" is the best name I could think of for this new node at the time I started writing the patch. Nobody seems to strongly dislike the name. A few people did suggest other names, but none clearly won out in the brief discussion we had about naming. Let's allow the beta period to see if the current name pleases enough people. If there's some consensus on a better name, we can change it before the release. Please see the 2nd discussion link below for the discussion on the "Result Cache" name. Author: David Rowley Reviewed-by: Andy Fan, Justin Pryzby, Zhihong Yu Tested-By: Konstantin Knizhnik Discussion: https://postgr.es/m/CAApHDvrPcQyQdWERGYWx8J%2B2DLUNgXu%2BfOSbQ1UscxrunyXyrQ%40mail.gmail.com Discussion: https://postgr.es/m/CAApHDvq=yQXr5kqhRviT2RhNKwToaWr9JAN5t+5_PzhuRJ3wvg@mail.gmail.com
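To make the hit-ratio and GUC points above concrete, here is a minimal sketch. Reading the 99% figure as "all lookups after the first one per distinct value are hits" is an assumption about the arithmetic, not a quote from the costing code; the GUC name is the one given in the commit message and is assumed to be session-settable like the other enable_* planner settings.

    -- Back-of-the-envelope estimate from the text: 1000 outer rows probing
    -- only 10 distinct parameter values means at most 10 cache misses,
    -- so roughly (1000 - 10) / 1000 = 99% of lookups can be cache hits.

    -- Users who prefer the old behavior can switch the node off:
    SET enable_resultcache = off;   -- per-session
    -- ... run the affected query ...
    RESET enable_resultcache;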
* Allow an alias to be attached to a JOIN ... USINGPeter Eisentraut2021-03-311-0/+52
| | | | | | | | | | | | | | | This allows something like SELECT ... FROM t1 JOIN t2 USING (a, b, c) AS x where x has the columns a, b, c, and, unlike a regular alias, it does not hide the range variables of the tables being joined, t1 and t2. Per SQL:2016 feature F404 "Range variable for common column names". Reviewed-by: Vik Fearing <vik.fearing@2ndquadrant.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://www.postgresql.org/message-id/flat/454638cf-d563-ab76-a585-2564428062af@2ndquadrant.com
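A short, self-contained sketch of the feature (the tables and their extra columns are hypothetical): the join alias x exposes only the USING columns, while the original range variables t1 and t2 remain addressable.

    CREATE TABLE t1 (a int, b int, c int, t1_only int);
    CREATE TABLE t2 (a int, b int, c int, t2_only int);

    -- x exposes the common columns a, b, c ...
    SELECT x.a, x.b, x.c
    FROM t1 JOIN t2 USING (a, b, c) AS x;

    -- ... while, unlike a plain table alias, t1 and t2 are still visible.
    SELECT x.a, t1.t1_only, t2.t2_only
    FROM t1 JOIN t2 USING (a, b, c) AS x;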
* Fix pull_varnos' miscomputation of relids set for a PlaceHolderVar.Tom Lane2021-01-211-0/+36
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Previously, pull_varnos() took the relids of a PlaceHolderVar as being equal to the relids in its contents, but that fails to account for the possibility that we have to postpone evaluation of the PHV due to outer joins. This could result in a malformed plan. The known cases end up triggering the "failed to assign all NestLoopParams to plan nodes" sanity check in createplan.c, but other symptoms may be possible. The right value to use is the join level we actually intend to evaluate the PHV at. We can get that from the ph_eval_at field of the associated PlaceHolderInfo. However, there are some places that call pull_varnos() before the PlaceHolderInfos have been created; in that case, fall back to the conservative assumption that the PHV will be evaluated at its syntactic level. (In principle this might result in missing some legal optimization, but I'm not aware of any cases where it's an issue in practice.) Things are also a bit ticklish for calls occurring during deconstruct_jointree(), but AFAICS the ph_eval_at fields should have reached their final values by the time we need them. The main problem in making this work is that pull_varnos() has no way to get at the PlaceHolderInfos. We can fix that easily, if a bit tediously, in HEAD by passing it the planner "root" pointer. In the back branches that'd cause an unacceptable API/ABI break for extensions, so leave the existing entry points alone and add new ones with the additional parameter. (If an old entry point is called and encounters a PHV, it'll fall back to using the syntactic level, again possibly missing some valid optimization.) Back-patch to v12. The computation is surely also wrong before that, but it appears that we cannot reach a bad plan thanks to join order restrictions imposed on the subquery that the PlaceHolderVar came from. The error only became reachable when commit 4be058fe9 allowed trivial subqueries to be collapsed out completely, eliminating their join order restrictions. Per report from Stephan Springl. Discussion: https://postgr.es/m/171041.1610849523@sss.pgh.pa.us
* Clean up ancient test stylePeter Eisentraut2020-12-151-1673/+1673
| | | | | | | | | | | | | | | | Many older tests were written in a style like SELECT '' AS two, i.* FROM INT2_TBL where the first column indicated the number of expected result rows. This has gotten increasingly out of date, as the test data fixtures have expanded, so a lot of these were wrong and misleading. Moreover, this style isn't really necessary, since the psql output already shows the number of result rows. To clean this up, remove all those extra columns. Discussion: https://www.postgresql.org/message-id/flat/1a25312b-2686-380d-3c67-7a69094a999f%40enterprisedb.com
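For illustration only, the cleanup amounts to the change below. INT2_TBL is the regression fixture quoted above; an alias is added so the old-style line runs as written, and the new style leans on psql's "(N rows)" footer instead of a dummy column.

    -- Old style: a dummy first column encodes the expected row count.
    SELECT '' AS two, i.* FROM INT2_TBL i;

    -- New style: just select the data; psql already prints "(N rows)".
    SELECT i.* FROM INT2_TBL i;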
* Fix missed step in removal of useless RESULT RTEs in the planner.Tom Lane2020-12-051-0/+36
| | | | | | | | | | | | Commit 4be058fe9 forgot that the append_rel_list would already be populated at the time we remove useless result RTEs, and it might contain PlaceHolderVars that need to be adjusted like the ones in the main parse tree. This could lead to "no relation entry for relid N" failures later on, when the planner tries to do something with an unadjusted PHV. Per report from Tom Ellis. Back-patch to v12 where the bug came in. Discussion: https://postgr.es/m/20201205173056.GF30712@cloudinit-builder
* Fix miscomputation of direct_lateral_relids for join relations.Tom Lane2020-11-301-0/+64
| | | | | | | | | | | | | | If a PlaceHolderVar is to be evaluated at a join relation, but its value is only needed there and not at higher levels, we neglected to update the joinrel's direct_lateral_relids to include the PHV's source rel. This causes problems because join_is_legal() then won't allow joining the joinrel to the PHV's source rel at all, leading to "failed to build any N-way joins" planner failures. Per report from Andreas Seltenreich. Back-patch to 9.5 where the problem originated. Discussion: https://postgr.es/m/87blfgqa4t.fsf@aurora.ydns.eu