Wednesday, 19 February 2020

Parallelism, what next?

This blog post is about the journey of parallelism in PostgreSQL so far and what is in store for the future.  Parallel query first arrived in PostgreSQL 9.6, and each release since has improved it.  Below is a brief overview of the parallel query features added in each release.

PG9.6 added parallel execution of sequential scans, joins, and aggregates.

PG10 added (a) support for parallel B-tree index scans, (b) support for parallel bitmap heap scans, (c) merge joins performed in parallel, (d) non-correlated subqueries run in parallel, (e) an improved ability of parallel workers to return pre-sorted data, and (f) increased parallel query usage in procedural language functions.

PG11 added (a) parallel building of B-tree indexes, (b) hash joins performed in parallel using a shared hash table, (c) parallelization of the commands CREATE TABLE ... AS, SELECT INTO, and CREATE MATERIALIZED VIEW, (d) UNION running each SELECT in parallel if the individual SELECTs cannot be parallelized, (e) partition scans that use parallel workers more efficiently, (f) LIMIT passed down to parallel workers, which allows workers to reduce returned results and use targeted index scans, and (g) parallelization of single-evaluation queries, e.g. WHERE-clause aggregate queries and functions in the target list.

PG12 added support for parallelized queries in SERIALIZABLE isolation mode.

PG13 development continues this progress.  Some of the important advancements are:
(a) Parallel vacuum - This feature allows vacuum to leverage multiple CPUs to process indexes: index vacuuming and index cleanup can now be performed by background workers.  It adds a PARALLEL option to the VACUUM command, with which the user can specify the number of workers to use; this number is limited by the number of indexes on the table.  Specifying zero workers disables parallelism.  For more information, see the commit.
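
For example, parallel vacuum can be invoked as follows (the table name "orders" is illustrative):

  -- Use up to 2 parallel workers for index vacuuming and index cleanup
  VACUUM (PARALLEL 2) orders;

  -- Specify zero workers to disable parallelism
  VACUUM (PARALLEL 0) orders;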

(b) Improve EXPLAIN's handling of per-worker details.  This displays the worker information in a much better way.  A few of the visible side effects, as mentioned in the commit message:

* In text format, instead of something like

  Sort Method: external merge  Disk: 4920kB
  Worker 0:  Sort Method: external merge  Disk: 5880kB
  Worker 1:  Sort Method: external merge  Disk: 5920kB
  Buffers: shared hit=682 read=10188, temp read=1415 written=2101
  Worker 0:  actual time=130.058..130.324 rows=1324 loops=1
    Buffers: shared hit=337 read=3489, temp read=505 written=739
  Worker 1:  actual time=130.273..130.512 rows=1297 loops=1
    Buffers: shared hit=345 read=3507, temp read=505 written=744

you get

  Sort Method: external merge  Disk: 4920kB
  Buffers: shared hit=682 read=10188, temp read=1415 written=2101
  Worker 0:  actual time=130.058..130.324 rows=1324 loops=1
    Sort Method: external merge  Disk: 5880kB
    Buffers: shared hit=337 read=3489, temp read=505 written=739
  Worker 1:  actual time=130.273..130.512 rows=1297 loops=1
    Sort Method: external merge  Disk: 5920kB
    Buffers: shared hit=345 read=3507, temp read=505 written=744

(c) Avoid unnecessary shared-memory writes in Parallel Hash Join.  This improves the performance of Parallel Hash Join by a significant amount on large systems running many-core joins.  Though this work has been back-patched to v11, where Parallel Hash Join was introduced, I mention it here because it was done during PG13 development.  For more information, see the commit.
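
For reference, a typical Parallel Hash Join plan has the shape below (tables t1 and t2 are illustrative, and the exact plan depends on table sizes and settings):

  EXPLAIN (COSTS OFF) SELECT count(*) FROM t1 JOIN t2 USING (id);

   Finalize Aggregate
     ->  Gather
           Workers Planned: 2
           ->  Partial Aggregate
                 ->  Parallel Hash Join
                       Hash Cond: (t1.id = t2.id)
                       ->  Parallel Seq Scan on t1
                       ->  Parallel Hash
                             ->  Parallel Seq Scan on t2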

What is being discussed for the future:
(a) Parallel grouping sets - PostgreSQL already supports parallel aggregation by aggregating in two stages. First, each process participating in the parallel portion of the query performs an aggregation step, producing a partial result for each group of which that process is aware. Second, the partial results are transferred to the leader via a Gather node. Finally, the leader re-aggregates the results from all workers to produce the final result.
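
As a minimal sketch of this two-stage aggregation (using the items_sold table from the grouping-sets example below; the exact plan shape depends on statistics and settings):

  EXPLAIN (COSTS OFF) SELECT brand, sum(sales) FROM items_sold GROUP BY brand;

   Finalize HashAggregate
     Group Key: brand
     ->  Gather
           Workers Planned: 2
           ->  Partial HashAggregate
                 Group Key: brand
                 ->  Parallel Seq Scan on items_sold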

There has been a discussion in the community about parallelizing queries containing grouping sets in much the same way as parallel aggregation.
Basically, the aim is to parallelize queries like SELECT brand, size, sum(sales) FROM items_sold GROUP BY GROUPING SETS ((brand), (size), ());
This feature has been proposed for PG13 but is not yet committed.

(b) Parallel copy - We also want to parallelize the COPY command, in particular the "COPY <table_name> FROM .. ;" form.  This will help improve bulk load operations in PostgreSQL.  Currently, we do a lot of work during the COPY command: we read the file in 64KB chunks, find the line endings, and process the data line by line, where each line corresponds to one tuple.  We first form the tuple (as value/null arrays) from the line and check whether it qualifies the WHERE condition; if it qualifies, we perform constraint checks and a few other checks and finally store it in a local tuple array.  Once we have accumulated 1000 tuples or consumed 64KB (whichever comes first), we insert the tuples together, and then for each tuple we insert into the index(es) and execute after-row triggers.  The aim of this work is to parallelize as much of this as possible.  There is an ongoing discussion in the community on this topic.
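
The form of the command being targeted looks like this (table name and file path are illustrative):

  COPY orders FROM '/data/orders.csv' WITH (FORMAT csv);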

There is a small caveat here: to achieve parallel copy, we need to work on the relation extension lock, so that parallel workers block each other while extending the relation, which is not the case currently (group locking lets workers in the same parallel group acquire it concurrently).  There is already a discussion on this topic in the community.

(c) Parallel file_fdw - The proposed work in this area allows file_fdw to divide its scan among parallel workers, much like a parallel sequential scan.
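
As a sketch of the use case (server, table, and file names are illustrative), the final SELECT below is the kind of scan the proposed work would divide among workers:

  CREATE EXTENSION file_fdw;
  CREATE SERVER files FOREIGN DATA WRAPPER file_fdw;
  CREATE FOREIGN TABLE sales_log (id int, amount numeric)
    SERVER files OPTIONS (filename '/data/sales.csv', format 'csv');
  SELECT sum(amount) FROM sales_log;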

There are more areas where parallelism can be used, such as parallel DML (inserts, updates, deletes).  During a discussion with Thomas Munro, it came up that it would be beneficial if we could parallelize index creation and index scans for index types other than btree, especially gin and gist.  Note that we already support parallel index scans and parallel index creation for btree.  I won't go into detail on these operations, as we haven't seen any proposals for them so far.  Similarly, we can improve a few things in our current parallel infrastructure: (a) as of now, parallel workers are created and destroyed for each query; instead, we could maintain a pool of parallel query workers to avoid the overhead of starting them for each query; (b) as of now, each worker can use up to work_mem of memory, which can increase the overall memory usage of a query.  We might want to improve this, but currently there is no proposal for it.
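
To illustrate point (b), a minimal sketch of the settings involved (values are illustrative):

  SET max_parallel_workers_per_gather = 4;
  SET work_mem = '64MB';
  -- work_mem applies per process (and per sort/hash node), not per query, so a
  -- sort executed by the leader plus 4 workers can use up to ~5 x 64MB in total.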

16 comments:

  1. Thank you very much for the interesting article!

    I think it would be another big step towards the goal of achieving more parallelism if the following aggregate functions became "parallel safe":

    array_agg(...)
    json*_agg(...)
    json*_object_agg(...)
    count(distinct ...)

    I have seen several cases where these functions have prevented a parallel plan.

    Replies
    1. I could see that array_agg(...), json*_agg(...), and json*_object_agg(...) are marked as parallel safe. You can check by executing: select proname, proparallel from pg_proc where proname like 'array%'; and similarly for the json functions. I see no reason for those to prevent a parallel plan. I checked these in HEAD. If you can share your exact query for count(distinct ...), I might be able to help better. Feel free to discuss such things on pgsql-hackers (https://www.postgresql.org/list/pgsql-hackers/) or another PG mailing list.

  2. What about parallel CTEs? Each CTE node on a separate worker?

    Replies
    1. The first point is that a CTE can contain DML statements like update/delete, so we won't be able to parallelise those, as we still don't have parallelism for DML statements. Now, if the CTE contains read-only statements (SELECT queries), we can think of parallelising such statements, but I think this area would require more thought.

    2. Note that CTEs can finally be inlined in PostgreSQL 12, which means that using WITH syntax doesn't necessarily prevent parallelism anymore! Parallelising materialised CTEs would require a bunch more machinery.

    3. I was thinking about this kind of parallelism:

      WITH a AS (
        SELECT * FROM x
      ), b AS (
        SELECT * FROM y
      ), c AS (
        SELECT * FROM z
      ), d AS (
        SELECT * FROM a
      ), e AS (
        SELECT * FROM a, c
      ) /* f */
      SELECT * FROM e, d;

      Thread1: a,d,
      Thread2: b,-,
      Thread3: c,e,f

      Is it useful? I think so:)
      But is it possible?

    4. I think it depends: if the CTE can be inlined, then parallelism can be used for the entire statement. Based on your example, I have constructed a simple test, and the result is as below:

      postgres=# EXPLAIN (COSTS OFF) WITH a AS (SELECT * FROM t1), b AS (SELECT * FROM t2) SELECT * FROM a, b;
                  QUERY PLAN
      -------------------------------------
       Gather
         Workers Planned: 2
         ->  Nested Loop
               ->  Parallel Seq Scan on t1
               ->  Seq Scan on t2
      (5 rows)

      So such CTEs use parallel plans, and the entire query can be parallelized. However, if for some reason the CTE can't be inlined, then parallelism can be used only for part of the statement. See the example below:

      postgres=# EXPLAIN (COSTS OFF) WITH a AS MATERIALIZED (SELECT * FROM t1), b AS MATERIALIZED (SELECT * FROM t2) SELECT * FROM a, b;
                   QUERY PLAN
      ---------------------------------------
       Nested Loop
         CTE a
           ->  Gather
                 Workers Planned: 2
                 ->  Parallel Seq Scan on t1
         CTE b
           ->  Seq Scan on t2
         ->  CTE Scan on a
         ->  CTE Scan on b
      (9 rows)

      The CTE scan itself can't be parallelized as of now.

      Does this help?

  3. This comment has been removed by the author.

  4. This article is old but shows how to use the skip scan index search algorithm on any query. It has a side benefit that any query can be run in a minimum of two parallel processes. https://waa.ai/TsW0

  5. Any thoughts on parallel queries that write data (Insert/Update)?

    Replies
    1. To make them parallel, we first need to do some infrastructure work, like:
      a. Change the locking mechanism in some way so that parallel processes block each other for certain types of heavy-weight locks, like the relation extension lock and the page lock. This can allow inserts.
      b. For updates/deletes, I think we need a shared combo CID hash, so that all participating processes know about the combo CIDs.
      c. Then, we might need to do something about tuple locks, depending on how we implement parallel updates/deletes.

      After that, the actual work to make writes parallel might not be much.

    2. Thanks for the insight. Would be a very useful feature to have.

  6. I haven't seen any update on zheap for a long while, and the GitHub repo doesn't look active. Is the project still being actively worked on? And last I read, PG14 might be the earliest release where we could take it for a spin?

    Replies
    1. As far as I know, nobody is actively working on it at this stage.

    2. That is unfortunate, and it's the first time I have ever heard bad news from PG developers. Thank you, and everyone else, for trying to tackle the big, hard problems that make a big difference. Hopefully in the future there might be a blog post on what's left and how others can help get it over the line, as it seems like such a critical piece of work. Again, thank you and everyone for the hard and amazing work on this and all things PG over the years.
