A short review
The book starts out with a cursory look at the theoretical foundations of
database replication, such as the impact of communication delays caused by
physical distance. This is followed by a detailed introduction to the
different types of replication, in particular synchronous and asynchronous
replication and sharding.
Continuing with an in-depth look into the technical internals of PostgreSQL,
the author clearly explains the technology behind the WAL and thus the
technical foundation of the replication solutions that already come with
PostgreSQL, namely PITR and streaming replication.
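To give a rough idea of what such a setup involves, here is a minimal sketch of the kind of 9.x-era configuration the book walks through; the host name, paths and values are placeholders of mine, not taken from the book:
# primary: postgresql.conf
wal_level = hot_standby                    # write enough WAL for a standby to replay
max_wal_senders = 3                        # allow walsender connections for streaming
archive_mode = on                          # keep WAL segments for PITR
archive_command = 'cp %p /archive/%f'      # placeholder archive location
# standby: recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=primary.example port=5432 user=replication'
restore_command = 'cp /archive/%f %p'      # fall back to the archive when streaming lags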
Hands-on guides to aid in the actual setup of the various solutions, coupled
with solutions to common setup problems, round off the topic.
Monitoring and high-availability aspects, so important for real-life
applications, are also covered.
The final chapters cover additional systems for replication and sharding
that are not themselves part of the PostgreSQL software distribution,
such as pgbouncer, pgpool, Slony, Skytools and PL/Proxy.
The book is suitable both for beginners who want a well-explained, in-depth
look into the workings of WAL and PostgreSQL, and for experienced database
administrators looking to verify their existing setups and to get pointers
for further optimization.
Regards, Andreas
Monday, December 9, 2013
Friday, March 9, 2012
Linux New Media Awards 2012
Our PostgreSQL received the award in the category "Best Open Source Database Solution".
Here is a link (in German): Linux New Media Awards 2012
Thanks to all users of PostgreSQL!
(and thanks to internet24)
Greetings to the worldwide community behind PostgreSQL! Yes!
Regards, Andreas 'akretschmer' Kretschmer
Monday, March 15, 2010
PostgreSQL @ LinuxDays Chemnitz
Hi all,
I'm back from Chemnitz. The German PG User Group was there with a booth, a workshop (performance tuning) and a talk about the upcoming 9.0 features.
The workshop and the talk were given by ads, so I think he will write a blog post too. But I can say there were a LOT of attendees listening to his talk.
From my point of view it was a success; many people came to us and asked, 'How can we migrate from database XYZ to PostgreSQL?'. Great!
And yes, we also met a LOT of happy PG users and heard many positive stories about PG.
But one disadvantage of PG came up, particularly in hosting environments: every user can see all databases. I heard that more than once; maybe we should think about a switch in postgresql.conf to disable this.
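To illustrate the point: any role can list all databases in the cluster, because the pg_database catalog is readable by everyone. A quick sketch (the role and database names are made up):
-- connected as an unprivileged customer role, e.g. psql -U customer1 customer1_db
select datname from pg_database;   -- returns every database in the cluster
-- \l in psql shows the same list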
Regards.
Wednesday, February 10, 2010
JOIN REMOVAL - a short performance test
As I said yesterday, I'm back from FOSDEM, where I learned a LOT about our new version 9.0. For instance: join removal.
Explanation:
- Implement "join removal" for cases where the inner side of a left join is unique and is not referenced above the join. In this case the inner side doesn't affect the query result and can be thrown away entirely. Although perhaps nobody would ever write such a thing by hand, it's a reasonably common case in machine-generated SQL.
(copied from PostgreSQL Weekly News - September 20 2009)
Let's run a short performance test:
First, I created two tables:
test=# \d table_a
     Table "public.table_a"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer | not null
 value  | text    |
Indexes:
    "table_a_pkey" PRIMARY KEY, btree (id)

test=# \d table_b
     Table "public.table_b"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer |
 value  | text    |
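For completeness, DDL matching the \d output above would look like this (reconstructed by me, not copied from the original post):
create table table_a (id integer primary key, value text);
create table table_b (id integer, value text);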
Let me populate these two tables using another nice new feature, writeable CTEs:
test=# with a as (
          insert into table_a
          select s, md5(s::text) from generate_series(0,250000) s
          returning *
       ), b as (
          insert into table_b
          select id, md5(value) from a
          where substring(value,1,1) between '0' and '9'
       )
       select true;
 bool
------
 t
(1 row)
Writeable CTEs are a nice feature, aren't they? I love them ;-)
Okay. Now I have two populated tables. I have analysed them and created a VIEW:
test=*# create view view_table_ab as
            select a.id, a.value as value_a, b.value as value_b
            from table_a a
            left join table_b b on a.id = b.id;
CREATE VIEW
Okay, first test case without an index on table_b.id:
test=*# explain analyse select id, value_a from view_table_ab;
                                                         QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
 Hash Left Join  (cost=7532.53..28639.73 rows=250001 width=37) (actual time=183.941..603.520 rows=250001 loops=1)
   Hash Cond: (a.id = b.id)
   ->  Seq Scan on table_a a  (cost=0.00..12917.01 rows=250001 width=37) (actual time=36.073..125.789 rows=250001 loops=1)
   ->  Hash  (cost=5047.68..5047.68 rows=156068 width=4) (actual time=147.752..147.752 rows=156068 loops=1)
         ->  Seq Scan on table_b b  (cost=0.00..5047.68 rows=156068 width=4) (actual time=0.019..72.371 rows=156068 loops=1)
 Total runtime: 661.780 ms
(6 rows)
Okay, as expected, a seq scan on table_b. The plan doesn't change even if I create a non-unique index on table_b(id):
test=*# create index idx_table_b_id on table_b(id);
CREATE INDEX
Time: 137,121 ms
test=*# explain analyse select id, value_a from view_table_ab;
                                                         QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
 Hash Left Join  (cost=5346.53..18120.73 rows=250001 width=37) (actual time=141.921..548.674 rows=250001 loops=1)
   Hash Cond: (a.id = b.id)
   ->  Seq Scan on table_a a  (cost=0.00..4584.01 rows=250001 width=37) (actual time=0.006..84.054 rows=250001 loops=1)
   ->  Hash  (cost=2861.68..2861.68 rows=156068 width=4) (actual time=141.856..141.856 rows=156068 loops=1)
         ->  Seq Scan on table_b b  (cost=0.00..2861.68 rows=156068 width=4) (actual time=0.005..66.048 rows=156068 loops=1)
 Total runtime: 607.385 ms
(6 rows)
Okay, let me drop this index, recreate it with the unique option, and re-run the select:
test=*# drop index idx_table_b_id;
DROP INDEX
Time: 16,038 ms
test=*# create unique index idx_table_b_id on table_b(id);
CREATE INDEX
Time: 122,602 ms
test=*# explain analyse select id, value_a from view_table_ab;
                                                     QUERY PLAN
------------------------------------------------------------------------------------------------------------------
 Seq Scan on table_a a  (cost=0.00..4584.01 rows=250001 width=37) (actual time=0.007..74.825 rows=250001 loops=1)
 Total runtime: 130.887 ms
(2 rows)

Time: 131,348 ms
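For contrast, the join is only removed because the query never references a column of table_b; a query that does reference one, for example the following, keeps the hash left join (my example, plan output omitted):
explain analyse select id, value_b from view_table_ab;   -- value_b comes from table_b, so the join stays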
Nice. Well, I think this feature isn't a big thing; the new version 9.0 contains a LOT of really big things, and this one is just an optimisation. But it works and it is useful. Thanks to Robert Haas and Tom Lane for this feature.
Andreas
Tuesday, February 9, 2010
Back from FOSDEM
Hi all,
I'm back from FOSDEM in Brussels. It was a great and impressive event, and I met a lot of PostgreSQL people there. The talks given in our devroom were impressive; I learned a LOT about our upcoming new release 9.0, in particular streaming replication, exclusion constraints and (thanks to Sergey Petrunya's talk about MariaDB) join removal. MariaDB calls this feature "table elimination"; after his talk I discussed it with Stefan 'mastermind' Kaltenbrunner, who informed me that we have this feature in 9.0, but we call it 'join removal'.
Okay, I don't want to annoy you with my really poor English. Last but not least, I took a lot of pictures. You can see almost all of them here:
http://a-kretschmer.de/bruessel/
Not all pictures are of reasonable quality, sorry. The pictures are reduced to 50%; if anyone wants individual pictures in full resolution, just ask me.
I'm looking forward to seeing you in Amsterdam...
Wednesday, December 23, 2009
Happy Xmas
select array_to_string(array_agg(chr(a[s])),'') "I wish you" from (
select a, generate_series(1,array_length(foo.a,1)) s from (
select array[72,97,112,112,121,32,88,109,97,115,33] as a
) foo
) bar;
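For reference, the chr() calls simply decode the ASCII codes stored in the array, so the result should look roughly like this (output reconstructed by me, not copied from the original post):
 I wish you
-------------
 Happy Xmas!
(1 row)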
Many thanks to all the people who developed and are developing PostgreSQL!
Sunday, November 15, 2009
writeable CTE - a short performance test
Inspired by the talk about writeable CTEs given by David Fetter and Marko Tiikkaja at PGDay 2009 in Paris, I want to analyse a special aspect: minimizing I/O operations with query clustering.
The talk shows an example that contains 7 tables: one person table; 3 detail tables (street, im and phone), which contain contact data for a person; and 3 additional join tables to map each person to multiple streets, ims and phones.
Assuming there are 2 entries each for street, im and phone for a person, this results in 1 insert into the person table and 2 inserts each into street, im and phone. In addition, it needs 2 inserts for every join table, i.e. 13 INSERT statements per person in total.
With the new 'writeable CTE' feature, the effort is reduced to a single (but complex) SQL command which contains, among other things, all the inserts.
In the following post I want to analyse whether this single, complex command is faster than many simple traditional commands.
For this purpose I wrote 2 PL/pgSQL functions that generate the corresponding SQL commands, one for the old style with many inserts
and one for the new syntax, both with exactly the same amount of data for the inserts.
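Both functions assume the schema from the talk. A sketch that is compatible with the statements they generate could look like this (the column names and the *_id_seq sequence names are dictated by the functions below; the column types are my assumptions):
create table person (id serial primary key, first_name text, last_name text, birthdate date);
create table im     (id serial primary key, provider text, handle text);
create table phone  (id serial primary key, country_code text, phone_number text, extension text);
create table street (id serial primary key, street1 text, city text, state text, country text, post_code text);
create table person_im     (person_id int, im_id int);
create table person_phone  (person_id int, phone_id int);
create table person_street (person_id int, street_id int);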
Here are the two functions:
create or replace function fill_tables(IN c int, OUT sql text) returns text as $$
-- generates the "old style" load: one INSERT statement per row
declare
  c_im     int;
  c_phone  int;
  c_street int;
  count    int;
begin
  c_im     := 1 + strpos(md5(c::text),'a');
  c_phone  := 1 + strpos(md5(c::text),'b');
  c_street := 1 + strpos(md5(c::text),'c');
  sql := E'INSERT INTO person (first_name, last_name, birthdate) VALUES (\'' || md5(c::text) || E'\', \'' || md5((c+1)::text) || E'\', \'' || ('1970-01-01'::date + ('0' || regexp_replace(substring(md5((c)::text),0,8),'[^0-9]','','g'))::int * '1 minute'::interval)::date || E'\');\n';
  for count in 0 .. c_im loop
    sql := sql || E'INSERT INTO im (provider, handle) VALUES (\'' || md5((c+count)::text) || E'\',\'' || md5((c+count)::text) || E'\');\n';
    sql := sql || E'INSERT INTO person_im(person_id, im_id) VALUES (' || E'currval(\'person_id_seq\'),currval(\'im_id_seq\'));\n';
  end loop;
  for count in 0 .. c_phone loop
    sql := sql || E'INSERT INTO phone (country_code, phone_number, extension) VALUES (\'' || md5((c+count)::text) || E'\',\'' || md5((c+count)::text) || E'\',\'' || md5((c+count)::text) || E'\');\n';
    sql := sql || E'INSERT INTO person_phone(person_id, phone_id) VALUES (' || E'currval(\'person_id_seq\'),currval(\'phone_id_seq\'));\n';
  end loop;
  for count in 0 .. c_street loop
    sql := sql || E'INSERT INTO street (street1,city,state,country,post_code) VALUES (\'' || md5((c+count)::text) || E'\',\'' || md5((c+count)::text) || E'\',\'' || md5((c+count)::text) || E'\',\'' || md5((c+count)::text) || E'\',\'' || md5((c+count)::text) || E'\');\n';
    sql := sql || E'INSERT INTO person_street(person_id, street_id) VALUES (' || E'currval(\'person_id_seq\'),currval(\'street_id_seq\'));\n';
  end loop;
end;
$$ language plpgsql volatile;
And:
create or replace function fill_tables_new(IN c int, OUT sql text) returns text as $$
-- generates the "new syntax" load: one writeable CTE per person
declare
  c_im     int;
  c_phone  int;
  c_street int;
  count    int;
begin
  c_im     := 1 + strpos(md5(c::text),'a');
  c_phone  := 1 + strpos(md5(c::text),'b');
  c_street := 1 + strpos(md5(c::text),'c');
  sql := E'WITH t_person AS (\n ' || E'INSERT INTO person (first_name, last_name, birthdate) VALUES (\n \'' || md5(c::text) || E'\', \'' || md5((c+1)::text) || E'\', \'' || ('1970-01-01'::date + ('0' || regexp_replace(substring(md5((c)::text),0,10),'[^0-9]','','g'))::int * '1 minute'::interval)::date || E'\') RETURNING id),\n';
  sql := sql || E't_im AS (INSERT INTO im (provider, handle) VALUES ';
  for count in 0 .. c_im loop
    sql := sql || E'\n (\'' || md5((c+count)::text) || E'\',\'' || md5((c+count)::text) || E'\'),';
  end loop;
  sql := trim(sql,',');
  sql := sql || E' RETURNING id),\n';
  sql := sql || E't_person_im AS (INSERT INTO person_im SELECT * FROM t_person CROSS JOIN t_im),\n';
  sql := sql || E't_phone AS (INSERT INTO phone (country_code, phone_number, extension) VALUES ';
  for count in 0 .. c_phone loop
    sql := sql || E'\n (\'' || md5((c+count)::text) || E'\',\'' || md5((c+count)::text) || E'\',\'' || md5((c+count)::text) || E'\'),';
  end loop;
  sql := trim(sql,',');
  sql := sql || E' RETURNING id),\n';
  sql := sql || E't_person_phone AS (INSERT INTO person_phone SELECT * FROM t_person CROSS JOIN t_phone),\n';
  sql := sql || E't_street AS (INSERT INTO street (street1, city, state, country, post_code) VALUES ';
  for count in 0 .. c_street loop
    sql := sql || E'\n (\'' || md5((c+count)::text) || E'\',\'' || md5((c+count)::text) || E'\',\'' || md5((c+count)::text) || E'\',\'' || md5((c+count)::text) || E'\',\'' || md5((c+count)::text) || E'\'),';
  end loop;
  sql := trim(sql,',');
  sql := sql || E' RETURNING id),\n';
  sql := sql || E't_person_street AS (INSERT INTO person_street SELECT * FROM t_person CROSS JOIN t_street)\n';
  sql := sql || 'VALUES (true);';
end;
$$ language plpgsql volatile;
And here is the output of these functions (for example, for one person):
INSERT INTO person (first_name, last_name, birthdate) VALUES ('c4ca4238a0b923820dcc509a6f75849b', 'c81e728d9d4c2f636f067f89cc14862c', '1970-01-04'); INSERT INTO im (provider, handle) VALUES ('c4ca4238a0b923820dcc509a6f75849b','c4ca4238a0b923820dcc509a6f75849b'); INSERT INTO person_im(person_id, im_id) VALUES (currval('person_id_seq'),currval('im_id_seq')); INSERT INTO im (provider, handle) VALUES ('c81e728d9d4c2f636f067f89cc14862c','c81e728d9d4c2f636f067f89cc14862c'); INSERT INTO person_im(person_id, im_id) VALUES (currval('person_id_seq'),currval('im_id_seq')); INSERT INTO im (provider, handle) VALUES ('eccbc87e4b5ce2fe28308fd9f2a7baf3','eccbc87e4b5ce2fe28308fd9f2a7baf3'); INSERT INTO person_im(person_id, im_id) VALUES (currval('person_id_seq'),currval('im_id_seq')); INSERT INTO im (provider, handle) VALUES ('a87ff679a2f3e71d9181a67b7542122c','a87ff679a2f3e71d9181a67b7542122c'); INSERT INTO person_im(person_id, im_id) VALUES (currval('person_id_seq'),currval('im_id_seq')); INSERT INTO im (provider, handle) VALUES ('e4da3b7fbbce2345d7772b0674a318d5','e4da3b7fbbce2345d7772b0674a318d5'); INSERT INTO person_im(person_id, im_id) VALUES (currval('person_id_seq'),currval('im_id_seq')); INSERT INTO im (provider, handle) VALUES ('1679091c5a880faf6fb5e6087eb1b2dc','1679091c5a880faf6fb5e6087eb1b2dc'); INSERT INTO person_im(person_id, im_id) VALUES (currval('person_id_seq'),currval('im_id_seq')); INSERT INTO phone (country_code, phone_number, extension) VALUES ('c4ca4238a0b923820dcc509a6f75849b','c4ca4238a0b923820dcc509a6f75849b','c4ca4238a0b923820dcc509a6f75849b'); INSERT INTO person_phone(person_id, phone_id) VALUES (currval('person_id_seq'),currval('phone_id_seq')); INSERT INTO phone (country_code, phone_number, extension) VALUES ('c81e728d9d4c2f636f067f89cc14862c','c81e728d9d4c2f636f067f89cc14862c','c81e728d9d4c2f636f067f89cc14862c'); INSERT INTO person_phone(person_id, phone_id) VALUES (currval('person_id_seq'),currval('phone_id_seq')); INSERT INTO phone (country_code, phone_number, extension) VALUES ('eccbc87e4b5ce2fe28308fd9f2a7baf3','eccbc87e4b5ce2fe28308fd9f2a7baf3','eccbc87e4b5ce2fe28308fd9f2a7baf3'); INSERT INTO person_phone(person_id, phone_id) VALUES (currval('person_id_seq'),currval('phone_id_seq')); INSERT INTO phone (country_code, phone_number, extension) VALUES ('a87ff679a2f3e71d9181a67b7542122c','a87ff679a2f3e71d9181a67b7542122c','a87ff679a2f3e71d9181a67b7542122c'); INSERT INTO person_phone(person_id, phone_id) VALUES (currval('person_id_seq'),currval('phone_id_seq')); INSERT INTO phone (country_code, phone_number, extension) VALUES ('e4da3b7fbbce2345d7772b0674a318d5','e4da3b7fbbce2345d7772b0674a318d5','e4da3b7fbbce2345d7772b0674a318d5'); INSERT INTO person_phone(person_id, phone_id) VALUES (currval('person_id_seq'),currval('phone_id_seq')); INSERT INTO phone (country_code, phone_number, extension) VALUES ('1679091c5a880faf6fb5e6087eb1b2dc','1679091c5a880faf6fb5e6087eb1b2dc','1679091c5a880faf6fb5e6087eb1b2dc'); INSERT INTO person_phone(person_id, phone_id) VALUES (currval('person_id_seq'),currval('phone_id_seq')); INSERT INTO phone (country_code, phone_number, extension) VALUES ('8f14e45fceea167a5a36dedd4bea2543','8f14e45fceea167a5a36dedd4bea2543','8f14e45fceea167a5a36dedd4bea2543'); INSERT INTO person_phone(person_id, phone_id) VALUES (currval('person_id_seq'),currval('phone_id_seq')); INSERT INTO phone (country_code, phone_number, extension) VALUES ('c9f0f895fb98ab9159f51fd0297e236d','c9f0f895fb98ab9159f51fd0297e236d','c9f0f895fb98ab9159f51fd0297e236d'); INSERT INTO 
person_phone(person_id, phone_id) VALUES (currval('person_id_seq'),currval('phone_id_seq')); INSERT INTO phone (country_code, phone_number, extension) VALUES ('45c48cce2e2d7fbdea1afc51c7c6ad26','45c48cce2e2d7fbdea1afc51c7c6ad26','45c48cce2e2d7fbdea1afc51c7c6ad26'); INSERT INTO person_phone(person_id, phone_id) VALUES (currval('person_id_seq'),currval('phone_id_seq')); INSERT INTO phone (country_code, phone_number, extension) VALUES ('d3d9446802a44259755d38e6d163e820','d3d9446802a44259755d38e6d163e820','d3d9446802a44259755d38e6d163e820'); INSERT INTO person_phone(person_id, phone_id) VALUES (currval('person_id_seq'),currval('phone_id_seq')); INSERT INTO phone (country_code, phone_number, extension) VALUES ('6512bd43d9caa6e02c990b0a82652dca','6512bd43d9caa6e02c990b0a82652dca','6512bd43d9caa6e02c990b0a82652dca'); INSERT INTO person_phone(person_id, phone_id) VALUES (currval('person_id_seq'),currval('phone_id_seq')); INSERT INTO phone (country_code, phone_number, extension) VALUES ('c20ad4d76fe97759aa27a0c99bff6710','c20ad4d76fe97759aa27a0c99bff6710','c20ad4d76fe97759aa27a0c99bff6710'); INSERT INTO person_phone(person_id, phone_id) VALUES (currval('person_id_seq'),currval('phone_id_seq')); INSERT INTO phone (country_code, phone_number, extension) VALUES ('c51ce410c124a10e0db5e4b97fc2af39','c51ce410c124a10e0db5e4b97fc2af39','c51ce410c124a10e0db5e4b97fc2af39'); INSERT INTO person_phone(person_id, phone_id) VALUES (currval('person_id_seq'),currval('phone_id_seq')); INSERT INTO street (street1,city,state,country,post_code) VALUES ('c4ca4238a0b923820dcc509a6f75849b','c4ca4238a0b923820dcc509a6f75849b','c4ca4238a0b923820dcc509a6f75849b','c4ca4238a0b923820dcc509a6f75849b','c4ca4238a0b923820dcc509a6f75849b'); INSERT INTO person_street(person_id, street_id) VALUES (currval('person_id_seq'),currval('street_id_seq')); INSERT INTO street (street1,city,state,country,post_code) VALUES ('c81e728d9d4c2f636f067f89cc14862c','c81e728d9d4c2f636f067f89cc14862c','c81e728d9d4c2f636f067f89cc14862c','c81e728d9d4c2f636f067f89cc14862c','c81e728d9d4c2f636f067f89cc14862c'); INSERT INTO person_street(person_id, street_id) VALUES (currval('person_id_seq'),currval('street_id_seq')); INSERT INTO street (street1,city,state,country,post_code) VALUES ('eccbc87e4b5ce2fe28308fd9f2a7baf3','eccbc87e4b5ce2fe28308fd9f2a7baf3','eccbc87e4b5ce2fe28308fd9f2a7baf3','eccbc87e4b5ce2fe28308fd9f2a7baf3','eccbc87e4b5ce2fe28308fd9f2a7baf3'); INSERT INTO person_street(person_id, street_id) VALUES (currval('person_id_seq'),currval('street_id_seq'));
And, with writeable CTE:
WITH t_person AS ( INSERT INTO person (first_name, last_name, birthdate) VALUES ( 'c4ca4238a0b923820dcc509a6f75849b', 'c81e728d9d4c2f636f067f89cc14862c', '1970-01-31') RETURNING id), t_im AS (INSERT INTO im (provider, handle) VALUES ('c4ca4238a0b923820dcc509a6f75849b','c4ca4238a0b923820dcc509a6f75849b'), ('c81e728d9d4c2f636f067f89cc14862c','c81e728d9d4c2f636f067f89cc14862c'), ('eccbc87e4b5ce2fe28308fd9f2a7baf3','eccbc87e4b5ce2fe28308fd9f2a7baf3'), ('a87ff679a2f3e71d9181a67b7542122c','a87ff679a2f3e71d9181a67b7542122c'), ('e4da3b7fbbce2345d7772b0674a318d5','e4da3b7fbbce2345d7772b0674a318d5'), ('1679091c5a880faf6fb5e6087eb1b2dc','1679091c5a880faf6fb5e6087eb1b2dc') RETURNING id), t_person_im AS (INSERT INTO person_im SELECT * FROM t_person CROSS JOIN t_im), t_phone AS (INSERT INTO phone (country_code, phone_number, extension) VALUES ('c4ca4238a0b923820dcc509a6f75849b','c4ca4238a0b923820dcc509a6f75849b','c4ca4238a0b923820dcc509a6f75849b'), ('c81e728d9d4c2f636f067f89cc14862c','c81e728d9d4c2f636f067f89cc14862c','c81e728d9d4c2f636f067f89cc14862c'), ('eccbc87e4b5ce2fe28308fd9f2a7baf3','eccbc87e4b5ce2fe28308fd9f2a7baf3','eccbc87e4b5ce2fe28308fd9f2a7baf3'), ('a87ff679a2f3e71d9181a67b7542122c','a87ff679a2f3e71d9181a67b7542122c','a87ff679a2f3e71d9181a67b7542122c'), ('e4da3b7fbbce2345d7772b0674a318d5','e4da3b7fbbce2345d7772b0674a318d5','e4da3b7fbbce2345d7772b0674a318d5'), ('1679091c5a880faf6fb5e6087eb1b2dc','1679091c5a880faf6fb5e6087eb1b2dc','1679091c5a880faf6fb5e6087eb1b2dc'), ('8f14e45fceea167a5a36dedd4bea2543','8f14e45fceea167a5a36dedd4bea2543','8f14e45fceea167a5a36dedd4bea2543'), ('c9f0f895fb98ab9159f51fd0297e236d','c9f0f895fb98ab9159f51fd0297e236d','c9f0f895fb98ab9159f51fd0297e236d'), ('45c48cce2e2d7fbdea1afc51c7c6ad26','45c48cce2e2d7fbdea1afc51c7c6ad26','45c48cce2e2d7fbdea1afc51c7c6ad26'), ('d3d9446802a44259755d38e6d163e820','d3d9446802a44259755d38e6d163e820','d3d9446802a44259755d38e6d163e820'), ('6512bd43d9caa6e02c990b0a82652dca','6512bd43d9caa6e02c990b0a82652dca','6512bd43d9caa6e02c990b0a82652dca'), ('c20ad4d76fe97759aa27a0c99bff6710','c20ad4d76fe97759aa27a0c99bff6710','c20ad4d76fe97759aa27a0c99bff6710'), ('c51ce410c124a10e0db5e4b97fc2af39','c51ce410c124a10e0db5e4b97fc2af39','c51ce410c124a10e0db5e4b97fc2af39') RETURNING id), t_person_phone AS (INSERT INTO person_phone SELECT * FROM t_person CROSS JOIN t_phone), t_street AS (INSERT INTO street (street1, city, state, country, post_code) VALUES ('c4ca4238a0b923820dcc509a6f75849b','c4ca4238a0b923820dcc509a6f75849b','c4ca4238a0b923820dcc509a6f75849b','c4ca4238a0b923820dcc509a6f75849b','c4ca4238a0b923820dcc509a6f75849b'), ('c81e728d9d4c2f636f067f89cc14862c','c81e728d9d4c2f636f067f89cc14862c','c81e728d9d4c2f636f067f89cc14862c','c81e728d9d4c2f636f067f89cc14862c','c81e728d9d4c2f636f067f89cc14862c'), ('eccbc87e4b5ce2fe28308fd9f2a7baf3','eccbc87e4b5ce2fe28308fd9f2a7baf3','eccbc87e4b5ce2fe28308fd9f2a7baf3','eccbc87e4b5ce2fe28308fd9f2a7baf3','eccbc87e4b5ce2fe28308fd9f2a7baf3') RETURNING id), t_person_street AS (INSERT INTO person_street SELECT * FROM t_person CROSS JOIN t_street) VALUES (true);
First I created two SQL files with the output of my functions; each contains the data for 10,000 persons, 727104 rows in total for all tables.

kretschmer@tux:~/cte$ psql -X -t test -c "select fill_tables(s) from generate_series(1,10000) s;" > data_old.sql
kretschmer@tux:~/cte$ psql -X -t test -c "select fill_tables_new(s) from generate_series(1,10000) s;" > data_new.sql
kretschmer@tux:~/cte$ ls -l data*
-rw-r--r-- 1 kretschmer kretschmer  51113563 Nov 13 18:43 data_new.sql
-rw-r--r-- 1 kretschmer kretschmer 102391561 Nov 13 18:43 data_old.sql

The file with the new syntax is just half the size. Okay, let's rock'n'roll: my 8.5devel is listening on port 5434, I'm using the default configuration on a quad-core with 4 GByte RAM, and the operating system is Ubuntu. I'm executing the files with the following commands:

time /usr/local/pgsql/bin/psql -X -p 5434 -h localhost test -f data_old.sql

real    9m2.466s
user    0m10.789s
sys     0m11.357s

time /usr/local/pgsql/bin/psql -X -p 5434 -h localhost test -f data_new.sql

real    0m34.294s
user    0m1.620s
sys     0m0.528s

Very impressive, isn't it? (I know, I could optimize both versions with prepared statements etc.) I hope the upcoming 8.5 will contain this feature. It has one small disadvantage: it's not portable, since no other RDBMS knows about writeable CTEs! Many thanks to all the people who developed and are developing PostgreSQL!